- Getting Started
- Breakdown of Architecture Design Methodology
- Coding Standards
- Writing test cases
- Deployment
- Publishing the Design System
To get a local copy up and running, follow these simple steps.
- **Node & NPM**: `node >= v16` and `npm >= v8`. Recommended versions for this project are `node v19` and `npm v9`.
- **Docker**: We use Docker Desktop, but the Docker CLI will also work.
- **Create a Copy of .npmrc**: In the root project directory, duplicate `.npmrc.example`, saving the copy as `.npmrc`.

  ```bash
  # root project directory
  cp .npmrc.example .npmrc
  ```
- **Create and Add Your Npm and GitHub Tokens**: In order to connect with the Dataverse API, developers will need to install @iqss/dataverse-client-javascript from the GitHub registry by following the steps outlined below. Read more about access tokens on GitHub Docs.
**Getting a GitHub Token**

- Go to your GitHub Personal Access Tokens settings.
- Select `Generate new token (classic)`.
- Give the token a name and select the `read:packages` scope.
- Copy the generated token and replace the string `YOUR_GITHUB_AUTH_TOKEN` in the previously created `.npmrc` file.

Now you should be able to install the Dataverse JavaScript client using npm. Afterwards, your `.npmrc` file should resemble the following:
```
# .npmrc
legacy-peer-deps=true

# js-dataverse registry
//npm.pkg.github.com/:_authToken=<YOUR_GITHUB_AUTH_TOKEN>
@iqss:registry=https://npm.pkg.github.com/
```
Note: The `.npmrc` file is not identical to `.npmrc.example`, as the latter also contains the registry used for publishing the design system (see Publishing the Design System for more information). To run the project, you only need the above configuration.
- Install the project and its dependencies:

  ```bash
  # root project directory
  npm install
  ```
Warning: You may see a message about vulnerabilities after running this command. Please check this announcement from the Create React App repository: facebook/create-react-app#11174. These vulnerabilities will not be included in the production build, since they come from libraries used only during development.
- Build the UI Library, needed for running the application:

  ```bash
  # root project directory
  cd packages/design-system && npm run build
  ```
**Running & Building the App**

Run the app in development mode. Open http://localhost:5173 to view it in your browser.

```bash
# root project directory
npm start
```

The application will actively watch the directory for changes and reload when changes are saved. You may also see any existing linting errors in the console.

```bash
# root project directory
# Build the app for production to the `/dist/` directory:
npm run build

# Locally preview the production build:
npm run preview
```
**Storybook**

Runs Storybook in development mode.

There are 2 Storybook instances, one for the general Design System and one for the Dataverse Frontend component specifications. Both should be started automatically and available at:

- Dataverse Frontend Storybook: http://localhost:6006/
- Dataverse Design System Storybook: http://localhost:6007/

```bash
# $ cd packages/design-system
# `npm run storybook` should automatically open in your default browser
npm run storybook
```

```bash
# $ cd packages/design-system
npm run build-storybook
```
Note that both Storybook instances are also published to Chromatic as part of the build and merge processes.
A containerized environment, oriented to local development, is available to be run from the repository.
This environment contains a dockerized instance of the Dataverse backend with its dependent services (database, mail server, etc.), as well as an npm development server running the SPA frontend (with code watching).
This environment is intended for locally testing any functionality that requires access to the Dataverse API from the SPA frontend.
An Nginx reverse proxy container sits on top of the frontend and backend containers to avoid CORS issues while testing the application.
**Run the Environment**
As the script argument, add the name of the Dataverse image tag you want to deploy.
```bash
# /dev-env/ directory

# Copy the .env.example file to .env
# To test file upload, update the .env file with S3 credentials
$ cp .env.example .env

# Install and run the project off the latest tagged container image from the develop branch
$ ./run-env.sh unstable

# Alternatively, you can specify a PR tag from https://github.com/orgs/gdcc/packages/container/package/dataverse
# To run this, you need to also change the REGISTRY variable in the .env file to point to the GitHub Container Registry (REGISTRY=ghcr.io)
$ ./run-env.sh <DATAVERSE_IMAGE_TAG>

# Remove the project and its dependencies
$ ./rm-env.sh
```
Please note that the image tag must be pre-pushed to the Docker registry; otherwise, the script will fail. You can find the existing tags for alpha and unstable versions on DockerHub at @gdcc/dataverse. Images associated with pull requests (PRs) are available in the GitHub Container Registry.
If you are running the script for the first time, it may take a while, since npm has to install all project dependencies.
This can also happen if you added new dependencies to `package.json`, or used the _uninstall_ script to remove current
project files and shut down any running containers.
Once the script has finished, you will be able to access Dataverse via:
- Dataverse SPA Frontend: http://localhost:8000/spa
- Dataverse JSF Application: http://localhost:8000
Note: The Dataverse configbaker takes some time to start the application, so the application will not be accessible until
the bootstrapping is complete.
**Adding Custom Test Data**
If you want to add test data (collections and datasets) to the Dataverse instance, run the following command:
```bash
# /dev-env/ directory
$ ./add-env-data.sh
```

Note: The above command uses the dataverse-sample-data repository, whose scripts occasionally fail, so some test data may not be added.
The Dataverse SPA (Single Page Application) represents a significant leap forward in the Dataverse project's aim to provide a more dynamic, efficient, and user-friendly interface for data management and sharing. This section of the Developer Guide outlines the key components of the SPA's design architecture, focusing on its modular, domain-driven design, and the technology stack underpinning it.
The SPA architecture is centered around several key goals designed to address the re-architecture challenges:
- API Extension: Enhancing the Dataverse API to serve as the backbone of the platform, facilitating rich, programmatic interactions with data. The API changes are addressed in the main Dataverse repository.
- Modern Frontend Technologies: Transitioning to React and a suite of modern JavaScript tooling, aligning with contemporary web development practices for better performance, scalability, and developer experience.
- Modular and Reusable Components: Creating a library of reusable components and a design system specific to Dataverse's needs, ensuring consistency across the platform and easing the development of new features.
- Community Engagement and Growth: Lowering the barrier to entry for new contributors and enabling the community to play a more active role in the platform's development.
For more information on the motivations behind the SPA re-architecture, see the document Restructuring the UI as a Single Page Application
The foundation of the SPA is the Dataverse API, significantly expanded to support a wide range of functionalities. The API facilitates interactions with datasets, files, collections, users, and permissions, and is designed with future expansion in mind to accommodate evolving data management needs. The code can be found in the Dataverse repository.
The js-dataverse library abstracts the Dataverse API's functionalities, providing developers with higher-level JavaScript interfaces to interact with the API. This library is crucial for SPA development, offering a simplified, efficient way to build frontend functionalities that interact with Dataverse data. The code can be found in the js-dataverse repository.
A cornerstone of the SPA's UI consistency is the Dataverse Design System, a collection of reusable UI components that adhere to Dataverse's visual and usability standards. This system allows for the rapid development of new features and ensures a cohesive user experience across the platform. You can find the deployed version of the design system at Dataverse Design System Storybook.
The SPA's design is guided by Domain-Driven Design (DDD) principles, focusing on the core concepts of the Dataverse platform. This approach ensures a clean separation of concerns, with dependencies pointing inward to prevent leakage of implementation details.
This layer consists of models, repositories, and use cases, representing the application's core functionalities. Models and interfaces define the structure of entities like Datasets, Files, Collections, and Users, while repositories provide abstract interfaces to external resources. Use cases encapsulate the application logic, employing abstract repositories for their implementation.
```
dataset/
└── domain/
    ├── models/
    │   ├── Dataset.ts
    │   ├── DatasetFormFields.ts
    │   ├── DatasetPaginationInfo.ts
    │   ├── DatasetItemTypePreview.ts
    │   ├── DatasetValidationResponse.ts
    │   └── TotalDatasetsCount.ts
    ├── repositories/
    │   └── DatasetRepository.ts
    └── useCases/
        ├── createDataset.ts
        ├── getDatasetByPersistentId.ts
        ├── getDatasetPrivateUrlToken.ts
        ├── getDatasets.ts
        ├── getTotalDatasetsCount.ts
        └── validateDataset.ts
```
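To make the layering concrete, here is a minimal sketch of how a use case depends only on the abstract repository. It is an illustrative simplification, not the actual source; the real models and `DatasetRepository` declare many more fields and methods:

```ts
// Minimal Domain Layer sketch (illustrative; types simplified).

// domain/models/Dataset.ts — a domain model
export interface Dataset {
  persistentId: string
  title: string
}

// domain/repositories/DatasetRepository.ts — abstract contract, free of HTTP details
export interface DatasetRepository {
  getByPersistentId(persistentId: string, version?: string): Promise<Dataset | undefined>
}

// domain/useCases/getDatasetByPersistentId.ts — application logic against the abstraction
export function getDatasetByPersistentId(
  datasetRepository: DatasetRepository,
  persistentId: string,
  version?: string
): Promise<Dataset | undefined> {
  // The concrete repository is injected by the caller, keeping the domain decoupled.
  return datasetRepository.getByPersistentId(persistentId, version)
}
```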
The Infrastructure Layer connects to external data sources, implementing the repositories defined in the Domain Layer. It includes mappers for translating external data into domain objects, facilitating a decoupled architecture that allows for flexible data management strategies.
```
infrastructure/
├── mappers/
│   ├── JSDatasetMapper.ts
│   ├── JSDatasetPreviewMapper.ts
│   └── JSDatasetVersionMapper.ts
└── repositories/
    └── DatasetJSDataverseRepository.ts
```
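Continuing the simplified sketch from the Domain Layer above, a repository implementation in this layer wires the domain contract to js-dataverse through a mapper. The `getDataset.execute(...)` call shape and the mapper signature below are assumptions for illustration; the actual js-dataverse API differs in its details:

```ts
// Illustrative Infrastructure Layer sketch (simplified types; js-dataverse call shape assumed).
import { getDataset } from '@iqss/dataverse-client-javascript' // call shape assumed for illustration
import { Dataset } from '../domain/models/Dataset'
import { DatasetRepository } from '../domain/repositories/DatasetRepository'

// infrastructure/mappers/JSDatasetMapper.ts — translates API responses into domain objects
class JSDatasetMapper {
  static toDataset(jsDataset: { persistentId: string; title: string }): Dataset {
    return { persistentId: jsDataset.persistentId, title: jsDataset.title }
  }
}

// infrastructure/repositories/DatasetJSDataverseRepository.ts — implements the domain contract
export class DatasetJSDataverseRepository implements DatasetRepository {
  getByPersistentId(persistentId: string, version?: string): Promise<Dataset | undefined> {
    // js-dataverse does the HTTP work; the mapper keeps the API's response shape
    // from leaking past this layer.
    return getDataset
      .execute(persistentId, version)
      .then((jsDataset) => JSDatasetMapper.toDataset(jsDataset))
  }
}
```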
The Domain and Infrastructure Layers work together to manage data flow in the SPA, ensuring that business logic is separated from external data sources. This separation allows for easier testing, maintenance, and scalability of the application. The following diagram illustrates the flow of data between these layers for dataset operations:
Here's a breakdown of the architecture components as depicted in the diagram:
- **Dataset Use Case**: This is the high-level functional component that encapsulates the business logic related to datasets. It serves as an entry point for any dataset operations and communicates with a dataset repository to fulfill these operations.
- **DatasetRepository (interface)**: This is an abstract interface declaring the methods that must be implemented for dataset interactions. By defining an abstract layer, we decouple the use cases from the concrete implementation, allowing for greater flexibility and easier testing.
- **Dataset[JSDataverse]Repository**: This represents the concrete implementation of the `DatasetRepository`. It's where the actual logic for interacting with the data source lives. In this case, the data source is the Dataverse API, and the repository implementation uses the `js-dataverse` library to interact with it.
- **js-dataverse (npm package)**: It provides the functions necessary to communicate with the Dataverse API. It abstracts the HTTP requests into JavaScript functions that return the data in a format that's easy to manage within a JavaScript application.
- **Dataverse API**: The ultimate endpoint for data, the Dataverse API is a backend service that manages and serves the dataset information. The API provides endpoints for CRUD operations and more, which `js-dataverse` will call.
- The `Dataset Use Case` receives a request from the application layer (like a UI component or another service) to perform an operation related to datasets.
- It then uses the `DatasetRepository` interface to interact with the datasets. This interface is implemented specifically for the Dataverse API by the `Dataset[JSDataverse]Repository`, which translates the abstract methods into concrete actions using `js-dataverse`.
- `js-dataverse` makes the necessary API calls to the `Dataverse API`. If the API call is successful, the data flows back through the layers to the original caller; otherwise, an error is thrown, such as when a dataset is not found.
By adhering to this architecture, the application ensures that the use cases (business logic) are kept separate from the external data sources, making the system more robust, easier to test, and flexible to changes in the data source layer.
The Presentation Layer in our application architecture is where the user interface (UI) logic resides. It's responsible for rendering the user interface, handling user interactions, and managing the state of the UI components.
```
src/
└── sections/
    ├── collection/
    ├── create-dataset/
    ├── dataset/
    ├── file/
    └── layout/
```
Let's break down the components of the Presentation Layer using the Dataset section as an example:
The View is represented by the React component (`<Dataset/>`), which is responsible for rendering the UI. It is designed to be as simple as possible, with the sole responsibility of presenting data to the user. It can be divided into smaller components for better organization and reusability.

The View communicates user actions to the Presenter but does not directly handle any state or business logic.
The Presenter acts as the intermediary between the View and the Domain. In our implementation, this is where the `useDataset` and `useFiles` hooks come into play. These hooks act as Presenters that handle the interaction logic and state management. They retrieve data from the use cases (Domain), handle any necessary transformations or logic, and pass it to the View.

In the React ecosystem, hooks provide a way to use state and other React features without writing a class. Our custom hooks (`useDataset` and `useFiles`) embrace the Presenter's responsibilities by managing the state and preparing the data for the View:
- `useDataset` Hook: Manages the state and logic for get-dataset-related operations, interacting with the `getDataset()` use case and updating the View accordingly.
- `useFiles` Hook: Similar to `useDataset`, it manages the state and logic for get-files-related operations, interacting with the `getFiles()` use case.
Both hooks encapsulate the "Presenter" logic, translating user actions into Use Cases calls and preparing data to be displayed by the View.
Calling the `getDataset()` and `getFiles()` use cases can be considered part of the Presenter as well. They directly interact with the Domain to retrieve data, enforce business rules, and then pass that data back to the Presenter hooks, which in turn update the View.
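Putting the pieces together, here is a condensed sketch of the View/Presenter pairing. It is illustrative only: the real `useDataset` and `<Dataset/>` handle errors, pagination, and considerably more state, and the import paths are indicative:

```tsx
// Condensed View/Presenter sketch (illustrative, not the actual source).
import { useEffect, useState } from 'react'
import { Dataset as DatasetModel } from '../../dataset/domain/models/Dataset'
import { DatasetRepository } from '../../dataset/domain/repositories/DatasetRepository'
import { getDatasetByPersistentId } from '../../dataset/domain/useCases/getDatasetByPersistentId'

// Presenter: owns state and calls the use case; knows nothing about rendering.
function useDataset(repository: DatasetRepository, persistentId: string) {
  const [dataset, setDataset] = useState<DatasetModel>()
  const [isLoading, setIsLoading] = useState(true)

  useEffect(() => {
    getDatasetByPersistentId(repository, persistentId)
      .then(setDataset)
      .finally(() => setIsLoading(false))
  }, [repository, persistentId])

  return { dataset, isLoading }
}

// View: renders whatever the Presenter hands it; no state or business logic of its own.
export function Dataset({
  repository,
  persistentId
}: {
  repository: DatasetRepository
  persistentId: string
}) {
  const { dataset, isLoading } = useDataset(repository, persistentId)

  if (isLoading) return <span>Loading...</span>
  if (!dataset) return <span>Dataset not found</span>
  return <h1>{dataset.title}</h1>
}
```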
- User Interactions: User actions are captured by the View.
- Presenter Logic: The Presenter (custom hooks) receives these actions and communicates with the Use Cases to retrieve or update the data.
- Data Processing: The Use Cases interact with the Domain to perform the necessary operations.
- View Updates: The View renders the UI based on the data provided by the Presenter.
Our project leverages a robust stack of modern development tools and frameworks to ensure high-quality application architecture and user experience. Below is a breakdown of our primary technologies:
The design architecture of the Dataverse SPA is not static; it is envisioned to evolve as new technologies emerge and as the community's needs grow. Future directions may include further API extensions, enhancements to the design system, and the incorporation of artificial intelligence and machine learning tools to facilitate data discovery and analysis.
This project adheres to the following coding standards to ensure consistency, readability, and maintainability of the codebase.
- Code should be self-documenting. Choose descriptive names for variables and functions.
- Comment your code when necessary. Comments should explain the 'why' and not the 'what'.
- Keep functions small and focused. A function should ideally do one thing.
- Follow the DRY (Don't Repeat Yourself) principle. Reuse code as much as possible, but don't over-engineer; sometimes it's better to duplicate code than to overcomplicate it (see the sketch below this list).
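To illustrate these points, here is a made-up sketch (not code from this repository):

```ts
// Illustrative sketch: self-documenting names, small focused functions, comments for the "why".

// Avoid: a vague name doing two things at once, with a comment that restates the "what".
function proc(d: string[]): string[] {
  // filter and sort
  return d.filter((x) => x.length > 0).sort()
}

// Prefer: one small, descriptively named function per responsibility.
function removeEmptyTitles(titles: string[]): string[] {
  return titles.filter((title) => title.length > 0)
}

function sortTitlesAlphabetically(titles: string[]): string[] {
  // Copy before sorting because Array.prototype.sort mutates its input (the "why", not the "what").
  return [...titles].sort()
}
```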
- Follow the TypeScript Deep Dive Style Guide.
- Use `PascalCase` for class names, `camelCase` for variables and functions, and `UPPERCASE` for constants.
- Always specify the type when declaring variables, parameters, and return types.
- Use ES6+ syntax whenever possible.
- Prefer `const` and `let` to `var` for variable declarations.
- Use arrow functions (`() => {}`) for anonymous functions.
- Component Design: Prefer functional components with hooks over class components.
- State Management: Use local state (`useState`, `useReducer`) where possible and consider context or Redux for global state.
- Event Handlers: Prefix handler names with `handle`, e.g., `handleClick`.
- JSX: Keep JSX clean and readable. Split into smaller components if necessary.
- Modularization: Store stylesheets near their respective components and import them as modules (all of these conventions are illustrated in the sketch below).
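The following hypothetical component ties these conventions together; the component, its props, and the `Counter.module.scss` import are illustrative stand-ins rather than code from the codebase:

```tsx
// Hypothetical component illustrating the React conventions above.
import { useState } from 'react'
import styles from './Counter.module.scss' // co-located, modularized stylesheet (hypothetical file)

const MAX_COUNT = 10 // UPPERCASE for constants

// Functional component with hooks; PascalCase name; typed props.
export function Counter({ initialCount = 0 }: { initialCount?: number }) {
  const [count, setCount] = useState<number>(initialCount) // local state via useState

  // Event handler prefixed with "handle".
  const handleClick = () => setCount((current) => Math.min(current + 1, MAX_COUNT))

  // JSX kept small and readable.
  return (
    <button className={styles.counter} onClick={handleClick}>
      Count: {count}
    </button>
  )
}
```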
We use ESLint to automatically check and apply the coding standards to our codebase, reducing the manual work to a minimum.

To run all checks, you can run the `lint` script:

```bash
npm run lint
```

You can also apply coding style fixes automatically:

```bash
npm run lint:fix
```

The `format` script launches the prettier formatter. We recommend configuring your IDE to run prettier on save.

```bash
npm run format
```
We use the pre-commit library to add pre-commit hooks, which automatically check the committed code changes for any coding standard violations.
Use the following commands to ensure your build passes checks for coding standards and coverage:

- `npm run test:unit`: Launches the test runner for the unit tests in interactive watch mode. If you prefer to see the tests executing in Cypress, you can run `npm run cy:open-unit`. You can check the coverage with `npm run test:coverage`.
- `npm run test:e2e`: Launches the Cypress test runner for the end-to-end tests. If you prefer to see the tests executing in Cypress, you can run `npm run cy:open-e2e`.

```bash
# root project directory
# Launches the Cypress test runner for the end-to-end [or unit] tests:
npm run test:e2e [test:unit]

# If you prefer to see the tests executing in Cypress you can run:
npm run cy:open-e2e [cy:open-unit]

# See current code coverage
npm run test:coverage
```
**Running specific tests**

The project includes @cypress/grep for running specific tests.

Some examples used by the grep library are below for reference:

```bash
# root project directory
# run only the tests with "auth user" in the title
$ npx cypress run --env grep="auth user"

# run tests with "hello" or "auth user" in their titles by separating them with ";" character
$ npx cypress run --env grep="hello; auth user"
```

To target specific tests, add the `--spec` argument:

```bash
# root project directory
$ npx cypress run --env grep="loads the restricted files when the user is logged in as owner" \
  --spec tests/e2e-integration/e2e/sections/dataset/Dataset.spec.tsx
```

**Repeat and burn tests**

You can run the selected tests multiple times by using the `burn=N` parameter. For example, run all the tests in the spec five times using:

```bash
# root project directory
$ npx cypress run --env burn=5 --spec tests/e2e-integration/e2e/sections/dataset/Dataset.spec.tsx
```
Testing is a crucial part of the SPA development. Our React application employs three main types of tests: Unit tests, Integration tests, and End-to-End (e2e) tests. In this section, we'll describe each type of test and how to implement them.
Unit tests are designed to test individual React components in isolation. In our approach, we focus on testing components from the user's perspective, following the principles of the React Testing Library. This means:
- Test What Users See: Focus on the output of the components, such as text, interactions, and the DOM, rather than internal implementation details like classes or functions.
- Exceptions to the Rule: In complex scenarios or where performance is a critical factor, we might test implementation details to ensure a repository is correctly called, for example. However, these cases are exceptions, and the primary goal remains on user-centric testing.
- Avoid Testing Implementation Details: Avoid testing implementation details like internal state or methods, as these tests are more brittle and less valuable than user-centric tests.
- Mocking: We use mocks to isolate components from their dependencies, ensuring that tests are focused on the component itself and not on its dependencies.
- Tools and Frameworks: We use Cypress Component testing alongside React Testing Library to render components in isolation and test their behavior.
**Example of a Unit Test**

```tsx
// tests/component/sections/home/Home.spec.tsx
import { Home } from '../../../../src/sections/home/Home'
import { DatasetRepository } from '../../../../src/dataset/domain/repositories/DatasetRepository'
import { DatasetItemTypePreviewMother } from '../../dataset/domain/models/DatasetItemTypePreviewMother'

const datasetRepository: DatasetRepository = {} as DatasetRepository
const totalDatasetsCount = 10
const datasets = DatasetItemTypePreviewMother.createMany(totalDatasetsCount)

describe('Home page', () => {
  beforeEach(() => {
    datasetRepository.getAll = cy.stub().resolves(datasets)
    datasetRepository.getTotalDatasetsCount = cy.stub().resolves(totalDatasetsCount)
  })

  it('renders Root title', () => {
    cy.customMount(<Home datasetRepository={datasetRepository} />)

    cy.findByRole('heading').should('contain.text', 'Root')
  })
})
```
Integration tests cover the SPA's interaction with external systems, ensuring that the application communicates correctly with outside resources and services.

- External Integrations: Test the integration of the SPA with external systems, such as APIs, third-party libraries (like js-dataverse), or databases.
- Tools and Frameworks: We use Cypress for integration tests, to test the repository implementations.
**Example of an Integration Test**

```ts
// tests/e2e-integration/integration/datasets/DatasetJSDataverseRepository.spec.ts
describe('Dataset JSDataverse Repository', () => {
  before(() => TestsUtils.setup())

  beforeEach(() => {
    TestsUtils.login()
  })

  it('gets the dataset by persistentId', async () => {
    const datasetResponse = await DatasetHelper.create()

    await datasetRepository.getByPersistentId(datasetResponse.persistentId).then((dataset) => {
      if (!dataset) {
        throw new Error('Dataset not found')
      }
      const datasetExpected = datasetData(dataset.persistentId, dataset.version.id)

      expect(dataset.license).to.deep.equal(datasetExpected.license)
      expect(dataset.metadataBlocks).to.deep.equal(datasetExpected.metadataBlocks)
      expect(dataset.summaryFields).to.deep.equal(datasetExpected.summaryFields)
      expect(dataset.version).to.deep.equal(datasetExpected.version)
      expect(dataset.metadataBlocks[0].fields.publicationDate).not.to.exist
      expect(dataset.metadataBlocks[0].fields.citationDate).not.to.exist
      expect(dataset.permissions).to.deep.equal(datasetExpected.permissions)
      expect(dataset.locks).to.deep.equal(datasetExpected.locks)
      expect(dataset.downloadUrls).to.deep.equal(datasetExpected.downloadUrls)
      expect(dataset.fileDownloadSizes).to.deep.equal(datasetExpected.fileDownloadSizes)
    })
  })
})
```
**Wait for no locks**

Some integration tests require waiting for no locks to be present in the dataset. This is done using the `waitForNoLocks` method from the `TestsUtils` class.
```ts
it('waits for no locks', async () => {
  const datasetResponse = await DatasetHelper.create()
  await DatasetHelper.publish(datasetResponse.persistentId)
  await TestsUtils.waitForNoLocks(datasetResponse.persistentId)

  await datasetRepository.getByPersistentId(datasetResponse.persistentId).then((dataset) => {
    if (!dataset) {
      throw new Error('Dataset not found')
    }
    expect(dataset.locks).to.deep.equal([])
  })
})
```
End-to-end tests simulate real user scenarios, covering the complete flow of the application:
- User Workflows: Focus on common user paths, like searching for a file, logging in, or creating a Dataset. Ensure these workflows work from start to finish.
- Avoid Redundancy: Do not replicate tests covered by unit tests. E2E tests should cover broader user experiences. It is important to note that e2e tests are slower and more brittle than unit tests, so we use them sparingly.
- Tools and Frameworks: We use Cypress for e2e tests, to test the application's behavior from the user's perspective. It allows us to simulate user interactions and test the application's behavior in a real-world scenario.
**Example of an E2E Test**

```tsx
// tests/e2e-integration/e2e/sections/create-dataset/CreateDatasetForm.spec.tsx
describe('Create Dataset', () => {
  before(() => {
    TestsUtils.setup()
  })

  beforeEach(() => {
    TestsUtils.login()
  })

  it('navigates to the new dataset after submitting a valid form', () => {
    cy.visit('/spa/datasets/root/create')

    cy.findByLabelText(/Title/i).type('Test Dataset Title')
    cy.findByLabelText(/Author Name/i).type('Test author name', { force: true })
    cy.findByLabelText(/Point of Contact E-mail/i).type('[email protected]')
    cy.findByLabelText(/Description Text/i).type('Test description text')
    cy.findByLabelText(/Arts and Humanities/i).check()

    cy.findByText(/Save Dataset/i).click()

    cy.findByRole('heading', { name: 'Test Dataset Title' }).should('exist')
    cy.findByText(DatasetLabelValue.DRAFT).should('exist')
    cy.findByText(DatasetLabelValue.UNPUBLISHED).should('exist')
  })
})
```
Note: Some end-to-end (e2e) tests are failing in local development environments despite passing in GitHub Actions. This discrepancy appears to be due to variations in machine resources.
We need to investigate and potentially optimize several aspects of our local setup. Check the issue here.
We have a `tests` folder in the root of the project, with subfolders for each type of test (component, integration, e2e). The folders for the integration and e2e tests live together in the `e2e-integration` folder. We follow the same structure that we use for the application.

We use the same naming conventions as the application, with the suffix `.spec`.
We use faker to create test data for unit tests. This library allows us to generate realistic and varied test data with minimal effort. We use it to create random strings, numbers, and other values, ensuring that tests cover a wide range of cases without introducing unpredictable failures.
We use the Object Mother pattern to create test data for unit tests. These classes are located in the `tests/component` folder.
The primary goal of the Object Mother pattern is to simplify the creation and management of test objects. It serves as a centralized place where test objects are defined, making the testing process more streamlined and less error-prone. In this pattern, we create a class or a set of functions dedicated solely to constructing and configuring objects needed for tests.
Some benefits of this pattern are:
- Consistency: It ensures consistent object setup across different tests, improving test reliability.
- Readability: It makes tests more readable and understandable by hiding complex object creation details.
- Flexibility: It allows us to create test data with different values and configurations.
- Reusability: It allows for the reuse of object configurations, reducing code duplication and saving time.
- Maintainability: It centralizes object creation, making tests easier to maintain and update when object structures change.
- Simplicity: It simplifies test setup, allowing testers to focus more on the test logic than on object configuration.
- Controlled Randomness: It allows creating realistic and varied test scenarios while maintaining control over the randomness. This ensures tests cover a wide range of cases without introducing unpredictable failures.
**Example of an Object Mother class**

```ts
// tests/component/dataset/domain/models/DatasetMother.ts
// (imports added for completeness; the exact import paths may vary)
import { faker } from '@faker-js/faker'

import { Dataset } from '../../../../../src/dataset/domain/models/Dataset'

export class DatasetMother {
  static create(props?: Partial<Dataset>): Dataset {
    return {
      id: faker.datatype.uuid(),
      title: faker.lorem.words(3),
      ...props
    }
  }
}
```
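In a test, the Object Mother then keeps setup to a single call, and the optional `props` argument overrides only the values a given test cares about. A hypothetical usage:

```ts
// Hypothetical usage in a unit test.
const randomDataset = DatasetMother.create() // realistic, fully randomized Dataset
const titledDataset = DatasetMother.create({ title: 'Fixed title under test' }) // pin only the title
```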
We use some helper classes to create test data for the integration and e2e tests. These classes are located in the `tests/e2e-integration/shared` folder.
The test data is created using axios, which allows us to make requests to the Dataverse API and create test data for the integration and e2e tests.
**Example of a Helper class to create test data**

```ts
// tests/e2e-integration/shared/datasets/DatasetHelper.ts
export class DatasetHelper extends DataverseApiHelper {
  static async create(): Promise<DatasetResponse> {
    return this.request<DatasetResponse>(`/dataverses/root/datasets`, 'POST', newDatasetData)
  }
}
```
We organize tests into suites, grouping them by feature or user workflow. This makes it easier to find and run tests.
We aim to keep tests as isolated as possible, avoiding dependencies between tests and ensuring that each test can run independently.
We run tests on every pull request and merge to ensure that the application is always stable and functional. You can find the CI workflow in the `.github/workflows/test.yml` file.
CI checks include:
- Unit Tests: We run all unit tests to ensure that the application's components work as expected.
- Integration Tests: We run integration tests to ensure that the application communicates correctly with external systems.
- E2E Tests: We run e2e tests to ensure that the application's behavior is correct from the user's perspective.
- Accessibility Tests: We run checks to ensure that the application is accessible and that it meets the highest standards for accessibility compliance.
- Code Coverage: We check the test coverage to ensure that the application is well-tested and that new code is covered by tests.
We aim for high test coverage, especially in critical areas of the application, like user workflows or complex components. However, we prioritize user-centric testing over coverage numbers.
- Coverage Threshold: We aim for a test coverage of 95% for the unit tests. This threshold is set in the `.nycrc.json` file.
- Coverage Reports: We use nyc to generate coverage reports, which are available in the `coverage` folder after running the tests. These reports are also published to Coveralls with every pull request and merge. The coverage badge is displayed at the top of the README.
- Tests included in the coverage: We include all unit tests in the coverage report.
- Pre-commit hook: We use pre-commit to run the unit tests before every commit, ensuring that no code is committed without passing the tests. It also runs the coverage checks to ensure that the coverage threshold is met.
To generate the code coverage, you first need to run the tests with the `test:unit` script. After running the tests, you can check the coverage with the `test:coverage` script.

If you want to see the coverage report in the browser, you can open the `coverage/lcov-report/index.html` file in the browser.
```bash
# root project directory
# Run the unit tests and generate the coverage report
npm run test:unit

# Check the coverage
npm run test:coverage
```
Once the site is built through the `npm run build` command, it can be deployed in different ways to different types of infrastructure, depending on the needs of the installation.
We are working to provide different preconfigured automated deployment options, seeking to support common use cases today for installing applications of this nature.
The current automated deployment options are available within the GitHub `deploy` workflow, which is designed to be run manually from GitHub Actions. Both the deployment option and the target environment are selected via dropdown menus.
**Examples for AWS and Payara**
This option will build and deploy the application to a remote S3 bucket.
For this workflow to work, a GitHub environment must be configured with the following environment secrets:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_S3_BUCKET_NAME`
- `AWS_DEFAULT_REGION`
Note that for the deployment to the S3 bucket to succeed, you must make the following changes to the bucket via the S3 web interface (or equivalent changes using aws-cli or similar tools):
- Under `Permissions` ⏵ `Permissions Overview` ⏵ `Block public access (bucket settings)` ⏵ click `Edit`, then uncheck `Block all public access` and save.
- Under `Properties` ⏵ `Static website hosting` ⏵ click `Edit` and enable it. Change `Index document` and `Error document` to `index.html`.
- Under `Bucket Policy`, click `Edit`, paste the following policy (changing `<BUCKET_NAME>` to your bucket name), and save.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Principal": "*",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::<BUCKET_NAME>/*"]
    }
  ]
}
```
You should see the deployed app at `http://[BUCKET-NAME].s3-website-[REGION].amazonaws.com`, for example: http://my-dataverse-bucket.s3-website-us-east-1.amazonaws.com/
This option will build and deploy the application to a remote Payara server.
This option is intended for an "all-in-one" solution, where the Dataverse backend application and the frontend application run on the same Payara server.
For this workflow to work, a GitHub environment must be configured with the following environment secrets:
- `PAYARA_INSTANCE_HOST`
- `PAYARA_INSTANCE_USERNAME`
- `PAYARA_INSTANCE_SSH_PRIVATE_KEY`
It is important that the remote instance is correctly pre-configured, with the Payara server running, and a service account for Dataverse with the corresponding SSH key pair established.
A base path for the frontend application can be established on the remote server by setting the corresponding field in the workflow inputs. This mechanism prevents conflicts between the frontend application and any pre-existing deployed application running on Payara, which can potentially be a Dataverse backend. This way, only the routes with the base path included will redirect to the frontend application.
The Design System is published to the npm Package Registry. To publish a new version, follow these steps:
- **Update the version**

  Update the version by running the lerna command:

  ```bash
  lerna version --no-push
  ```

  This command will ask you for the new version, update the `package.json` files, and create a new commit with the changes.

- **Review the auto-generated CHANGELOG.md**

  The lerna command will generate a new `CHANGELOG.md` file with the changes for the new version. Review the changes and make sure that the file is correct. If it looks good, you can push the changes to the repository:

  ```bash
  git push && git push --tags
  ```

  Optional: If you need to make any changes to the `CHANGELOG.md` file, you can do it manually. After manually updating the `CHANGELOG.md` file, you can commit the changes:

  ```bash
  git add .
  git commit --amend --no-edit
  git push --force && git push --tags --force
  ```

  These commands will amend the lerna commit and push the changes to the repository.

- **Review the new tag in GitHub**

  After pushing the changes, you can review the new tag in the GitHub repository. The tag should be created with the new version.

- **Publish the package**

  After the version is updated, you can publish the package by running the lerna command:

  ```bash
  lerna publish from-package
  ```

  This command will publish the package to the npm registry. Remember that you need a valid npm token to publish the packages. Get a new token from the npm website and update the `.npmrc` file with the new token: open the `.npmrc` file and replace `YOUR_NPM_TOKEN` with your actual npm token.

  ```
  legacy-peer-deps=true
  //npm.pkg.github.com/:_authToken=YOUR_NPM_TOKEN
  @iqss:registry=https://npm.pkg.github.com/
  ```

- **Review the new version in the npm registry**

  After publishing the packages, you can review the new version in the npm registry. The new version should be available there.