Automated System Tests
vAirify has a suite of automated system tests that can be run locally against the main branch of the code. There is one suite for the backend API and scripts and another for the UI dashboard.
These tests provide a quick and easy way to ensure that the various processes within vAirify are working as expected. For example, if changes have been made to the UI, run the Playwright tests to confirm that nothing has regressed. This allows early detection of anything that may have broken before the code is deployed. The tests have been written to cover specific risks, or to verify that the acceptance criteria of the features we have developed are met.
There are two suites; they can be found in the following locations:
- The suite for the dashboard can be found in the `system_tests` directory in the `air-quality-ui` directory.
- The suite for the backend elements of vAirify can be found in the `system_tests` directory in `air-quality-backend`.
These test suites should be run when you have finished changing parts of vAirify and before the code is deployed.
To run the suites, follow the steps below:
The UI dashboard tests use the Playwright framework. This allows you to automate the UI and assert that values and components are displayed correctly. Playwright will need to be installed locally; follow the directions in its documentation.
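As a rough sketch (assuming the Playwright dependencies are already declared in the `air-quality-ui` project), a local install might look like this:

```
# from the air-quality-ui directory
npm install             # install project dependencies, including @playwright/test
npx playwright install  # download the browser binaries Playwright drives
```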
To run all tests in a headless manner, run the following CLI command from the `air-quality-ui` directory:
npx playwright test
When the tests have completed, a webpage will load showing the results of each test.
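If the report does not open automatically, or you want to revisit it later, Playwright can reopen the most recent HTML report (this assumes the default HTML reporter is in use):

```
npx playwright show-report
```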
If you want to run individual tests from a UI, add the `--ui` flag and a UI will load that allows you to select individual tests to run. This has many features to visualise the test execution and explore network calls or errors.
npx playwright test --ui
At times you may want to debug tests so you can visually see the test flow and where they are failing. The `--debug` flag will open a browser and the Playwright Inspector. This lets you "step over" each test step sequentially, allowing you to see the exact moment of failure.
npx playwright test --debug
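Debugging the whole suite at once is rarely useful; as an illustration (the test title filter here is hypothetical), you can combine `--debug` with Playwright's `--grep` option to debug a single test:

```
npx playwright test --debug --grep "summary table"
```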
To run the backend system tests, ensure that the conda environment (the same one required to run the ETL scripts) is activated.
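For example (the environment name below is illustrative; use whatever name the backend README had you create):

```
conda activate vairify-backend
```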
A local database for the tests to execute against is recommended, as several tests seed data to the database. To create one, set up a local MongoDB instance and follow the instructions in deployment\database\liquibase\liquibase.md.
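One way to stand up a local MongoDB (an assumption on our part, not a project requirement) is to run it in Docker on the default port:

```
docker run -d --name vairify-test-mongo -p 27017:27017 mongo
```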
Once established, navigate to the `.env` file of `air-quality-backend`. This should already be set up following the instructions in the backend README, but it needs modifying:
- Verify `MONGO_DB_URI=mongodb://localhost:27017` is set.
- Set `MONGO_DB_NAME` to the name of the database the tests will run against.
- Set `FORECAST_RETRIEVAL_PERIOD=0` - this stops the tests requesting 7 days of data to fill in missing gaps in the database, which makes them faster.
- Set `IN_SITU_RETRIEVAL_PERIOD=0` - as above, for the in situ data.
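Putting these together, the relevant part of a local `.env` might look like this (the database name is illustrative):

```
MONGO_DB_URI=mongodb://localhost:27017
MONGO_DB_NAME=vairify_system_tests
FORECAST_RETRIEVAL_PERIOD=0
IN_SITU_RETRIEVAL_PERIOD=0
```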
Then, from the root directory for the backend (`air-quality-backend`), run the following command in your terminal:
python -m pytest .\system_tests\
The results should be printed in the terminal, showing which tests passed or failed.
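To run only a subset of the suite, pytest's `-k` expression filter can be used (the keyword here is hypothetical):

```
python -m pytest .\system_tests\ -k "forecast"
```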
Alternatively, from within PyCharm you can right-click on the `system_tests` folder and click 'Run'. You will need to ensure you are using the correct Python interpreter and that the working directory is set to `air-quality-backend`. To do this:
- Navigate to `system_tests` in the backend in PyCharm.
- Right-click on the folder and select 'Modify Run Configuration'.
- Ensure the working directory is set to `\air-quality-backend`.