
Testing


Overall View

As a team, we have decided to test our system through both unit testing and acceptance/black-box testing, since each avenue verifies a different aspect of the code. The unit tests confirm that the backend code functions properly; our system does not have frontend unit tests due to time limitations. Over the semester, our team has worked with black-box testing and Selenium, and the black-box tests outlined below confirm that the requirements are satisfied from a user's perspective. We consider this combination acceptable for our system: we have had a high level of success with unit-testing frameworks, our team has strong experience with acceptance testing from previous classes, and acceptance testing is a practical way to exercise user interaction with the system. The system can be run locally to produce a coverage report, and the test data used by our tests is outlined in the Acceptance Testing section of this document.

Unit Testing

Note: Within our codebase, the testing suites are set up to run once the required testing dependencies are installed (via 'pip install -r requirements.txt'). Our team settled on a few common Python tools for testing the backend code we consider testable: PyTest for the unit tests themselves and pytest-cov for producing coverage reports. With pytest-cov, running 'pytest --cov' prints the usual test results on the command line followed by a statement-coverage report.

For the testable parts of the system, we aim for an overall statement coverage of at least 80%. We feel this threshold is both adequate and feasible, and meeting it gives us a high level of confidence that the system satisfies its requirements. Coverage above the threshold is of particular interest for our database code, since storing and referencing stored data correctly is crucial to the functionality of our project. We also feel that statement coverage represents our codebase more accurately than alternatives such as method coverage: statement coverage lets us analyze the actions taking place inside methods, such as API calls, whereas method coverage can pass after a single interaction with a method regardless of the outcome.

While unit testing fits the backend of our software, we acknowledge that measuring statement coverage on frontend code is less practical; doing so for HTML and JavaScript is much less straightforward, so we plan to cover those parts of the codebase with acceptance tests run both manually and through automation tools such as Selenium.
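As a minimal sketch of the style of test this setup enables (the 'build_query' helper below is a hypothetical stand-in defined inline so the example runs on its own, not a function from our codebase):

```python
# test_build_query.py -- illustrative only; build_query is a hypothetical helper
# defined inline here so the example is self-contained
import pytest


def build_query(author, limit):
    # Stand-in for a backend helper; the real code would construct a database query
    if limit < 0:
        raise ValueError("limit must be non-negative")
    return {"author": author, "limit": limit}


def test_build_query_includes_requested_fields():
    # Plain assert statements are all PyTest needs for a pass/fail check
    result = build_query(author="Ore", limit=5)
    assert result == {"author": "Ore", "limit": 5}


def test_build_query_rejects_negative_limit():
    # pytest.raises verifies that invalid input is reported as an error
    with pytest.raises(ValueError):
        build_query(author="Ore", limit=-1)
```

Running 'pytest --cov' from the project root prints these pass/fail results followed by the per-file statement-coverage table; pytest-cov also offers a '--cov-fail-under' option that fails the run outright when overall coverage drops below a chosen percentage, which is one way to hold the suite to the 80% target described above.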

The current report shows a high level of coverage for most of the backend, with a few components near the threshold and a minimum per-file coverage of 71%. These numbers suggest that a small number of additional tests, targeting the files below our threshold, would let us be more confident that the code behaves as expected. That said, with the majority of files very close to 100% statement coverage, we can already be fairly confident in our code's operation. Further testing may be both difficult and of limited value, since it would mostly target edge cases that we do not anticipate our sponsor encountering.

Acceptance Testing

Our team plans to develop an acceptance test plan that we can run manually as a team and later automate with tools such as Selenium, so that others who want to validate the system's behavior, even with little knowledge of it, can do so faster than by working through the tests by hand. A sketch of one such automated check follows the prerequisites below.

Prerequisites for all tests:

  • Ensure all Docker components are running as expected (in the main project directory, run 'docker-compose -f docker-compose.testing.yml up'). This will require that the variable "DISABLE_AUTH" is added to the .env file and that its value is set to "true".
  • Navigate to 'https://localhost'
  • This will take the user to the landing page of the system, which is the 'Query' page
  • Files outlined below are located on the system in the '/frontend/test/files' directory.
  • Ore.txt
    • The file contains the titles below, each on its own line:

      An Empirical Study on Type Annotations: Accuracy, Speed, and Suggestion Effectiveness
      Automated Object Manipulation Using Vision-Based Mobile Robotic System for Construction Applications
      Sensing Water Properties at Precise Depths from the Air
      Obtaining the Thermal Structure of Lakes from the Air
      Bringing Unmanned Aerial Systems Closer to the Environment
      Autonomous Aerial Water Sampling
      Phys: Probabilistic Physical Unit Assignment and Inconsistency Detection
      Assessing the Type Annotation Burden
      Dimensional Inconsistencies in Code and ROS Messages: A Study of 5.9M Lines of Code
      Phriky-Units: a Lightweight, Annotation-Free Physical Unit Inconsistency Detection Tool
      Lightweight Detection of Physical Unit Inconsistencies without Program Annotations
      Surface Classification for Sensor Deployment from UAV Landings
      On Air-to-Water Radio Communication between UAVs and Water Sensor Networks
      Controlled Sensor Network Installation with Unmanned Aerial Vehicles

  • BBTP DOCUMENT OMITTED *
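As a hedged sketch of how these prerequisites translate into an automated acceptance check (the assertion on the word "Query" is an assumption about the landing page's content, not a selector taken from our frontend), a Selenium test might look like:

```python
# test_landing_page.py -- illustrative sketch; the page check is an assumption
from selenium import webdriver


def test_query_page_loads():
    options = webdriver.ChromeOptions()
    options.add_argument("--ignore-certificate-errors")  # local https uses a self-signed cert
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    try:
        # Mirrors the manual prerequisite: navigate to the landing page
        driver.get("https://localhost")
        # Assumed check: the landing page exposes some marker of the 'Query' view
        assert "Query" in driver.page_source
    finally:
        driver.quit()
```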

Discussion

Looking at all of the testing performed on the system, it is reasonable to conclude that its components behave as expected: every test passes, whether automated or run manually, for both frontend and backend components. Having also met our coverage threshold, we can be confident that no area of the code is currently going untested. That said, due to time constraints, not all functionality is exercised by the Selenium tests; some components (such as the visualization graph from ToastUI) are not exposed in a form that the WebDriver can easily inspect or manipulate. Another important limitation is that the database is not reset before each test: the population step only checks that a test user exists and does not verify the database contents, so some tests may not run on their own or may fail when run out of order. A resolution would be to delete the test user at the end of each file's suite of Selenium tests and recreate it at the start of the next file, so that one test cannot skew the results of another; a sketch of this idea appears below.
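One possible shape for that resolution, sketched with hypothetical helper names since the real user-management calls live in our backend, is a module-scoped PyTest fixture that recreates the test user before each Selenium test file and removes it again afterwards:

```python
# conftest.py -- illustrative sketch; create_test_user and delete_test_user are
# hypothetical helpers standing in for the project's real user-management calls
import pytest


def create_test_user():
    ...  # would insert the known test user and its data into the database


def delete_test_user():
    ...  # would remove the test user and any data it owns


@pytest.fixture(scope="module", autouse=True)
def fresh_test_user():
    # Start every Selenium test file from a known database state
    delete_test_user()   # clear anything left behind by a previous, possibly failed, run
    create_test_user()
    yield
    # Clean up so the next file's tests are not skewed by this file's data
    delete_test_user()
```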
