This repository contains code for unit testing a sentiment analysis model as part of a Continuous Integration and Continuous Deployment (CI/CD) pipeline. The unit tests ensure that the sentiment analysis model is functioning correctly and producing accurate results. The tests cover various scenarios, including single predictions, batch predictions, and explainable predictions.
These instructions will guide you through setting up the unit testing environment and running the tests.
Before you begin, ensure you have the following installed:
- Python (>= 3.11)
- asyncio (included in the Python standard library)
- Clone the repository to your local machine:

  git clone <repository_url>
  cd <repository_directory>
- Install the required dependencies:

  pip install -r requirements.txt
To run the unit tests, execute the following command in your terminal:
python test_sentiment_analysis.py
The unit tests are performed by the testing function defined in the test_sentiment_analysis.py file. This function runs the sentiment analysis model against a fixed set of inputs and checks the accuracy of its predictions. The tests cover the following scenarios:
- Single Prediction (Negative):
  - Test sentiment prediction for the input text "I hate this movie".
  - Verify that the predicted sentiment is "Negative" and the probability is above 0.7.
- Single Prediction (Positive):
  - Test sentiment prediction for the input text "I like this movie".
  - Verify that the predicted sentiment is "Positive" and the probability is above 0.7.
- Batch Prediction:
  - Test sentiment predictions for a batch of texts containing both negative and positive sentiments.
  - Verify that each prediction matches the expected sentiment for its input text and that every probability is above 0.7.
- Explainable Prediction (Positive):
  - Test explainable sentiment prediction for the input text "I like this movie".
  - Verify that the predicted sentiment is "Positive" and the probability is above 0.7.
- Explainable Prediction (Negative):
  - Test explainable sentiment prediction for the input text "I hate this movie".
  - Verify that the predicted sentiment is "Negative" and the probability is above 0.7.
If any test fails, an exception is raised and the test run exits with a failure.
The unit tests in this repository are designed to be integrated into a CI/CD pipeline. The pipeline can be configured to automatically execute the tests whenever changes are made to the sentiment analysis model or its dependencies. This helps ensure that any changes to the model or codebase do not introduce regressions or break existing functionality.
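As one possible configuration, a minimal GitHub Actions workflow could run the suite on every push; the workflow name, trigger paths, and pinned Python version below are assumptions, not part of this repository:

```yaml
# .github/workflows/tests.yml -- hypothetical workflow; adjust names and paths
name: sentiment-model-tests
on:
  push:
    paths:
      - "**.py"            # re-run when model or test code changes
      - requirements.txt    # re-run when dependencies change
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python test_sentiment_analysis.py
```

Because the test script raises an exception on failure, the job fails automatically whenever a regression is introduced.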