tests completed
Proteusiq committed Dec 22, 2020
1 parent 057db9d commit deacca0
Showing 10 changed files with 71 additions and 95 deletions.
2 changes: 1 addition & 1 deletion .env.example
@@ -1,6 +1,6 @@
# FastAPI
IS_DEBUG=False
API_KEY=exampe_key
API_KEY=example_key
DEFAULT_MODEL_PATH=./ml_model/models
# Hugging FaceModel
QUESTION_ANSWER_MODEL=deepset/roberta-base-squad2
4 changes: 2 additions & 2 deletions Dockerfile
@@ -2,7 +2,7 @@ FROM python:3.8.2

ENV PYTHONUNBUFFERED 1

EXPOSE 80
EXPOSE 8000
WORKDIR /app

COPY requirements.txt ./
@@ -12,4 +12,4 @@ RUN pip install --upgrade pip && \
COPY . ./

ENV PYTHONPATH huggingfastapi
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
61 changes: 33 additions & 28 deletions README.md
@@ -9,9 +9,25 @@ Project structure for development and production.
Installation and setup instructions to
run the development mode model and serve a local RESTful API endpoint.

## Project structure

Files related to the application are in the `huggingfastapi` or `tests` directories.
The application parts are:

huggingfastapi
├── api - Main API.
│   └── routes - Web routes.
├── core - Application configuration, startup events, logging.
├── models - Pydantic models for the API.
├── services - NLP logic.
└── main.py - FastAPI application creation and configuration.
tests - Code without tests is an illusion


## Requirements

Python 3.6+
Python 3.7+

## Installation
Install the required packages in your local environment (ideally virtualenv, conda, etc.).
@@ -25,36 +41,37 @@ source venv/bin/activate
make install
```

## Running Localhost

`make run`

## Deploy app
#### Running Localhost

`make deploy`

## Running Tests
```sh
make run
```

`make test`
#### Deploy app

## Running Easter Egg
```sh
make deploy
```

`make easter`
#### Running Tests

```sh
make test
```

## Setup
1. Duplicate the `.env.example` file and rename it to `.env`


2. In the `.env` file, configure the `API_KEY` entry. The key is used for authenticating our API. <br>
Execute the script to generate `.env`, then replace `example_key` with the generated UUID:

```bash
make generate_dot_env
python -c "import uuid;print(str(uuid.uuid4()))"
```
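The two steps above can also be collapsed into one small Python sketch. This is an illustration only, not part of the project's Makefile: the key name and the other entries follow `.env.example`, while writing the file directly (instead of via `make generate_dot_env`) is an assumption.

```python
import uuid
from pathlib import Path


def write_env(path: str = ".env") -> str:
    """Generate a UUID API key and write a minimal .env file.

    The entries mirror .env.example; the exact layout is an assumption.
    Returns the generated key so it can be shown to the user.
    """
    api_key = str(uuid.uuid4())
    lines = [
        "# FastAPI",
        "IS_DEBUG=False",
        f"API_KEY={api_key}",
        "DEFAULT_MODEL_PATH=./ml_model/models",
        "# Hugging Face Model",
        "QUESTION_ANSWER_MODEL=deepset/roberta-base-squad2",
    ]
    Path(path).write_text("\n".join(lines) + "\n")
    return api_key
```

Running `write_env()` once produces a ready-to-use `.env` with a fresh UUID key.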


## Run It

1. Start your app with:
@@ -81,19 +98,7 @@ tox

This runs the tests with coverage for Python 3.8, along with Flake8 and Bandit checks.

## Project structure

Files related to application are in the `huggingfastapi` or `tests` directories.
Application parts are:

huggingfastapi
├── huggingfastapi - web related stuff.
│   └── routes - web routes.
├── core - application configuration, startup events, logging.
├── models - pydantic models for api.
├── services - logic that is not just crud related.
└── main.py - FastAPI application creation and configuration.
tests - pytest

# TODO
- [ ] Change make to invoke
- [ ] Add endpoint for uploading a text file and questions
47 changes: 0 additions & 47 deletions huggingfastapi/services/ml_downloader.py

This file was deleted.

2 changes: 1 addition & 1 deletion huggingfastapi/services/nlp.py
@@ -41,7 +41,7 @@ def _pre_process(self, payload: QAPredictionPayload) -> List:
def _post_process(self, prediction: Dict) -> QAPredictionResult:
logger.debug("Post-processing prediction.")

qa = QuestionPredictionResult(**prediction)
qa = QAPredictionResult(**prediction)

return qa

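The corrected `_post_process` above is one step in a pre-process → predict → post-process pattern. Here is a framework-free sketch of that pattern: the class and field names follow the diff, while the injected `predictor` callable is an assumption standing in for a Hugging Face `transformers` question-answering pipeline.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class QAPredictionPayload:
    context: str
    question: str


@dataclass
class QAPredictionResult:
    answer: str
    score: float


class QAModel:
    """Sketch of the pre-process -> predict -> post-process pipeline."""

    def __init__(self, predictor: Callable[[Dict], Dict]):
        # In huggingfastapi this would wrap a transformers QA pipeline;
        # here any callable returning {"answer": ..., "score": ...} works.
        self._predictor = predictor

    def _pre_process(self, payload: QAPredictionPayload) -> Dict:
        return {"context": payload.context, "question": payload.question}

    def _post_process(self, prediction: Dict) -> QAPredictionResult:
        return QAPredictionResult(
            answer=prediction["answer"], score=prediction["score"]
        )

    def predict(self, payload: QAPredictionPayload) -> QAPredictionResult:
        return self._post_process(self._predictor(self._pre_process(payload)))
```

Keeping the pipeline behind a callable makes the model trivially replaceable with a stub in tests, which is what the test-suite changes in this commit rely on.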
14 changes: 12 additions & 2 deletions ml_model/model_description.md
@@ -1,5 +1,15 @@
## Model Description
## NLP Model Description

### Generate Questions or Answers given Text
Model: https://huggingface.co/deepset/roberta-base-squad2

## Authors
Branden Chan: branden.chan [at] deepset.ai
Timo Möller: timo.moeller [at] deepset.ai
Malte Pietsch: malte.pietsch [at] deepset.ai
Tanay Soni: tanay.soni [at] deepset.ai

## Settings
Language model: roberta-base
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0


3 changes: 1 addition & 2 deletions tests/conftest.py
@@ -3,8 +3,7 @@
from starlette.testclient import TestClient


environ["API_KEY"] = "72c8e5a5-bb07-4c35-8115-f0c4c60eb790"
environ["DEFAULT_MODEL_PATH"] = "./ml_model/model.pkl"
environ["API_KEY"] = "example_key"


from huggingfastapi.main import get_app # noqa: E402
6 changes: 4 additions & 2 deletions tests/test_api/test_api_auth.py
@@ -2,14 +2,16 @@


def test_auth_using_prediction_api_no_apikey_header(test_client) -> None:
response = test_client.post("/api/model/predict")
response = test_client.post("/api/v1/question")
assert response.status_code == 400
assert response.json() == {"detail": messages.NO_API_KEY}


def test_auth_using_prediction_api_wrong_apikey_header(test_client) -> None:
response = test_client.post(
"/api/model/predict", json={"image": "test"}, headers={"token": "WRONG_TOKEN"}
"/api/v1/question",
json={"context": "test", "question": "test"},
headers={"token": "WRONG_TOKEN"},
)
assert response.status_code == 401
assert response.json() == {"detail": messages.AUTH_REQ}
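The two auth tests above pin down the observable API-key behavior: a missing `token` header yields 400 and a wrong one yields 401. A framework-free sketch of that check follows; the message texts are assumptions (the real ones live in huggingfastapi's `messages` module, and the actual check is enforced through a FastAPI dependency rather than a plain function).

```python
from typing import Dict, Optional, Tuple

# Assumed message texts; huggingfastapi keeps its own in a messages module.
NO_API_KEY = "No API key provided."
AUTH_REQ = "Authentication required."


def check_api_key(
    headers: Dict[str, str], expected_key: str
) -> Tuple[int, Optional[Dict]]:
    """Return (status_code, error_body); (200, None) means authorized."""
    token = headers.get("token")
    if token is None:
        # No key at all -> 400, matching the first test above.
        return 400, {"detail": NO_API_KEY}
    if token != expected_key:
        # Wrong key -> 401, matching the second test above.
        return 401, {"detail": AUTH_REQ}
    return 200, None
```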
18 changes: 12 additions & 6 deletions tests/test_api/test_prediction.py
@@ -3,16 +3,22 @@

def test_prediction(test_client) -> None:
response = test_client.post(
"/api/model/predict",
json={"text": "hello world!", "model_version": "0.1.0"},
headers={"token": str(config.API_KEY)},
"/api/v1/question",
json={"context": "two plus two equal four", "question": "What is four?"},
headers={"token": "example_key"},
)
assert response.status_code == 200
assert "model_version" in response.json()
assert "score" in response.json()


def test_prediction_nopayload(test_client) -> None:
response = test_client.post(
"/api/model/predict", json={}, headers={"token": str(config.API_KEY)}
"/api/v1/question", json={}, headers={"token": "example_key"}
)
assert response.status_code == 422
# If no payload is sent, the default Hitchhiker's Guide to the Galaxy example is used:
#   context: "42 is the answer to life, the universe and everything."
#   question: "What is the answer to life?"
# If there were no default, the expected status would be 422.

data = response.json()
assert data["answer"] == "42"
9 changes: 5 additions & 4 deletions tests/test_service/test_models.py
@@ -3,18 +3,19 @@
from huggingfastapi.core import config
from huggingfastapi.models.payload import QAPredictionPayload
from huggingfastapi.models.prediction import QAPredictionResult
from huggingfastapi.services.models import QAModel
from huggingfastapi.services.nlp import QAModel


def test_prediction(test_client) -> None:
model_path = config.DEFAULT_MODEL_PATH
tpp = QAPredictionPayload.parse_obj(
{"text": "hello world!", "model_version": "0.1.0"}
qa = QAPredictionPayload.parse_obj(
{"context": "two plus two equal four", "question": "What is four?"}
)

tm = QAModel(model_path)
result = tm.predict(tpp)
result = tm.predict(qa)
assert isinstance(result, QAPredictionResult)
assert result.answer == "two plus two"


def test_prediction_no_payload(test_client) -> None:
