This project demonstrates human-in-the-loop ML model monitoring with Toloka.
- Download the model checkpoint:
curl -o models/model.ckpt 'https://tlk.s3.yandex.net/research/toloka_monitoring/model.ckpt'
- Install the package:
pip install -e .
- Create config file from template:
cp toloka_monitoring/_template_config.py toloka_monitoring/config.py
- Put your Toloka requester API token into toloka_monitoring/config.py:
TOLOKA_API_TOKEN=<your token>
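After both config values are filled in (the project ID is printed by the setup script in the next step), config.py might look roughly like this; the exact field names are an assumption, so check _template_config.py for the real ones:

```python
# toloka_monitoring/config.py -- filled-in sketch.
# Field names are assumptions; consult _template_config.py for the actual template.
TOLOKA_API_TOKEN = "<your requester token>"  # from your Toloka requester account
TOLOKA_PROJECT_ID = "<project id>"           # printed by setup_toloka_project.py
```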
- Run the script to set up the project in Toloka:
python toloka_monitoring/setup_toloka_project.py
The script will print a TOLOKA_PROJECT_ID; put it into toloka_monitoring/config.py.
- Start the API and make predictions:
python toloka_monitoring
Make predictions via the interactive API docs at http://localhost:8000/
Alternatively, from the console:
curl -X 'POST' \
'http://localhost:8000/model/' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"image_url": "<image url>"
}'
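The same request can also be sent from Python. A minimal sketch using only the standard library; the endpoint, port, and request body are taken from the curl example above, while the shape of the response is an assumption:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/model/"  # address of the locally running API


def build_payload(image_url: str) -> bytes:
    """Encode the JSON body expected by the /model/ endpoint."""
    return json.dumps({"image_url": image_url}).encode("utf-8")


def predict(image_url: str) -> dict:
    """POST an image URL to the running API and return the parsed JSON response."""
    request = urllib.request.Request(
        API_URL,
        data=build_payload(image_url),
        headers={"accept": "application/json", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

With the API running, predict("<image url>") mirrors the curl call above.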
- With the API running, make example predictions and compute metrics for the demo:
python make_example_predictions.py
- Check the metric charts at http://localhost:8000/monitoring
- (Optional) Train a better model by running notebooks/train_models.ipynb. You will need a GPU.