diff --git a/README.md b/README.md
index ad4ff97..5453e9c 100644
--- a/README.md
+++ b/README.md
@@ -20,3 +20,21 @@
 config = get_config()
 config = your_custom_config_class
 ```
+
+### Inferencing
+Follow these steps once you have converted the model to a `saved_model` using the code above.
+
+#### TensorFlow Model Server with gRPC
+
+1. First, serve your `saved_model.pb` with TensorFlow Model Server:
+   ```bash
+   tensorflow_model_server --port=8500 --model_name=mask --model_base_path=/path/to/saved_model/
+   ```
+2. In `inferencing/saved_model_config.py`, modify the variables and add your config class if needed. No changes are required if the `saved_model` is the default COCO model.
+3. Then run `inferencing/saved_model_inference.py` with the image path:
+   ```bash
+   # Make the repository root importable
+   export PYTHONPATH=$PYTHONPATH:$(pwd)
+   # Run inference
+   python3 inferencing/saved_model_inference.py -p test_image/monalisa.jpg
+   ```
\ No newline at end of file