From 29d12024cdaa3696997c8309d22f24c54cfc569c Mon Sep 17 00:00:00 2001
From: bendangnuksung
Date: Wed, 22 Jan 2020 10:45:06 +0530
Subject: [PATCH] update readme

---
 README.md | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index f5e7da5..2d7affb 100644
--- a/README.md
+++ b/README.md
@@ -24,20 +24,24 @@ config = your_custom_config_class
 ### Inferencing
 Follow once you finish converting it to a `saved_model` using the above code
 
-#### Tensorflow Model Server with GRPC
+#### Tensorflow Model Server with GRPC and RESTAPI
 1. First run your `saved_model.pb` in Tensorflow Model Server, using:
    ```bash
-   tensorflow_model_server --port=8500 --model_name=mask --model_base_path=/path/to/saved_model/
+   tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=mask --model_base_path=/path/to/saved_model/
    ```
 2. Modify the variables and add your Config Class if needed in `inferencing/saved_model_config.py`. No need to change if the saved_model is the default COCO model.
 3. Then run the `inferencing/saved_model_inference.py` with the image path:
    ```bash
    # Set Python Path
    export PYTHONPATH=$PYTHONPATH:$pwd
-   # Run Inference
-   python3 inferencing/saved_model_inference.py -p test_image/monalisa.jpg
-   ```
-
-   #### Please do send a PR if you know to inference using TF model server RESTAPI.
-
+
+   # Run Inference with GRPC
+   python3 inferencing/saved_model_inference.py -t grpc -p test_image/monalisa.jpg
+
+   # Run Inference with RESTAPI
+   python3 inferencing/saved_model_inference.py -t restapi -p test_image/monalisa.jpg
+   ```
+
+### Acknowledgement
+Thanks to [@rahulgullan](https://github.com/rahulgullan) for RESTAPI client code.
\ No newline at end of file
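The `-t restapi` path added by this patch talks to TF Serving's REST predict endpoint, which the `--rest_api_port=8501` flag exposes. As a rough sketch of what such a client sends, the snippet below builds the request URL and JSON body in the row format (`"instances"`) defined by the TF Serving REST API; the model name `mask` mirrors the `--model_name` flag above, the host/port are assumptions, and the tiny pixel array is a hypothetical stand-in for a real preprocessed image (the repo's `saved_model_inference.py` handles the actual preprocessing).

```python
import json

# Hypothetical stand-in for a real preprocessed image tensor, shape [1, 1, 3].
image = [[[0, 0, 0]]]

# TF Serving's REST predict endpoint is /v1/models/<model_name>:predict;
# "mask" matches the --model_name flag, 8501 the --rest_api_port flag.
url = "http://localhost:8501/v1/models/mask:predict"

# The REST API's row format wraps inputs in an "instances" list,
# one entry per example in the batch.
body = json.dumps({"instances": [image]})

# The actual call would then be something like:
#   response = requests.post(url, data=body)
#   predictions = response.json()["predictions"]
```

This only constructs the payload; the commented-out `requests.post` shows where the HTTP round trip would go once a model server is running.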