This document has instructions for running SSD-MobileNet Int8 inference using Intel-optimized TensorFlow.
The COCO validation dataset is used in these SSD-MobileNet quickstart scripts. The inference and accuracy quickstart scripts require the dataset to be converted into the TF records format. See the COCO dataset instructions for downloading and preprocessing the COCO validation dataset.
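Once the dataset has been converted, a quick sanity check can confirm the TF records file is readable before running the scripts. The snippet below is a minimal sketch, assuming TensorFlow is installed in your environment and that `DATASET_DIR` points at the converted `.record` file:

```
# Optional sanity check: count the records in the converted COCO validation TF records file.
# Assumes TensorFlow is installed and DATASET_DIR points at the .record file.
python -c "
import tensorflow as tf
path = '$DATASET_DIR'
count = sum(1 for _ in tf.data.TFRecordDataset(path))
print(path, 'contains', count, 'records')
"
```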
| Script name | Description |
|-------------|-------------|
| `int8_inference.sh` | Runs inference on TF records and outputs performance metrics. |
| `int8_accuracy.sh` | Runs inference and checks accuracy on the results. |
| `multi_instance_batch_inference.sh` | A multi-instance run that uses all the cores for each socket for each instance, with a batch size of 448 and synthetic data. |
| `multi_instance_online_inference.sh` | A multi-instance run that uses 4 cores per instance, with a batch size of 1 and synthetic data. |
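The multi-instance scripts split the available cores across several concurrent copies of the model. The loop below is not the quickstart script itself, only a minimal sketch of the core-pinning pattern for the online case (4 cores per instance, batch size 1), assuming `numactl` is installed and a hypothetical 28-core socket; the real scripts derive the core ranges from the machine topology:

```
# Illustrative only: this is NOT the quickstart script's exact logic.
# Launch one inference process per 4-core slice of a (hypothetical) 28-core socket.
CORES_PER_INSTANCE=4
TOTAL_CORES=28
for start in $(seq 0 ${CORES_PER_INSTANCE} $((TOTAL_CORES - CORES_PER_INSTANCE))); do
  end=$((start + CORES_PER_INSTANCE - 1))
  numactl --physcpubind=${start}-${end} --membind=0 \
    <your single-instance inference command with batch size 1> &
done
wait
```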
Set up your environment using the instructions below, depending on whether you are using AI Kit:

| Setup using AI Kit on Linux | Setup without AI Kit on Linux | Setup without AI Kit on Windows |
|---|---|---|
| To run using AI Kit on Linux you will need: | To run without AI Kit on Linux you will need: | To run without AI Kit on Windows you will need: |

For more information, see the documentation on prerequisites in the TensorFlow models repo.
Download the pretrained model and set the `PRETRAINED_MODEL` environment variable to the path of the frozen graph. If you run on Windows, please use a browser to download the pretrained model using the link below.

For Linux, run:

```
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/ssdmobilenet_int8_pretrained_model_combinedNMS_s8.pb
export PRETRAINED_MODEL=$(pwd)/ssdmobilenet_int8_pretrained_model_combinedNMS_s8.pb
```
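Before running the scripts, you can optionally verify that the frozen graph downloaded correctly and parses. This is only a sanity-check sketch, assuming TensorFlow is installed and `PRETRAINED_MODEL` has been exported as shown above:

```
# Parse the frozen graph and report how many nodes it contains.
python -c "
import tensorflow as tf
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('$PRETRAINED_MODEL', 'rb') as f:
    graph_def.ParseFromString(f.read())
print('Loaded', len(graph_def.node), 'nodes from', '$PRETRAINED_MODEL')
"
```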
After installing the prerequisites and downloading the pretrained model, set the environment variables: `PRETRAINED_MODEL` for the path to the frozen graph, `OUTPUT_DIR` for the directory where log files will be written, and `DATASET_DIR` for either the raw COCO dataset directory or the TF records file, depending on whether you run the inference or accuracy scripts.
Navigate to your model zoo directory and then run a quickstart script on either Linux or Windows.

On Linux:
```
# cd to your model zoo directory
cd models

export PRETRAINED_MODEL=<path to the pretrained model pb file>
export DATASET_DIR=<path to the coco tf record file>
export OUTPUT_DIR=<path to the directory where log files will be written>

# For a custom batch size, set env var BATCH_SIZE or it will run with a default value.
export BATCH_SIZE=<customized batch size value>

./quickstart/object_detection/tensorflow/ssd-mobilenet/inference/cpu/int8/<script name>.sh
```
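For example, a performance run with the default batch size might look like the following; the dataset and output paths are placeholders for your own locations:

```
cd models
export PRETRAINED_MODEL=$(pwd)/ssdmobilenet_int8_pretrained_model_combinedNMS_s8.pb
export DATASET_DIR=$HOME/coco/output/coco_val.record
export OUTPUT_DIR=$HOME/ssd-mobilenet-logs
./quickstart/object_detection/tensorflow/ssd-mobilenet/inference/cpu/int8/int8_inference.sh
```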
On Windows, using `cmd.exe`, run:

```
# cd to your model zoo directory
cd models

set PRETRAINED_MODEL=<path to the pretrained model pb file>
set DATASET_DIR=<path to the coco tf record file>
set OUTPUT_DIR=<directory where log files will be written>

# For a custom batch size, set env var BATCH_SIZE or it will run with a default value.
set BATCH_SIZE=<customized batch size value>

bash quickstart\object_detection\tensorflow\ssd-mobilenet\inference\cpu\int8\<script name>.sh
```
Note: You may use `cygpath` to convert Windows paths to Unix paths before setting the environment variables. As an example, if the dataset location on Windows is `D:\user\coco_dataset\coco_val.record`, convert the Windows path to Unix as shown:

```
cygpath D:\user\coco_dataset\coco_val.record
/d/user/coco_dataset/coco_val.record
```

Then, set the `DATASET_DIR` environment variable: `set DATASET_DIR=/d/user/coco_dataset/coco_val.record`.
- To run more advanced use cases, see the instructions here for calling the `launch_benchmark.py` script directly (a hedged sketch follows below).
- To run the model using Docker, please see the oneContainer workload container: https://software.intel.com/content/www/us/en/develop/articles/containers/ssd-mobilenet-int8-inference-tensorflow-container.html.
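For the advanced `launch_benchmark.py` path, a direct invocation is sketched below. The flag names are assumptions based on the typical model zoo launcher interface and may differ by model zoo version, so check `python benchmarks/launch_benchmark.py --help` before relying on them:

```
# A sketch of a direct launch_benchmark.py run (flag names are assumptions; verify with --help).
cd models/benchmarks
python launch_benchmark.py \
  --model-name ssd-mobilenet \
  --precision int8 \
  --mode inference \
  --framework tensorflow \
  --in-graph ${PRETRAINED_MODEL} \
  --data-location ${DATASET_DIR} \
  --batch-size 1 \
  --benchmark-only
```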