This document has instructions for running Mask R-CNN FP32 inference using Intel-optimized TensorFlow.
Download the MS COCO 2014 dataset. Set the `DATASET_DIR` environment variable to point to this directory when running Mask R-CNN.
```shell
# Create a new directory to use as the DATASET_DIR
export DATASET_DIR=<directory where the dataset will be downloaded>
mkdir -p $DATASET_DIR
cd $DATASET_DIR

# Download and extract the MS COCO 2014 validation dataset
wget http://images.cocodataset.org/zips/val2014.zip
unzip val2014.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2014.zip
unzip annotations_trainval2014.zip
cp annotations/instances_val2014.json annotations/instances_minival2014.json
```
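The quickstart script expects the extracted images and the renamed annotations file to be present under `DATASET_DIR`. As a quick sanity check, a helper like the following (a sketch; the helper name is ours, and the file names come from the download steps above) can confirm the layout:

```shell
# Sketch: verify that a dataset directory contains the files produced by the
# download steps above (val2014 images and the minival annotations copy).
check_coco_dir() {
  dir="$1"
  status=0
  for f in val2014 annotations/instances_minival2014.json; do
    if [ ! -e "$dir/$f" ]; then
      echo "missing: $dir/$f" >&2
      status=1
    fi
  done
  return $status
}

# Example: check_coco_dir "$DATASET_DIR"
```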
| Script name | Description |
|---|---|
| `fp32_inference.sh` | Runs inference with batch size 1 using the COCO dataset and a pretrained model |
Set up your environment using the instructions below, depending on whether you are using AI Kit:

| Setup using AI Kit | Setup without AI Kit |
|---|---|
| AI Kit does not currently support TF 1.15.2 models | To run without AI Kit you will need: |
Running Mask R-CNN also requires a clone and a particular SHA of the Mask R-CNN model repository. Set the `MODEL_SRC_DIR` env var to the path of your clone.
```shell
git clone https://github.com/matterport/Mask_RCNN.git
cd Mask_RCNN
git checkout 3deaec5d902d16e1daf56b62d5971d428dc920bc
export MODEL_SRC_DIR=$(pwd)
```
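Since inference depends on this exact commit, it can be worth verifying the checkout before running anything. A small sketch (the helper name is ours, not part of the model zoo):

```shell
# Sketch: return success only if the repo at $1 is checked out at commit $2.
at_expected_sha() {
  [ "$(git -C "$1" rev-parse HEAD)" = "$2" ]
}

# Example:
# at_expected_sha "$MODEL_SRC_DIR" 3deaec5d902d16e1daf56b62d5971d428dc920bc \
#   || echo "Mask_RCNN clone is not at the expected commit" >&2
```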
Download the pre-trained COCO weights (`mask_rcnn_coco.h5`) from the Mask R-CNN repository release page, and place the file in the `Mask_RCNN` directory.
```shell
wget -q https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5
cd ..
```
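Because `wget -q` suppresses output, a failed download can go unnoticed. A quick non-empty-file check (a sketch, not part of the official instructions) catches that:

```shell
# Sketch: warn if a downloaded file is missing or empty (wget -q hides errors).
require_nonempty() {
  if [ ! -s "$1" ]; then
    echo "$1 is missing or empty; re-run the download" >&2
    return 1
  fi
}

# Example: require_nonempty mask_rcnn_coco.h5
```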
After your environment is set up, set environment variables for the `DATASET_DIR` and an `OUTPUT_DIR` where log files will be written. Ensure that you still have the `MODEL_SRC_DIR` path set from the previous commands. Once the environment variables are all set, you can run the quickstart script.
```shell
# cd to your model zoo directory
cd models

export DATASET_DIR=<path to the dataset>
export OUTPUT_DIR=<path to the directory where log files will be written>
export MODEL_SRC_DIR=<path to the Mask RCNN models repo with pre-trained model>

# For a custom batch size, set env var `BATCH_SIZE` or it will run with a default value.
export BATCH_SIZE=<customized batch size value>

./quickstart/image_segmentation/tensorflow/maskrcnn/inference/cpu/fp32/fp32_inference.sh
```
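The script can fail in unobvious ways if one of these variables is unset. A fail-fast check like this sketch (bash-specific, using indirect expansion; the helper name is ours) can run before the quickstart script:

```shell
# Sketch: fail fast if any of the named environment variables is unset or empty.
require_vars() {
  missing=0
  for v in "$@"; do
    if [ -z "${!v}" ]; then   # bash indirect expansion: value of the var named $v
      echo "$v is not set" >&2
      missing=1
    fi
  done
  return $missing
}

# Example: require_vars DATASET_DIR OUTPUT_DIR MODEL_SRC_DIR
```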
- To run more advanced use cases, see the instructions here for calling the `launch_benchmark.py` script directly.
- To run the model using docker, please see the oneContainer workload container: https://software.intel.com/content/www/us/en/develop/articles/containers/mask-rcnn-fp32-inference-tensorflow-container.html.