This document describes how to quantize and benchmark the TensorFlow Keras Xception model using Intel® Neural Compressor. This example can run on Intel CPUs and GPUs.
# Install Intel® Neural Compressor
pip install neural-compressor
TensorFlow and intel-extension-for-tensorflow are required to run this example. The Intel® Extension for TensorFlow for Intel CPUs is installed by default.
pip install -r requirements.txt
Note: Make sure to use a validated TensorFlow version.
The pretrained model is provided by Keras Applications. To prepare the model, run the following command:
python prepare_model.py --output_model=/path/to/model
--output_model: the path where the prepared model is saved; the model should be saved in SavedModel format or H5 format.
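For reference, the sketch below shows what such a preparation step could look like. This is a hedged illustration, not necessarily the exact contents of prepare_model.py: it loads the ImageNet-pretrained Xception model from Keras Applications and saves it to the requested path.

```python
# Hypothetical sketch of a prepare-model step; prepare_model.py may differ.
import argparse
import tensorflow as tf

parser = argparse.ArgumentParser()
parser.add_argument("--output_model", required=True,
                    help="Path where the pretrained model will be saved")
args = parser.parse_args()

# Load the ImageNet-pretrained Xception model from Keras Applications.
model = tf.keras.applications.Xception(weights="imagenet")

# Save as SavedModel; pass a path ending in '.h5' to save in H5 format instead.
model.save(args.output_model)
```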
The TensorFlow models repo provides scripts and instructions to download, process, and convert the ImageNet dataset to the TF records format. We also provide related scripts in the imagenet_prepare
directory. To download the raw images, the user must create an account with image-net.org. Once you have downloaded the raw data and preprocessed the validation data by moving the images into the appropriate sub-directories based on the label (synset) of each image, use the commands below to convert them to TF records format.
cd examples/keras/image_recognition/
# convert validation subset
bash prepare_dataset.sh --output_dir=./xception/quantization/ptq/data --raw_dir=/PATH/TO/img_raw/val/ --subset=validation
# convert train subset
bash prepare_dataset.sh --output_dir=./xception/quantization/ptq/data --raw_dir=/PATH/TO/img_raw/train/ --subset=train
cd xception/quantization/ptq
Note: The raw ImageNet dataset, which consists of JPEG files, should be organized in the following directory structure. Taking the validation set as an example:
/PATH/TO/img_raw/val/n01440764/ILSVRC2012_val_00000293.JPEG
/PATH/TO/img_raw/val/n01440764/ILSVRC2012_val_00000543.JPEG
where 'n01440764' is the unique synset label associated with these images.
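After conversion, you can optionally sanity-check the generated records. The snippet below is a hedged sketch that assumes the records follow the standard TF-Slim ImageNet schema ('image/encoded' and 'image/class/label' feature keys) and file names like 'validation-00000-of-00128'; adjust the glob pattern and keys if your layout differs.

```python
# Hedged sanity check for the converted TF records (assumed schema and naming).
import tensorflow as tf

files = tf.io.gfile.glob("./data/validation-*")
dataset = tf.data.TFRecordDataset(files)

feature_spec = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
    "image/class/label": tf.io.FixedLenFeature([], tf.int64),
}

# Decode a single record to confirm the conversion succeeded.
for raw in dataset.take(1):
    example = tf.io.parse_single_example(raw, feature_spec)
    image = tf.io.decode_jpeg(example["image/encoded"])
    print("image shape:", image.shape, "label:", int(example["image/class/label"]))
```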
The quantization config class has default parameter settings for running on Intel CPUs. If you run this example on Intel GPUs, set the 'backend' parameter to 'itex' and the 'device' parameter to 'gpu':
from neural_compressor.config import PostTrainingQuantConfig

config = PostTrainingQuantConfig(
    device="gpu",
    backend="itex",
    ...
)
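As a hedged sketch (not the exact code in this example's scripts), the config above could be passed to Neural Compressor's post-training quantization entry point roughly as follows. The calibration dataloader here is a placeholder you would build from the TF records prepared earlier.

```python
# Hedged sketch of driving post-training quantization with the config above.
# 'calib_dataloader' is a placeholder: any iterable yielding (input, label)
# batches from the calibration set; the shipped scripts construct it for you.
from neural_compressor import quantization
from neural_compressor.config import PostTrainingQuantConfig

config = PostTrainingQuantConfig(device="gpu", backend="itex")

q_model = quantization.fit(
    model="./xception_keras",        # path to the prepared Keras model
    conf=config,
    calib_dataloader=calib_dataloader,
)
q_model.save("./result")             # quantized model used by run_benchmark.sh
```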
bash run_quant.sh --input_model=./xception_keras/ --output_model=./result --dataset_location=/path/to/evaluation/dataset
bash run_benchmark.sh --input_model=./result --mode=accuracy --dataset_location=/path/to/evaluation/dataset --batch_size=32
bash run_benchmark.sh --input_model=./result --mode=performance --dataset_location=/path/to/evaluation/dataset --batch_size=1
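Under the hood, performance benchmarking can also be driven through Neural Compressor's benchmark API. The snippet below is a hedged sketch of that path (accuracy mode instead evaluates top-1 accuracy over the evaluation dataset); 'eval_dataloader' is a placeholder built from the converted TF records.

```python
# Hedged sketch of the performance-benchmark path; run_benchmark.sh wraps this.
from neural_compressor.benchmark import fit
from neural_compressor.config import BenchmarkConfig

# Warm up for 10 iterations, then measure 100 iterations.
conf = BenchmarkConfig(warmup=10, iteration=100)
fit(model="./result", config=conf, b_dataloader=eval_dataloader)
```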