This document provides instructions for running 3D U-Net FP32 inference using Intel-optimized TensorFlow.
The following instructions are based on the BraTS2018 dataset preprocessing steps in the 3D U-Net repository.
1. Download the BraTS2018 dataset. Please follow the steps to register and request the training and the validation data of the BraTS 2018 challenge.
2. Create a virtual environment and install the dependencies:
    ```
    # create a python3.6 based venv
    virtualenv --python=python3.6 brats18_env
    . brats18_env/bin/activate

    # install dependencies
    pip install intel-tensorflow==1.15.2
    pip install SimpleITK===1.2.0
    pip install keras==2.2.4
    pip install nilearn==0.6.2
    pip install tables==3.4.4
    pip install nibabel==2.3.3
    pip install nipype==1.7.0
    pip install numpy==1.16.3
    ```
    Install ANTs N4BiasFieldCorrection and add the location of the ANTs binaries to the PATH environment variable:
    ```
    wget https://github.com/ANTsX/ANTs/releases/download/v2.1.0/Linux_Debian_jessie_x64.tar.bz2
    tar xvjf Linux_Debian_jessie_x64.tar.bz2
    cd debian_jessie
    export PATH=${PATH}:$(pwd)
    ```
3. Clone the 3D U-Net repository, and run the script for the dataset preprocessing:
    ```
    git clone https://github.com/ellisdg/3DUnetCNN.git
    cd 3DUnetCNN
    git checkout update_to_brats18

    # add the repository directory to the PYTHONPATH system variable
    export PYTHONPATH=${PWD}:$PYTHONPATH
    ```
    After downloading the dataset file `MICCAI_BraTS_2018_Data_Training.zip` (from step 1), place the unzipped folders in the `brats/data/original` directory (the commands below extract the archive there), then run the preprocessing and `train.py` (an equivalent non-interactive preprocessing command is shown after these steps):
    ```
    # extract the dataset
    mkdir -p brats/data/original && cd brats
    unzip MICCAI_BraTS_2018_Data_Training.zip -d data/original

    # import the conversion function and run the preprocessing
    python
    >>> from preprocess import convert_brats_data
    >>> convert_brats_data("data/original", "data/preprocessed")

    # run training using the original UNet model to get `validation_ids.pkl` created in the `brats` directory
    python train.py
    ```
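If you prefer not to use the interactive Python prompt shown above, the same conversion can be invoked non-interactively. This is just a convenience sketch; it assumes you are still in the `brats` directory and that the `preprocess` module is importable from there, as in step 3:
```
# same preprocessing call as above, run as a one-liner from the brats directory
python -c 'from preprocess import convert_brats_data; convert_brats_data("data/original", "data/preprocessed")'
```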
After `train.py` finishes, set the `DATASET_DIR` environment variable to the path that contains the preprocessed dataset file `validation_ids.pkl`:
```
export DATASET_DIR=/home/<user>/3DUnetCNN/brats
```
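As an optional sanity check (not part of the original steps), you can confirm that `DATASET_DIR` points at a directory containing the preprocessed file before moving on:
```
# validation_ids.pkl should be present in the directory DATASET_DIR points to
ls ${DATASET_DIR}/validation_ids.pkl
```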
| Script name | Description |
|---|---|
| `fp32_inference.sh` | Runs inference with a batch size of 1 using the BraTS dataset and a pretrained model |
Setup your environment using the instructions below, depending on whether you are using AI Kit:

| Setup using AI Kit | Setup without AI Kit |
|---|---|
| AI Kit does not currently support TF 1.15.2 models | To run without AI Kit you will need Python 3 and the dependencies listed in the virtual environment setup above (intel-tensorflow==1.15.2, SimpleITK, keras, nilearn, tables, nibabel, nipype, numpy), plus wget for downloading the pretrained model |
Download the pre-trained model from the 3DUnetCNN repository. In this example, we are using the "Original U-Net" model, trained using the BRATS 2017 data. Set the `PRETRAINED_MODEL` environment variable to the path of the `tumor_segmentation_model.h5` file:
```
wget https://www.dropbox.com/s/m99rqxunx0kmzn7/tumor_segmentation_model.h5
export PRETRAINED_MODEL=$(pwd)/tumor_segmentation_model.h5
```
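Optionally, you can verify the download and the environment variable before continuing (a simple check, not required by the quickstart script):
```
# confirm the pretrained model file was downloaded and PRETRAINED_MODEL points to it
ls -lh "${PRETRAINED_MODEL}"
```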
After your environment is set up, set environment variables for the `DATASET_DIR` and for an `OUTPUT_DIR` where log files will be written. Ensure that you still have the `PRETRAINED_MODEL` path set from the previous command. Once the environment variables are all set, you can run the quickstart script.
```
# cd to your model zoo directory
cd models

export DATASET_DIR=<path to the dataset>
export OUTPUT_DIR=<path to the directory where log files will be written>
export PRETRAINED_MODEL=<path to the pretrained model>

# For a custom batch size, set env var `BATCH_SIZE` or it will run with a default value.
export BATCH_SIZE=<customized batch size value>

./quickstart/image_segmentation/tensorflow/3d_unet/inference/cpu/fp32/fp32_inference.sh
```
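For reference, a filled-in invocation might look like the following; the `DATASET_DIR` value matches the earlier example, while the `OUTPUT_DIR` and `PRETRAINED_MODEL` paths are illustrative and should be adjusted to your own setup:
```
cd models
export DATASET_DIR=/home/<user>/3DUnetCNN/brats
export OUTPUT_DIR=/home/<user>/3dunet_fp32_logs                    # example log directory
export PRETRAINED_MODEL=/home/<user>/tumor_segmentation_model.h5   # example model location
export BATCH_SIZE=1                                                # the script's documented default batch size
./quickstart/image_segmentation/tensorflow/3d_unet/inference/cpu/fp32/fp32_inference.sh
```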
- To run more advanced use cases, see the instructions here for calling the `launch_benchmark.py` script directly.
- To run the model using docker, please see the oneContainer workload container: https://software.intel.com/content/www/us/en/develop/articles/containers/3d-unet-fp32-inference-tensorflow-container.html