This repository provides five key features:

- The implementation is adapted from the Demo of PointPillars Optimization, showcasing how to implement and optimize PointPillars on Intel platforms using OpenVINO™. The original code sources are OpenPCDet, which sets up the PointPillars pipeline demo, and SmallMunich, which converts the PointPillars PyTorch model to ONNX format. For more technical details, refer to the Optimization of PointPillars by using Intel® Distribution of OpenVINO™ Toolkit.
- Support for Intel MTL iGPU and Arc770 dGPU platforms.
- An INT8 quantization method for the RPN and PFE models.
- Scatter latency statistics in the demo outputs.
- Validation on Intel MTL iGPU and Arc770 dGPU devices.
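The scatter latency reported by the demo refers to the step that scatters per-pillar feature vectors back onto a dense 2D bird's-eye-view (BEV) grid between the PFE and RPN stages. A minimal NumPy sketch of that operation, for illustration only (not code from this repository; all names here are hypothetical):

```python
import numpy as np

def scatter_to_bev(pillar_features, coords, grid_h, grid_w):
    """Scatter per-pillar feature vectors onto a dense BEV canvas.

    pillar_features: (num_pillars, num_channels) float array
    coords: (num_pillars, 2) integer (y, x) grid indices per pillar
    Returns a (num_channels, grid_h, grid_w) canvas; empty cells stay zero.
    """
    num_pillars, num_channels = pillar_features.shape
    canvas = np.zeros((num_channels, grid_h * grid_w), dtype=pillar_features.dtype)
    flat_idx = coords[:, 0] * grid_w + coords[:, 1]  # flatten (y, x) to 1D index
    canvas[:, flat_idx] = pillar_features.T          # place each pillar's features
    return canvas.reshape(num_channels, grid_h, grid_w)

# Toy example: 3 pillars with 4 channels each, scattered onto an 8x8 grid
feats = np.arange(12, dtype=np.float32).reshape(3, 4)
coords = np.array([[0, 0], [2, 5], [7, 7]])
bev = scatter_to_bev(feats, coords, 8, 8)
```

Because this step is memory-bound index manipulation rather than dense compute, it is timed separately in the demo output.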
This document provides detailed instructions for setting up and running the PointPillars OpenVINO™ Demo on Intel GPU. The sections below include hardware and software requirements, demo setup steps, and quantization guidance.
Choose one of the following hardware setups:

- Intel MTL with iGPU
- Intel Arc770 dGPU + Core CPU

Software requirements:

- Ubuntu 22.04
- Linux Kernel 6.5.0-18-generic
- Python 3.10
- OpenVINO™ 2024.3
Refer to the compute-runtime releases.
Refer to the Intel Arc GPU documentation.
```shell
sudo apt update
sudo apt install python3-dev build-essential
```

```shell
cd /home/shawn
mkdir project
cd project
git clone https://github.com/shawn9977/PointPillars-Demo.git
cd PointPillars-Demo
python3 -m venv env_PointPillars
source env_PointPillars/bin/activate
pip install openvino-dev nncf
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
```
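Before continuing, it can be worth confirming that the virtual environment is actually active, so the packages above are installed into it rather than into the system Python. A quick standard-library check (generic Python, not specific to this repository):

```python
import sys

# Inside an active venv, sys.prefix points into the environment,
# while sys.base_prefix still points at the base interpreter.
in_venv = sys.prefix != sys.base_prefix
print("virtual environment active:", in_venv)
```

If this prints `False` in your shell after `source env_PointPillars/bin/activate`, re-check the activation step.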
Refer to SmallMunich's repository for dataset generation. Note: this step requires an NVIDIA GPU environment. Alternatively, you can skip it, as pre-generated datasets are included in this repository under PointPillars-Demo/main/datasets/training/velodyne_reduced/.
Set the dataset path environment variable:
```shell
export my_dataset_path=<your_dataset_folder>/training/velodyne_reduced
# Example:
export my_dataset_path=/home/shawn/project/PointPillars-Demo/datasets/training/velodyne_reduced
```

Install the package in development mode, then run the demo:

```shell
python setup.py develop
cd /home/shawn/project/PointPillars-Demo/tools/
python demo.py --cfg_file pointpillar.yaml --num -1 --data_path $my_dataset_path

# If my_dataset_path is not set, run:
python demo.py --cfg_file pointpillar.yaml --num 100 --data_path /home/shawn/project/PointPillars-Demo/datasets/training/velodyne_reduced
```
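The `--num` flag appears to control how many samples are processed, with `-1` selecting the entire dataset (as in the first command above) and a positive value capping the run. A hypothetical sketch of that interpretation, not this repository's actual argument handling:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--num", type=int, default=-1,
                    help="number of samples to run; -1 means all samples")
args = parser.parse_args(["--num", "100"])  # simulate `--num 100`

dataset_size = 7481  # hypothetical dataset size for illustration
num_to_run = dataset_size if args.num == -1 else min(args.num, dataset_size)
print(num_to_run)
```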
Demo outputs include performance metrics:
```
INFO -----------------Quick Demo of OpenPCDet-------------------------
INFO Loading the dataset and model.
INFO Number of samples in dataset: xxx
INFO ------Run number of samples: xxx in mode: balance
INFO Total: xxx seconds
INFO FPS: xxx
INFO Latency: xxx milliseconds
INFO Scatter latency: xxx milliseconds
INFO Demo done.
```
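The reported throughput and average per-sample latency are simple functions of the sample count and total wall time; a sketch of that arithmetic with hypothetical numbers (the demo may additionally measure latency per sample rather than deriving it):

```python
num_samples = 200     # hypothetical number of samples run
total_seconds = 25.0  # hypothetical total wall time

fps = num_samples / total_seconds                    # throughput in frames/sec
latency_ms = 1000.0 * total_seconds / num_samples    # average ms per sample
print(fps, latency_ms)
```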
Complete steps 1-6 from the demo setup.
Edit quant.py to set the dataset path:

```python
DATA_PATH = "/home/shawn/project/PointPillars-Demo/datasets/training/velodyne_reduced"
```

Run quant.py to save the quantized model (quantized_pfe.xml) in the tools directory:

```shell
cd /home/shawn/project/PointPillars-Demo/tools/
python quant.py
```
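For intuition, INT8 quantization maps floating-point tensors to 8-bit integers via a scale factor, trading a small accuracy loss for faster, smaller models. A minimal symmetric per-tensor quantization sketch in NumPy, illustrative only (the actual quantization in quant.py is performed by NNCF, which also uses calibration data and per-channel schemes):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric quantization: map floats to int8 in [-127, 127]."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([-1.0, -0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)   # close to w, within one quantization step
```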