- Vehicle detector
- Vehicle plate detector and recognizer
- Vehicle scanner based on the side view of the body
- Vehicle color and type classifier
- Vehicle feature encoder used for search (1:N) or comparison (1:1)
Refer to the NVIDIA official website for detailed guides (a consolidated sketch of these steps follows this list).

- download the files that match your GPUs (the versions below are used for this repo): `cuda_11.1.0_455.23.05_linux.run`, `cudnn-11.1-linux-x64-v8.0.5.39.tgz`, `TensorRT-7.2.1.6.Ubuntu-18.04.x86_64-gnu.cuda-11.1.cudnn8.0.tar.gz`
- run `./cuda_11.1.0_455.23.05_linux.run` to install CUDA (at `/usr/local`) and the driver; you may need to reboot the machine
- unzip `cudnn-11.1-linux-x64-v8.0.5.39.tgz`, copy all header files to `/usr/local/cuda/include` and all lib files to `/usr/local/cuda/lib64`
- unzip `TensorRT-7.2.1.6.Ubuntu-18.04.x86_64-gnu.cuda-11.1.cudnn8.0.tar.gz` at `/usr/local`, then create a softlink with `ln -s /usr/local/TensorRT-7.2.1.6 /usr/local/tensorRT`
- add `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/tensorRT/lib:/usr/local/cuda/lib64:/usr/local/lib` and `export CPATH=$CPATH:/usr/local/cuda/include:/usr/local/tensorRT/include` to `~/.bashrc`
- run `source ~/.bashrc`
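A minimal sketch of the steps above, assuming the three installers were downloaded to the current directory and that the cuDNN tarball extracts to a `cuda/` subdirectory (as this cuDNN release does); run with root privileges where needed:

```bash
# install the CUDA toolkit and driver from the runfile
sudo sh ./cuda_11.1.0_455.23.05_linux.run

# cuDNN: extract, then copy headers and libs into the CUDA install
tar -xzvf cudnn-11.1-linux-x64-v8.0.5.39.tgz
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include/
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*

# TensorRT: extract under /usr/local and create the softlink used by this repo
sudo tar -xzvf TensorRT-7.2.1.6.Ubuntu-18.04.x86_64-gnu.cuda-11.1.cudnn8.0.tar.gz -C /usr/local
sudo ln -s /usr/local/TensorRT-7.2.1.6 /usr/local/tensorRT

# make the libraries and headers visible to the loader and compiler
cat >> ~/.bashrc << 'EOF'
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/tensorRT/lib:/usr/local/cuda/lib64:/usr/local/lib
export CPATH=$CPATH:/usr/local/cuda/include:/usr/local/tensorRT/include
EOF
source ~/.bashrc
```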
- CUDA 11.1 + TensorRT 7.2.1 for this repository (tested)
- CUDA 11.1 + TensorRT 8.5 for this repository (tested)
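To check which combination a machine actually has, you can query the CUDA toolkit and the TensorRT headers directly (assuming the `/usr/local/tensorRT` softlink from the install steps above; recent TensorRT releases keep the version macros in `NvInferVersion.h`):

```bash
# CUDA toolkit version
nvcc --version

# TensorRT version macros from the public headers
grep "define NV_TENSORRT" /usr/local/tensorRT/include/NvInferVersion.h
```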
```bash
trtexec --onnx=./vehicle.onnx --saveEngine=vehicleXXX.trt --buildOnly=true
```
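If you need engines for several models, a small loop can help; the `./models` directory and the naming scheme below are assumptions for illustration, not paths defined by this repo. Engines are not portable across GPUs or TensorRT versions, so build them on the target machine.

```bash
# build one TensorRT engine per ONNX model (directory and names are examples only)
for onnx in ./models/*.onnx; do
    name=$(basename "$onnx" .onnx)
    trtexec --onnx="$onnx" --saveEngine="./models/${name}.trt" --buildOnly=true
done
```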
we can build trt_vehicle separately:

- set the right library path and include path for TensorRT in `CMakeLists.txt`
- run `mkdir build && cd build`
- run `cmake ..`
- run `make -j8`

all lib files are saved to `build/libs`, and all samples to `build/samples`. please refer to videopipe for how to run the samples for trt_vehicle; a minimal build-and-run sketch follows below.
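A minimal sketch of the build plus running one sample, assuming the TensorRT and CUDA paths in `CMakeLists.txt` already point at `/usr/local/tensorRT` and `/usr/local/cuda`; the sample binary name below is a placeholder, so list `build/samples` to see the real ones (samples may also need model or engine paths as arguments):

```bash
# configure and build the libraries and sample programs
mkdir -p build && cd build
cmake ..
make -j8

# see which samples were produced, then run one
# (binary name is hypothetical; LD_LIBRARY_PATH covers the case of shared libs)
ls ./samples
LD_LIBRARY_PATH=./libs:$LD_LIBRARY_PATH ./samples/vehicle_detect_sample
```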