
Prepare Triton Server For Native Inferencing

As mentioned in the README, the DeepStream LPR sample application works as a Triton client, with the Triton Inference Server running natively through its C API. The Triton Inference Server libraries must therefore be installed on the machine. An easier way is to run the LPR sample application inside the DeepStream Triton container.

Run the DeepStream Triton container, using the DeepStream 6.1 GA container as an example:

    docker run --gpus all -it  --ipc=host --rm -v /tmp/.X11-unix:/tmp/.X11-unix  -v $(pwd)/deepstream_lpr_app:/lpr   -e DISPLAY=$DISPLAY -w /lpr nvcr.io/nvidia/deepstream:6.1-triton
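The command above mounts the host X11 socket and passes DISPLAY so the sample's on-screen output can be rendered. If the display cannot be opened from inside the container, the host X server may first need to accept local connections; a minimal sketch to run on the host before starting the container (assuming an X11 desktop session):

    # Allow local clients, including containers, to connect to the X server.
    # Note: this disables X access control entirely; re-enable it afterwards with "xhost -".
    xhost +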

Inside the container, prepare the model engines for the Triton server. The tao-converter download links inside the prepare_triton_us.sh or prepare_triton_ch.sh scripts can be changed to the proper version according to the actual TensorRT version:

    # For US car plate recognition
    ./prepare_triton_us.sh

    # For Chinese car plate recognition
    ./prepare_triton_ch.sh
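The appropriate tao-converter build depends on the TensorRT version shipped in the container. A quick, hedged way to check it before editing the scripts (assuming the standard Debian-based DeepStream Triton container):

    # List the installed TensorRT (libnvinfer) packages and their versions
    dpkg -l | grep nvinfer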

Then the LPR sample application can be built and run inside this container.
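For reference, a minimal build sketch inside the container, assuming the Makefile-based build described in the repository README (paths and run arguments may differ in your checkout):

    # Build the sample from the mounted source tree (hedged: assumes the top-level Makefile from the README)
    cd /lpr
    make
    # Run the resulting deepstream-lpr-app binary with the Triton-specific
    # configuration files and arguments described in the README.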