Make DeepLabV3+MobileNetV3 Backbone runnable with trt on NVIDIA Xavier #7
Labels: enhancement (New feature or request)
We need to get Bernhard's DeepLabV3+MobileNetV3 model running with TensorRT (trt) on the NVIDIA Xavier so that we can utilize the hardware better.
For native TensorRT execution, the model has to be converted from H5 (Keras) to ONNX.
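A minimal sketch of that conversion step, assuming the H5 checkpoint is a TensorFlow/Keras model and using `tf2onnx`; the file names and input resolution below are placeholders, not taken from the repo:

```python
# Sketch: convert a Keras H5 checkpoint to ONNX for TensorRT.
# Paths and input shape are assumptions -- adjust to the actual
# DeepLabV3+MobileNetV3 export.
import tensorflow as tf
import tf2onnx

H5_PATH = "deeplabv3_mobilenetv3.h5"     # hypothetical input file
ONNX_PATH = "deeplabv3_mobilenetv3.onnx"  # hypothetical output file

model = tf.keras.models.load_model(H5_PATH, compile=False)

# Pin the input signature so TensorRT gets static shapes; dynamic dims also
# work but static shapes keep the engine build on Xavier simple.
spec = (tf.TensorSpec((1, 512, 512, 3), tf.float32, name="input"),)

tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path=ONNX_PATH)
```

The resulting ONNX file can then be sanity-checked on the Xavier with `trtexec --onnx=deeplabv3_mobilenetv3.onnx --fp16` before wiring it into the benchmark pipeline.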
The execution environment shall be the same as for object detection, using latency.csv and performance.csv as the exchange formats.
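As a rough illustration of the exchange format, a benchmark run could dump per-frame latencies like this; the column layout (frame index plus latency in milliseconds) is an assumption and should be aligned with whatever the object-detection pipeline already writes:

```python
# Hedged sketch of writing latency.csv after a benchmark run; the schema is hypothetical.
import csv

def write_latency_csv(path, latencies_ms):
    """Write per-frame inference latencies (in milliseconds) to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "latency_ms"])
        for i, latency in enumerate(latencies_ms):
            writer.writerow([i, f"{latency:.3f}"])

write_latency_csv("latency.csv", [12.4, 11.9, 12.1])
```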