🔥🔥🔥🔥🔥🔥 Docker, NVIDIA Docker2, YOLOv5, YOLOX, YOLO + DeepSORT, TensorRT, ROS, and DeepStream on Jetson Nano / TX2 / NX for high-performance deployment
This YOLOv5🚀😊 GUI road sign system uses MySQL💽, PyQt5🎨, PyTorch, CSS🌈. It has modules for login🔑, YOLOv5 setup📋, sign recognition🔍, database💾, and image processing🖼️. It supports diverse inputs, model switching, and enhancements like mosaic and mixup📈.
A PyTorch implementation of siamese networks using backbones from torchvision.models, with support for TensorRT inference.
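Below is a minimal sketch of the idea (not this repo's code): a siamese network sharing a torchvision backbone, where the distance between embeddings drives the similarity decision. The class name, embedding size, and ResNet-18 choice are illustrative assumptions.

```python
# Illustrative sketch: siamese network with a shared torchvision backbone.
import torch
import torch.nn as nn
import torchvision.models as models

class SiameseNet(nn.Module):  # hypothetical name, not from the repo
    def __init__(self, embedding_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)            # any torchvision backbone
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward_once(self, x):
        return self.backbone(x)

    def forward(self, x1, x2):
        # Shared weights: both branches pass through the same backbone.
        return self.forward_once(x1), self.forward_once(x2)

net = SiameseNet().eval()
a, b = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
with torch.no_grad():
    e1, e2 = net(a, b)
dist = torch.nn.functional.pairwise_distance(e1, e2)  # small distance -> similar pair
```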
Deploy YOLOv5 + TensorRT + DeepStream on Jetson Nano
Using TensorRT for Inference Model Deployment.
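For context, a minimal sketch of deploying a serialized TensorRT engine for inference with the TensorRT 8.x Python API and pycuda; the engine file name `model.trt` and the single-input/single-output layout are assumptions, not taken from the repo.

```python
# Minimal sketch: deserialize a TensorRT engine and run one inference (TensorRT 8.x).
import numpy as np
import tensorrt as trt
import pycuda.autoinit          # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)
with open("model.trt", "rb") as f, trt.Runtime(logger) as runtime:  # illustrative file name
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (assumes static shapes).
bindings, bufs = [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(trt.volume(shape), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    bufs.append((host, dev, engine.binding_is_input(i)))

# Copy the input to the GPU, execute, copy the output back.
in_host, in_dev, _ = next(b for b in bufs if b[2])
cuda.memcpy_htod(in_dev, in_host)
context.execute_v2(bindings)
out_host, out_dev, _ = next(b for b in bufs if not b[2])
cuda.memcpy_dtoh(out_host, out_dev)
```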
C++ TensorRT Implementation of NanoSAM
Based on TensorRT version 8.2.4, compare inference speed across different TensorRT APIs.
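As a rough illustration of such a comparison (not the repo's benchmark), the sketch below times `execute_v2` against `execute_async_v2` on an already-built engine; it assumes all bindings are float32 with static shapes and uses PyTorch CUDA tensors as buffers for brevity.

```python
# Illustrative timing harness: synchronous vs. asynchronous TensorRT execution.
import time
import torch
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.trt", "rb") as f:                     # illustrative engine file
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Assumption: every binding is float32 with a static shape.
buffers = [torch.zeros(tuple(engine.get_binding_shape(i)),
                       dtype=torch.float32, device="cuda")
           for i in range(engine.num_bindings)]
bindings = [b.data_ptr() for b in buffers]
stream = torch.cuda.Stream()

def bench(fn, iters=200):
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters * 1e3    # ms per inference

sync_ms = bench(lambda: context.execute_v2(bindings))
async_ms = bench(lambda: context.execute_async_v2(bindings, stream.cuda_stream))
print(f"execute_v2: {sync_ms:.3f} ms, execute_async_v2: {async_ms:.3f} ms")
```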
Transform any wall to an intelligent whiteboard
MagFace on Triton Inference Server using TensorRT
A lightweight C++ implementation of YOLOv8 running on NVIDIA's TensorRT engine
Based on TensorRT v8.2, build the YOLOv5-v5.0 network by hand with the TensorRT API to speed up YOLOv5-v5.0 inference.
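A tiny illustration of that approach, defining layers directly with the TensorRT network API instead of parsing ONNX; only a single conv + SiLU block is sketched, with random weights standing in for the real YOLOv5-v5.0 parameters.

```python
# Illustrative sketch: building a network layer-by-layer with the TensorRT API.
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

inp = network.add_input("images", trt.float32, (1, 3, 640, 640))
w = np.random.rand(32, 3, 3, 3).astype(np.float32)   # real weights would come from the .pt file
b = np.zeros(32, dtype=np.float32)
conv = network.add_convolution_nd(inp, 32, (3, 3), trt.Weights(w), trt.Weights(b))
conv.stride_nd = (1, 1)
conv.padding_nd = (1, 1)

# SiLU = x * sigmoid(x), composed from primitive layers.
sig = network.add_activation(conv.get_output(0), trt.ActivationType.SIGMOID)
silu = network.add_elementwise(conv.get_output(0), sig.get_output(0),
                               trt.ElementWiseOperation.PROD)
network.mark_output(silu.get_output(0))

config = builder.create_builder_config()
serialized_engine = builder.build_serialized_network(network, config)
```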
A miniature model of a self-driving car using deep learning
Conveniently convert a pretrained CRAFT text detection PyTorch model directly into a TensorRT engine, without an intermediate ONNX step.
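One common way to skip the ONNX step is torch2trt, which builds a TensorRT engine directly from a PyTorch module; whether this repo uses torch2trt or the raw TensorRT network API is not stated, so treat the sketch below as an assumption. A torchvision VGG16-BN (the usual CRAFT backbone) stands in for the full CRAFT model, and the input size and file name are illustrative.

```python
# Illustrative sketch: direct PyTorch -> TensorRT conversion via torch2trt (no ONNX).
import torch
import torchvision.models as models
from torch2trt import torch2trt

model = models.vgg16_bn(weights=None).eval().cuda()   # stand-in for the real CRAFT network
x = torch.randn(1, 3, 768, 768).cuda()                # example text-detection input size
model_trt = torch2trt(model, [x], fp16_mode=True)     # returns a TRTModule wrapping the engine
torch.save(model_trt.state_dict(), "craft_trt.pth")   # reloadable later with TRTModule
```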
Search engine for Shopee with image search, full-text search, and auto-complete
C++/C TensorRT inference example for models created with PyTorch/JAX/TF
An MNIST example of how to convert a .pt file to .onnx, then convert the .onnx file to a .trt file.
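A condensed sketch of that .pt → .onnx → .trt pipeline, assuming TensorRT 8.x; the tiny MNIST-style model and the file names are illustrative, not taken from the repo.

```python
# Illustrative sketch: .pt -> .onnx -> .trt for a tiny MNIST-style model.
import torch
import torch.nn as nn
import tensorrt as trt

# 1) A small network saved as .pt, then exported to ONNX.
net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
torch.save(net.state_dict(), "mnist.pt")
dummy = torch.randn(1, 1, 28, 28)
torch.onnx.export(net, dummy, "mnist.onnx", input_names=["input"], output_names=["logits"])

# 2) Parse the ONNX file and build a serialized TensorRT engine (.trt).
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("mnist.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)
config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)  # TensorRT >= 8.0
with open("mnist.trt", "wb") as f:
    f.write(engine_bytes)
```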
Deployment and quantization library for PC and Jetson, with INT8 quantization, supporting YOLOv3/v4/v5
Run YOLOv8 models as TensorRT engines natively for maximum performance 🏎️💨
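A hedged example using the Ultralytics API (one way to do this, not necessarily this repo's approach): export a YOLOv8 checkpoint to a TensorRT engine and run it directly. The checkpoint name, `half=True`, and the sample image URL are assumptions.

```python
# Illustrative sketch: export YOLOv8 to a TensorRT engine and run it via Ultralytics.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                               # downloads the nano checkpoint if missing
engine_path = model.export(format="engine", half=True)   # builds a TensorRT engine file
trt_model = YOLO(engine_path)                            # load the exported engine directly
results = trt_model("https://ultralytics.com/images/bus.jpg")  # inference runs on the engine
```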