(support-matrix)=

# Support Matrix

TensorRT-LLM optimizes the performance of a range of well-known models on NVIDIA GPUs. The following sections provide a list of supported GPU architectures as well as important features implemented in TensorRT-LLM.

## Models

### LLM Models

- Arctic
- Baichuan/Baichuan2
- BART
- BERT
- BLOOM
- ByT5
- GLM/ChatGLM/ChatGLM2/ChatGLM3/GLM-4
- Code LLaMA
- DBRX
- Exaone
- FairSeq NMT
- Falcon
- Flan-T5 [^encdec]
- Gemma/Gemma2
- GPT
- GPT-J
- GPT-Nemo
- GPT-NeoX
- Grok-1
- InternLM
- InternLM2
- LLaMA/LLaMA 2/LLaMA 3/LLaMA 3.1
- Mamba
- mBART
- [Minitron](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/nemotron)
- Mistral
- Mistral NeMo
- Mixtral
- MPT
- Nemotron
- mT5
- OPT
- Phi-1.5/Phi-2/Phi-3
- Qwen/Qwen1.5/Qwen2
- Qwen-VL
- RecurrentGemma
- Replit Code[^ReplitCode]
- RoBERTa
- SantaCoder
- Skywork
- Smaug
- StarCoder
- T5
- Whisper

### Multi-Modal Models [^multimod]

- BLIP2 w/ OPT
- BLIP2 w/ T5
- CogVLM [^bf16only]
- Deplot
- Fuyu
- Kosmos
- LLaVA-v1.5
- LLaVa-Next
- LLaVa-OneVision
- NeVA
- Nougat
- Phi-3-vision
- Video NeVA
- VILA
(support-matrix-hardware)=

## Hardware

The following table shows the supported hardware for TensorRT-LLM.

If a GPU architecture is not listed, the TensorRT-LLM team does not develop or test the software on that architecture, and support is limited to community support. In addition, older architectures can have limitations for newer software releases.
```{list-table}
:header-rows: 1
:widths: 20 80

* -
  - Hardware Compatibility
* - Operating System
  - TensorRT-LLM requires Linux x86_64 or Windows.
* - GPU Model Architectures
  -
    - [NVIDIA Hopper Architecture](https://www.nvidia.com/en-us/data-center/technologies/hopper-architecture/)
    - [NVIDIA Ada Lovelace Architecture](https://www.nvidia.com/en-us/technologies/ada-architecture/)
    - [NVIDIA Ampere Architecture](https://www.nvidia.com/en-us/data-center/ampere-architecture/)
```
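
As a quick sanity check, you can map a machine's GPU to this table through its CUDA compute capability. The following is a minimal sketch, assuming PyTorch with CUDA support is installed in the environment; it is not part of TensorRT-LLM itself, and the `SUPPORTED_SM` mapping simply restates the table above.

```python
# Minimal sketch (not part of TensorRT-LLM): check whether the local GPU's
# compute capability matches an architecture listed in the hardware table.
import torch

# Compute capabilities for the architectures in the table above.
SUPPORTED_SM = {
    (8, 0): "NVIDIA Ampere (SM80)",
    (8, 6): "NVIDIA Ampere (SM86)",
    (8, 9): "NVIDIA Ada Lovelace (SM89)",
    (9, 0): "NVIDIA Hopper (SM90)",
}

def check_gpu(device: int = 0) -> None:
    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA device is visible to PyTorch.")
    capability = torch.cuda.get_device_capability(device)  # e.g. (9, 0)
    name = torch.cuda.get_device_name(device)
    arch = SUPPORTED_SM.get(capability)
    if arch is not None:
        print(f"{name} (SM{capability[0]}{capability[1]}) is listed: {arch}")
    else:
        print(f"{name} (SM{capability[0]}{capability[1]}) is not listed; "
              "expect community support only.")

if __name__ == "__main__":
    check_gpu()
```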
(support-matrix-software)=

## Software

The following table shows the supported software for TensorRT-LLM.
```{list-table}
:header-rows: 1
:widths: 20 80

* -
  - Software Compatibility
* - Container
  - [24.10](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html)
* - TensorRT
  - [10.6](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/index.html)
* - Precision
  -
    - Hopper (SM90) - FP32, FP16, BF16, FP8, INT8, INT4
    - Ada Lovelace (SM89) - FP32, FP16, BF16, FP8, INT8, INT4
    - Ampere (SM80, SM86) - FP32, FP16, BF16, INT8, INT4[^smgte89]
```
Support for FP8 and quantized data types (INT8 or INT4) is not implemented for all the models. Refer to {ref}`precision` and the [examples](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples) folder for additional information.
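
To make the precision rows above easy to consume programmatically, the sketch below encodes them as plain Python data. `PRECISIONS_BY_SM` and `supports_precision` are hypothetical names used only for illustration, not a TensorRT-LLM API; they show how a per-architecture rule such as "FP8 requires SM 89 or newer" reads in code.

```python
# Illustrative sketch: the precision rows of the software table above as data.
# PRECISIONS_BY_SM and supports_precision are hypothetical, not a TensorRT-LLM API.
PRECISIONS_BY_SM = {
    90: {"FP32", "FP16", "BF16", "FP8", "INT8", "INT4"},  # Hopper
    89: {"FP32", "FP16", "BF16", "FP8", "INT8", "INT4"},  # Ada Lovelace
    86: {"FP32", "FP16", "BF16", "INT8", "INT4"},         # Ampere
    80: {"FP32", "FP16", "BF16", "INT8", "INT4"},         # Ampere
}

def supports_precision(sm: int, precision: str) -> bool:
    """Return True if the table lists `precision` for compute capability `sm`."""
    return precision in PRECISIONS_BY_SM.get(sm, set())

# FP8 appears only on SM 89 and newer in the table.
assert supports_precision(90, "FP8") and supports_precision(89, "FP8")
assert not supports_precision(80, "FP8")
```

Note that hardware-level support is necessary but not sufficient: as stated above, not every model implements every data type.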
[^ReplitCode]: Replit Code is not supported with transformers 4.45 and later.

[^smgte89]: INT4 AWQ and GPTQ with FP8 activations require SM >= 89.

[^encdec]: Encoder-Decoder provides general encoder-decoder functionality that supports many encoder-decoder models such as the T5 family, BART family, Whisper family, NMT family, and so on.

[^multimod]: Multi-modal provides general multi-modal functionality that supports many multi-modal architectures such as the BLIP2 family, LLaVA family, and so on.

[^bf16only]: Only supports bfloat16 precision.