From 50dfe938e56670dfbeadc2c6bca49951b5025a2e Mon Sep 17 00:00:00 2001
From: t-parry <146764540+t-parry@users.noreply.github.com>
Date: Wed, 11 Dec 2024 18:15:50 -0800
Subject: [PATCH] Update README.md

Updated to move to ROCm 6.3 and note the issue with saving Tunable Ops
results due to a PyTorch bug.
---
 docs/dev-docker/README.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/docs/dev-docker/README.md b/docs/dev-docker/README.md
index 9bc7e1f86f508..11c0ef04fd8f7 100644
--- a/docs/dev-docker/README.md
+++ b/docs/dev-docker/README.md
@@ -10,11 +10,11 @@ This documentation shows some reference performance numbers and the steps to rep
 
 It includes:
 
-  - ROCm™ 6.2.2
+  - ROCm™ 6.3
   - vLLM 0.6.3
-  - PyTorch 2.5dev (nightly)
+  - PyTorch 2.6dev (nightly)
 
 ## System configuration
 
@@ -23,7 +23,7 @@ The performance data below was measured on a server with MI300X accelerators wit
 | System | MI300X with 8 GPUs |
 |---|---|
 | BKC | 24.13 |
-| ROCm | version ROCm 6.2.2 |
+| ROCm | version ROCm 6.3 |
 | amdgpu | build 2009461 |
 | OS | Ubuntu 22.04 |
 | Linux Kernel | 5.15.0-117-generic |
@@ -45,9 +45,8 @@ You can pull the image with `docker pull rocm/vllm-dev:main`
 
 ### What is New
 
-  - MoE optimizations for Mixtral 8x22B, FP16
-  - Llama 3.2 stability improvements
-  - Llama 3.3 support
+  - ROCm 6.3 support
+  - Potential bug with Tunable Ops not saving due to a PyTorch issue
 
 Gemms are tuned using PyTorch's Tunable Ops feature (https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/cuda/tunable/README.md)
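
As context for the Tunable Ops note above, here is a minimal sketch of how GEMM tuning is typically enabled through the `torch.cuda.tunable` API described in the TunableOp README linked in the patch, and how the results file can be written out explicitly. The explicit `write_file()` call is a possible workaround for the saving issue this patch mentions, not a confirmed fix, and the `tunableop_results.csv` filename is illustrative.

```python
# Sketch: enable TunableOp so GEMMs are tuned, then save results explicitly.
import torch

torch.cuda.tunable.enable(True)          # turn TunableOp on for this process
torch.cuda.tunable.tuning_enable(True)   # allow new solutions to be tuned
torch.cuda.tunable.set_filename("tunableop_results.csv")  # illustrative filename

# Any GEMM-shaped op is tuned on first use while tuning is enabled.
a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
c = a @ b

# If the automatic save at process exit is affected by the PyTorch bug
# noted in this patch, writing the results file explicitly may work around it.
torch.cuda.tunable.write_file()
```

Equivalently, tuning is usually driven through environment variables such as `PYTORCH_TUNABLEOP_ENABLED=1`, which the TunableOp README documents.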