From 005cf30fc7ac793af47cfbb1a3868e0e102228ce Mon Sep 17 00:00:00 2001
From: youkaichao
Date: Mon, 6 Jan 2025 21:41:06 +0800
Subject: [PATCH 1/8] stash

Signed-off-by: youkaichao
---
 .../getting_started/installation/gpu-cuda.md  | 31 +++++++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index 7ea10bb8b59ff..94b0385e9651b 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
@@ -12,19 +12,39 @@ vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) bin
 
 ## Install released versions
 
-You can install vLLM using pip:
+### Create a new Python environment
+
+You can create a new Python environment using `conda`:
 
 ```console
 $ # (Recommended) Create a new conda environment.
 $ conda create -n myenv python=3.12 -y
 $ conda activate myenv
+```
+
+Or you can create a new Python environment using [uv](https://docs.astral.sh/uv/), a very fast Python environment manager. Please follow the [documentation](https://docs.astral.sh/uv/#getting-started) to install `uv`. After installing `uv`, you can create a new Python environment using the following command:
+
+```console
+$ # (Recommended) Create a new uv environment. Use `--seed` to install `pip` and `setuptools` in the environment.
+$ uv venv myenv --python 3.12 --seed
+```
+
+In order to be performant, vLLM has to compile many CUDA kernels. The compilation unfortunately introduces binary incompatibility with other CUDA and PyTorch versions, even for the same PyTorch version with a different build configuration.
+Therefore, it is recommended to install vLLM in a **fresh** environment. If you have a different CUDA version or want to use an existing PyTorch installation, you need to build vLLM from source. See [below](#build-from-source) for more details.
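+
+If you are unsure whether an existing environment matches the pre-built binaries, you can first check which PyTorch version it has and which CUDA version that PyTorch was built against:
+
+```console
+$ python -c "import torch; print(torch.__version__, torch.version.cuda)"
+```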
+
+### Install vLLM
+
+You can install vLLM using `pip` or `uv pip`:
+
+```console
 $ # Install vLLM with CUDA 12.1.
-$ pip install vllm
+$ pip install vllm # If you are using pip.
+$ uv pip install vllm # If you are using uv.
 ```
 
 ```{note}
-Although we recommend using `conda` to create and manage Python environments, it is highly recommended to use `pip` to install vLLM. This is because `pip` can install `torch` with separate library packages like `NCCL`, while `conda` installs `torch` with statically linked `NCCL`. This can cause issues when vLLM tries to use `NCCL`. See  for more details.
+Please do not use `conda` to install `vllm`. `conda` installs `torch` with statically linked `NCCL`. This can cause issues when vLLM tries to use `NCCL`. See  for more details.
 ```
 
 ````{note}
 As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default.
 We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
 
 ```console
 $ # Install vLLM with CUDA 11.8.
 $ export VLLM_VERSION=0.6.1.post1
 $ export PYTHON_VERSION=310
 $ pip install https://github.com/vllm-project/vllm/releases/download/v${VLLM_VERSION}/vllm-${VLLM_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux1_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```
 
-In order to be performant, vLLM has to compile many cuda kernels. The compilation unfortunately introduces binary incompatibility with other CUDA versions and PyTorch versions, even for the same PyTorch version with different building configurations.
-
-Therefore, it is recommended to install vLLM with a **fresh new** conda environment. If either you have a different CUDA version or you want to use an existing PyTorch installation, you need to build vLLM from source. See below for instructions.
-
 ````
 
 (install-the-latest-code)=
 ## Install the latest code

From 3a00cb5d99f3e8dbcad6af08f4fe9d818535faa8 Mon Sep 17 00:00:00 2001
From: youkaichao
Date: Mon, 6 Jan 2025 21:58:06 +0800
Subject: [PATCH 2/8] add

Signed-off-by: youkaichao
---
 .../getting_started/installation/gpu-cuda.md | 25 ++++++++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index 94b0385e9651b..cc33aae4d36ea 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
@@ -62,7 +62,9 @@ $ pip install https://github.com/vllm-project/vllm/releases/download/v${VLLM_VER
 
 ## Install the latest code
 
-LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that are not released yet. To allow users to try the latest code without waiting for the next release, vLLM provides wheels for Linux running on a x86 platform with CUDA 12 for every commit since `v0.5.3`. You can download and install it with the following command:
+LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that are not released yet. To allow users to try the latest code without waiting for the next release, vLLM provides wheels for Linux running on an x86 platform with CUDA 12 for every commit since `v0.5.3`.
+
+### Install the latest code using `pip`
 
 ```console
 $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
 ```
 
 If you want to access the wheels for previous commits, you can specify the commit hash in the URL:
 
 ```console
 $ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
 $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
 ```
 
 Note that the wheels are built with Python 3.8 ABI (see [PEP 425](https://peps.python.org/pep-0425/) for more details about ABI), so **they are compatible with Python 3.8 and later**. The version string in the wheel file name (`1.0.0.dev`) is just a placeholder to have a unified URL for the wheels. The actual versions of wheels are contained in the wheel metadata. Although we don't support Python 3.8 any more (because PyTorch 2.5 dropped support for Python 3.8), the wheels are still built with Python 3.8 ABI to keep the same wheel name as before.
 
+Due to the limitation of `pip`, you have to specify the full URL of the wheel file
+
+### Install the latest code using `uv`
+
+Another way to install the latest code is to use `uv`:
+
+```console
+$ uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
+```
+
+If you want to access the wheels for previous commits, you can specify the commit hash in the URL:
+
+```console
+$ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
+$ uv pip install vllm --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT}
+```
+
+The `uv` approach works for vLLM `v0.6.6` and later.
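+
+After installing from the nightly index, you can confirm that a development build (rather than the released version) was actually picked up:
+
+```console
+$ python -c "import vllm; print(vllm.__version__)"
+```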
+
+### Install the latest code using docker
+
 Another way to access the latest code is to use the docker images:
 
 ```console

From d8f54808f57344802b0a2af1f50502dc2769c7ad Mon Sep 17 00:00:00 2001
From: youkaichao
Date: Tue, 7 Jan 2025 11:43:41 +0800
Subject: [PATCH 3/8] more explanation

Signed-off-by: youkaichao
---
 .../getting_started/installation/gpu-cuda.md | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index cc33aae4d36ea..081c6ee6ff081 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
@@ -22,11 +22,16 @@ $ conda create -n myenv python=3.12 -y
 $ conda activate myenv
 ```
 
+```{note}
+[PyTorch has deprecated the conda release channel](https://github.com/pytorch/pytorch/issues/138506). If you use `conda`, please only use it to create a Python environment rather than installing packages. In particular, the PyTorch installed via `conda` will statically linked `NCCL` library, which can cause issues when vLLM tries to use `NCCL`. See  for more details.
+```
+
 Or you can create a new Python environment using [uv](https://docs.astral.sh/uv/), a very fast Python environment manager. Please follow the [documentation](https://docs.astral.sh/uv/#getting-started) to install `uv`. After installing `uv`, you can create a new Python environment using the following command:
 
 ```console
 $ # (Recommended) Create a new uv environment. Use `--seed` to install `pip` and `setuptools` in the environment.
 $ uv venv myenv --python 3.12 --seed
+$ source myenv/bin/activate
 ```
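+
+You can then verify that the new environment is the active one, for example:
+
+```console
+$ which python # should resolve to .../myenv/bin/python
+```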
 
 In order to be performant, vLLM has to compile many CUDA kernels. The compilation unfortunately introduces binary incompatibility with other CUDA and PyTorch versions, even for the same PyTorch version with a different build configuration.
 Therefore, it is recommended to install vLLM in a **fresh** environment. If you have a different CUDA version or want to use an existing PyTorch installation, you need to build vLLM from source. See [below](#build-from-source) for more details.
 
 ### Install vLLM
 
-You can install vLLM using `pip` or `uv pip`:
+You can install vLLM using either `pip` or `uv pip`:
 
 ```console
 $ # Install vLLM with CUDA 12.1.
 $ pip install vllm # If you are using pip.
 $ uv pip install vllm # If you are using uv.
 ```
 
-```{note}
-Please do not use `conda` to install `vllm`. `conda` installs `torch` with statically linked `NCCL`. This can cause issues when vLLM tries to use `NCCL`. See  for more details.
-```
-
 ````{note}
 As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default.
 We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
 
 Note that the wheels are built with Python 3.8 ABI (see [PEP 425](https://peps.python.org/pep-0425/) for more details about ABI), so **they are compatible with Python 3.8 and later**. The version string in the wheel file name (`1.0.0.dev`) is just a placeholder to have a unified URL for the wheels. The actual versions of wheels are contained in the wheel metadata. Although we don't support Python 3.8 any more (because PyTorch 2.5 dropped support for Python 3.8), the wheels are still built with Python 3.8 ABI to keep the same wheel name as before.
 
-Due to the limitation of `pip`, you have to specify the full URL of the wheel file
+Due to the limitation of `pip`, you have to specify the full URL of the wheel file.
 
 ### Install the latest code using `uv`
 
 Another way to install the latest code is to use `uv`:
 
 ```console
 $ uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
 If you want to access the wheels for previous commits, you can specify the commit hash in the URL:
 
 ```console
-$ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
+$ export VLLM_COMMIT=eb881ed006ca458b052905e33f0d16dbb428063a # use full commit hash from the main branch
 $ uv pip install vllm --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT}
 ```
 
-The `uv` approach works for vLLM `v0.6.6` and later.
+The `uv` approach works for vLLM `v0.6.6` and later. What's unique about `uv`, is that packages in `--extra-index-url` has [higher priority than the default index](https://docs.astral.sh/uv/pip/compatibility/#packages-that-exist-on-multiple-indexes), while `pip` will combine packages from `--extra-index-url` and the default index, and only choose the latest version, which makes it impossible to easily install a developing version before the released version.

From ad25ac5006858926fd6a39f945ffedeaf1766589 Mon Sep 17 00:00:00 2001
From: youkaichao
Date: Tue, 7 Jan 2025 11:44:56 +0800
Subject: [PATCH 4/8] typo

Signed-off-by: youkaichao
---
 docs/source/getting_started/installation/gpu-cuda.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index 081c6ee6ff081..0b4ae0119e1e1 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
@@ -23,7 +23,7 @@ $ conda activate myenv
 ```
 
 ```{note}
-[PyTorch has deprecated the conda release channel](https://github.com/pytorch/pytorch/issues/138506). If you use `conda`, please only use it to create a Python environment rather than installing packages. In particular, the PyTorch installed via `conda` will statically linked `NCCL` library, which can cause issues when vLLM tries to use `NCCL`. See  for more details.
+[PyTorch has deprecated the conda release channel](https://github.com/pytorch/pytorch/issues/138506). If you use `conda`, please only use it to create a Python environment rather than installing packages. In particular, the PyTorch installed via `conda` will statically link the `NCCL` library, which can cause issues when vLLM tries to use `NCCL`. See  for more details.
 ```

From 3b993259a6145f0b8534c3ffde2d400e504f0b7a Mon Sep 17 00:00:00 2001
From: youkaichao
Date: Tue, 7 Jan 2025 11:54:33 +0800
Subject: [PATCH 5/8] polish

Signed-off-by: youkaichao
---
 docs/source/getting_started/installation/gpu-cuda.md | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index 0b4ae0119e1e1..fc00b5c148405 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
 $ # Install vLLM with CUDA 12.1.
 $ pip install vllm # If you are using pip.
 $ uv pip install vllm # If you are using uv.
 ```
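+
+If you need a specific released version rather than the latest one, you can pin it explicitly with either tool (the version below is only an example):
+
+```console
+$ pip install vllm==0.6.6.post1 # If you are using pip.
+$ uv pip install vllm==0.6.6.post1 # If you are using uv.
+```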
-````{note}
-As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default.
-We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
+As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
 
 ```console
 $ # Install vLLM with CUDA 11.8.
 $ export VLLM_VERSION=0.6.1.post1
 $ export PYTHON_VERSION=310
 $ pip install https://github.com/vllm-project/vllm/releases/download/v${VLLM_VERSION}/vllm-${VLLM_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux1_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```
-````
 
 (install-the-latest-code)=
 ## Install the latest code
 
 ```console
 $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
 ```
 
-If you want to access the wheels for previous commits, you can specify the commit hash in the URL:
+If you want to access the wheels for previous commits (e.g. to bisect a behavior change or performance regression), you can specify the commit hash in the URL:
 
 ```console
 $ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
 $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
 ```
 
 ### Install the latest code using `uv`
 
 Another way to install the latest code is to use `uv`:
 
 ```console
 $ uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
-If you want to access the wheels for previous commits, you can specify the commit hash in the URL:
+If you want to access the wheels for previous commits (e.g. to bisect a behavior change or performance regression), you can specify the commit hash in the URL:
 
 ```console
 $ export VLLM_COMMIT=eb881ed006ca458b052905e33f0d16dbb428063a # use full commit hash from the main branch
 $ uv pip install vllm --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT}
 ```
 
-The `uv` approach works for vLLM `v0.6.6` and later. What's unique about `uv`, is that packages in `--extra-index-url` has [higher priority than the default index](https://docs.astral.sh/uv/pip/compatibility/#packages-that-exist-on-multiple-indexes), while `pip` will combine packages from `--extra-index-url` and the default index, and only choose the latest version, which makes it impossible to easily install a developing version before the released version.
+The `uv` approach works for vLLM `v0.6.6` and later, and it has an easy-to-remember command. What's unique about `uv`, is that packages in `--extra-index-url` has [higher priority than the default index](https://docs.astral.sh/uv/pip/compatibility/#packages-that-exist-on-multiple-indexes). If the latest public release is `v0.6.6.post1`, `uv`'s behavior makes it possible to install a commit before `v0.6.6.post1` by specifying the `--extra-index-url`. By contrast, `pip` will combine packages from `--extra-index-url` and the default index, and only choose the latest version, which makes it impossible to easily install a developing version before the released version.
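+
+For example, the following `pip` command may silently resolve to the released version from the default index instead of the per-commit wheel, which is why `pip` needs the full wheel URL:
+
+```console
+$ # With pip, the default index can win over the extra index:
+$ pip install vllm --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT}
+```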
 
 ### Install the latest code using docker

From 8699fd3dd16e9d3475096d235f8e3c8490cb167f Mon Sep 17 00:00:00 2001
From: youkaichao
Date: Tue, 7 Jan 2025 11:56:03 +0800
Subject: [PATCH 6/8] polish

Signed-off-by: youkaichao
---
 docs/source/getting_started/installation/gpu-cuda.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index fc00b5c148405..7e0b227002fda 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
@@ -95,7 +95,7 @@ $ export VLLM_COMMIT=eb881ed006ca458b052905e33f0d16dbb428063a # use full commit
 $ uv pip install vllm --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT}
 ```
 
-The `uv` approach works for vLLM `v0.6.6` and later, and it has an easy-to-remember command. What's unique about `uv`, is that packages in `--extra-index-url` has [higher priority than the default index](https://docs.astral.sh/uv/pip/compatibility/#packages-that-exist-on-multiple-indexes). If the latest public release is `v0.6.6.post1`, `uv`'s behavior makes it possible to install a commit before `v0.6.6.post1` by specifying the `--extra-index-url`. By contrast, `pip` will combine packages from `--extra-index-url` and the default index, and only choose the latest version, which makes it impossible to easily install a developing version before the released version.
+The `uv` approach works for vLLM `v0.6.6` and later and offers an easy-to-remember command. A unique feature of `uv` is that packages in `--extra-index-url` have [higher priority than the default index](https://docs.astral.sh/uv/pip/compatibility/#packages-that-exist-on-multiple-indexes). If the latest public release is `v0.6.6.post1`, `uv`'s behavior allows installing a commit before `v0.6.6.post1` by specifying the `--extra-index-url`. In contrast, `pip` combines packages from `--extra-index-url` and the default index, choosing only the latest version, which makes it difficult to install a development version prior to the released version.

From ab159f2c5681b5d19e99c976d288877258cda38b Mon Sep 17 00:00:00 2001
From: youkaichao
Date: Tue, 7 Jan 2025 12:09:30 +0800
Subject: [PATCH 7/8] change example commit

Signed-off-by: youkaichao
---
 docs/source/getting_started/installation/gpu-cuda.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index 7e0b227002fda..e5cf71f28b1a2 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
@@ -91,7 +91,7 @@ $ uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
 If you want to access the wheels for previous commits (e.g. to bisect a behavior change or performance regression), you can specify the commit hash in the URL:
 
 ```console
-$ export VLLM_COMMIT=eb881ed006ca458b052905e33f0d16dbb428063a # use full commit hash from the main branch
+$ export VLLM_COMMIT=72d9c316d3f6ede485146fe5aabd4e61dbc59069 # use full commit hash from the main branch
 $ uv pip install vllm --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT}
 ```

From d1eb6cf1a2a8ee76087261935446f9a04fb34dfc Mon Sep 17 00:00:00 2001
From: youkaichao
Date: Tue, 7 Jan 2025 12:19:47 +0800
Subject: [PATCH 8/8] Update docs/source/getting_started/installation/gpu-cuda.md

Co-authored-by: Cyrus Leung
---
 docs/source/getting_started/installation/gpu-cuda.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index e5cf71f28b1a2..295555b6c41f0 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
@@ -97,7 +97,7 @@ $ uv pip install vllm --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT}
 
 The `uv` approach works for vLLM `v0.6.6` and later and offers an easy-to-remember command. A unique feature of `uv` is that packages in `--extra-index-url` have [higher priority than the default index](https://docs.astral.sh/uv/pip/compatibility/#packages-that-exist-on-multiple-indexes). If the latest public release is `v0.6.6.post1`, `uv`'s behavior allows installing a commit before `v0.6.6.post1` by specifying the `--extra-index-url`. In contrast, `pip` combines packages from `--extra-index-url` and the default index, choosing only the latest version, which makes it difficult to install a development version prior to the released version.
 
-### Install the latest code using docker
+### Install the latest code using `docker`
 
 Another way to access the latest code is to use the docker images:
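+
+A sketch of the per-commit image pull follows; the registry path is an assumption based on vLLM's CI setup, so check the current docs for the authoritative repository:
+
+```console
+$ export VLLM_COMMIT=72d9c316d3f6ede485146fe5aabd4e61dbc59069 # use full commit hash from the main branch
+$ # The registry path below is assumed, not confirmed by this patch series:
+$ docker pull public.ecr.aws/q9t5s3a7/vllm-ci-postmerge-repo:${VLLM_COMMIT}
+```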