
Commit 1038388

[ROCm] Cleanup Dockerfile and remove outdated patch (#6482)
1 parent 1d094fd commit 1038388

File tree: 3 files changed (+14 lines, −77 lines)

Dockerfile.rocm

Lines changed: 1 addition & 26 deletions
@@ -1,11 +1,6 @@
 # Default ROCm 6.1 base image
 ARG BASE_IMAGE="rocm/pytorch:rocm6.1.2_ubuntu20.04_py3.9_pytorch_staging"
 
-# Tested and supported base rocm/pytorch images
-ARG ROCm_5_7_BASE="rocm/pytorch:rocm5.7_ubuntu20.04_py3.9_pytorch_2.0.1" \
-    ROCm_6_0_BASE="rocm/pytorch:rocm6.0_ubuntu20.04_py3.9_pytorch_2.1.1" \
-    ROCM_6_1_BASE="rocm/pytorch:rocm6.1.2_ubuntu20.04_py3.9_pytorch_staging"
-
 # Default ROCm ARCHes to build vLLM for.
 ARG PYTORCH_ROCM_ARCH="gfx908;gfx90a;gfx942;gfx1100"
 
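With the per-version base-image aliases removed, BASE_IMAGE and PYTORCH_ROCM_ARCH are the only build arguments left in this hunk, and both can still be overridden at build time. A minimal sketch (the arch list here is an example, not a new default; BuildKit is required by the cache mounts used later in the file):

    $ DOCKER_BUILDKIT=1 docker build \
        --build-arg PYTORCH_ROCM_ARCH="gfx90a;gfx942" \
        -f Dockerfile.rocm -t vllm-rocm .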
@@ -54,18 +49,6 @@ RUN pip install --upgrade pip
 RUN apt-get purge -y sccache; pip uninstall -y sccache; rm -f "$(which sccache)"
 # Install torch == 2.5.0 on ROCm
 RUN case "$(ls /opt | grep -Po 'rocm-[0-9]\.[0-9]')" in \
-        *"rocm-5.7"*) \
-        pip uninstall -y torch torchaudio torchvision \
-        && pip install --no-cache-dir --pre \
-            torch==2.5.0.dev20240710 torchaudio==2.4.0.dev20240710 \
-            torchvision==0.20.0.dev20240710 \
-            --index-url https://download.pytorch.org/whl/nightly/rocm5.7;; \
-        *"rocm-6.0"*) \
-        pip uninstall -y torch torchaudio torchvision \
-        && pip install --no-cache-dir --pre \
-            torch==2.5.0.dev20240710 torchaudio==2.4.0.dev20240710 \
-            torchvision==0.20.0.dev20240710 \
-            --index-url https://download.pytorch.org/whl/nightly/rocm6.0;; \
         *"rocm-6.1"*) \
         pip uninstall -y torch torchaudio torchvision \
         && pip install --no-cache-dir --pre \
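Both of the remaining case statements key off the same ROCm-version probe. A quick sketch of what that probe returns (the output shown is illustrative, for a ROCm 6.1 base image where /opt contains a rocm-6.1.x directory):

    $ ls /opt | grep -Po 'rocm-[0-9]\.[0-9]'
    rocm-6.1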
@@ -104,11 +87,6 @@ RUN --mount=type=cache,target=${CCACHE_DIR} \
     && cd flash-attention \
     && git checkout "${FA_BRANCH}" \
     && git submodule update --init \
-    && case "$(ls /opt | grep -Po 'rocm-[0-9]\.[0-9]')" in \
-            *"rocm-5.7"*) \
-                export VLLM_TORCH_PATH="$(python3 -c 'import torch; print(torch.__path__[0])')" \
-                && patch "${VLLM_TORCH_PATH}"/utils/hipify/hipify_python.py hipify_patch.patch;; \
-            *) ;; esac \
     && GPU_ARCHS="${FA_GFX_ARCHS}" python3 setup.py bdist_wheel --dist-dir=/install; \
 # Create an empty directory otherwise as later build stages expect one
 else mkdir -p /install; \
@@ -161,12 +139,9 @@ RUN --mount=type=cache,target=${CCACHE_DIR} \
     --mount=type=cache,target=/root/.cache/pip \
     pip install -U -r requirements-rocm.txt \
     && case "$(ls /opt | grep -Po 'rocm-[0-9]\.[0-9]')" in \
-        *"rocm-6.0"*) \
-            patch /opt/rocm/include/hip/amd_detail/amd_hip_bf16.h rocm_patch/rocm_bf16.patch;; \
         *"rocm-6.1"*) \
             # Bring in upgrades to HIP graph earlier than ROCm 6.2 for vLLM
-            wget -N https://github.com/ROCm/vllm/raw/fa78403/rocm_patch/libamdhip64.so.6 -P rocm_patch \
-            && cp rocm_patch/libamdhip64.so.6 /opt/rocm/lib/libamdhip64.so.6 \
+            wget -N https://github.com/ROCm/vllm/raw/fa78403/rocm_patch/libamdhip64.so.6 -P /opt/rocm/lib \
             # Prevent interference if torch bundles its own HIP runtime
             && rm -f "$(python3 -c 'import torch; print(torch.__path__[0])')"/lib/libamdhip64.so* || true;; \
         *) ;; esac \
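The surviving rocm-6.1 branch now downloads the newer HIP runtime straight into /opt/rocm/lib (no intermediate copy) and deletes any copy bundled with torch. A hypothetical spot-check inside the built image:

    $ ls -l /opt/rocm/lib/libamdhip64.so.6      # fetched by the wget -N step above
    $ TORCH_DIR="$(python3 -c 'import torch; print(torch.__path__[0])')"
    $ ls "${TORCH_DIR}/lib/"libamdhip64.so* 2>/dev/null \
        || echo "no bundled HIP runtime; the /opt/rocm/lib copy is used"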

docs/source/getting_started/amd-installation.rst

Lines changed: 13 additions & 36 deletions
@@ -3,15 +3,15 @@
 Installation with ROCm
 ======================
 
-vLLM supports AMD GPUs with ROCm 5.7 and 6.0.
+vLLM supports AMD GPUs with ROCm 6.1.
 
 Requirements
 ------------
 
 * OS: Linux
 * Python: 3.8 -- 3.11
 * GPU: MI200s (gfx90a), MI300 (gfx942), Radeon RX 7900 series (gfx1100)
-* ROCm 6.0 and ROCm 5.7
+* ROCm 6.1
 
 Installation options:
 
@@ -27,10 +27,10 @@ You can build and install vLLM from source.
 
 First, build a docker image from `Dockerfile.rocm <https://github.com/vllm-project/vllm/blob/main/Dockerfile.rocm>`_ and launch a docker container from the image.
 
-`Dockerfile.rocm <https://github.com/vllm-project/vllm/blob/main/Dockerfile.rocm>`_ uses ROCm 6.0 by default, but also supports ROCm 5.7.
+`Dockerfile.rocm <https://github.com/vllm-project/vllm/blob/main/Dockerfile.rocm>`_ uses ROCm 6.1 by default, but also supports ROCm 5.7 and 6.0 in older vLLM branches.
 It provides flexibility to customize the build of docker image using the following arguments:
 
-* `BASE_IMAGE`: specifies the base image used when running ``docker build``, specifically the PyTorch on ROCm base image. We have tested ROCm 5.7 and ROCm 6.0. The default is `rocm/pytorch:rocm6.0_ubuntu20.04_py3.9_pytorch_2.1.1`
+* `BASE_IMAGE`: specifies the base image used when running ``docker build``, specifically the PyTorch on ROCm base image.
 * `BUILD_FA`: specifies whether to build CK flash-attention. The default is 1. For `Radeon RX 7900 series (gfx1100) <https://rocm.docs.amd.com/projects/radeon/en/latest/index.html>`_, this should be set to 0 before flash-attention supports this target.
 * `FX_GFX_ARCHS`: specifies the GFX architecture that is used to build CK flash-attention, for example, `gfx90a;gfx942` for MI200 and MI300. The default is `gfx90a;gfx942`
 * `FA_BRANCH`: specifies the branch used to build the CK flash-attention in `ROCm's flash-attention repo <https://github.com/ROCmSoftwarePlatform/flash-attention>`_. The default is `ae7928c`
@@ -39,24 +39,17 @@ It provides flexibility to customize the build of docker image using the followi
 Their values can be passed in when running ``docker build`` with ``--build-arg`` options.
 
 
-To build vllm on ROCm 6.0 for MI200 and MI300 series, you can use the default:
+To build vllm on ROCm 6.1 for MI200 and MI300 series, you can use the default:
 
 .. code-block:: console
 
-    $ docker build -f Dockerfile.rocm -t vllm-rocm .
+    $ DOCKER_BUILDKIT=1 docker build -f Dockerfile.rocm -t vllm-rocm .
 
-To build vllm on ROCm 6.0 for Radeon RX7900 series (gfx1100), you should specify ``BUILD_FA`` as below:
+To build vllm on ROCm 6.1 for Radeon RX7900 series (gfx1100), you should specify ``BUILD_FA`` as below:
 
 .. code-block:: console
 
-    $ docker build --build-arg BUILD_FA="0" -f Dockerfile.rocm -t vllm-rocm .
-
-To build docker image for vllm on ROCm 5.7, you can specify ``BASE_IMAGE`` as below:
-
-.. code-block:: console
-
-    $ docker build --build-arg BASE_IMAGE="rocm/pytorch:rocm5.7_ubuntu22.04_py3.10_pytorch_2.0.1" \
-    -f Dockerfile.rocm -t vllm-rocm .
+    $ DOCKER_BUILDKIT=1 docker build --build-arg BUILD_FA="0" -f Dockerfile.rocm -t vllm-rocm .
 
 To run the above docker image ``vllm-rocm``, use the below command:
 
@@ -85,25 +78,12 @@ Option 2: Build from source
 0. Install prerequisites (skip if you are already in an environment/docker with the following installed):
 
 - `ROCm <https://rocm.docs.amd.com/en/latest/deploy/linux/index.html>`_
-- `Pytorch <https://pytorch.org/>`_
+- `PyTorch <https://pytorch.org/>`_
 - `hipBLAS <https://rocm.docs.amd.com/projects/hipBLAS/en/latest/install.html>`_
 
-For installing PyTorch, you can start from a fresh docker image, e.g, `rocm/pytorch:rocm6.1.2_ubuntu20.04_py3.9_pytorch_staging`, `rocm/pytorch:rocm6.0_ubuntu20.04_py3.9_pytorch_2.1.1`, `rocm/pytorch-nightly`.
-
-Alternatively, you can install pytorch using pytorch wheels. You can check Pytorch installation guild in Pytorch `Getting Started <https://pytorch.org/get-started/locally/>`_
-
-For rocm6.0:
-
-.. code-block:: console
-
-    $ pip3 install torch --index-url https://download.pytorch.org/whl/rocm6.0
-
-
-For rocm5.7:
-
-.. code-block:: console
+For installing PyTorch, you can start from a fresh docker image, e.g., `rocm/pytorch:rocm6.1.2_ubuntu20.04_py3.9_pytorch_staging`, `rocm/pytorch-nightly`.
 
-    $ pip install torch --index-url https://download.pytorch.org/whl/rocm5.7
+Alternatively, you can install PyTorch using PyTorch wheels. You can check the PyTorch installation guide in PyTorch `Getting Started <https://pytorch.org/get-started/locally/>`_
 
 
 1. Install `Triton flash attention for ROCm <https://github.com/ROCm/triton>`_
@@ -115,8 +95,6 @@ Install ROCm's Triton flash attention (the default triton-mlir branch) following
 Install ROCm's flash attention (v2.0.4) following the instructions from `ROCm/flash-attention <https://github.com/ROCm/flash-attention/tree/flash_attention_for_rocm#amd-gpurocm-support>`_
 
 .. note::
-    - If you are using rocm5.7 with pytorch 2.1.0 onwards, you don't need to apply the `hipify_python.patch`. You can build the ROCm flash attention directly.
-    - If you fail to install `ROCm/flash-attention`, try cloning from the commit `6fd2f8e572805681cd67ef8596c7e2ce521ed3c6`.
     - ROCm's Flash-attention-2 (v2.0.4) does not support sliding window attention.
     - You might need to downgrade the "ninja" version to 1.10 as it is not used when compiling flash-attention-2 (e.g. `pip install ninja==1.10.2.4`)
 
@@ -131,7 +109,6 @@ Install ROCm's flash attention (v2.0.4) following the instructions from `ROCm/fl
 
 .. tip::
 
-    - You may need to turn on the ``--enforce-eager`` flag if you experience a process hang when running the `benchmark_throughput.py` script to test your installation.
     - Triton flash attention is used by default. For benchmarking purposes, it is recommended to run a warm up step before collecting perf numbers.
-    - To use CK flash-attention, please use this flag ``export VLLM_USE_TRITON_FLASH_ATTN=0`` to turn off triton flash attention.
-    - The ROCm version of pytorch, ideally, should match the ROCm driver version.
+    - To use CK flash-attention or PyTorch naive attention, please use this flag ``export VLLM_USE_TRITON_FLASH_ATTN=0`` to turn off triton flash attention.
+    - The ROCm version of PyTorch, ideally, should match the ROCm driver version.
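Putting the updated instructions together, a build that overrides both documented arguments might look like this (a sketch; the values are examples taken from the docs above, not additional defaults):

    $ DOCKER_BUILDKIT=1 docker build \
        --build-arg BASE_IMAGE="rocm/pytorch:rocm6.1.2_ubuntu20.04_py3.9_pytorch_staging" \
        --build-arg BUILD_FA="0" \
        -f Dockerfile.rocm -t vllm-rocm .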

rocm_patch/rocm_bf16.patch

Lines changed: 0 additions & 15 deletions
This file was deleted; with the rocm-6.0 branch removed from Dockerfile.rocm, nothing referenced the bf16 patch anymore.
