From 9d88f9fd137d2e257a67badf3dcc89fd62f9c5e1 Mon Sep 17 00:00:00 2001
From: Scofflawvii <43871305+Scofflawvii@users.noreply.github.com>
Date: Mon, 17 Mar 2025 10:59:03 +1000
Subject: [PATCH] Update README.md for llama.cpp hipBLAS compile instructions

Per the llama.cpp repo, -DGGML_HIP=on is now the required CMake flag for
hipBLAS/ROCm.
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index e00456580..cebe3095e 100644
--- a/README.md
+++ b/README.md
@@ -175,10 +175,10 @@ pip install llama-cpp-python \
 hipBLAS (ROCm)

-To install with hipBLAS / ROCm support for AMD cards, set the `GGML_HIPBLAS=on` environment variable before installing:
+To install with hipBLAS / ROCm support for AMD cards, set the `GGML_HIP=on` environment variable before installing:

 ```bash
-CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_HIP=on" pip install llama-cpp-python
 ```
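
Note: a fuller install command under the new flag might look like the sketch below. The AMDGPU_TARGETS value and the pip reinstall flags are assumptions for illustration, not part of this patch; adjust them to your GPU and environment.

```bash
# Sketch only: assumes ROCm is installed and a gfx1030 card; replace the
# AMDGPU_TARGETS value with your GPU architecture (e.g. gfx90a, gfx1100).
# --force-reinstall / --no-cache-dir force a rebuild against the new flags.
CMAKE_ARGS="-DGGML_HIP=on -DAMDGPU_TARGETS=gfx1030" \
  pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir
```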