
Commit e6b88ce

Jeffwan authored and fialhocoelho committed
[Doc] Fix the lora adapter path in server startup script (vllm-project#6230)
1 parent c2d4696 commit e6b88ce

File tree

1 file changed: +4 −1 lines

docs/source/models/lora.rst

Lines changed: 4 additions & 1 deletion
@@ -64,7 +64,10 @@ LoRA adapted models can also be served with the Open-AI compatible vLLM server.
     python -m vllm.entrypoints.openai.api_server \
         --model meta-llama/Llama-2-7b-hf \
         --enable-lora \
-        --lora-modules sql-lora=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/
+        --lora-modules sql-lora=$HOME/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/snapshots/0dfa347e8877a4d4ed19ee56c140fa518470028c/
+
+ .. note::
+     The commit ID `0dfa347e8877a4d4ed19ee56c140fa518470028c` may change over time. Please check the latest commit ID in your environment to ensure you are using the correct one.
 
 The server entrypoint accepts all other LoRA configuration parameters (``max_loras``, ``max_lora_rank``, ``max_cpu_loras``,
 etc.), which will apply to all forthcoming requests. Upon querying the ``/models`` endpoint, we should see our LoRA along
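The note added by this commit warns that the hard-coded commit ID can go stale. One way to sidestep that: in the huggingface_hub cache layout, `refs/main` records the commit ID that the current `snapshots/<commit>/` directory corresponds to, so the path can be resolved at startup instead of hard-coded. A minimal sketch (`resolve_snapshot` is a hypothetical helper, not part of vLLM, demonstrated here against a fabricated cache tree):

```python
import tempfile
from pathlib import Path

# Hypothetical helper (not part of vLLM): resolve the current snapshot
# directory for a cached Hugging Face model. In the huggingface_hub cache
# layout, refs/main holds the commit ID that snapshots/<commit>/ uses.
def resolve_snapshot(cache_dir: str, repo_dir: str) -> Path:
    model_dir = Path(cache_dir) / repo_dir
    commit_id = (model_dir / "refs" / "main").read_text().strip()
    return model_dir / "snapshots" / commit_id

# Demo against a fabricated cache tree standing in for
# ~/.cache/huggingface/hub; the commit ID is the one from the diff above.
cache = tempfile.mkdtemp()
repo = "models--yard1--llama-2-7b-sql-lora-test"
commit = "0dfa347e8877a4d4ed19ee56c140fa518470028c"
(Path(cache) / repo / "refs").mkdir(parents=True)
(Path(cache) / repo / "refs" / "main").write_text(commit)
(Path(cache) / repo / "snapshots" / commit).mkdir(parents=True)

snapshot_path = resolve_snapshot(cache, repo)
print(snapshot_path.name)  # prints the commit ID read from refs/main
```

Passing the resolved path to ``--lora-modules`` keeps the server startup script working even after the cached model is updated to a new commit.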
