Commit d5401a0

[DOC] update modelslim version (#908)

1. update modelslim version to fix deepseek related issues
2. add note for "--quantization ascend"

Signed-off-by: 22dimensions <waitingwind@foxmail.com>

Parent: 5cf9ff1

File changed: docs/source/tutorials/multi_npu_quantization.md (5 additions, 1 deletion)
@@ -36,7 +36,7 @@ see https://www.modelscope.cn/models/vllm-ascend/QwQ-32B-W8A8
 
 ```bash
 # (Optional)This tag is recommended and has been verified
-git clone https://gitee.com/ascend/msit -b modelslim-VLLM-8.1.RC1.b020
+git clone https://gitee.com/ascend/msit -b modelslim-VLLM-8.1.RC1.b020_001
 
 cd msit/msmodelslim
 # Install by run this script
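The first hunk cuts off at the install comment. For readers following along, the step it introduces is presumably a small install script shipped in the checkout; the script name below is an assumption for illustration, not part of this diff:

```bash
# Assumed continuation of the tutorial (not shown in this hunk):
# run the install script from inside msit/msmodelslim.
bash install.sh  # script name is an assumption; check the checkout for the actual entry point
```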
@@ -68,6 +68,10 @@ The converted model files looks like:
 ```
 
 Run the following script to start the vLLM server with quantize model:
+
+:::{note}
+The value "ascend" for "--quantization" argument will be supported after [a specific PR](https://github.com/vllm-project/vllm-ascend/pull/877) is merged and released, you can cherry-pick this commit for now.
+:::
 ```bash
 vllm serve /home/models/QwQ-32B-w8a8 --tensor-parallel-size 4 --served-model-name "qwq-32b-w8a8" --max-model-len 4096 --quantization ascend
 ```
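The note in the second hunk says the commit from [PR #877](https://github.com/vllm-project/vllm-ascend/pull/877) can be cherry-picked until it ships in a release. A minimal sketch of one way to do that, assuming a local vllm-ascend checkout and GitHub's `pull/<id>/head` refs (the note does not give the commit hash itself):

```bash
# From inside a local vllm-ascend clone: fetch the head of PR #877
# via GitHub's pull/<id>/head ref and cherry-pick it onto the current branch.
git fetch https://github.com/vllm-project/vllm-ascend pull/877/head
git cherry-pick FETCH_HEAD
```

Once the server from the last hunk is up, a quick smoke test is to hit vLLM's OpenAI-compatible completions endpoint. The sketch below assumes the default host and port (`localhost:8000`); the model name matches `--served-model-name` above:

```bash
# Query the OpenAI-compatible API (assumes vLLM's default port 8000;
# the model name matches --served-model-name in the serve command above).
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwq-32b-w8a8",
    "prompt": "Hello, my name is",
    "max_tokens": 32
  }'
```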
