Commit 6193ba6

[CI] add codespell CI and fix format.sh (vllm-project#827)

1. Fix format check error to make format.sh work
2. Add codespell check CI
3. Add the missing required package for vllm-ascend

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

1 parent 5998704 commit 6193ba6

File tree

11 files changed (+60, -8 lines)


.github/workflows/codespell.yml

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
+#
+# Copyright 2023 The vLLM team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# Adapted from vllm-project/vllm/blob/main/.github
+#
+
+name: codespell
+
+on:
+  pull_request:
+    branches:
+      - 'main'
+      - '*-dev'
+
+jobs:
+  codespell:
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        python-version: ["3.12"]
+    steps:
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install -r requirements-lint.txt
+      - name: Run codespell check
+        run: |
+          CODESPELL_EXCLUDES=('--skip' 'tests/prompts/**,./benchmarks/sonnet.txt,*tests/lora/data/**,build/**,./vllm_ascend.egg-info/**')
+          CODESPELL_IGNORE_WORDS=('-L' 'CANN,cann,NNAL,nnal,ASCEND,ascend,EnQue')
+
+          codespell --toml pyproject.toml "${CODESPELL_EXCLUDES[@]}" "${CODESPELL_IGNORE_WORDS[@]}"
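To reproduce this check locally before pushing, the same invocation can be wrapped in a small script. A minimal sketch, assuming codespell is installed (e.g. via `pip install -r requirements-lint.txt`) and the script runs from the repository root:

```python
# Sketch: run the same codespell check the CI workflow runs.
# Assumes codespell is on PATH (installed via requirements-lint.txt)
# and the working directory is the repository root.
import subprocess
import sys

SKIP = ("tests/prompts/**,./benchmarks/sonnet.txt,"
        "*tests/lora/data/**,build/**,./vllm_ascend.egg-info/**")
IGNORE_WORDS = "CANN,cann,NNAL,nnal,ASCEND,ascend,EnQue"

# Mirror the workflow: config from pyproject.toml plus the same
# skip globs and ignore-words list, so local runs match CI.
result = subprocess.run([
    "codespell", "--toml", "pyproject.toml",
    "--skip", SKIP,
    "-L", IGNORE_WORDS,
])
sys.exit(result.returncode)
```

Keeping the skip globs and ignore-words list identical to the workflow avoids drift between local runs and CI.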

docs/source/developer_guide/versioning_policy.md

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ Usually, each minor version of vLLM (such as 0.7) will correspond to a vLLM Asce
 
 For main branch, vLLM Ascend should works with vLLM main branch and latest 1 or 2 release version. So to ensure the backward compatibility, we will do the following:
 - Both main branch and target vLLM release is tested by Ascend E2E CI. For example, currently, vLLM main branch and vLLM 0.8.4 are tested now.
-- For code changes, we will make sure that the changes are compatible with the latest 1 or 2 vLLM release version as well. In this case, vLLM Ascend introduced a version check machinism inner the code. It'll check the version of installed vLLM pacakge first to decide which code logic to use. If users hit the `InvalidVersion` error, it sometimes means that they have installed an dev/editable version of vLLM package. In this case, we provide the env variable `VLLM_VERSION` to let users specify the version of vLLM package to use.
+- For code changes, we will make sure that the changes are compatible with the latest 1 or 2 vLLM release version as well. In this case, vLLM Ascend introduced a version check machinism inner the code. It'll check the version of installed vLLM package first to decide which code logic to use. If users hit the `InvalidVersion` error, it sometimes means that they have installed an dev/editable version of vLLM package. In this case, we provide the env variable `VLLM_VERSION` to let users specify the version of vLLM package to use.
 - For documentation changes, we will make sure that the changes are compatible with the latest 1 or 2 vLLM release version as well. Note should be added if there are any breaking changes.
 
 ## Document Branch Policy
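The version-check mechanism this document describes can be pictured with a short sketch. This is illustrative only; the helper name `vllm_version_matches` is hypothetical, while the real check is `vllm_version_is()` in `vllm_ascend/utils.py` (touched later in this commit):

```python
# Illustrative sketch of the version-check pattern described above.
# The function name is hypothetical; the real implementation is
# vllm_version_is() in vllm_ascend/utils.py.
import os

from packaging.version import InvalidVersion, Version
from vllm import __version__ as installed_version


def vllm_version_matches(target: str) -> bool:
    # VLLM_VERSION lets users override detection when a dev/editable
    # vLLM install has a version string that cannot be parsed.
    version_str = os.getenv("VLLM_VERSION", installed_version)
    try:
        return Version(version_str) == Version(target)
    except InvalidVersion:
        raise ValueError(
            f"Invalid vllm version {version_str} found. A dev version of "
            "vllm is probably installed. Set the environment variable "
            "VLLM_VERSION (format x.y.z) to control it by hand.")
```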

docs/source/faqs.md

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ Currently, only 1P1D is supported by vllm. For vllm-ascend, it'll be done by [th
 
 ### 10. Does vllm-ascend support quantization method?
 
-Currently, w8a8 quantization is already supported by vllm-ascend originally on v0.8.4rc2 or heigher, If you're using vllm 0.7.3 version, w8a8 quantization is supporeted with the integration of vllm-ascend and mindie-turbo, please use `pip install vllm-ascend[mindie-turbo]`.
+Currently, w8a8 quantization is already supported by vllm-ascend originally on v0.8.4rc2 or higher, If you're using vllm 0.7.3 version, w8a8 quantization is supporeted with the integration of vllm-ascend and mindie-turbo, please use `pip install vllm-ascend[mindie-turbo]`.
 
 ### 11. How to run w8a8 DeepSeek model?
 

docs/source/tutorials/multi_npu_quantization.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 ## Run docker container:
 
 :::{note}
-w8a8 quantization feature is supported by v0.8.4rc2 or highter
+w8a8 quantization feature is supported by v0.8.4rc2 or higher
 :::
 
 ```{code-block} bash

docs/source/user_guide/release_notes.md

Lines changed: 2 additions & 2 deletions
@@ -33,8 +33,8 @@ This is the second release candidate of v0.8.4 for vllm-ascend. Please follow th
 - DeepSeek V3/R1 works with DP, TP and MTP now. Please note that it's still in experimental status. Let us know if you hit any problem. [#429](https://github.com/vllm-project/vllm-ascend/pull/429) [#585](https://github.com/vllm-project/vllm-ascend/pull/585) [#626](https://github.com/vllm-project/vllm-ascend/pull/626) [#636](https://github.com/vllm-project/vllm-ascend/pull/636) [#671](https://github.com/vllm-project/vllm-ascend/pull/671)
 
 ### Core
-- ACLGraph feature is supported with V1 engine now. It's disabled by default because this feature rely on CANN 8.1 release. We'll make it avaiable by default in the next release [#426](https://github.com/vllm-project/vllm-ascend/pull/426)
-- Upgrade PyTorch to 2.5.1. vLLM Ascend no longer relies on the dev version of torch-npu now. Now users don't need to install the torch-npu by hand. The 2.5.1 version of torch-npu will be installed automaticlly. [#661](https://github.com/vllm-project/vllm-ascend/pull/661)
+- ACLGraph feature is supported with V1 engine now. It's disabled by default because this feature rely on CANN 8.1 release. We'll make it available by default in the next release [#426](https://github.com/vllm-project/vllm-ascend/pull/426)
+- Upgrade PyTorch to 2.5.1. vLLM Ascend no longer relies on the dev version of torch-npu now. Now users don't need to install the torch-npu by hand. The 2.5.1 version of torch-npu will be installed automatically. [#661](https://github.com/vllm-project/vllm-ascend/pull/661)
 
 ### Other
 - MiniCPM model works now. [#645](https://github.com/vllm-project/vllm-ascend/pull/645)

examples/disaggregated_prefill/dp_proxy.py

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@ def metadata_collect_trigger(poller, router_socket):
         start_time = time.time()
         socks = dict(poller.poll(timeout=500))  # timeout in 500ms
         if socks:
-            logger.debug("receive socks from moniter threads: ", socks)
+            logger.debug("receive socks from monitor threads: ", socks)
         if router_socket in socks:
             messages = router_socket.recv_multipart()
             try:
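For context, `metadata_collect_trigger` follows the standard pyzmq poller pattern. Below is a self-contained sketch of that pattern with a hypothetical bind address; it is not the proxy's actual setup:

```python
# Sketch of the pyzmq ROUTER + Poller pattern used above.
# The bind address is hypothetical.
import zmq

ctx = zmq.Context()
router_socket = ctx.socket(zmq.ROUTER)
router_socket.bind("tcp://127.0.0.1:5555")

poller = zmq.Poller()
poller.register(router_socket, zmq.POLLIN)

while True:
    # poll() returns the sockets that became readable within the timeout (ms).
    socks = dict(poller.poll(timeout=500))
    if router_socket in socks:
        messages = router_socket.recv_multipart()
        # On a ROUTER socket the first frame is the sender identity.
        identity, *frames = messages
```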

pyproject.toml

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@
 requires = [
     "cmake>=3.26",
     "decorator",
+    "einops",
     "numpy<2.0.0",
     "packaging",
     "pip",

requirements-dev.txt

Lines changed: 3 additions & 0 deletions
@@ -1,8 +1,11 @@
 -r requirements-lint.txt
 -r requirements.txt
 modelscope
+openai
 pytest >= 6.0
 pytest-asyncio
 lm-eval
 ray
 types-jsonschema
+xgrammar
+zmq

requirements.txt

Lines changed: 1 addition & 0 deletions
@@ -1,6 +1,7 @@
 # Should be mirrored in pyporject.toml
 cmake>=3.26
 decorator
+einops
 numpy<2.0.0
 packaging
 pip

vllm_ascend/distributed/kv_transfer/simple_buffer.py

Lines changed: 1 addition & 1 deletion
@@ -199,7 +199,7 @@ def drop_select(
             hidden = hidden.view(num_tokens, self.hidden_size)
         except Exception as e:
             logger.warning(
-                f"Faile to receive kv cache and hidden states of request: {orig_req_id} "
+                f"Fail to receive kv cache and hidden states of request: {orig_req_id} "
                 f"Error is {str(e)}")
             return [None, None, None, None]
 

vllm_ascend/utils.py

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@ def vllm_version_is(target_vllm_version: str):
         raise ValueError(
             f"Invalid vllm version {vllm_version} found. A dev version of vllm "
             "is installed probably. Set the environment variable VLLM_VERSION "
-            "to control it by hand. And please make sure the vaule follows the "
+            "to control it by hand. And please make sure the value follows the "
             "format of x.y.z.")
 
