# Introduction

This document outlines the benchmarking methodology for vllm-ascend, designed to evaluate its performance under a variety of workloads. The primary goal is to help developers assess whether their pull requests improve or degrade vllm-ascend's performance. To maintain alignment with vLLM, we use the [benchmark](https://github.com/vllm-project/vllm/tree/main/benchmarks) scripts provided by the vllm project.

# Overview

**Benchmarking Coverage**: We measure latency, throughput, and fixed-QPS serving on the Atlas800I A2 (see the [quick_start](../docs/source/quick_start.md) for the list of supported devices), with different models (coming soon).

- Latency tests
  - Input length: 32 tokens.
  - Output length: 128 tokens.
  - Batch size: fixed (8).
  - Models: Meta-Llama-3.1-8B-Instruct, Qwen2.5-7B-Instruct.
  - Evaluation metrics: end-to-end latency (mean, median, p99).

- Throughput tests
  - Input length: randomly sample 200 prompts from the ShareGPT dataset (with a fixed random seed).
  - Output length: the corresponding output length of these 200 prompts.
  - Batch size: dynamically determined by vllm to achieve maximum throughput.
  - Models: Meta-Llama-3.1-8B-Instruct, Qwen2.5-7B-Instruct.
  - Evaluation metrics: throughput.

- Serving tests
  - Input length: randomly sample 200 prompts from the ShareGPT dataset (with a fixed random seed).
  - Output length: the corresponding output length of these 200 prompts.
  - Batch size: dynamically determined by vllm and the arrival pattern of the requests.
  - **Average QPS (query per second)**: 1, 4, 16 and inf. QPS = inf means all requests come at once. For other QPS values, the arrival time of each query is determined by a random Poisson process (with a fixed random seed), as sketched after this list.
  - Models: Meta-Llama-3.1-8B-Instruct, Qwen2.5-7B-Instruct.
  - Evaluation metrics: throughput, TTFT (time to first token, with mean, median and p99), ITL (inter-token latency, with mean, median and p99).

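To make the Poisson arrival pattern concrete, the snippet below is a minimal sketch (not taken from the benchmark scripts) of how request send times for a given QPS can be generated, assuming NumPy is available; the actual serving benchmark may implement this differently.

```
import numpy as np

# Illustrative sketch only: a Poisson arrival process has exponentially
# distributed inter-arrival gaps with mean 1/QPS. The cumulative sum of
# the gaps gives the time at which each request is sent.
rng = np.random.default_rng(seed=0)  # fixed seed keeps the pattern reproducible
qps = 4.0
num_requests = 200

gaps = rng.exponential(scale=1.0 / qps, size=num_requests)
arrival_times = gaps.cumsum()  # seconds at which each request is issued

# QPS = inf corresponds to zero gaps: all requests are sent at t = 0.
```

Because the seed is fixed, the arrival pattern is identical across runs, so results from different pull requests stay comparable.
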
**Benchmarking Duration**: about 800 seconds for a single model.

# Quick Use

## Prerequisites

Before running the benchmarks, ensure the following:
- vllm and vllm-ascend are installed and properly set up in an NPU environment, as these scripts are specifically designed for NPU devices.

- Install the necessary dependencies for the benchmarks:
```
pip install -r benchmarks/requirements-bench.txt
```

- For performance benchmarks, it is recommended to set the [load-format](https://github.com/vllm-project/vllm-ascend/blob/5897dc5bbe321ca90c26225d0d70bff24061d04b/benchmarks/tests/latency-tests.json#L7) to `dummy`. This constructs random weights based on the passed model instead of downloading the weights from the internet, which greatly reduces benchmark time. Feel free to add your own models and parameters to the JSON files under `benchmarks/tests` to run customized benchmarks, as sketched below.

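For orientation, a latency test entry in those JSON files looks roughly like the example below. This is an illustrative sketch only: the field names and values are assumptions based on the linked `latency-tests.json`, and the file itself remains the authoritative schema.

```
[
  {
    "test_name": "latency_llama8B_tp1",
    "parameters": {
      "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
      "tensor_parallel_size": 1,
      "load_format": "dummy",
      "num_iters_warmup": 5,
      "num_iters": 15
    }
  }
]
```

Setting `load_format` to `dummy` as above is what skips the weight download described earlier; swapping in a different `model` or adding parameters is how customized benchmarks are defined.
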
## Run benchmarks

The provided scripts automatically execute performance tests for serving, throughput, and latency. To start the benchmarking process, run the following command in the vllm-ascend root directory: