Commit a7353bb: Refine performance directory (#1017)
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>

# ChatQnA Benchmarking

This folder contains a collection of Kubernetes manifest files for deploying the ChatQnA service across scalable nodes. It includes a comprehensive [benchmarking tool](https://github.com/opea-project/GenAIEval/blob/main/evals/benchmark/README.md) that enables throughput analysis to assess inference performance.

By following this guide, you can run benchmarks on your deployment and share the results with the OPEA community.

## Purpose

We aim to run these benchmarks and share them with the OPEA community for three primary reasons:

- To offer insights on inference throughput in real-world scenarios, helping you choose the best service or deployment for your needs.
- To establish a baseline for validating optimization solutions across different implementations, providing clear guidance on which methods are most effective for your use case.
- To inspire the community to build upon our benchmarks, allowing us to better quantify new solutions in conjunction with current leading LLMs, serving frameworks, etc.

## Metrics

The benchmark reports the following metrics:

- Number of Concurrent Requests
- End-to-End Latency: P50, P90, P99 (in milliseconds)
- End-to-End First Token Latency: P50, P90, P99 (in milliseconds)
- Average Next Token Latency (in milliseconds)
- Average Token Latency (in milliseconds)
- Requests Per Second (RPS)
- Output Tokens Per Second
- Input Tokens Per Second

Results are displayed in the terminal and saved as a CSV file named `1_stats.csv` for easy export to spreadsheets.

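Once a run completes, a quick way to inspect the CSV in the terminal before importing it into a spreadsheet is sketched below; the path is an example, since `1_stats.csv` is written under the benchmark output directory described later:

```bash
# Pretty-print the comma-separated stats as aligned columns (adjust the path to your run)
column -s, -t < 1_stats.csv | less -S
```
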
## Table of Contents

- [Deployment](#deployment)
  - [Prerequisites](#prerequisites)
  - [Deployment Scenarios](#deployment-scenarios)
    - [Case 1: Baseline Deployment with Rerank](#case-1-baseline-deployment-with-rerank)
    - [Case 2: Baseline Deployment without Rerank](#case-2-baseline-deployment-without-rerank)
    - [Case 3: Tuned Deployment with Rerank](#case-3-tuned-deployment-with-rerank)
- [Benchmark](#benchmark)
  - [Test Configurations](#test-configurations)
  - [Test Steps](#test-steps)
    - [Upload Retrieval File](#upload-retrieval-file)
    - [Run Benchmark Test](#run-benchmark-test)
    - [Data collection](#data-collection)
- [Teardown](#teardown)

## Deployment

### Prerequisites

- Kubernetes installation: Use [kubespray](https://github.com/opea-project/docs/blob/main/guide/installation/k8s_install/k8s_install_kubespray.md) or other official Kubernetes installation guides.
- Helm installation: Follow the [Helm documentation](https://helm.sh/docs/intro/install/#helm) to install Helm.
- Setup Hugging Face Token

  To access models and APIs from Hugging Face, set your token as an environment variable.
  ```bash
  export HF_TOKEN="insert-your-huggingface-token-here"
  ```
- Prepare Shared Models (Optional but Strongly Recommended)

  Downloading models simultaneously to multiple nodes in your cluster can overload resources such as network bandwidth, memory, and storage. To prevent resource exhaustion, it is recommended to preload the models in advance.
  ```bash
  pip install -U "huggingface_hub[cli]"
  sudo mkdir -p /mnt/models
  sudo chmod 777 /mnt/models
  huggingface-cli download --cache-dir /mnt/models Intel/neural-chat-7b-v3-3
  export MODEL_DIR=/mnt/models
  ```
  Once the models are downloaded, you can consider the following methods for sharing them across nodes:
  - Persistent Volume Claim (PVC): This is the recommended approach for production setups. For more details on using PVC, refer to [PVC](https://github.com/opea-project/GenAIInfra/blob/main/helm-charts/README.md#using-persistent-volume).
  - Local Host Path: For simpler testing, ensure that each node involved in the deployment follows the steps above to locally prepare the models. After preparing the models, use `--set global.modelUseHostPath=${MODEL_DIR}` in the deployment command.

- Add OPEA Helm Repository:
  ```bash
  python deploy.py --add-repo
  ```
- Label Nodes:
  ```bash
  python deploy.py --add-label --num-nodes 2
  ```

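Optionally, sanity-check the prerequisites before deploying. The commands below are a minimal sketch; they do not assume the exact label key that `deploy.py --add-label` applies, so inspect the label output yourself:

```bash
# Confirm the shared model cache is populated on this node
du -sh /mnt/models

# Confirm the OPEA Helm repository has been added
helm repo list

# Inspect node labels to confirm the benchmark nodes were labeled
kubectl get nodes --show-labels
```
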
### Deployment Scenarios

The examples below are based on a two-node setup. You can adjust the number of nodes with the `--num-nodes` option.

By default, these commands use the `default` namespace. To specify a different namespace, use the `--namespace` flag with the deploy, uninstall, and Kubernetes commands. Additionally, update the `namespace` field in `benchmark.yaml` before running the benchmark test.

For additional configuration options, run `python deploy.py --help`.

#### Case 1: Baseline Deployment with Rerank

Deploy Command (with node number, Hugging Face token, and model directory specified):
```bash
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --with-rerank
```
Uninstall Command:
```bash
python deploy.py --uninstall
```

#### Case 2: Baseline Deployment without Rerank

```bash
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2
```

#### Case 3: Tuned Deployment with Rerank

```bash
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --with-rerank --tuned
```

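After running any of the deployment cases above, you can block until the pods are ready instead of polling `kubectl get pods` manually. A minimal sketch, assuming the `default` namespace and a 10-minute timeout:

```bash
# Wait for every pod in the namespace to report Ready before benchmarking
kubectl wait --for=condition=Ready pod --all --namespace=default --timeout=600s
```
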
## Benchmark

### Test Configurations

| Key      | Value   |
| -------- | ------- |
| Workload | ChatQnA |
| Tag      | V1.1    |

Model configuration
| Key       | Value                     |
| --------- | ------------------------- |
| Embedding | BAAI/bge-base-en-v1.5     |
| Reranking | BAAI/bge-reranker-base    |
| Inference | Intel/neural-chat-7b-v3-3 |

Benchmark parameters
| Key               | Value |
| ----------------- | ----- |
| LLM input tokens  | 1024  |
| LLM output tokens | 128   |

Number of test requests for different scheduled node counts:
| Node count | Concurrency | Query number |
| ---------- | ----------- | ------------ |
| 1          | 128         | 640          |
| 2          | 256         | 1280         |
| 4          | 512         | 2560         |

More detailed configuration can be found in the configuration file [benchmark.yaml](./benchmark.yaml).

### Test Steps

Use `kubectl get pods` to confirm that all pods are `READY` before starting the test.

#### Upload Retrieval File

Before testing, upload the specified file to ensure that the LLM input has a token length of 1K.

Get files:

```bash
wget https://raw.githubusercontent.com/opea-project/GenAIEval/main/evals/benchmark/data/upload_file_no_rerank.txt
wget https://raw.githubusercontent.com/opea-project/GenAIEval/main/evals/benchmark/data/upload_file.txt
```

Retrieve the `ClusterIP` of the `chatqna-data-prep` service.

```bash
kubectl get svc
```
Expected output:
```log
chatqna-data-prep   ClusterIP   xx.xx.xx.xx   <none>   6007/TCP   51m
```

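Instead of copying the address by hand, you can capture it into the `cluster_ip` variable used by the upload commands below; a minimal sketch assuming the service lives in the current namespace:

```bash
# Store the ClusterIP of the dataprep service for the curl commands that follow
cluster_ip=$(kubectl get svc chatqna-data-prep -o jsonpath='{.spec.clusterIP}')
echo "${cluster_ip}"
```
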
Use the following `curl` command to upload the file:

```bash
cd GenAIEval/evals/benchmark/data
# RAG with Rerank
curl -X POST "http://${cluster_ip}:6007/v1/dataprep" \
     -H "Content-Type: multipart/form-data" \
     -F "files=@./upload_file.txt"
# RAG without Rerank
curl -X POST "http://${cluster_ip}:6007/v1/dataprep" \
     -H "Content-Type: multipart/form-data" \
     -F "files=@./upload_file_no_rerank.txt"
```

#### Run Benchmark Test

Run the benchmark test using:
```bash
bash benchmark.sh -n 2
```
The `-n` argument specifies the number of test nodes. Required dependencies will be automatically installed when running the benchmark for the first time.

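The script also honors a few environment variables (`USER_QUERIES`, `TEST_OUTPUT_DIR`, `WARMUP`; see `benchmark.sh` below) that override its computed defaults. A sketch of overriding them for a two-node run; the values shown are illustrative only:

```bash
# Override the per-stage query counts and the output directory for this run
USER_QUERIES="[1280, 1280, 1280, 1280]" \
TEST_OUTPUT_DIR="$(pwd)/my_benchmark_output" \
bash benchmark.sh -n 2
```
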
#### Data collection

All test results are saved in the folder `GenAIEval/evals/benchmark/benchmark_output`.

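To locate the per-run statistics after the test finishes, you can list the output tree and search for the CSV described in the Metrics section; a minimal sketch for a two-node run:

```bash
# Inspect the benchmark output for a 2-node run and find the stats CSV
ls -R GenAIEval/evals/benchmark/benchmark_output/node_2
find GenAIEval/evals/benchmark/benchmark_output -name "1_stats.csv"
```
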
## Teardown

After completing the benchmark, use the following commands to clean up the environment:

Remove Node Labels:
```bash
python deploy.py --delete-label
```
Delete the OPEA Helm Repository:
```bash
python deploy.py --delete-repo
```

benchmark.sh:

```bash
#!/bin/bash

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

# Default settings
deployment_type="k8s"
node_number=1
service_port=8888
service_port_set=false   # becomes true only when -p is passed explicitly
query_per_node=640

benchmark_tool_path="$(pwd)/GenAIEval"

usage() {
    echo "Usage: $0 [-d deployment_type] [-n node_number] [-i service_ip] [-p service_port]"
    echo "  -d deployment_type   ChatQnA deployment type, select between k8s and docker (default: k8s)"
    echo "  -n node_number       Test node number, required only for k8s deployment_type (default: 1)"
    echo "  -i service_ip        chatqna service ip, required only for docker deployment_type"
    echo "  -p service_port      chatqna service port, required only for docker deployment_type (default: 8888)"
    exit 1
}

# Parse command-line options
while getopts ":d:n:i:p:" opt; do
    case ${opt} in
        d )
            deployment_type=$OPTARG
            ;;
        n )
            node_number=$OPTARG
            ;;
        i )
            service_ip=$OPTARG
            ;;
        p )
            service_port=$OPTARG
            service_port_set=true
            ;;
        \? )
            echo "Invalid option: -$OPTARG" 1>&2
            usage
            ;;
        : )
            echo "Invalid option: -$OPTARG requires an argument" 1>&2
            usage
            ;;
    esac
done

# Validate option combinations
if [[ "$deployment_type" == "docker" && -z "$service_ip" ]]; then
    echo "Error: service_ip is required for docker deployment_type" 1>&2
    usage
fi

if [[ "$deployment_type" == "k8s" && ( -n "$service_ip" || "$service_port_set" == true ) ]]; then
    echo "Warning: service_ip and service_port are ignored for k8s deployment_type" 1>&2
fi

function main() {
    if [[ ! -d ${benchmark_tool_path} ]]; then
        echo "Benchmark tool not found, setting up..."
        setup_env
    fi
    run_benchmark
}

# Clone GenAIEval and install its dependencies into a dedicated virtual environment
function setup_env() {
    git clone https://github.com/opea-project/GenAIEval.git
    pushd ${benchmark_tool_path}
    python3 -m venv stress_venv
    source stress_venv/bin/activate
    pip install -r requirements.txt
    popd
}

# Export the variables consumed by benchmark.yaml, render it with envsubst, and launch the test
function run_benchmark() {
    source ${benchmark_tool_path}/stress_venv/bin/activate
    export DEPLOYMENT_TYPE=${deployment_type}
    export SERVICE_IP=${service_ip:-"None"}
    export SERVICE_PORT=${service_port:-"None"}
    if [[ -z $USER_QUERIES ]]; then
        user_query=$((query_per_node*node_number))
        export USER_QUERIES="[${user_query}, ${user_query}, ${user_query}, ${user_query}]"
        echo "USER_QUERIES not configured, setting to: ${USER_QUERIES}."
    fi
    # Warm up with the first entry of USER_QUERIES
    export WARMUP=$(echo $USER_QUERIES | sed -e 's/[][]//g' -e 's/,.*//')
    if [[ -z $WARMUP ]]; then export WARMUP=0; fi
    if [[ -z $TEST_OUTPUT_DIR ]]; then
        if [[ $DEPLOYMENT_TYPE == "k8s" ]]; then
            export TEST_OUTPUT_DIR="${benchmark_tool_path}/evals/benchmark/benchmark_output/node_${node_number}"
        else
            export TEST_OUTPUT_DIR="${benchmark_tool_path}/evals/benchmark/benchmark_output/docker"
        fi
        echo "TEST_OUTPUT_DIR not configured, setting to: ${TEST_OUTPUT_DIR}."
    fi

    envsubst < ./benchmark.yaml > ${benchmark_tool_path}/evals/benchmark/benchmark.yaml
    cd ${benchmark_tool_path}/evals/benchmark
    python benchmark.py
}

main
```
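
For reference, here are typical invocations of this script; the IP address is illustrative only:

```bash
# Kubernetes deployment benchmarked across 2 nodes
bash benchmark.sh -n 2

# Docker deployment, pointing at the ChatQnA service endpoint
bash benchmark.sh -d docker -i 192.168.1.10 -p 8888
```
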

benchmark.yaml:

```yaml
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

test_suite_config: # Overall configuration settings for the test suite
  examples: ["chatqna"] # The specific test cases being tested, e.g., chatqna, codegen, codetrans, faqgen, audioqna, visualqna
  deployment_type: ${DEPLOYMENT_TYPE} # Default is "k8s", can also be "docker"
  service_ip: ${SERVICE_IP} # Leave as None for k8s, specify for Docker
  service_port: ${SERVICE_PORT} # Leave as None for k8s, specify for Docker
  warm_ups: ${WARMUP} # Number of test requests for warm-up
  run_time: 60m # The max total run time for the test suite
  seed: # The seed for all RNGs
  user_queries: ${USER_QUERIES} # Number of test requests at each concurrency level
  query_timeout: 120 # Number of seconds to wait for a simulated user to complete any executing task before exiting. 120 sec by default.
  random_prompt: false # Use random prompts if true, fixed prompts if false
  collect_service_metric: false # Collect service metrics if true, do not collect service metrics if false
  data_visualization: false # Generate data visualization if true, do not generate data visualization if false
  llm_model: "Intel/neural-chat-7b-v3-3" # The LLM model used for the test
  test_output_dir: "${TEST_OUTPUT_DIR}" # The directory to store the test output
  load_shape: # Tenant concurrency pattern
    name: constant # poisson or constant (locust default load shape)
    params: # Loadshape-specific parameters
      constant: # Constant load shape specific parameters, activate only if load_shape.name is constant
        concurrent_level: 5 # If user_queries is specified, concurrent_level is the target number of requests per user. If not, it is the number of simulated users
        # arrival_rate: 1.0 # Request arrival rate. If set, concurrent_level will be overridden and a constant load will be generated based on arrival_rate
      poisson: # Poisson load shape specific parameters, activate only if load_shape.name is poisson
        arrival_rate: 1.0 # Request arrival rate
  namespace: "my-chatqna"

test_cases:
  chatqna:
    embedding:
      run_test: false
      service_name: "chatqna-embedding-usvc" # Replace with your service name
    embedserve:
      run_test: false
      service_name: "chatqna-tei" # Replace with your service name
    retriever:
      run_test: false
      service_name: "chatqna-retriever-usvc" # Replace with your service name
      parameters:
        search_type: "similarity"
        k: 4
        fetch_k: 20
        lambda_mult: 0.5
        score_threshold: 0.2
    reranking:
      run_test: false
      service_name: "chatqna-reranking-usvc" # Replace with your service name
      parameters:
        top_n: 1
    rerankserve:
      run_test: false
      service_name: "chatqna-teirerank" # Replace with your service name
    llm:
      run_test: false
      service_name: "chatqna-llm-uservice" # Replace with your service name
      parameters:
        max_tokens: 128
        temperature: 0.01
        top_k: 10
        top_p: 0.95
        repetition_penalty: 1.03
        streaming: true
    llmserve:
      run_test: false
      service_name: "chatqna-tgi" # Replace with your service name
    e2e:
      run_test: true
      service_name: "chatqna" # Replace with your service name
```
