ChatQnA/benchmark/performance/kubernetes/intel/gaudi/README.md
Lines changed: 2 additions & 10 deletions
````diff
@@ -69,10 +69,6 @@ Results will be displayed in the terminal and saved as CSV file named `1_stats.c
 - Persistent Volume Claim (PVC): This is the recommended approach for production setups. For more details on using PVC, refer to [PVC](https://github.com/opea-project/GenAIInfra/blob/main/helm-charts/README.md#using-persistent-volume).
 - Local Host Path: For simpler testing, ensure that each node involved in the deployment follows the steps above to locally prepare the models. After preparing the models, use `--set global.modelUseHostPath=${MODELDIR}` in the deployment command.
 
-- Add OPEA Helm Repository:
-  ```bash
-  python deploy.py --add-repo
-  ```
 - Label Nodes
 ```base
 python deploy.py --add-label --num-nodes 2
````
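For context on the `--set global.modelUseHostPath=${MODELDIR}` value mentioned in the Local Host Path bullet above, a minimal deployment sketch is shown below. It assumes a Helm-based install of the ChatQnA chart; the chart reference, release name, namespace, and model directory are illustrative placeholders, not taken from this diff.

```bash
# Hedged sketch: pass a locally prepared model directory to the chart.
# Chart path, release name, namespace, and MODELDIR are assumptions.
export MODELDIR=/mnt/models          # directory prepared on every node
helm install chatqna ./chatqna \
  --namespace benchmark --create-namespace \
  --set global.modelUseHostPath=${MODELDIR}
```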
````diff
@@ -192,13 +188,9 @@ All the test results will come to the folder `GenAIEval/evals/benchmark/benchmar
 
 ## Teardown
 
-After completing the benchmark, use the following commands to clean up the environment:
+After completing the benchmark, use the following command to clean up the environment:
````
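The specific cleanup command referenced here is defined later in the README and is not part of this hunk. Purely for orientation, a hedged sketch of a typical cleanup for a Helm-based benchmark deployment follows; the release name, namespace, and label key are placeholders, not the README's actual command.

```bash
# Hedged cleanup sketch only; not the README's command.
# Release name, namespace, and label key are placeholders.
helm uninstall chatqna --namespace benchmark
kubectl label nodes --all node-type-   # trailing '-' removes the hypothetical "node-type" label
```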