
Commit 64cf295

[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
1 parent 9a2d9cf commit 64cf295

File tree

2 files changed: +4 −4 lines changed


CodeGen/README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -107,8 +107,8 @@ Currently we support two ways of deploying ChatQnA services with docker compose:

 By default, the LLM model is set to a default value as listed below:

-| Service      | Model |
-| ------------ | --------------------------------------------------------------------------------------- |
+| Service      | Model |
+| ------------ | ----------------------------------------------------------------------------------------- |
 | LLM_MODEL_ID | [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) |

 [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) may be a gated model that requires submitting an access request through Hugging Face. You can replace it with another model for m.
```
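The diff above notes that the default model may be gated and can be replaced. A minimal sketch of swapping in another model before deployment, assuming (based on the README's table) that the compose setup reads `LLM_MODEL_ID` from the environment; the 7B model name here is only an illustrative substitute, not a value from this commit:

```shell
# Override the default model before starting the stack.
# LLM_MODEL_ID is the variable named in the README table; whether the
# compose file consumes it from the environment is an assumption.
export LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"
docker compose up -d
```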

CodeGen/tests/test_compose_on_xeon.sh

Lines changed: 2 additions & 2 deletions (per the commit message, these pre-commit hunks strip trailing whitespace, so the removed and added lines differ only in invisible characters)

```diff
@@ -51,7 +51,7 @@ function build_docker_images() {
     service_list="codegen codegen-gradio-ui llm-textgen vllm dataprep retriever embedding"

     docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
-
+
     docker pull ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu
     docker images && sleep 1s
 }
@@ -83,7 +83,7 @@ function start_services() {

     # Start Docker Containers
     docker compose --profile ${compose_profile} up -d > ${LOG_PATH}/start_services_with_compose.log
-
+
     n=0
     until [[ "$n" -ge 100 ]]; do
         docker logs ${llm_container_name} > ${LOG_PATH}/llm_service_start.log 2>&1
```
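The `start_services()` hunk above waits for the LLM container with an `until` loop that polls `docker logs` up to 100 times. A generic sketch of that readiness-wait pattern, where `wait_until_ready`, the probe command, and the interval parameter are illustrative names, not identifiers from the script:

```shell
# Poll a readiness probe until it succeeds or attempts run out.
#   $1: probe command to run (exit 0 = ready)
#   $2: max attempts (default 100, as in the test script's loop)
#   $3: seconds between attempts (default 1)
wait_until_ready() {
  local probe=$1 max=${2:-100} interval=${3:-1} n=0
  until [ "$n" -ge "$max" ]; do
    if $probe; then
      return 0   # probe succeeded: service is up
    fi
    n=$((n + 1))
    sleep "$interval"
  done
  return 1       # gave up after $max attempts
}
```

In the script itself the probe is effectively "grep the container logs for a ready message"; parameterizing it as a command keeps the loop reusable for any health check.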
