
Commit 5c7a5bd

Update Code and README for GenAIComps Refactor (#1285)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Signed-off-by: chensuyue <suyue.chen@intel.com>
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Signed-off-by: letonghan <letong.han@intel.com>
Signed-off-by: ZePan110 <ze.pan@intel.com>
Signed-off-by: WenjiaoYue <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>
1 parent 72f8079 commit 5c7a5bd

File tree

103 files changed: +652, -435 lines


.github/workflows/_get-test-matrix.yml

Lines changed: 3 additions & 1 deletion
@@ -60,9 +60,11 @@ jobs:
            base_commit=$(git rev-parse HEAD~1) # push event
          fi
          merged_commit=$(git log -1 --format='%H')
+         echo "print all changed files..."
+         git diff --name-only ${base_commit} ${merged_commit}
          changed_files="$(git diff --name-only ${base_commit} ${merged_commit} | \
            grep -vE '${{ inputs.diff_excluded_files }}')" || true
-         echo "changed_files=$changed_files"
+         echo "filtered changed_files=$changed_files"
          export changed_files=$changed_files
          export test_mode=${{ inputs.test_mode }}
          export WORKSPACE=${{ github.workspace }}
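The added `git diff` print shows the unfiltered file list before `grep -vE` strips excluded paths. A minimal standalone sketch of that filter, runnable in any git checkout (the exclusion pattern below is a hypothetical stand-in for `${{ inputs.diff_excluded_files }}`):

```bash
#!/usr/bin/env bash
# Sketch of the changed-files filter used in the workflow step above.
base_commit=$(git rev-parse HEAD~1)
merged_commit=$(git log -1 --format='%H')

echo "print all changed files..."
git diff --name-only "${base_commit}" "${merged_commit}"

# grep -vE drops excluded paths; `|| true` keeps the step from failing
# when every changed file is excluded and grep exits non-zero.
changed_files="$(git diff --name-only "${base_commit}" "${merged_commit}" | \
    grep -vE '\.md$|\.github/')" || true
echo "filtered changed_files=$changed_files"
```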

AudioQnA/docker_image_build/build.yaml

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ services:
   llm-tgi:
     build:
       context: GenAIComps
-      dockerfile: comps/llms/text-generation/tgi/Dockerfile
+      dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: audioqna
     image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
   speecht5-gaudi:

AudioQnA/tests/test_compose_on_rocm.sh

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 # Copyright (C) 2024 Advanced Micro Devices, Inc.
 # SPDX-License-Identifier: Apache-2.0

-set -ex
+set -xe
 IMAGE_REPO=${IMAGE_REPO:-"opea"}
 IMAGE_TAG=${IMAGE_TAG:-"latest"}
 echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"

AvatarChatbot/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ docker build -t opea/whisper-gaudi:latest --build-arg https_proxy=$https_proxy -

 ### 3. Build LLM Image

-Intel Xeon optimized image hosted in huggingface repo will be used for TGI service: ghcr.io/huggingface/tgi-gaudi:2.0.6 (https://github.com/huggingface/tgi-gaudi)
+Intel Gaudi optimized image hosted in huggingface repo will be used for TGI service: ghcr.io/huggingface/tgi-gaudi:2.0.6 (https://github.com/huggingface/tgi-gaudi)

 ### 4. Build TTS Image


AvatarChatbot/docker_image_build/build.yaml

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ services:
   llm-tgi:
     build:
       context: GenAIComps
-      dockerfile: comps/llms/text-generation/tgi/Dockerfile
+      dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: avatarchatbot
     image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
   speecht5-gaudi:

AvatarChatbot/tests/test_compose_on_gaudi.sh

Lines changed: 0 additions & 2 deletions
@@ -72,7 +72,6 @@ function start_services() {

     # Start Docker Containers
     docker compose up -d > ${LOG_PATH}/start_services_with_compose.log
-
     n=0
     until [[ "$n" -ge 200 ]]; do
         docker logs tgi-gaudi-server > $LOG_PATH/tgi_service_start.log
@@ -82,7 +81,6 @@ function start_services() {
         sleep 5s
         n=$((n+1))
     done
-
     echo "All services are up and running"
     sleep 5s
 }

AvatarChatbot/tests/test_compose_on_xeon.sh

Lines changed: 0 additions & 1 deletion
@@ -82,7 +82,6 @@ function start_services() {
         n=$((n+1))
     done
     echo "All services are up and running"
-    sleep 5s
 }


ChatQnA/README.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -297,7 +297,7 @@ Here is an example of `Nike 2023` pdf.
297297

298298
```bash
299299
# download pdf file
300-
wget https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf
300+
wget https://raw.githubusercontent.com/opea-project/GenAIComps/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf
301301
# upload pdf file with dataprep
302302
curl -X POST "http://${host_ip}:6007/v1/dataprep" \
303303
-H "Content-Type: multipart/form-data" \

ChatQnA/docker_compose/amd/gpu/rocm/README.md

Lines changed: 2 additions & 2 deletions
@@ -63,7 +63,7 @@ Prepare and upload test document

 ```
 # download pdf file
-wget https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf
+wget https://raw.githubusercontent.com/opea-project/GenAIComps/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf
 # upload pdf file with dataprep
 curl -X POST "http://${host_ip}:6007/v1/dataprep" \
     -H "Content-Type: multipart/form-data" \

@@ -138,7 +138,7 @@ cd ../../../..

 ```bash
 cd GenAIComps
-docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/3rd_parties/nginx/src/Dockerfile .
 ```

 Then run the command `docker images`, you will have the following 5 Docker Images:
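A quick check that the nginx image was actually rebuilt from the relocated Dockerfile (a sketch; the image name matches the `-t` flag above):

```bash
# Should list opea/nginx:latest if the build above succeeded.
docker images | grep '^opea/nginx' || echo "opea/nginx not found; rerun the build"
```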

ChatQnA/docker_compose/intel/cpu/aipc/README.md

Lines changed: 2 additions & 2 deletions
@@ -55,7 +55,7 @@ docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https

 ```bash
 cd GenAIComps
-docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/3rd_parties/nginx/src/Dockerfile .
 ```

 Then run the command `docker images`, you will have the following 6 Docker Images:

@@ -188,7 +188,7 @@ For details on how to verify the correctness of the response, refer to [how-to-v

 ```bash
 # download pdf file
-wget https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf
+wget https://raw.githubusercontent.com/opea-project/GenAIComps/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf

 # upload pdf file with dataprep
 curl -X POST "http://${host_ip}:6007/v1/dataprep" \

ChatQnA/docker_compose/intel/cpu/xeon/README.md

Lines changed: 4 additions & 4 deletions
@@ -161,7 +161,7 @@ docker build --no-cache -t opea/chatqna-conversation-ui:latest --build-arg https

 ```bash
 cd GenAIComps
-docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/3rd_parties/nginx/src/Dockerfile .
 ```

 Then run the command `docker images`, you will have the following 5 Docker Images:

@@ -356,12 +356,12 @@ For details on how to verify the correctness of the response, refer to [how-to-v

 If you want to update the default knowledge base, you can use the following commands:

-Update Knowledge Base via Local File [nke-10k-2023.pdf](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/redis/data/nke-10k-2023.pdf). Or
-click [here](https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf) to download the file via any web browser.
+Update Knowledge Base via Local File [nke-10k-2023.pdf](https://github.com/opea-project/GenAIComps/blob/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf). Or
+click [here](https://raw.githubusercontent.com/opea-project/GenAIComps/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf) to download the file via any web browser.
 Or run this command to get the file on a terminal.

 ```bash
-wget https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf
+wget https://raw.githubusercontent.com/opea-project/GenAIComps/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf

 ```

ChatQnA/docker_compose/intel/cpu/xeon/README_pinecone.md

Lines changed: 3 additions & 3 deletions
@@ -164,7 +164,7 @@ docker build --no-cache -t opea/chatqna-conversation-ui:latest --build-arg https

 ```bash
 cd GenAIComps
-docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/3rd_parties/nginx/src/Dockerfile .
 ```

 Then run the command `docker images`, you will have the following 5 Docker Images:

@@ -347,8 +347,8 @@ For details on how to verify the correctness of the response, refer to [how-to-v

 If you want to update the default knowledge base, you can use the following commands:

-Update Knowledge Base via Local File [nke-10k-2023.pdf](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/redis/data/nke-10k-2023.pdf). Or
-click [here](https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf) to download the file via any web browser.
+Update Knowledge Base via Local File [nke-10k-2023.pdf](https://github.com/opea-project/GenAIComps/blob/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf). Or
+click [here](https://raw.githubusercontent.com/opea-project/GenAIComps/v1.1/comps/retrievers/redis/data/nke-10k-2023.pdf) to download the file via any web browser.
 Or run this command to get the file on a terminal.

 ```bash

ChatQnA/docker_compose/intel/cpu/xeon/README_qdrant.md

Lines changed: 1 addition & 1 deletion
@@ -122,7 +122,7 @@ cd ../../../..

 ```bash
 cd GenAIComps
-docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/3rd_parties/nginx/src/Dockerfile .
 ```

 Then run the command `docker images`, you will have the following 5 Docker Images:

ChatQnA/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 1 addition & 1 deletion
@@ -151,7 +151,7 @@ docker build --no-cache -t opea/chatqna-conversation-ui:latest --build-arg https

 ```bash
 cd GenAIComps
-docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/3rd_parties/nginx/src/Dockerfile .
 ```

 Then run the command `docker images`, you will have the following 5 Docker Images:

ChatQnA/docker_compose/nvidia/gpu/README.md

Lines changed: 1 addition & 1 deletion
@@ -148,7 +148,7 @@ cd ../../..

 ```bash
 cd GenAIComps
-docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/3rd_parties/nginx/src/Dockerfile .
 ```

 Then run the command `docker images`, you will have the following 5 Docker Images:

ChatQnA/docker_image_build/build.yaml

Lines changed: 6 additions & 6 deletions
@@ -44,7 +44,7 @@ services:
   embedding-tei:
     build:
       context: GenAIComps
-      dockerfile: comps/embeddings/tei/langchain/Dockerfile
+      dockerfile: comps/embeddings/src/Dockerfile
     extends: chatqna
     image: ${REGISTRY:-opea}/embedding-tei:${TAG:-latest}
   retriever-redis:

@@ -68,25 +68,25 @@ services:
   reranking-tei:
     build:
       context: GenAIComps
-      dockerfile: comps/reranks/tei/Dockerfile
+      dockerfile: comps/reranks/src/Dockerfile
     extends: chatqna
     image: ${REGISTRY:-opea}/reranking-tei:${TAG:-latest}
   llm-tgi:
     build:
       context: GenAIComps
-      dockerfile: comps/llms/text-generation/tgi/Dockerfile
+      dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: chatqna
     image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
   llm-ollama:
     build:
       context: GenAIComps
-      dockerfile: comps/llms/text-generation/ollama/langchain/Dockerfile
+      dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: chatqna
     image: ${REGISTRY:-opea}/llm-ollama:${TAG:-latest}
   llm-vllm:
     build:
       context: GenAIComps
-      dockerfile: comps/llms/text-generation/vllm/langchain/Dockerfile
+      dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: chatqna
     image: ${REGISTRY:-opea}/llm-vllm:${TAG:-latest}
   dataprep-redis:

@@ -128,6 +128,6 @@ services:
   nginx:
     build:
       context: GenAIComps
-      dockerfile: comps/nginx/Dockerfile
+      dockerfile: comps/3rd_parties/nginx/src/Dockerfile
     extends: chatqna
     image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
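Because every entry in build.yaml `extends` a shared service block and only overrides `dockerfile`, the images whose paths moved can be rebuilt selectively. A sketch, assuming the file is consumed with `docker compose build` in the usual way:

```bash
# Rebuild only the services whose Dockerfile locations changed.
# Assumes GenAIComps is cloned beside build.yaml, as `context: GenAIComps` expects.
cd ChatQnA/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git
docker compose -f build.yaml build embedding-tei reranking-tei llm-tgi nginx
```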

ChatQnA/tests/test_compose_on_rocm.sh

Lines changed: 2 additions & 0 deletions
@@ -76,6 +76,8 @@ function start_services() {
         sleep 1s
         n=$((n+1))
     done
+
+    echo "all containers start!"
 }

 function validate_service() {

CodeGen/docker_compose/amd/gpu/rocm/README.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ git clone https://github.com/opea-project/GenAIComps.git
 cd GenAIComps

 ### Build Docker image
-docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
+docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
 ```

 ### Build the MegaService Docker Image

CodeGen/docker_compose/amd/gpu/rocm/compose.yaml

Lines changed: 10 additions & 2 deletions
@@ -15,6 +15,12 @@ services:
       https_proxy: ${https_proxy}
       HUGGING_FACE_HUB_TOKEN: ${CODEGEN_HUGGINGFACEHUB_API_TOKEN}
       HUGGINGFACEHUB_API_TOKEN: ${CODEGEN_HUGGINGFACEHUB_API_TOKEN}
+      host_ip: ${host_ip}
+    healthcheck:
+      test: ["CMD-SHELL", "curl -f http://$host_ip:${CODEGEN_TGI_SERVICE_PORT:-8028}/health || exit 1"]
+      interval: 10s
+      timeout: 10s
+      retries: 100
     shm_size: 1g
     devices:
       - /dev/kfd:/dev/kfd

@@ -31,15 +37,17 @@ services:
     image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
     container_name: codegen-llm-server
     depends_on:
-      - codegen-tgi-service
+      codegen-tgi-service:
+        condition: service_healthy
     ports:
       - "${CODEGEN_LLM_SERVICE_PORT:-9000}:9000"
     ipc: host
     environment:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      TGI_LLM_ENDPOINT: "http://codegen-tgi-service"
+      LLM_ENDPOINT: "http://codegen-tgi-service"
+      LLM_MODEL_ID: ${CODEGEN_LLM_MODEL_ID}
       HUGGINGFACEHUB_API_TOKEN: ${CODEGEN_HUGGINGFACEHUB_API_TOKEN}
     restart: unless-stopped
   codegen-backend-server:
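The new healthcheck polls TGI's `/health` endpoint until it answers. The same probe can be run by hand from the host to see what compose is waiting on (the host_ip detection below is an assumption; any address of the host reachable from the container works):

```bash
# Manual version of the compose healthcheck added above.
export host_ip=$(hostname -I | awk '{print $1}')
curl -f "http://${host_ip}:${CODEGEN_TGI_SERVICE_PORT:-8028}/health" \
    && echo "TGI ready" || echo "TGI still warming up"
```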

CodeGen/docker_compose/intel/cpu/xeon/README.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ Should the Docker image you seek not yet be available on Docker Hub, you can bui
 ```bash
 git clone https://github.com/opea-project/GenAIComps.git
 cd GenAIComps
-docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
+docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
 ```

 ### 2. Build the MegaService Docker Image

CodeGen/docker_compose/intel/cpu/xeon/compose.yaml

Lines changed: 10 additions & 2 deletions
@@ -15,20 +15,28 @@ services:
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
       HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      host_ip: ${host_ip}
+    healthcheck:
+      test: ["CMD-SHELL", "curl -f http://$host_ip:8028/health || exit 1"]
+      interval: 10s
+      timeout: 10s
+      retries: 100
     command: --model-id ${LLM_MODEL_ID} --cuda-graphs 0
   llm:
     image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
     container_name: llm-tgi-server
     depends_on:
-      - tgi-service
+      tgi-service:
+        condition: service_healthy
     ports:
       - "9000:9000"
     ipc: host
     environment:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      TGI_LLM_ENDPOINT: ${TGI_LLM_ENDPOINT}
+      LLM_ENDPOINT: ${TGI_LLM_ENDPOINT}
+      LLM_MODEL_ID: ${LLM_MODEL_ID}
       HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
     restart: unless-stopped
   codegen-xeon-backend-server:
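`condition: service_healthy` holds the `llm` container until the tgi-service healthcheck passes. The gating state can be watched from the host with `docker inspect` (the container name below is an assumption; check `docker ps` for the actual one):

```bash
# Poll the health state that `condition: service_healthy` gates on.
until [ "$(docker inspect --format '{{.State.Health.Status}}' tgi-service)" = "healthy" ]; do
    sleep 5
done
echo "tgi-service healthy; dependent llm container can start"
```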

CodeGen/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ First of all, you need to build the Docker images locally. This step can be igno
 ```bash
 git clone https://github.com/opea-project/GenAIComps.git
 cd GenAIComps
-docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
+docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
 ```

 ### 2. Build the MegaService Docker Image

CodeGen/docker_compose/intel/hpu/gaudi/compose.yaml

Lines changed: 9 additions & 2 deletions
@@ -20,6 +20,11 @@ services:
       LIMIT_HPU_GRAPH: true
       USE_FLASH_ATTENTION: true
       FLASH_ATTENTION_RECOMPUTE: true
+    healthcheck:
+      test: ["CMD-SHELL", "sleep 500 && exit 0"]
+      interval: 1s
+      timeout: 505s
+      retries: 1
     runtime: habana
     cap_add:
       - SYS_NICE

@@ -29,15 +34,17 @@ services:
     image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
     container_name: llm-tgi-gaudi-server
     depends_on:
-      - tgi-service
+      tgi-service:
+        condition: service_healthy
     ports:
       - "9000:9000"
     ipc: host
     environment:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      TGI_LLM_ENDPOINT: ${TGI_LLM_ENDPOINT}
+      LLM_ENDPOINT: ${TGI_LLM_ENDPOINT}
+      LLM_MODEL_ID: ${LLM_MODEL_ID}
       HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
     restart: unless-stopped
   codegen-gaudi-backend-server:
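Unlike the curl probes above, this Gaudi healthcheck is a fixed warm-up timer: the single `sleep 500 && exit 0` attempt (505 s timeout, one retry) marks the container healthy after roughly 500 seconds rather than testing the endpoint. The resulting state can be observed the same way:

```bash
# STATUS shows "(health: starting)" during the 500 s sleep,
# then "(healthy)" once the lone check attempt exits 0.
docker compose ps
```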

CodeGen/docker_image_build/build.yaml

Lines changed: 1 addition & 1 deletion
@@ -26,6 +26,6 @@ services:
   llm-tgi:
     build:
       context: GenAIComps
-      dockerfile: comps/llms/text-generation/tgi/Dockerfile
+      dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: codegen
     image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}

CodeGen/tests/test_compose_on_gaudi.sh

Lines changed: 1 addition & 0 deletions
@@ -34,6 +34,7 @@ function start_services() {
     export MEGA_SERVICE_HOST_IP=${ip_address}
     export LLM_SERVICE_HOST_IP=${ip_address}
     export BACKEND_SERVICE_ENDPOINT="http://${ip_address}:7778/v1/codegen"
+    export host_ip=${ip_address}

     sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env

CodeGen/tests/test_compose_on_rocm.sh

Lines changed: 1 addition & 0 deletions
@@ -39,6 +39,7 @@ function start_services() {
     export CODEGEN_BACKEND_SERVICE_PORT=7778
     export CODEGEN_BACKEND_SERVICE_URL="http://${ip_address}:${CODEGEN_BACKEND_SERVICE_PORT}/v1/codegen"
     export CODEGEN_UI_SERVICE_PORT=5173
+    export host_ip=${ip_address}

     sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env

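The tests export `host_ip` because the compose files now substitute `${host_ip}` into the TGI healthcheck URL. When running the stack by hand, the same export is needed (the address-picking command is an assumption; any address reachable from the container works):

```bash
# Without this export, compose substitutes an empty string into the
# healthcheck URL and the probe can never succeed.
export host_ip=$(hostname -I | awk '{print $1}')
docker compose up -d
```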