
Commit aa5c91d: Check duplicated dockerfile (#1289)

Authored by ZePan110
Signed-off-by: ZePan110 <ze.pan@intel.com>

1 parent: b88d09e

77 files changed: 195 additions, 198 deletions


.github/workflows/scripts/check_duplicated_image.py

Lines changed: 21 additions & 5 deletions

@@ -9,6 +9,8 @@
 import yaml
 
 images = {}
+dockerfiles = {}
+errors = []
 
 
 def check_docker_compose_build_definition(file_path):
@@ -30,18 +32,26 @@ def check_docker_compose_build_definition(file_path):
         if not os.path.isfile(dockerfile):
             # dockerfile not exists in the current repo context, assume it's in 3rd party context
             dockerfile = os.path.normpath(os.path.join(context, build.get("dockerfile", "")))
-        item = {"file_path": file_path, "service": service, "dockerfile": dockerfile}
+        item = {"file_path": file_path, "service": service, "dockerfile": dockerfile, "image": image}
         if image in images and dockerfile != images[image]["dockerfile"]:
-            print("ERROR: !!! Found Conflicts !!!")
-            print(f"Image: {image}, Dockerfile: {dockerfile}, defined in Service: {service}, File: {file_path}")
-            print(
+            errors.append(
+                f"ERROR: !!! Found Conflicts !!!\n"
+                f"Image: {image}, Dockerfile: {dockerfile}, defined in Service: {service}, File: {file_path}\n"
                 f"Image: {image}, Dockerfile: {images[image]['dockerfile']}, defined in Service: {images[image]['service']}, File: {images[image]['file_path']}"
             )
-            sys.exit(1)
         else:
             # print(f"Add Image: {image} Dockerfile: {dockerfile}")
             images[image] = item
 
+        if dockerfile in dockerfiles and image != dockerfiles[dockerfile]["image"]:
+            errors.append(
+                f"WARNING: Different images using the same Dockerfile\n"
+                f"Dockerfile: {dockerfile}, Image: {image}, defined in Service: {service}, File: {file_path}\n"
+                f"Dockerfile: {dockerfile}, Image: {dockerfiles[dockerfile]['image']}, defined in Service: {dockerfiles[dockerfile]['service']}, File: {dockerfiles[dockerfile]['file_path']}"
+            )
+        else:
+            dockerfiles[dockerfile] = item
+
 
 def parse_arg():
     parser = argparse.ArgumentParser(
@@ -56,6 +66,12 @@ def main():
     for file_path in args.files:
         check_docker_compose_build_definition(file_path)
     print("SUCCESS: No Conlicts Found.")
+    if errors:
+        for error in errors:
+            print(error)
+        sys.exit(1)
+    else:
+        print("SUCCESS: No Conflicts Found.")
     return 0
 
 
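With this change the script no longer aborts at the first conflict: it records which Dockerfile each image name is built from (images) and, newly, which image each Dockerfile produces (dockerfiles), accumulates every ERROR/WARNING in errors, and reports them all at the end. A minimal local run might look like the sketch below, assuming the argument parser (truncated in this hunk) takes compose file paths as positional arguments, as the loop over args.files suggests; the file paths are illustrative.

# Hypothetical local invocation of the duplication check; adjust paths as needed.
python .github/workflows/scripts/check_duplicated_image.py \
  AudioQnA/docker_image_build/build.yaml \
  ChatQnA/docker_image_build/build.yaml
# If any conflicts were collected, they are all printed and the script exits 1;
# otherwise it prints "SUCCESS: No Conflicts Found." and returns 0.
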
AudioQnA/docker_image_build/build.yaml

Lines changed: 2 additions & 2 deletions

@@ -41,12 +41,12 @@ services:
       dockerfile: comps/asr/src/Dockerfile
     extends: audioqna
     image: ${REGISTRY:-opea}/asr:${TAG:-latest}
-  llm-tgi:
+  llm-textgen:
     build:
       context: GenAIComps
       dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: audioqna
-    image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
+    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
   speecht5-gaudi:
     build:
       context: GenAIComps
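
Because the compose service key changes from llm-tgi to llm-textgen, any build that names services explicitly must use the new key; the CodeGen test scripts further down make the same substitution. A sketch of the equivalent manual build, assuming the usual opea/latest defaults for REGISTRY and TAG:

cd AudioQnA/docker_image_build
# Builds only the renamed LLM microservice image, ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
docker compose -f build.yaml build llm-textgen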

AudioQnA/kubernetes/intel/README_gmc.md

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@ The AudioQnA application is defined as a Custom Resource (CR) file that the abov
 The AudioQnA uses the below prebuilt images if you choose a Xeon deployment
 
 - tgi-service: ghcr.io/huggingface/text-generation-inference:1.4
-- llm: opea/llm-tgi:latest
+- llm: opea/llm-textgen:latest
 - asr: opea/asr:latest
 - whisper: opea/whisper:latest
 - tts: opea/tts:latest

AvatarChatbot/docker_image_build/build.yaml

Lines changed: 2 additions & 2 deletions

@@ -29,12 +29,12 @@ services:
       dockerfile: comps/asr/src/Dockerfile
     extends: avatarchatbot
     image: ${REGISTRY:-opea}/asr:${TAG:-latest}
-  llm-tgi:
+  llm-textgen:
     build:
       context: GenAIComps
       dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: avatarchatbot
-    image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
+    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
   speecht5-gaudi:
     build:
       context: GenAIComps

AvatarChatbot/ui/gradio/app_gradio_demo_avatarchatbot.py

Lines changed: 1 addition & 1 deletion

@@ -294,7 +294,7 @@ def initial_process(audio_input, face_input, model_choice):
 <p style='font-size: 24px;'>OPEA microservices deployed:
 <ul style='font-size: 24px;'>
 <li><strong>ASR</strong> (service: opea/whisper-gaudi, model: openai/whisper-small)</li>
-<li><strong>LLM 'text-generation'</strong> (service: opea/llm-tgi, model: Intel/neural-chat-7b-v3-3)</li>
+<li><strong>LLM 'text-generation'</strong> (service: opea/llm-textgen, model: Intel/neural-chat-7b-v3-3)</li>
 <li><strong>TTS</strong> (service: opea/speecht5-gaudi, model: microsoft/speecht5_tts)</li>
 <li><strong>Animation</strong> (service: opea/animation, model: wav2lip+gfpgan)</li>
 </ul></p>

ChatQnA/docker_compose/intel/hpu/gaudi/how_to_validate_service.md

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ CONTAINER ID IMAGE COMMAND
 28d9a5570246 opea/chatqna-ui:latest "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 0.0.0.0:5173->5173/tcp, :::5173->5173/tcp chatqna-gaudi-ui-server
 bee1132464cd opea/chatqna:latest "python chatqna.py" 2 minutes ago Up 2 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp chatqna-gaudi-backend-server
 f810f3b4d329 opea/embedding-tei:latest "python embedding_te…" 2 minutes ago Up 2 minutes 0.0.0.0:6000->6000/tcp, :::6000->6000/tcp embedding-tei-server
-325236a01f9b opea/llm-tgi:latest "python llm.py" 2 minutes ago Up 2 minutes 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp llm-tgi-gaudi-server
+325236a01f9b opea/llm-textgen:latest "python llm.py" 2 minutes ago Up 2 minutes 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp llm-textgen-gaudi-server
 2fa17d84605f opea/dataprep-redis:latest "python prepare_doc_…" 2 minutes ago Up 2 minutes 0.0.0.0:6007->6007/tcp, :::6007->6007/tcp dataprep-redis-server
 69e1fb59e92c opea/retriever-redis:latest "/home/user/comps/re…" 2 minutes ago Up 2 minutes 0.0.0.0:7000->7000/tcp, :::7000->7000/tcp retriever-redis-server
 313b9d14928a opea/reranking-tei:latest "python reranking_te…" 2 minutes ago Up 2 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp reranking-tei-gaudi-server

ChatQnA/docker_image_build/build.yaml

Lines changed: 2 additions & 14 deletions

@@ -71,24 +71,12 @@ services:
       dockerfile: comps/reranks/src/Dockerfile
     extends: chatqna
     image: ${REGISTRY:-opea}/reranking-tei:${TAG:-latest}
-  llm-tgi:
+  llm-textgen:
     build:
       context: GenAIComps
       dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: chatqna
-    image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
-  llm-ollama:
-    build:
-      context: GenAIComps
-      dockerfile: comps/llms/src/text-generation/Dockerfile
-    extends: chatqna
-    image: ${REGISTRY:-opea}/llm-ollama:${TAG:-latest}
-  llm-vllm:
-    build:
-      context: GenAIComps
-      dockerfile: comps/llms/src/text-generation/Dockerfile
-    extends: chatqna
-    image: ${REGISTRY:-opea}/llm-vllm:${TAG:-latest}
+    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
   dataprep-redis:
     build:
       context: GenAIComps
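
The three removed services (llm-tgi, llm-ollama, llm-vllm) all built comps/llms/src/text-generation/Dockerfile under different image names, which is exactly the pattern the new duplicated-Dockerfile check warns about. Run against the old file, the check would collect warnings along these lines (reconstructed from the script's f-strings; the normalized Dockerfile path and the unexpanded image strings are assumptions, and a second warning would follow for llm-vllm):

WARNING: Different images using the same Dockerfile
Dockerfile: GenAIComps/comps/llms/src/text-generation/Dockerfile, Image: ${REGISTRY:-opea}/llm-ollama:${TAG:-latest}, defined in Service: llm-ollama, File: ChatQnA/docker_image_build/build.yaml
Dockerfile: GenAIComps/comps/llms/src/text-generation/Dockerfile, Image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}, defined in Service: llm-tgi, File: ChatQnA/docker_image_build/build.yaml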

ChatQnA/kubernetes/intel/cpu/xeon/manifest/chatqna-remote-inference.yaml

Lines changed: 1 addition & 1 deletion

@@ -774,7 +774,7 @@ spec:
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: "opea/llm-vllm:latest"
+        image: "opea/llm-textgen:latest"
         imagePullPolicy: Always
         ports:
           - name: llm-uservice

ChatQnA/kubernetes/intel/hpu/gaudi/manifest/chatqna-vllm-remote-inference.yaml

Lines changed: 1 addition & 1 deletion

@@ -682,7 +682,7 @@ spec:
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: "opea/llm-vllm:latest"
+        image: "opea/llm-textgen:latest"
         imagePullPolicy: Always
         ports:
           - name: llm-uservice

ChatQnA/kubernetes/intel/hpu/gaudi/manifest/chatqna-vllm.yaml

Lines changed: 1 addition & 1 deletion

@@ -811,7 +811,7 @@ spec:
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: "opea/llm-vllm:latest"
+        image: "opea/llm-textgen:latest"
         imagePullPolicy: Always
         ports:
           - name: llm-uservice

CodeGen/docker_compose/amd/gpu/rocm/README.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ git clone https://github.com/opea-project/GenAIComps.git
 cd GenAIComps
 
 ### Build Docker image
-docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
+docker build -t opea/llm-textgen:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
 ```
 
 ### Build the MegaService Docker Image

CodeGen/docker_compose/amd/gpu/rocm/compose.yaml

Lines changed: 1 addition & 1 deletion

@@ -34,7 +34,7 @@ services:
     ipc: host
     command: --model-id ${CODEGEN_LLM_MODEL_ID} --max-input-length 1024 --max-total-tokens 2048
   codegen-llm-server:
-    image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
+    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
     container_name: codegen-llm-server
     depends_on:
       codegen-tgi-service:

CodeGen/docker_compose/intel/cpu/xeon/README.md

Lines changed: 3 additions & 3 deletions

@@ -19,7 +19,7 @@ Should the Docker image you seek not yet be available on Docker Hub, you can bui
 ```bash
 git clone https://github.com/opea-project/GenAIComps.git
 cd GenAIComps
-docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
+docker build -t opea/llm-textgen:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
 ```
 
 ### 2. Build the MegaService Docker Image
@@ -43,7 +43,7 @@ docker build -t opea/codegen-ui:latest --build-arg https_proxy=$https_proxy --bu
 
 Then run the command `docker images`, you will have the following 3 Docker Images:
 
-- `opea/llm-tgi:latest`
+- `opea/llm-textgen:latest`
 - `opea/codegen:latest`
 - `opea/codegen-ui:latest`
 
@@ -60,7 +60,7 @@ docker build --no-cache -t opea/codegen-react-ui:latest --build-arg https_proxy=
 
 Then run the command `docker images`, you will have the following 3 Docker Images:
 
-- `opea/llm-tgi:latest`
+- `opea/llm-textgen:latest`
 - `opea/codegen:latest`
 - `opea/codegen-ui:latest`
 - `opea/codegen-react-ui:latest` (optional)

CodeGen/docker_compose/intel/cpu/xeon/compose.yaml

Lines changed: 2 additions & 2 deletions

@@ -23,8 +23,8 @@ services:
       retries: 100
     command: --model-id ${LLM_MODEL_ID} --cuda-graphs 0
   llm:
-    image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
-    container_name: llm-tgi-server
+    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
+    container_name: llm-textgen-server
     depends_on:
       tgi-service:
         condition: service_healthy

CodeGen/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 2 additions & 2 deletions

@@ -11,7 +11,7 @@ First of all, you need to build the Docker images locally. This step can be igno
 ```bash
 git clone https://github.com/opea-project/GenAIComps.git
 cd GenAIComps
-docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
+docker build -t opea/llm-textgen:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
 ```
 
 ### 2. Build the MegaService Docker Image
@@ -46,7 +46,7 @@ docker build --no-cache -t opea/codegen-react-ui:latest --build-arg https_proxy=
 
 Then run the command `docker images`, you will have the following 3 Docker images:
 
-- `opea/llm-tgi:latest`
+- `opea/llm-textgen:latest`
 - `opea/codegen:latest`
 - `opea/codegen-ui:latest`
 - `opea/codegen-react-ui:latest`

CodeGen/docker_compose/intel/hpu/gaudi/compose.yaml

Lines changed: 2 additions & 2 deletions

@@ -31,8 +31,8 @@ services:
     ipc: host
     command: --model-id ${LLM_MODEL_ID} --max-input-length 1024 --max-total-tokens 2048
   llm:
-    image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
-    container_name: llm-tgi-gaudi-server
+    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
+    container_name: llm-textgen-gaudi-server
     depends_on:
       tgi-service:
         condition: service_healthy

CodeGen/docker_image_build/build.yaml

Lines changed: 2 additions & 2 deletions

@@ -23,9 +23,9 @@ services:
       dockerfile: ./docker/Dockerfile.react
     extends: codegen
     image: ${REGISTRY:-opea}/codegen-react-ui:${TAG:-latest}
-  llm-tgi:
+  llm-textgen:
     build:
       context: GenAIComps
       dockerfile: comps/llms/src/text-generation/Dockerfile
     extends: codegen
-    image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
+    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}

CodeGen/kubernetes/intel/cpu/xeon/manifest/codegen.yaml

Lines changed: 1 addition & 1 deletion

@@ -325,7 +325,7 @@ spec:
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: "opea/llm-tgi:latest"
+        image: "opea/llm-textgen:latest"
         imagePullPolicy: IfNotPresent
         ports:
           - name: llm-uservice

CodeGen/kubernetes/intel/cpu/xeon/manifest/codegen_react_ui.yaml

Lines changed: 1 addition & 1 deletion

@@ -179,7 +179,7 @@ spec:
          - name: no_proxy
            value:
        securityContext: {}
-        image: "opea/llm-tgi:latest"
+        image: "opea/llm-textgen:latest"
        imagePullPolicy: IfNotPresent
        ports:
          - name: llm-uservice

CodeGen/kubernetes/intel/hpu/gaudi/manifest/codegen.yaml

Lines changed: 1 addition & 1 deletion

@@ -326,7 +326,7 @@ spec:
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: "opea/llm-tgi:latest"
+        image: "opea/llm-textgen:latest"
         imagePullPolicy: IfNotPresent
         ports:
           - name: llm-uservice

CodeGen/tests/test_compose_on_gaudi.sh

Lines changed: 2 additions & 2 deletions

@@ -19,7 +19,7 @@ function build_docker_images() {
     git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
 
     echo "Build all the images with --no-cache, check docker_image_build.log for details..."
-    service_list="codegen codegen-ui llm-tgi"
+    service_list="codegen codegen-ui llm-textgen"
     docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
 
     docker pull ghcr.io/huggingface/tgi-gaudi:2.0.6
@@ -94,7 +94,7 @@ function validate_microservices() {
         "${ip_address}:9000/v1/chat/completions" \
         "data: " \
         "llm" \
-        "llm-tgi-gaudi-server" \
+        "llm-textgen-gaudi-server" \
         '{"query":"def print_hello_world():"}'
 
 }

CodeGen/tests/test_compose_on_rocm.sh

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ function build_docker_images() {
    git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
 
    echo "Build all the images with --no-cache, check docker_image_build.log for details..."
-    service_list="codegen codegen-ui llm-tgi"
+    service_list="codegen codegen-ui llm-textgen"
    docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
 
    docker pull ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu

CodeGen/tests/test_compose_on_xeon.sh

Lines changed: 2 additions & 2 deletions

@@ -19,7 +19,7 @@ function build_docker_images() {
     git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
 
     echo "Build all the images with --no-cache, check docker_image_build.log for details..."
-    service_list="codegen codegen-ui llm-tgi"
+    service_list="codegen codegen-ui llm-textgen"
     docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
 
     docker pull ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu
@@ -95,7 +95,7 @@ function validate_microservices() {
         "${ip_address}:9000/v1/chat/completions" \
         "data: " \
         "llm" \
-        "llm-tgi-server" \
+        "llm-textgen-server" \
         '{"query":"def print_hello_world():"}'
 
 }

CodeTrans/docker_compose/amd/gpu/rocm/README.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ git clone https://github.com/opea-project/GenAIComps.git
 cd GenAIComps
 
 ### Build Docker image
-docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
+docker build -t opea/llm-textgen:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
 ```
 
 ### Build the MegaService Docker Image

CodeTrans/docker_compose/amd/gpu/rocm/compose.yaml

Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ services:
     ipc: host
     command: --model-id ${CODETRANS_LLM_MODEL_ID}
   codetrans-llm-server:
-    image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
+    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
     container_name: codetrans-llm-server
     depends_on:
       codetrans-tgi-service:

CodeTrans/docker_compose/intel/cpu/xeon/README.md

Lines changed: 2 additions & 2 deletions

@@ -19,7 +19,7 @@ First of all, you need to build Docker Images locally and install the python pac
 ```bash
 git clone https://github.com/opea-project/GenAIComps.git
 cd GenAIComps
-docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
+docker build -t opea/llm-textgen:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
 ```
 
 ### 2. Build MegaService Docker Image
@@ -46,7 +46,7 @@ docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-a
 
 Then run the command `docker images`, you will have the following Docker Images:
 
-- `opea/llm-tgi:latest`
+- `opea/llm-textgen:latest`
 - `opea/codetrans:latest`
 - `opea/codetrans-ui:latest`
 - `opea/nginx:latest`

CodeTrans/docker_compose/intel/cpu/xeon/compose.yaml

Lines changed: 2 additions & 2 deletions

@@ -23,8 +23,8 @@ services:
       retries: 100
     command: --model-id ${LLM_MODEL_ID} --cuda-graphs 0
   llm:
-    image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
-    container_name: llm-tgi-server
+    image: ${REGISTRY:-opea}/llm-textgen:${TAG:-latest}
+    container_name: llm-textgen-server
     depends_on:
       tgi-service:
         condition: service_healthy

CodeTrans/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 2 additions & 2 deletions

@@ -11,7 +11,7 @@ First of all, you need to build Docker Images locally and install the python pac
 ```bash
 git clone https://github.com/opea-project/GenAIComps.git
 cd GenAIComps
-docker build -t opea/llm-tgi:latest --no-cache --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
+docker build -t opea/llm-textgen:latest --no-cache --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/src/text-generation/Dockerfile .
 ```
 
 ### 2. Build MegaService Docker Image
@@ -38,7 +38,7 @@ docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-a
 
 Then run the command `docker images`, you will have the following Docker Images:
 
-- `opea/llm-tgi:latest`
+- `opea/llm-textgen:latest`
 - `opea/codetrans:latest`
 - `opea/codetrans-ui:latest`
 - `opea/nginx:latest`
