
Commit 8808b51

Rename image name XXX-hpu to XXX-gaudi (#1154)
Signed-off-by: ZePan110 <ze.pan@intel.com>
1 parent: 17d4b0c

6 files changed: +8 additions, −8 deletions

.github/workflows/_example-workflow.yml

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ jobs:
             git clone https://github.com/vllm-project/vllm.git
             cd vllm && git rev-parse HEAD && cd ../
           fi
-          if [[ $(grep -c "vllm-hpu:" ${docker_compose_path}) != 0 ]]; then
+          if [[ $(grep -c "vllm-gaudi:" ${docker_compose_path}) != 0 ]]; then
             git clone https://github.com/HabanaAI/vllm-fork.git
             cd vllm-fork && git checkout 3c39626 && cd ../
           fi
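
The workflow gates the vllm-fork clone on a grep of the example's compose file, which is why the image name in the compose files and the workflow must stay in sync. A minimal standalone sketch of that guard, assuming docker_compose_path points at a compose file on disk (the path below is illustrative; the workflow derives it per example):

    #!/usr/bin/env bash
    # Illustrative path; the real workflow computes this for the example under test.
    docker_compose_path="ChatQnA/docker_compose/intel/hpu/gaudi/compose_vllm.yaml"
    # Clone the Habana vLLM fork only when the compose file references vllm-gaudi.
    if [[ $(grep -c "vllm-gaudi:" ${docker_compose_path}) != 0 ]]; then
        git clone https://github.com/HabanaAI/vllm-fork.git
        cd vllm-fork && git checkout 3c39626 && cd ../
    fi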

ChatQnA/docker_compose/intel/hpu/gaudi/compose_vllm.yaml

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ services:
       MAX_WARMUP_SEQUENCE_LENGTH: 512
     command: --model-id ${RERANK_MODEL_ID} --auto-truncate
   vllm-service:
-    image: ${REGISTRY:-opea}/vllm-hpu:${TAG:-latest}
+    image: ${REGISTRY:-opea}/vllm-gaudi:${TAG:-latest}
     container_name: vllm-gaudi-server
     ports:
       - "8007:80"

ChatQnA/docker_image_build/build.yaml

Lines changed: 2 additions & 2 deletions
@@ -119,12 +119,12 @@ services:
     dockerfile: Dockerfile.cpu
     extends: chatqna
     image: ${REGISTRY:-opea}/vllm:${TAG:-latest}
-  vllm-hpu:
+  vllm-gaudi:
     build:
       context: vllm-fork
       dockerfile: Dockerfile.hpu
     extends: chatqna
-    image: ${REGISTRY:-opea}/vllm-hpu:${TAG:-latest}
+    image: ${REGISTRY:-opea}/vllm-gaudi:${TAG:-latest}
   nginx:
     build:
       context: GenAIComps
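
Since the compose service key itself changed, build commands that still name vllm-hpu will fail with an unknown-service error. A sketch of building just the renamed target, assuming it is run from ChatQnA/docker_image_build with the vllm-fork checkout alongside (which the context: vllm-fork line expects):

    # Fetch the build context the vllm-gaudi service points at.
    git clone https://github.com/HabanaAI/vllm-fork.git
    # Build only the renamed service; "vllm-hpu" is no longer a valid target.
    docker compose -f build.yaml build vllm-gaudi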

ChatQnA/kubernetes/intel/hpu/gaudi/manifest/chatqna-vllm.yaml

Lines changed: 1 addition & 1 deletion
@@ -1284,7 +1284,7 @@ spec:
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: "opea/vllm-hpu:latest"
+        image: "opea/vllm-gaudi:latest"
         args:
         - "--enforce-eager"
         - "--model"

ChatQnA/tests/test_compose_vllm_on_gaudi.sh

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ function build_docker_images() {
     git clone https://github.com/HabanaAI/vllm-fork.git && cd vllm-fork && git checkout 3c39626 && cd ../

     echo "Build all the images with --no-cache, check docker_image_build.log for details..."
-    service_list="chatqna chatqna-ui dataprep-redis retriever-redis vllm-hpu nginx"
+    service_list="chatqna chatqna-ui dataprep-redis retriever-redis vllm-gaudi nginx"
     docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

     docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
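
Renames like this tend to leave stragglers, so before running the suite it is worth sweeping the tree for leftover references (a plain grep sketch, not part of the test script; remaining hits such as Dockerfile.hpu are expected, since only image and service names were renamed):

    # Surface any lingering references to the old image name across configs and scripts.
    grep -rn "vllm-hpu" --include="*.yaml" --include="*.yml" --include="*.sh" .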

docker_images_list.md

Lines changed: 2 additions & 2 deletions
@@ -78,7 +78,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
 | [opea/llm-tgi](https://hub.docker.com/r/opea/llm-tgi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/tgi/Dockerfile) | The docker image exposed the OPEA LLM microservice upon TGI docker image for GenAI application use |
 | [opea/llm-vllm](https://hub.docker.com/r/opea/llm-vllm) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/vllm/langchain/Dockerfile) | The docker image exposed the OPEA LLM microservice upon vLLM docker image for GenAI application use |
 | [opea/llm-vllm-llamaindex](https://hub.docker.com/r/opea/llm-vllm-llamaindex) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/vllm/llama_index/Dockerfile) | This docker image exposes OPEA LLM microservices to the llamaindex framework's vLLM Docker image for use by GenAI applications |
-| [opea/llava-hpu](https://hub.docker.com/r/opea/llava-hpu) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/dependency/Dockerfile.intel_hpu) | The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use on the Gaudi |
+| [opea/llava-gaudi](https://hub.docker.com/r/opea/llava-hpu) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/dependency/Dockerfile.intel_hpu) | The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use on the Gaudi |
 | [opea/lvm-tgi](https://hub.docker.com/r/opea/lvm-tgi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/tgi-llava/Dockerfile) | This docker image is designed to build a large visual model (LVM) microservice using the HuggingFace Text Generation Inference(TGI) framework. The microservice accepts document input and generates a answer to question. |
 | [opea/lvm-llava](https://hub.docker.com/r/opea/lvm-llava) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/dependency/Dockerfile) | The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) server for GenAI application use |
 | [opea/lvm-llava-svc](https://hub.docker.com/r/opea/lvm-llava-svc) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/Dockerfile) | The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use |
@@ -106,7 +106,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
 | [opea/video-llama-lvm-server](https://hub.docker.com/r/opea/video-llama-lvm-server) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/video-llama/dependency/Dockerfile) | The docker image exposed the OPEA microservice running Video-Llama as a large visual model (LVM) server for GenAI application use |
 | [opea/tts](https://hub.docker.com/r/opea/tts) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/speecht5/Dockerfile) | The docker image exposed the OPEA Text-To-Speech microservice for GenAI application use |
 | [opea/vllm](https://hub.docker.com/r/opea/vllm) | [Link](https://github.com/vllm-project/vllm/blob/main/Dockerfile.cpu) | The docker image powered by vllm-project for deploying and serving vllm Models |
-| [opea/vllm-hpu]() | [Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/Dockerfile.hpu) | The docker image powered by vllm-fork for deploying and serving vllm-hpu Models |
+| [opea/vllm-gaudi]() | [Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/Dockerfile.hpu) | The docker image powered by vllm-fork for deploying and serving vllm-gaudi Models |
 | [opea/vllm-openvino](https://hub.docker.com/r/opea/vllm-openvino) | [Link](https://github.com/vllm-project/vllm/blob/main/Dockerfile.openvino) | The docker image powered by vllm-project for deploying and serving vllm Models of the Openvino Framework |
 | [opea/web-retriever-chroma](https://hub.docker.com/r/opea/web-retriever-chroma) | [Link](https://github.com/opea-project/GenAIComps/tree/main/comps/web_retrievers/chroma/langchain/Dockerfile) | The docker image exposed the OPEA retrieval microservice based on chroma vectordb for GenAI application use |
 | [opea/whisper](https://hub.docker.com/r/opea/whisper) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/asr/whisper/dependency/Dockerfile) | The docker image exposed the OPEA Whisper service for GenAI application use |
