docker_images_list.md (+2 -2)
@@ -78,7 +78,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
|[opea/llm-tgi](https://hub.docker.com/r/opea/llm-tgi)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/tgi/Dockerfile)| This Docker image exposes the OPEA LLM microservice on top of the TGI Docker image for GenAI application use |
|[opea/llm-vllm](https://hub.docker.com/r/opea/llm-vllm)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/vllm/langchain/Dockerfile)| This Docker image exposes the OPEA LLM microservice on top of the vLLM Docker image for GenAI application use |
|[opea/llm-vllm-llamaindex](https://hub.docker.com/r/opea/llm-vllm-llamaindex)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/vllm/llama_index/Dockerfile)| This Docker image exposes the OPEA LLM microservice on top of the vLLM Docker image via the LlamaIndex framework for GenAI application use |
-|[opea/llava-hpu](https://hub.docker.com/r/opea/llava-hpu)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/dependency/Dockerfile.intel_hpu)| This Docker image exposes the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use on Gaudi |
+|[opea/llava-gaudi](https://hub.docker.com/r/opea/llava-hpu)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/dependency/Dockerfile.intel_hpu)| This Docker image exposes the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use on Gaudi |
|[opea/lvm-tgi](https://hub.docker.com/r/opea/lvm-tgi)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/tgi-llava/Dockerfile)| This Docker image builds a large visual model (LVM) microservice using the HuggingFace Text Generation Inference (TGI) framework. The microservice accepts document input and generates an answer to a question. |
|[opea/lvm-llava](https://hub.docker.com/r/opea/lvm-llava)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/dependency/Dockerfile)| This Docker image exposes the OPEA microservice running LLaVA as a large visual model (LVM) server for GenAI application use |
|[opea/lvm-llava-svc](https://hub.docker.com/r/opea/lvm-llava-svc)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/Dockerfile)| This Docker image exposes the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use |
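The LLM wrapper images above sit in front of a separately running model server rather than serving the model themselves. As a minimal sketch of how one is typically wired up, the commands below pull `opea/llm-tgi` and point it at an existing TGI endpoint; the host port 9000 and the `TGI_LLM_ENDPOINT` variable name follow common GenAIComps examples and may differ in your release, and `<tgi-host>` is a placeholder for your own server.

```bash
# Sketch: run the OPEA LLM microservice against an already-running TGI endpoint.
# Port and env var name are assumptions based on common GenAIComps defaults;
# check the linked Dockerfile for the version you deploy.
docker pull opea/llm-tgi:latest
docker run -d --name llm-tgi \
  -p 9000:9000 \
  -e TGI_LLM_ENDPOINT="http://<tgi-host>:8080" \
  opea/llm-tgi:latest
```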
@@ -106,7 +106,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
|[opea/video-llama-lvm-server](https://hub.docker.com/r/opea/video-llama-lvm-server)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/video-llama/dependency/Dockerfile)| This Docker image exposes the OPEA microservice running Video-LLaMA as a large visual model (LVM) server for GenAI application use |
|[opea/tts](https://hub.docker.com/r/opea/tts)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/speecht5/Dockerfile)| This Docker image exposes the OPEA Text-To-Speech microservice for GenAI application use |
|[opea/vllm](https://hub.docker.com/r/opea/vllm)|[Link](https://github.com/vllm-project/vllm/blob/main/Dockerfile.cpu)| This Docker image, provided by the vLLM project, deploys and serves vLLM models |
-|[opea/vllm-hpu]()|[Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/Dockerfile.hpu)| This Docker image, built from the HabanaAI vllm-fork, deploys and serves vLLM models on Gaudi (HPU) |
+|[opea/vllm-gaudi]()|[Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/Dockerfile.hpu)| This Docker image, built from the HabanaAI vllm-fork, deploys and serves vLLM models on Gaudi |
|[opea/vllm-openvino](https://hub.docker.com/r/opea/vllm-openvino)|[Link](https://github.com/vllm-project/vllm/blob/main/Dockerfile.openvino)| This Docker image, provided by the vLLM project, deploys and serves vLLM models with the OpenVINO framework |
|[opea/web-retriever-chroma](https://hub.docker.com/r/opea/web-retriever-chroma)|[Link](https://github.com/opea-project/GenAIComps/tree/main/comps/web_retrievers/chroma/langchain/Dockerfile)| This Docker image exposes the OPEA web retrieval microservice based on the Chroma vector database for GenAI application use |
|[opea/whisper](https://hub.docker.com/r/opea/whisper)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/asr/whisper/dependency/Dockerfile)| This Docker image exposes the OPEA Whisper service for GenAI application use |
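Since the renamed `opea/vllm-gaudi` row has no Docker Hub link yet, the image is built locally from the linked `Dockerfile.hpu`. A minimal sketch, assuming the `habana_main` branch referenced in the table and Habana's standard container-runtime conventions for running on a Gaudi host:

```bash
# Build the Gaudi vLLM image from the HabanaAI fork's Dockerfile.hpu.
git clone https://github.com/HabanaAI/vllm-fork.git
cd vllm-fork
git checkout habana_main
docker build -f Dockerfile.hpu -t opea/vllm-gaudi:latest .

# Run on a Gaudi host. The runtime and device flags follow Habana's usual
# container conventions; model/serving arguments depend on your deployment.
docker run -d --runtime=habana -e HABANA_VISIBLE_DEVICES=all \
  --cap-add=sys_nice --net=host \
  opea/vllm-gaudi:latest
```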