Fix the vectorstores path issue caused by the refactor in PR opea-project/GenAIComps#1159.
Modify the docker image name and file path in docker_images_list.md.
Signed-off-by: letonghan <letong.han@intel.com>
docker_images_list.md (1 addition, 1 deletion)
@@ -78,13 +78,13 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
 |[opea/lvm-llama-vision]()|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/integrations/dependency/llama-vision/Dockerfile)| The docker image exposed the OPEA microservice running LLaMA Vision as a large visual model (LVM) server for GenAI application use |
 |[opea/lvm-predictionguard](https://hub.docker.com/r/opea/lvm-predictionguard)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/integrations/dependency/predictionguard/Dockerfile)| The docker image exposed the OPEA microservice running PredictionGuard as a large visual model (LVM) server for GenAI application use |
 |[opea/nginx](https://hub.docker.com/r/opea/nginx)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/nginx/src/Dockerfile)| The docker image exposed the OPEA nginx microservice for GenAI application use |
+|[opea/pathway](https://hub.docker.com/r/opea/vectorstore-pathway)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/pathway/src/Dockerfile)| The docker image exposed the OPEA Vectorstores microservice with Pathway for GenAI application use |
 |[opea/promptregistry-mongo-server](https://hub.docker.com/r/opea/promptregistry-mongo-server)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/prompt_registry/src/Dockerfile)| The docker image exposes the OPEA Prompt Registry microservices which based on MongoDB database, designed to store and retrieve user's preferred prompts |
 |[opea/reranking]()|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/rerankings/src/Dockerfile)| The docker image exposed the OPEA reranking microservice based on tei docker image for GenAI application use |
 |[opea/retriever]()|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/src/Dockerfile)| The docker image exposed the OPEA retrieval microservice based on milvus vectordb for GenAI application use |
 |[opea/speecht5](https://hub.docker.com/r/opea/speecht5)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/integrations/dependency/speecht5/Dockerfile)| The docker image exposed the OPEA SpeechT5 service for GenAI application use |
 |[opea/speecht5-gaudi](https://hub.docker.com/r/opea/speecht5-gaudi)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/integrations/dependency/speecht5/Dockerfile.intel_hpu)| The docker image exposed the OPEA SpeechT5 service on Gaudi2 for GenAI application use |
 |[opea/tei-gaudi](https://hub.docker.com/r/opea/tei-gaudi/tags)|[Link](https://github.com/huggingface/tei-gaudi/blob/habana-main/Dockerfile-hpu)| The docker image powered by HuggingFace Text Embedding Inference (TEI) on Gaudi2 for deploying and serving Embedding Models |
-|[opea/vectorstore-pathway](https://hub.docker.com/r/opea/vectorstore-pathway)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/vectorstores/pathway/Dockerfile)| The docker image exposed the OPEA Vectorstores microservice with Pathway for GenAI application use |
 |[opea/lvm-video-llama](https://hub.docker.com/r/opea/lvm-video-llama)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/integrations/dependency/video-llama/Dockerfile)| The docker image exposed the OPEA microservice running Video-Llama as a large visual model (LVM) server for GenAI application use |
 |[opea/tts](https://hub.docker.com/r/opea/tts)|[Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/Dockerfile)| The docker image exposed the OPEA Text-To-Speech microservice for GenAI application use |
 |[opea/vllm](https://hub.docker.com/r/opea/vllm)|[Link](https://github.com/vllm-project/vllm/blob/main/Dockerfile.cpu)| The docker image powered by vllm-project for deploying and serving vllm Models |
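Because the table entry renames `opea/vectorstore-pathway` to `opea/pathway` and points at the refactored Dockerfile path, existing deployments that reference the old name may need updating. Below is a minimal sketch of how a consumer might pick up the new name; the `latest` tag, the availability of the image under that name on Docker Hub, and the local build workflow are assumptions for illustration, not something stated in this PR.

```bash
# Sketch only: pull the Pathway vectorstore image under its new name
# (assumes a "latest" tag is published; adjust to the release you need).
docker pull opea/pathway:latest

# Or build it locally from the refactored Dockerfile path shown in the table.
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/pathway:latest \
  -f comps/third_parties/pathway/src/Dockerfile .
```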