ChatQnA/README.md
| Azure | 5th Gen Intel Xeon with Intel AMX | Work-in-progress | Work-in-progress |
| Intel Tiber AI Cloud | 5th Gen Intel Xeon with Intel AMX | Work-in-progress | Work-in-progress |
## Automated Deployment to an Ubuntu-based System (if not using Terraform) using Intel® Optimized Cloud Modules for **Ansible**
To deploy to an existing Xeon Ubuntu-based system, use our Intel Optimized Cloud Modules for Ansible. This is the same Ansible playbook used by Terraform.
Use this if you are not using Terraform and have provisioned your system with another tool or manually, including on bare metal. An illustrative invocation follows the table below.
| Operating System | Intel Optimized Cloud Module for Ansible |
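
As a rough sketch of how such a module is typically run against an existing host, the flow looks like the following. The repository URL, inventory, and playbook names here are illustrative assumptions, not the actual module files; check the table above for the exact module.

```bash
# Fetch the Intel Optimized Cloud Modules for Ansible
# (repository URL assumed for illustration)
git clone https://github.com/intel/optimized-cloud-recipes.git
cd optimized-cloud-recipes

# Run the playbook against your existing Ubuntu host
# (inventory.ini and playbook.yml are placeholder names)
ansible-playbook -i inventory.ini playbook.yml
```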
1. If you do not have Docker installed, you can run this script to install it: `bash docker_compose/install_docker.sh`.
2. The default LLM is `meta-llama/Meta-Llama-3-8B-Instruct`. Before deploying the application, please make sure you have either requested and been granted access to it on [Huggingface](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) or downloaded the model locally from [ModelScope](https://www.modelscope.cn/models), as sketched below.
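
If you take the ModelScope route, a minimal sketch of the download step looks like this; the `LLM-Research/Meta-Llama-3-8B-Instruct` model ID and the CLI flags are assumptions to verify against the ModelScope page:

```bash
# Install the ModelScope client
pip install modelscope

# Download the model locally (model ID assumed; confirm it on modelscope.cn)
modelscope download --model LLM-Research/Meta-Llama-3-8B-Instruct --local_dir ./Meta-Llama-3-8B-Instruct
```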
### Quick Start: 1. Setup Environment Variables
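
A minimal sketch of this step, assuming the compose files read the host IP and Huggingface token from the variable names below; verify the exact names against your release's environment setup script or compose file:

```bash
# Public IP of the host that will run the services (placeholder value)
export host_ip="your_external_public_ip"

# Token used to pull gated models such as Meta-Llama-3-8B-Instruct
export HUGGINGFACEHUB_API_TOKEN="your_huggingface_token"

# Optional: set these only if the host sits behind a proxy
export http_proxy="your_http_proxy"
export https_proxy="your_https_proxy"
```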
This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference.
The table below lists, for each microservice component in the ChatQnA architecture, the default open source project, hardware, port, and endpoint.
Gaudi default `compose.yaml`:
| MicroService | Open Source Project | HW | Port | Endpoint |
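
Once the stack is up, the gateway endpoint can be exercised with a request like the one below; the 8888 port and `/v1/chatqna` route follow common ChatQnA defaults and are assumptions to check against the table and your compose file:

```bash
# Send a chat request to the ChatQnA mega-service gateway
# (port and route assumed from common defaults)
curl http://${host_ip}:8888/v1/chatqna \
    -H "Content-Type: application/json" \
    -d '{"messages": "What is the OPEA project?"}'
```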