Commit 9438d39

Update README for some minor issues (#1000)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Parent commit: 1929dfd

9 files changed: +11 additions, -13 deletions

ChatQnA/README.md (0 additions, 2 deletions)

@@ -206,8 +206,6 @@ cd GenAIExamples/ChatQnA/docker_compose/intel/hpu/gaudi/
 docker compose up -d
 ```
 
-> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
-
 Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
 
 ### Deploy ChatQnA on Xeon

ChatQnA/docker_compose/intel/cpu/xeon/README.md (5 additions, 0 deletions)

@@ -97,6 +97,11 @@ After launching your instance, you can connect to it using SSH (for Linux instan
 
 First of all, you need to build Docker Images locally and install the python package of it.
 
+```bash
+git clone https://github.com/opea-project/GenAIComps.git
+cd GenAIComps
+```
+
 ### 1. Build Retriever Image
 
 ```bash

ChatQnA/docker_compose/intel/cpu/xeon/README_qdrant.md (1 addition, 1 deletion)

@@ -111,7 +111,7 @@ Build frontend Docker image that enables Conversational experience with ChatQnA
 **Export the value of the public IP address of your Xeon server to the `host_ip` environment variable**
 
 ```bash
-cd GenAIExamples/ChatQnA//ui
+cd GenAIExamples/ChatQnA/ui
 export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8912/v1/chatqna"
 export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6043/v1/dataprep"
 docker build --no-cache -t opea/chatqna-conversation-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg BACKEND_SERVICE_ENDPOINT=$BACKEND_SERVICE_ENDPOINT --build-arg DATAPREP_SERVICE_ENDPOINT=$DATAPREP_SERVICE_ENDPOINT -f ./docker/Dockerfile.react .
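The one-line change in this hunk drops a doubled slash from the `cd` path. Since POSIX pathname resolution collapses repeated interior slashes, the old command still worked; the fix is purely cosmetic. A minimal sketch demonstrating the equivalence, using a hypothetical `/tmp` tree rather than the real repository:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical stand-in for the repository layout (not the real checkout).
mkdir -p /tmp/slash-demo/GenAIExamples/ChatQnA/ui

# Repeated interior slashes collapse to one, so both forms resolve to
# the same physical directory.
old=$(cd /tmp/slash-demo/GenAIExamples/ChatQnA//ui && pwd -P)
new=$(cd /tmp/slash-demo/GenAIExamples/ChatQnA/ui && pwd -P)

[ "$old" = "$new" ] && echo "paths match"   # prints "paths match"
```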

ChatQnA/docker_compose/intel/hpu/gaudi/README.md (5 additions, 0 deletions)

@@ -70,6 +70,11 @@ curl http://${host_ip}:8888/v1/chatqna \
 
 First of all, you need to build Docker Images locally. This step can be ignored after the Docker images published to Docker hub.
 
+```bash
+git clone https://github.com/opea-project/GenAIComps.git
+cd GenAIComps
+```
+
 ### 1. Build Retriever Image
 
 ```bash

CodeGen/README.md (0 additions, 2 deletions)

@@ -132,8 +132,6 @@ cd GenAIExamples/CodeGen/docker_compose/intel/hpu/gaudi
 docker compose up -d
 ```
 
-> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
-
 Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
 
 #### Deploy CodeGen on Xeon

CodeTrans/README.md (0 additions, 2 deletions)

@@ -72,8 +72,6 @@ cd GenAIExamples/CodeTrans/docker_compose/intel/hpu/gaudi
 docker compose up -d
 ```
 
-> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
-
 Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
 
 #### Deploy Code Translation on Xeon

DocSum/README.md (0 additions, 2 deletions)

@@ -71,8 +71,6 @@ cd GenAIExamples/DocSum/docker_compose/intel/hpu/gaudi/
 docker compose -f compose.yaml up -d
 ```
 
-> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
-
 Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
 
 #### Deploy on Xeon

SearchQnA/README.md (0 additions, 2 deletions)

@@ -137,8 +137,6 @@ cd GenAIExamples/SearchQnA/docker_compose/intel/hpu/gaudi/
 docker compose up -d
 ```
 
-> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
-
 Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
 
 ### Deploy SearchQnA on Xeon

VisualQnA/README.md (0 additions, 2 deletions)

@@ -76,8 +76,6 @@ cd GenAIExamples/VisualQnA/docker_compose/intel/hpu/gaudi/
 docker compose up -d
 ```
 
-> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
-
 ### Deploy VisualQnA on Xeon
 
 Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more instructions on building docker images from source.
