
Commit d2bab99

refine readme for reorg (opea-project#782)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

1 parent: d97882e

28 files changed: +60 −60 lines
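The reorg follows a single naming pattern across every example: hardware-specific folders move under `<deploy method>/<vendor>/<device class>/<device>`. A mapping summary distilled from the hunks below (comments only, for orientation):

```bash
# Docker Compose paths
#   ChatQnA/docker/gpu/   -> ChatQnA/docker_compose/nvidia/gpu/
#   ChatQnA/docker/aipc/  -> ChatQnA/docker_compose/intel/cpu/aipc/
# Kubernetes manifest paths
#   AudioQnA/kubernetes/manifests/xeon  -> AudioQnA/kubernetes/intel/cpu/xeon/manifests
#   AudioQnA/kubernetes/manifests/gaudi -> AudioQnA/kubernetes/intel/hpu/gaudi/manifests
```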

AudioQnA/kubernetes/intel/README.md

Lines changed: 2 additions & 2 deletions

@@ -7,14 +7,14 @@

 ## Deploy On Xeon
 ```
-cd GenAIExamples/AudioQnA/kubernetes/manifests/xeon
+cd GenAIExamples/AudioQnA/kubernetes/intel/cpu/xeon/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" audioqna.yaml
 kubectl apply -f audioqna.yaml
 ```
 ## Deploy On Gaudi
 ```
-cd GenAIExamples/AudioQnA/kubernetes/manifests/gaudi
+cd GenAIExamples/AudioQnA/kubernetes/intel/hpu/gaudi/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" audioqna.yaml
 kubectl apply -f audioqna.yaml
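A quick way to confirm the manifests applied from the new path came up cleanly (generic kubectl usage, not part of the commit; the pod name is a placeholder):

```bash
# Watch AudioQnA pods until they reach Running/Ready
kubectl get pods -w

# If a pod stays Pending (e.g. no schedulable Gaudi node), inspect its events
kubectl describe pod <pod-name>
```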

ChatQnA/README.md

Lines changed: 6 additions & 6 deletions

@@ -123,7 +123,7 @@ Currently we support two ways of deploying ChatQnA services with docker compose:
 docker pull opea/chatqna-conversation-ui:latest
 ```

-2. Using the docker images `built from source`: [Guide](docker/xeon/README.md)
+2. Using the docker images `built from source`: [Guide](docker_compose/intel/cpu/xeon/README.md)

 > Note: The **opea/chatqna-without-rerank:latest** docker image has not been published yet, users need to build this docker image from source.

@@ -139,7 +139,7 @@ By default, the embedding, reranking and LLM models are set to a default value a

 Change the `xxx_MODEL_ID` in `docker/xxx/set_env.sh` for your needs.

-For customers with proxy issues, the models from [ModelScope](https://www.modelscope.cn/models) are also supported in ChatQnA. Refer to [this readme](docker/xeon/README.md) for details.
+For customers with proxy issues, the models from [ModelScope](https://www.modelscope.cn/models) are also supported in ChatQnA. Refer to [this readme](docker_compose/intel/cpu/xeon/README.md) for details.

 ### Setup Environment Variable

@@ -202,19 +202,19 @@ Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more in
 ### Deploy ChatQnA on NVIDIA GPU

 ```bash
-cd GenAIExamples/ChatQnA/docker/gpu/
+cd GenAIExamples/ChatQnA/docker_compose/nvidia/gpu/
 docker compose up -d
 ```

-Refer to the [NVIDIA GPU Guide](./docker/gpu/README.md) for more instructions on building docker images from source.
+Refer to the [NVIDIA GPU Guide](./docker_compose/nvidia/gpu/README.md) for more instructions on building docker images from source.

 ### Deploy ChatQnA into Kubernetes on Xeon & Gaudi with GMC

 Refer to the [Kubernetes Guide](./kubernetes/intel/README_gmc.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi with GMC.

 ### Deploy ChatQnA into Kubernetes on Xeon & Gaudi without GMC

-Refer to the [Kubernetes Guide](./kubernetes/kubernetes/intel/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi without GMC.
+Refer to the [Kubernetes Guide](./kubernetes/intel/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi without GMC.

 ### Deploy ChatQnA into Kubernetes using Helm Chart

@@ -224,7 +224,7 @@ Refer to the [ChatQnA helm chart](https://github.com/opea-project/GenAIInfra/tre

 ### Deploy ChatQnA on AI PC

-Refer to the [AI PC Guide](./docker/aipc/README.md) for instructions on deploying ChatQnA on AI PC.
+Refer to the [AI PC Guide](./docker_compose/intel/cpu/aipc/README.md) for instructions on deploying ChatQnA on AI PC.

 ### Deploy ChatQnA on Red Hat OpenShift Container Platform (RHOCP)

ChatQnA/docker_compose/intel/cpu/aipc/README.md

Lines changed: 1 addition & 1 deletion

@@ -159,7 +159,7 @@ Note: Please replace with `host_ip` with you external IP address, do not use loc
 > Before running the docker compose command, you need to be in the folder that has the docker compose yaml file

 ```bash
-cd GenAIExamples/ChatQnA/docker/aipc/
+cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/aipc/
 docker compose up -d

 # let ollama service runs

ChatQnA/docker_compose/intel/cpu/xeon/README.md

Lines changed: 1 addition & 1 deletion

@@ -147,7 +147,7 @@ cd ..
 Build frontend Docker image via below command:

 ```bash
-cd GenAIExamples/ChatQnA/
+cd GenAIExamples/ChatQnA/ui
 docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
 cd ../../../..
 ```
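The `cd` fix matters because the trailing `.` in `docker build` makes the current directory the build context: the UI image must be built from `ChatQnA/ui`, where `./docker/Dockerfile` resolves. An equivalent one-shot form run from the parent of the repo checkout, assuming the Dockerfile does live at `ui/docker/Dockerfile` as the hunk implies:

```bash
# Build the ChatQnA UI image without changing directory;
# the final argument sets ChatQnA/ui as the build context.
docker build --no-cache -t opea/chatqna-ui:latest \
  --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
  -f GenAIExamples/ChatQnA/ui/docker/Dockerfile GenAIExamples/ChatQnA/ui
```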

ChatQnA/docker_compose/intel/cpu/xeon/README_qdrant.md

Lines changed: 2 additions & 2 deletions

@@ -85,7 +85,7 @@ docker build --no-cache -t opea/retriever-qdrant:latest --build-arg https_proxy=
 ### 3. Build Rerank Image

 ```bash
-docker build --no-cache -t opea/reranking-tei:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/reranks/tei/Dockerfile .
+docker build --no-cache -t opea/reranking-tei:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/reranks/tei/Dockerfile .`
 ```

 ### 4. Build LLM Image

@@ -117,7 +117,7 @@ cd ../../..
 Build frontend Docker image via below command:

 ```bash
-cd GenAIExamples/ChatQnA/
+cd GenAIExamples/ChatQnA/ui
 docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
 cd ../../../..
 ```

ChatQnA/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 2 additions & 2 deletions

@@ -128,7 +128,7 @@ cd ../..
 Construct the frontend Docker image using the command below:

 ```bash
-cd GenAIExamples/ChatQnA/
+cd GenAIExamples/ChatQnA/ui
 docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
 cd ../../../..
 ```

@@ -150,7 +150,7 @@ cd ../../../..
 To fortify AI initiatives in production, Guardrails microservice can secure model inputs and outputs, building Trustworthy, Safe, and Secure LLM-based Applications.

 ```bash
-cd GenAIExamples/ChatQnA/docker
+cd GenAIComps
 docker build -t opea/guardrails-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/guardrails/llama_guard/langchain/Dockerfile .
 cd ../../..
 ```
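The guardrails image is now built from the GenAIComps repository root, which is the only place the `comps/...` Dockerfile path resolves. A minimal sketch, assuming GenAIComps is cloned alongside GenAIExamples:

```bash
# Check out the components repo if it is not already present
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/guardrails-tgi:latest \
  --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
  -f comps/guardrails/llama_guard/langchain/Dockerfile .
```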

ChatQnA/docker_compose/nvidia/gpu/README.md

Lines changed: 2 additions & 2 deletions

@@ -59,7 +59,7 @@ cd ../../..
 Construct the frontend Docker image using the command below:

 ```bash
-cd GenAIExamples/ChatQnA/
+cd GenAIExamples/ChatQnA/ui
 docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
 cd ../../../..
 ```

@@ -132,7 +132,7 @@ Note: Please replace with `host_ip` with you external IP address, do **NOT** use
 ### Start all the services Docker Containers

 ```bash
-cd GenAIExamples/ChatQnA/docker/gpu/
+cd GenAIExamples/ChatQnA/docker_compose/nvidia/gpu/
 docker compose up -d
 ```
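Once `docker compose up -d` returns, a sanity check from the same directory (standard Docker Compose commands, not specific to this commit; the service name is a placeholder):

```bash
# List the ChatQnA services with their state and published ports
docker compose ps

# Tail the logs of one service if it keeps restarting
docker compose logs -f <service-name>
```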

ChatQnA/kubernetes/intel/README.md

Lines changed: 2 additions & 2 deletions

@@ -11,7 +11,7 @@
 ## Deploy On Xeon

 ```
-cd GenAIExamples/ChatQnA/kubernetes/manifests/xeon
+cd GenAIExamples/ChatQnA/kubernetes/intel/cpu/xeon/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" chatqna.yaml
 kubectl apply -f chatqna.yaml

@@ -20,7 +20,7 @@ kubectl apply -f chatqna.yaml
 ## Deploy On Gaudi

 ```
-cd GenAIExamples/ChatQnA/kubernetes/manifests/gaudi
+cd GenAIExamples/ChatQnA/kubernetes/intel/hpu/gaudi/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" chatqna.yaml
 kubectl apply -f chatqna.yaml
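The `sed` step substitutes the token placeholder in place, so it silently does nothing if the placeholder text ever drifts. A defensive variant (illustrative only, not part of the commit):

```bash
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" chatqna.yaml

# Fail loudly if any placeholder survived the substitution
if grep -q "insert-your-huggingface-token-here" chatqna.yaml; then
  echo "token placeholder not replaced" >&2
  exit 1
fi
```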

ChatQnA/tests/test_manifest_on_gaudi.sh

Lines changed: 1 addition & 1 deletion

@@ -166,7 +166,7 @@ case "$1" in
 if [ $ret -ne 0 ]; then
 exit $ret
 fi
-pushd ChatQnA/kubernetes/manifests/gaudi
+pushd ChatQnA/kubernetes/intel/hpu/gaudi/manifests
 set +e
 install_and_validate_chatqna_guardrail
 popd

ChatQnA/tests/test_manifest_on_xeon.sh

Lines changed: 1 addition & 1 deletion

@@ -166,7 +166,7 @@ case "$1" in
 if [ $ret -ne 0 ]; then
 exit $ret
 fi
-pushd ChatQnA/kubernetes/manifests/xeon
+pushd ChatQnA/kubernetes/intel/cpu/xeon/manifests
 set +e
 install_and_validate_chatqna_guardrail
 popd
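Both test scripts rely on the same `pushd`/`popd` idiom: enter the manifest directory, run the validation helper, and return to wherever the script started. A standalone sketch of the pattern (the wrapper function here is illustrative, not from the scripts):

```bash
run_in_dir() {
  # Run a command inside a directory, then restore the caller's directory.
  pushd "$1" > /dev/null || return 1
  shift
  "$@"
  local rc=$?
  popd > /dev/null
  return $rc
}

run_in_dir ChatQnA/kubernetes/intel/cpu/xeon/manifests kubectl apply -f chatqna.yaml
```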

CodeGen/kubernetes/intel/README.md

Lines changed: 2 additions & 2 deletions

@@ -12,7 +12,7 @@
 ## Deploy On Xeon

 ```
-cd GenAIExamples/CodeGen/kubernetes/manifests/xeon
+cd GenAIExamples/CodeGen/kubernetes/intel/cpu/xeon/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 export MODEL_ID="meta-llama/CodeLlama-7b-hf"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codegen.yaml

@@ -23,7 +23,7 @@ kubectl apply -f codegen.yaml
 ## Deploy On Gaudi

 ```
-cd GenAIExamples/CodeGen/kubernetes/manifests/gaudi
+cd GenAIExamples/CodeGen/kubernetes/intel/hpu/gaudi/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codegen.yaml
 kubectl apply -f codegen.yaml

CodeGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md

Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ Before deploying the react-codegen.yaml file, ensure that you have the following
 ```
 # You may set the HUGGINGFACEHUB_API_TOKEN via method:
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
-cd GenAIExamples/CodeGen/kubernetes/manifests/xeon/ui/
+cd GenAIExamples/CodeGen/kubernetes/intel/cpu/xeon/manifests/ui/
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" react-codegen.yaml
 ```
 b. Set the proxies based on your network configuration
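After the prerequisites are in place, the manifest named in this hunk deploys like the others; a brief sketch, assuming `kubectl` is pointed at the target cluster:

```bash
cd GenAIExamples/CodeGen/kubernetes/intel/cpu/xeon/manifests/ui/
kubectl apply -f react-codegen.yaml

# Confirm the React UI pod and service were created
kubectl get pods,svc | grep -i codegen
```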

CodeTrans/kubernetes/intel/README.md

Lines changed: 2 additions & 2 deletions

@@ -21,7 +21,7 @@ Change the `MODEL_ID` in `codetrans.yaml` for your needs.
 ## Deploy On Xeon

 ```bash
-cd GenAIExamples/CodeTrans/kubernetes/manifests/xeon
+cd GenAIExamples/CodeTrans/kubernetes/intel/cpu/xeon/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codetrans.yaml
 kubectl apply -f codetrans.yaml

@@ -30,7 +30,7 @@ kubectl apply -f codetrans.yaml
 ## Deploy On Gaudi

 ```bash
-cd GenAIExamples/CodeTrans/kubernetes/manifests/gaudi
+cd GenAIExamples/CodeTrans/kubernetes/intel/hpu/gaudi/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codetrans.yaml
 kubectl apply -f codetrans.yaml

DocSum/README.md

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ Currently we support two ways of deploying Document Summarization services with
 docker pull opea/docsum:latest
 ```

-2. Start services using the docker images `built from source`: [Guide](./docker)
+2. Start services using the docker images `built from source`: [Guide](./docker_compose)

 ### Required Models
DocSum/kubernetes/intel/README.md

Lines changed: 2 additions & 2 deletions

@@ -11,7 +11,7 @@
 ## Deploy On Xeon

 ```
-cd GenAIExamples/DocSum/kubernetes/manifests/xeon
+cd GenAIExamples/DocSum/kubernetes/intel/cpu/xeon/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" docsum.yaml
 kubectl apply -f docsum.yaml

@@ -20,7 +20,7 @@ kubectl apply -f docsum.yaml
 ## Deploy On Gaudi

 ```
-cd GenAIExamples/DocSum/kubernetes/manifests/gaudi
+cd GenAIExamples/DocSum/kubernetes/intel/hpu/gaudi/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" docsum.yaml
 kubectl apply -f docsum.yaml

DocSum/kubernetes/intel/cpu/xeon/manifest/ui/README.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ Before deploying the react-docsum.yaml file, ensure that you have the following
 ```
 # You may set the HUGGINGFACEHUB_API_TOKEN via method:
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
-cd GenAIExamples/DocSum/kubernetes/manifests/xeon/ui/
+cd GenAIExamples/DocSum/kubernetes/intel/cpu/xeon/manifests/ui/
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" react-docsum.yaml
 ```
 b. Set the proxies based on your network configuration

FaqGen/kubernetes/intel/README.md

Lines changed: 2 additions & 2 deletions

@@ -17,7 +17,7 @@ If use gated models, you also need to provide [huggingface token](https://huggin
 ## Deploy On Xeon

 ```
-cd GenAIExamples/FaqGen/kubernetes/manifests/xeon
+cd GenAIExamples/FaqGen/kubernetes/intel/cpu/xeon/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" faqgen.yaml
 kubectl apply -f faqgen.yaml

@@ -26,7 +26,7 @@ kubectl apply -f faqgen.yaml
 ## Deploy On Gaudi

 ```
-cd GenAIExamples/FaqGen/kubernetes/manifests/gaudi
+cd GenAIExamples/FaqGen/kubernetes/intel/hpu/gaudi/manifests
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" faqgen.yaml
 kubectl apply -f faqgen.yaml

FaqGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ Before deploying the react-faqgen.yaml file, ensure that you have the following
 ```
 # You may set the HUGGINGFACEHUB_API_TOKEN via method:
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
-cd GenAIExamples/FaqGen/kubernetes/manifests/xeon/ui/
+cd GenAIExamples/FaqGen/kubernetes/intel/cpu/xeon/manifests/ui/
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" react-faqgen.yaml
 ```
 b. Set the proxies based on your network configuration

ProductivitySuite/README.md

Lines changed: 1 addition & 1 deletion

@@ -20,4 +20,4 @@ Refer to the [Keycloak Configuration Guide](./docker_compose/intel/cpu/xeon/keyc

 Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more instructions on building docker images from source and running the application via docker compose.

-Refer to the [Xeon Kubernetes Guide](./kubernetes/manifests/README.md) for more instruction on deploying the application via kubernetes.
+Refer to the [Xeon Kubernetes Guide](./kubernetes/intel/README.md) for more instruction on deploying the application via kubernetes.

ProductivitySuite/kubernetes/intel/README.md

Lines changed: 2 additions & 2 deletions

@@ -27,7 +27,7 @@ To begin with, ensure that you have following prerequisites in place:
 ```
 # You may set the HUGGINGFACEHUB_API_TOKEN via method:
 export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
-cd GenAIExamples/ProductivitySuite/kubernetes/manifests/xeon/
+cd GenAIExamples/ProductivitySuite/kubernetes/intel/cpu/xeon/manifests/
 sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" *.yaml
 ```

@@ -48,7 +48,7 @@ To begin with, ensure that you have following prerequisites in place:
 ## Deploying ProductivitySuite
 You can use yaml files in xeon folder to deploy ProductivitySuite with reactUI.
 ```
-cd GenAIExamples/ProductivitySuite/kubernetes/manifests/xeon/
+cd GenAIExamples/ProductivitySuite/kubernetes/intel/cpu/xeon/manifests/
 kubectl apply -f *.yaml
 ```
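One caveat about the unchanged context line `kubectl apply -f *.yaml`: the shell expands the glob, and `kubectl apply` accepts only one path per `-f` flag, so the extra filenames become stray arguments whenever the folder holds more than one manifest. Applying the directory is the robust form (a sketch, not part of the commit):

```bash
cd GenAIExamples/ProductivitySuite/kubernetes/intel/cpu/xeon/manifests/
# Apply every manifest in the current directory in one call
kubectl apply -f .
```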
