
Commit 264759d

fix path bug for reorg (#801)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
1 parent d422929 · commit 264759d

File tree: 15 files changed, +17 −15 lines


AgentQnA/tests/4_launch_and_validate_agent_openai.sh

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ function start_agent_and_api_server() {
     docker run -d --runtime=runc --name=kdd-cup-24-crag-service -p=8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0

     echo "Starting Agent services"
-    cd $WORKDIR/GenAIExamples/AgentQnA/docker/openai
+    cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
     bash launch_agent_service_openai.sh
 }
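
For context, a minimal sketch of the corrected launch step, assuming the repository is cloned under `$WORKDIR` as the test script expects:

```bash
# Minimal sketch of the post-reorg launch step (assumes GenAIExamples
# is cloned under $WORKDIR, as in the test script above).
export WORKDIR=${WORKDIR:-$HOME}   # assumption: pick your own working dir
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
bash launch_agent_service_openai.sh
```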

AudioQnA/tests/test_compose_on_gaudi.sh

Lines changed: 2 additions & 2 deletions
@@ -45,7 +45,7 @@ function start_services() {
     export TTS_SERVICE_PORT=3002
     export LLM_SERVICE_PORT=3007

-    # sed -i "s/backend_address/$ip_address/g" $WORKPATH/docker/ui/svelte/.env
+    # sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env

     # Start Docker Containers
     docker compose up -d > ${LOG_PATH}/start_services_with_compose.log

@@ -91,7 +91,7 @@ function validate_megaservice() {
 }

 #function validate_frontend() {
-#  cd $WORKPATH/docker/ui/svelte
+#  cd $WORKPATH/ui/svelte
 #  local conda_env_name="OPEA_e2e"
 #  export PATH=${HOME}/miniforge3/bin/:$PATH
 ## conda remove -n ${conda_env_name} --all -y

AudioQnA/tests/test_compose_on_xeon.sh

Lines changed: 2 additions & 2 deletions
@@ -44,7 +44,7 @@ function start_services() {
     export TTS_SERVICE_PORT=3002
     export LLM_SERVICE_PORT=3007

-    # sed -i "s/backend_address/$ip_address/g" $WORKPATH/docker/ui/svelte/.env
+    # sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env

     # Start Docker Containers
     docker compose up -d > ${LOG_PATH}/start_services_with_compose.log

@@ -81,7 +81,7 @@ function validate_megaservice() {
 }

 #function validate_frontend() {
-#  cd $WORKPATH/docker/ui/svelte
+#  cd $WORKPATH/ui/svelte
 #  local conda_env_name="OPEA_e2e"
 #  export PATH=${HOME}/miniforge3/bin/:$PATH
 ## conda remove -n ${conda_env_name} --all -y
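
Both AudioQnA test scripts now point at the relocated Svelte UI. A hedged sketch of the substitution the commented-out `sed` line would perform, assuming `$WORKPATH` is the AudioQnA example root:

```bash
# Sketch: rewrite the placeholder backend address in the relocated UI env file.
# WORKPATH and the .env location are assumptions based on the diff above.
export WORKPATH=$PWD
ip_address=$(hostname -I | awk '{print $1}')   # assumption: first host IP
sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env
```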

ChatQnA/README.md

Lines changed: 1 addition & 1 deletion
@@ -152,7 +152,7 @@ By default, the embedding, reranking and LLM models are set to a default value a
 | Reranking | BAAI/bge-reranker-base    |
 | LLM       | Intel/neural-chat-7b-v3-3 |

-Change the `xxx_MODEL_ID` in `docker/xxx/set_env.sh` for your needs.
+Change the `xxx_MODEL_ID` in `docker_compose/xxx/set_env.sh` for your needs.

 For customers with proxy issues, the models from [ModelScope](https://www.modelscope.cn/models) are also supported in ChatQnA. Refer to [this readme](docker_compose/intel/cpu/xeon/README.md) for details.
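
A minimal sketch of loading the relocated script and then changing a model, assuming `xxx` expands to a hardware path such as `intel/cpu/xeon` and that `set_env.sh` exports the `xxx_MODEL_ID` variables unconditionally (so overrides come after sourcing):

```bash
# Sketch: source the relocated set_env.sh, then override a default model.
# The exact path and override ordering are assumptions, not part of the commit.
cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon
source ./set_env.sh
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"   # default shown; replace as needed
```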

ChatQnA/docker_compose/intel/cpu/xeon/README_qdrant.md

Lines changed: 1 addition & 1 deletion
@@ -107,7 +107,7 @@ To construct the Mega Service, we utilize the [GenAIComps](https://github.com/op

 ```bash
 git clone https://github.com/opea-project/GenAIExamples.git
-cd GenAIExamples/ChatQnA/docker
+cd GenAIExamples/ChatQnA/
 docker build --no-cache -t opea/chatqna:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
 cd ../../..
 ```
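
The ChatQnA image now builds from the example root rather than a `docker/` subdirectory. A quick sketch to confirm the resulting tag exists after the build:

```bash
# Sketch: verify the freshly built image is present locally.
docker images opea/chatqna:latest
```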

CodeGen/README.md

Lines changed: 1 addition & 1 deletion
@@ -67,7 +67,7 @@ To set up environment variables for deploying ChatQnA services, follow these ste
 3. Set up other environment variables:

    ```bash
-   source ./docker/set_env.sh
+   source ./docker_compose/set_env.sh
    ```

 ### Deploy CodeGen using Docker
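
The same `docker/` → `docker_compose/` move applies to the `set_env.sh` references in the CodeTrans, DocSum, and SearchQnA READMEs below. A minimal sketch of the corrected step, assuming it runs from a fresh clone:

```bash
# Sketch: load deployment variables from their post-reorg location.
# CodeGen shown; the same pattern applies to the other examples.
cd GenAIExamples/CodeGen
source ./docker_compose/set_env.sh
```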
File renamed without changes.

CodeTrans/README.md

Lines changed: 2 additions & 2 deletions
@@ -30,7 +30,7 @@ By default, the LLM model is set to a default value as listed below:
 | ------- | ----------------------------- |
 | LLM     | HuggingFaceH4/mistral-7b-grok |

-Change the `LLM_MODEL_ID` in `docker/set_env.sh` for your needs.
+Change the `LLM_MODEL_ID` in `docker_compose/set_env.sh` for your needs.

 ### Setup Environment Variable

@@ -58,7 +58,7 @@ To set up environment variables for deploying Code Translation services, follow
 3. Set up other environment variables:

    ```bash
-   source ./docker/set_env.sh
+   source ./docker_compose/set_env.sh
    ```

 ### Deploy with Docker

CodeTrans/docker_compose/intel/cpu/xeon/README.md

Lines changed: 2 additions & 1 deletion
@@ -92,7 +92,8 @@ Change the `LLM_MODEL_ID` below for your needs.
 3. Set up other environment variables:

    ```bash
-   source ../set_env.sh
+   cd GenAIExamples/CodeTrans/docker_compose
+   source ./set_env.sh
    ```

 ### Start Microservice Docker Containers

CodeTrans/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 2 additions & 1 deletion
@@ -84,7 +84,8 @@ Change the `LLM_MODEL_ID` below for your needs.
 3. Set up other environment variables:

    ```bash
-   source ../set_env.sh
+   cd GenAIExamples/CodeTrans/docker_compose
+   source ./set_env.sh
    ```

 ### Start Microservice Docker Containers
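
Both hardware-specific CodeTrans READMEs now change into `docker_compose/` explicitly instead of sourcing a relative `../set_env.sh`. A hedged sketch of the full sequence from a fresh clone:

```bash
# Sketch: the explicit cd removes the dependency on where the reader starts.
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/CodeTrans/docker_compose
source ./set_env.sh
```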
File renamed without changes.

DocSum/README.md

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@ Currently we support two ways of deploying Document Summarization services with

 ### Required Models

-We set default model as "Intel/neural-chat-7b-v3-3", change "LLM_MODEL_ID" in "set_env.sh" if you want to use other models.
+We set default model as "Intel/neural-chat-7b-v3-3", change "LLM_MODEL_ID" in "docker_compose/set_env.sh" if you want to use other models.

 ```
 export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"

@@ -57,7 +57,7 @@ To set up environment variables for deploying Document Summarization services, f
 3. Set up other environment variables:

    ```bash
-   source ./docker/set_env.sh
+   source ./docker_compose/set_env.sh
    ```

 ### Deploy using Docker
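
A minimal sketch of swapping the DocSum model after loading the relocated script; sourcing first avoids the script clobbering the override (an assumption about how `set_env.sh` exports its variables):

```bash
# Sketch: load the relocated environment, then pick the summarization model.
cd GenAIExamples/DocSum
source ./docker_compose/set_env.sh
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"   # default shown; replace as needed
```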
File renamed without changes.

SearchQnA/README.md

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ To set up environment variables for deploying SearchQnA services, follow these s
 3. Set up other environment variables:

    ```bash
-   source ./docker/set_env.sh
+   source ./docker_compose/set_env.sh
    ```

 ### Deploy SearchQnA on Gaudi
File renamed without changes.
