Commit edcd7c9

Fix code scanning alert no. 21: Uncontrolled data used in path expression (#1171)
Signed-off-by: Mingyuan Qi <mingyuan.qi@intel.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
1 parent ef2047b commit edcd7c9

File tree

11 files changed, +30 -47 lines changed


EdgeCraftRAG/Dockerfile.server

Lines changed: 5 additions & 0 deletions
@@ -23,6 +23,11 @@ RUN useradd -m -s /bin/bash user && \
     mkdir -p /home/user && \
     chown -R user /home/user/
 
+RUN mkdir /templates && \
+    chown -R user /templates
+COPY ./edgecraftrag/prompt_template/default_prompt.txt /templates/
+RUN chown -R user /templates/default_prompt.txt
+
 COPY ./edgecraftrag /home/user/edgecraftrag
 
 RUN mkdir -p /home/user/gradio_cache

EdgeCraftRAG/README.md

Lines changed: 10 additions & 34 deletions
@@ -32,14 +32,14 @@ Please follow this link [vLLM with OpenVINO](https://github.com/opea-project/Gen
 
 ### Start Edge Craft RAG Services with Docker Compose
 
-If you want to enable vLLM with OpenVINO service, please finish the steps in [Launch vLLM with OpenVINO service](#optional-launch-vllm-with-openvino-service) first.
-
 ```bash
 cd GenAIExamples/EdgeCraftRAG/docker_compose/intel/gpu/arc
 
 export MODEL_PATH="your model path for all your models"
 export DOC_PATH="your doc path for uploading a dir of files"
 export GRADIO_PATH="your gradio cache path for transferring files"
+# If you have a specific prompt template, please uncomment the following line
+# export PROMPT_PATH="your prompt path for prompt templates"
 
 # Make sure all 3 folders have 1000:1000 permission, otherwise
 # chown 1000:1000 ${MODEL_PATH} ${DOC_PATH} ${GRADIO_PATH}
@@ -70,49 +70,25 @@ optimum-cli export openvino -m BAAI/bge-small-en-v1.5 ${MODEL_PATH}/BAAI/bge-sma
 optimum-cli export openvino -m BAAI/bge-reranker-large ${MODEL_PATH}/BAAI/bge-reranker-large --task sentence-similarity
 optimum-cli export openvino -m Qwen/Qwen2-7B-Instruct ${MODEL_PATH}/Qwen/Qwen2-7B-Instruct/INT4_compressed_weights --weight-format int4
 
-docker compose up -d
+```
+
+#### Launch services with local inference
 
+```bash
+docker compose -f compose.yaml up -d
 ```
 
-#### (Optional) Launch vLLM with OpenVINO service
+#### Launch services with vLLM + OpenVINO inference service
 
-1. Set up Environment Variables
+Set up Additional Environment Variables and start with compose_vllm.yaml
 
 ```bash
 export LLM_MODEL=#your model id
 export VLLM_SERVICE_PORT=8008
 export vLLM_ENDPOINT="http://${HOST_IP}:${VLLM_SERVICE_PORT}"
 export HUGGINGFACEHUB_API_TOKEN=#your HF token
-```
-
-2. Uncomment below code in 'GenAIExamples/EdgeCraftRAG/docker_compose/intel/gpu/arc/compose.yaml'
 
-```bash
-# vllm-openvino-server:
-#   container_name: vllm-openvino-server
-#   image: opea/vllm-arc:latest
-#   ports:
-#     - ${VLLM_SERVICE_PORT:-8008}:80
-#   environment:
-#     HTTPS_PROXY: ${https_proxy}
-#     HTTP_PROXY: ${https_proxy}
-#     VLLM_OPENVINO_DEVICE: GPU
-#     HF_ENDPOINT: ${HF_ENDPOINT}
-#     HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
-#   volumes:
-#     - /dev/dri/by-path:/dev/dri/by-path
-#     - $HOME/.cache/huggingface:/root/.cache/huggingface
-#   devices:
-#     - /dev/dri
-#   entrypoint: /bin/bash -c "\
-#     cd / && \
-#     export VLLM_CPU_KVCACHE_SPACE=50 && \
-#     export VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON && \
-#     python3 -m vllm.entrypoints.openai.api_server \
-#     --model '${LLM_MODEL}' \
-#     --max_model_len=1024 \
-#     --host 0.0.0.0 \
-#     --port 80"
+docker compose -f compose_vllm.yaml up -d
 ```
 
 ### ChatQnA with LLM Example (Command Line)

EdgeCraftRAG/docker_compose/intel/gpu/arc/compose.yaml

Lines changed: 1 addition & 0 deletions
@@ -16,6 +16,7 @@ services:
       - ${DOC_PATH:-${PWD}}:/home/user/docs
       - ${GRADIO_PATH:-${PWD}}:/home/user/gradio_cache
      - ${HF_CACHE:-${HOME}/.cache}:/home/user/.cache
+      - ${PROMPT_PATH:-${PWD}}:/templates/custom
     ports:
       - ${PIPELINE_SERVICE_PORT:-16010}:${PIPELINE_SERVICE_PORT:-16010}
     devices:

EdgeCraftRAG/docker_compose/intel/gpu/arc/compose_vllm.yaml

Lines changed: 1 addition & 0 deletions
@@ -16,6 +16,7 @@ services:
       - ${DOC_PATH:-${PWD}}:/home/user/docs
       - ${GRADIO_PATH:-${PWD}}:/home/user/gradio_cache
      - ${HF_CACHE:-${HOME}/.cache}:/home/user/.cache
+      - ${PROMPT_PATH:-${PWD}}:/templates/custom
     ports:
       - ${PIPELINE_SERVICE_PORT:-16010}:${PIPELINE_SERVICE_PORT:-16010}
     devices:
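
Both compose files add the same bind mount, so a directory supplied through PROMPT_PATH becomes visible inside the container under /templates/custom, i.e. inside the safe root enforced by the generator (see the generator.py change below). As a rough illustration, assuming a hypothetical prompt file my_prompt.txt dropped into ${PROMPT_PATH} on the host, a pipeline's prompt_path would resolve like this:

```python
import os

# Hypothetical example: "custom/my_prompt.txt" is not shipped with the repo;
# it stands for any file placed in ${PROMPT_PATH} on the host.
safe_root = "/templates"
resolved = os.path.normpath(os.path.join(safe_root, "custom/my_prompt.txt"))
print(resolved)  # /templates/custom/my_prompt.txt -- still inside the safe root
```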

EdgeCraftRAG/edgecraftrag/components/generator.py

Lines changed: 7 additions & 6 deletions
@@ -26,12 +26,13 @@ def __init__(self, llm_model, prompt_template, inference_type, **kwargs):
             ("\n\n", "\n"),
             ("\t\n", "\n"),
         )
-        template = prompt_template
-        self.prompt = (
-            DocumentedContextRagPromptTemplate.from_file(template)
-            if os.path.isfile(template)
-            else DocumentedContextRagPromptTemplate.from_template(template)
-        )
+        safe_root = "/templates"
+        template = os.path.normpath(os.path.join(safe_root, prompt_template))
+        if not template.startswith(safe_root):
+            raise ValueError("Invalid template path")
+        if not os.path.exists(template):
+            raise ValueError("Template file not exists")
+        self.prompt = DocumentedContextRagPromptTemplate.from_file(template)
         self.llm = llm_model
         if isinstance(llm_model, str):
             self.model_id = llm_model
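
This constructor is the core of the security fix: instead of opening whatever path the request supplies, the prompt path is joined onto a fixed root and normalized, and anything that escapes /templates is rejected. A minimal standalone sketch of the same containment pattern (the function name and error messages here are illustrative, not the project's API):

```python
import os

SAFE_ROOT = "/templates"

def resolve_template(prompt_path: str) -> str:
    """Resolve a user-supplied template path, refusing anything outside SAFE_ROOT."""
    # Join onto the fixed root, then collapse "." and ".." segments.
    candidate = os.path.normpath(os.path.join(SAFE_ROOT, prompt_path))
    # normpath alone is not enough: verify the result is still under the root.
    if not candidate.startswith(SAFE_ROOT):
        raise ValueError("Invalid template path")
    if not os.path.exists(candidate):
        raise ValueError("Template file does not exist")
    return candidate

# "./default_prompt.txt"   -> "/templates/default_prompt.txt"
# "custom/my_prompt.txt"   -> "/templates/custom/my_prompt.txt"
# "../../etc/passwd"       -> rejected (normalizes to "/etc/passwd")
```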

EdgeCraftRAG/tests/configs/test_pipeline_local_llm.json

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@
         "device": "auto",
         "weight": "INT4"
       },
-      "prompt_path": "./edgecraftrag/prompt_template/default_prompt.txt",
+      "prompt_path": "./default_prompt.txt",
       "inference_type": "local"
     },
     "active": "True"

EdgeCraftRAG/tests/configs/test_pipeline_vllm.json

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@
         "device": "auto",
         "weight": "INT4"
       },
-      "prompt_path": "./edgecraftrag/prompt_template/default_prompt.txt",
+      "prompt_path": "./default_prompt.txt",
       "inference_type": "vllm"
     },
     "active": "True"

EdgeCraftRAG/tests/test_compose_vllm_on_arc.sh

Lines changed: 1 addition & 2 deletions
@@ -31,8 +31,7 @@ vLLM_ENDPOINT="http://${HOST_IP}:${VLLM_SERVICE_PORT}"
 function build_docker_images() {
     cd $WORKPATH/docker_image_build
     echo "Build all the images with --no-cache, check docker_image_build.log for details..."
-    service_list="server ui ecrag"
-    docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
+    docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log
 
     echo "Build vllm_openvino image from GenAIComps..."
     cd $WORKPATH && git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}"

EdgeCraftRAG/tests/test_pipeline_local_llm.json

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@
         "device": "auto",
         "weight": "INT4"
       },
-      "prompt_path": "./edgecraftrag/prompt_template/default_prompt.txt",
+      "prompt_path": "./default_prompt.txt",
       "inference_type": "local"
     },
     "active": "True"

EdgeCraftRAG/ui/gradio/default.yaml

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ postprocessor: "reranker"
 
 # Generator
 generator: "chatqna"
-prompt_path: "./edgecraftrag/prompt_template/default_prompt.txt"
+prompt_path: "./default_prompt.txt"
 
 # Models
 embedding_model_id: "BAAI/bge-small-en-v1.5"

EdgeCraftRAG/ui/gradio/ecrag_client.py

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ def create_update_pipeline(
         ],
         generator=api_schema.GeneratorIn(
             # TODO: remove hardcoding
-            prompt_path="./edgecraftrag/prompt_template/default_prompt.txt",
+            prompt_path="./default_prompt.txt",
             model=api_schema.ModelIn(model_id=llm_id, model_path=llm_path, device=llm_device, weight=llm_weights),
             inference_type=llm_infertype,
         ),
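
The repo-relative default "./edgecraftrag/prompt_template/default_prompt.txt" no longer works once paths are resolved against /templates, which is why the test configs, default.yaml, and the Gradio client all switch to "./default_prompt.txt": Dockerfile.server above copies the default template to /templates/default_prompt.txt, and the relative name resolves to exactly that file. A quick sanity check of that resolution, as a sketch:

```python
import os

# "./default_prompt.txt" joined onto the /templates safe root points at the
# file staged there by Dockerfile.server.
resolved = os.path.normpath(os.path.join("/templates", "./default_prompt.txt"))
assert resolved == "/templates/default_prompt.txt"
```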
