Commit e62d9ae — merge pull request #48 from apconw/dev ("update read me"), authored Mar 19, 2025. Parents: 8c0a569 + e58e1b1.

3 files changed (+405 −405); README-en.md shown below (+221 lines).
# Large Model Data Assistant

[![中文文档](https://img.shields.io/badge/中文文档-点击查看-orange)](README.md)

🌟 **Project Introduction**

A lightweight, full-link, and easily customizable large-model application project.

**Compatible with large models such as DeepSeek and Qwen2.5**

This is a one-stop large-model application development project built on technologies such as Dify, Ollama & vLLM, Sanic, and Text2SQL 📊. It features a modern UI crafted with Vue 3, TypeScript, and Vite 5, supports graphical data Q&A powered by large models through ECharts 📈, and handles tabular Q&A over CSV files 📂. It also integrates easily with third-party open-source RAG and retrieval systems 🌐 to support a wide range of general-knowledge Q&A.

As a lightweight large-model application development project, Sanic-Web 🛠️ supports rapid iteration and extension, making it easy to get large-model projects off the ground quickly. 🚀
## Architecture Diagram

![image](./images/app-01.png)

## 🎉 **Features**

- **Core Technology Stack**: Dify + Ollama + RAG + (Qwen2.5/DeepSeek) + Text2SQL
- **UI Framework**: Vue 3 + TypeScript + Vite 5
- **Data Q&A**: Integrates ECharts with large models to provide lightweight graphical data Q&A via Text2SQL.
- **Table Q&A**: Supports uploading CSV files and answers questions over tabular data using large-model summarization, preprocessing, and Text2SQL.
- **General Q&A**: Supports general-knowledge Q&A through integration with third-party RAG systems and public web retrieval.
- **Application Architecture**: Serves as a lightweight, full-link, one-stop large-model application development framework that is easy to extend and put into practice.
- **Flexible Deployment**: One-click deployment of every component needed for large-model application development via docker-compose, with zero configuration required.
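The Data Q&A and Table Q&A features both rest on a Text2SQL loop: the model turns a natural-language question into SQL, the SQL runs against the database, and the rows are shaped for an ECharts chart. A minimal sketch of that idea, with a stubbed model call and an in-memory SQLite table (all names here are illustrative, not the project's actual code):

```python
import sqlite3

def ask_llm_for_sql(question: str, schema: str) -> str:
    # Stand-in for the real large-model call (e.g. via a Dify canvas):
    # the model sees the schema plus the question and returns SQL.
    return ("SELECT city, AVG(temp) AS avg_temp "
            "FROM weather GROUP BY city ORDER BY city")

def data_qa(question: str, conn: sqlite3.Connection) -> dict:
    schema = "weather(city TEXT, temp REAL)"
    sql = ask_llm_for_sql(question, schema)
    rows = conn.execute(sql).fetchall()
    # Shape the result so a front end could feed it straight into ECharts.
    return {
        "xAxis": [r[0] for r in rows],
        "series": [r[1] for r in rows],
    }

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weather (city TEXT, temp REAL)")
conn.executemany("INSERT INTO weather VALUES (?, ?)",
                 [("Beijing", 20.0), ("Beijing", 22.0), ("Shanghai", 25.0)])
chart = data_qa("Average temperature by city?", conn)
print(chart)  # {'xAxis': ['Beijing', 'Shanghai'], 'series': [21.0, 25.0]}
```

In the real project the SQL generation is delegated to the configured large model and the results are rendered client-side; this sketch only shows the shape of the loop.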
## Demo

![image](./images/chat-04.gif)
![image](./images/chat-05.png)
![image](./images/chat-01.png)
![image](./images/chat-02.png)

## 💡 Environment Requirements

Before you start, make sure your development environment meets the following minimum requirements:

- **Operating System**: Windows 10/11, macOS (M series), CentOS/Ubuntu
- **GPU**: For local deployment with Ollama, an NVIDIA GPU is recommended. Alternatively, run in CPU mode or purchase an API key from a public cloud provider.
- **Memory**: 8 GB+
## 🔧 **Prerequisites**

* Python 3.8+
* Poetry 1.8.3+
* Dify 0.7.1+
* MySQL 8.0+
* Node.js 18.12.x+
* Pnpm 9.x
## 📚 **Large Model Deployment**

- [Refer to the Ollama deployment guide](https://qwen.readthedocs.io/en/latest/run_locally/ollama.html)
- Model: Qwen2.5 7B
- Model: DeepSeek R1 7B
- [Alibaba Cloud public API key](http://aliyun.com/product/bailian)
## ⚙️ **Dify Environment Configuration**

1. **Install Dify**

- [Official documentation](https://docs.dify.ai/en)
- To help newcomers to large-model applications, this project provides a one-click way to start the Dify service for a quick experience.
- Local access address for Dify: http://localhost:18000. You need to register your own account and password.

```bash
# Start the built-in Dify service
cd docker/dify/docker
docker compose up -d
```

2. **Configure Dify**

- Add the Ollama model provider in Dify and configure the Qwen2.5 and DeepSeek R1 models.
- Import the docker/dify/数据问答_v1.1.2_deepseek.yml canvas from the project root directory.
- Copy the API key corresponding to the canvas for use in the following steps.
- After importing the canvas, manually select the locally configured large model and save the settings.

![image](./images/llm-setting.png)
![image](./images/llm-setting-deepseek.png)
![image](./images/import-convas.png)
![image](./images/convas-api-key.png)
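Once you have the canvas API key, services talk to the canvas over Dify's HTTP API. A hedged sketch of building such a request (the endpoint and payload fields follow Dify's published `chat-messages` API; the base URL and key below are placeholders):

```python
import json
import urllib.request

DIFY_BASE_URL = "http://localhost:18000"   # placeholder: your Dify address
DIFY_DATABASE_QA_API_KEY = "app-xxxxxxxx"  # placeholder: key copied from the canvas

def build_dify_request(query: str, user: str = "demo-user") -> urllib.request.Request:
    """Build a chat-messages request for the imported data-Q&A canvas."""
    payload = {
        "inputs": {},
        "query": query,
        "response_mode": "blocking",
        "user": user,
    }
    return urllib.request.Request(
        f"{DIFY_BASE_URL}/v1/chat-messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {DIFY_DATABASE_QA_API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_dify_request("How many records are in the sales table?")
# urllib.request.urlopen(req) would send it once Dify is running.
```

The project itself wires this key in through the DIFY_DATABASE_QA_API_KEY environment variable described in the Quick Start below; the snippet only illustrates what the key is used for.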
## 🚀 Quick Start

The specific steps are as follows:

1. Clone the repository

```bash
git clone https://github.com/apconw/sanic-web.git
```

2. Start the services

- In docker-compose, modify the environment variables starting with DIFY_ in the chat-service section.
- Set DIFY_DATABASE_QA_API_KEY to the API key obtained from the Dify canvas.

```bash
# Start the front-end and back-end services and the middleware
cd docker
docker compose up -d
```
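Editing the DIFY_ variables means changing the chat-service entry in the compose file. A sketch of what that section might look like (the service name and any variable other than DIFY_DATABASE_QA_API_KEY are illustrative guesses, not the project's exact keys):

```yaml
services:
  chat-service:
    environment:
      # Paste the API key copied from the Dify canvas here
      - DIFY_DATABASE_QA_API_KEY=app-xxxxxxxx
      # Illustrative: adjust if Dify is not on the default local address
      - DIFY_SERVER_URL=http://host.docker.internal:18000
```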
3. Configure MinIO

- Access the MinIO console at http://localhost:19001/. The default account is admin and the password is 12345678.
- Create a bucket named filedata and configure an Access Key.
- In docker-compose, modify the environment variables starting with MINIO_ in the chat-service section, then restart the services.

```bash
# Restart the front-end and back-end services and the middleware
cd docker
docker compose up -d
```
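The bucket can also be created from code instead of through the console. A sketch using the `minio` Python SDK (the S3 API port 19000 is an assumption based on the console port above; credentials are the defaults just mentioned):

```python
def ensure_filedata_bucket():
    # Import inside the function so this file loads without the SDK installed.
    from minio import Minio  # pip install minio

    # Assumption: the S3 API listens on 19000 while the console is on 19001.
    client = Minio(
        "localhost:19000",
        access_key="admin",
        secret_key="12345678",
        secure=False,
    )
    if not client.bucket_exists("filedata"):
        client.make_bucket("filedata")

# ensure_filedata_bucket()  # run once MinIO is up
```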
4. Initialize the data

```bash
# Install the dependency package
pip install pymysql

# For Mac or Linux users
cd docker
./init.sh

# For Windows users
cd common
python initialize_mysql.py
```

5. Access the application

- Front-end service: http://localhost:8081
## 🛠️ Local Development

- Clone the repository.
- Deploy the large models: refer to the Large Model Deployment section above to install Ollama and deploy the Qwen2.5 and DeepSeek R1 models.
- Configure Dify: refer to the Dify Environment Configuration section above to obtain the API key from the Dify canvas, then set DIFY_DATABASE_QA_API_KEY in the .env.dev file.
- Configure MinIO: modify the MinIO-related keys in the .env.dev file.
- Install the dependencies and start the services.

1. **Back-end dependencies**

- Install Poetry: [refer to the official Poetry documentation](https://python-poetry.org/docs/)

```bash
# Install Poetry
pip install poetry

# Install dependencies from the project root
# Configure a domestic (China) mirror
poetry source add --priority=primary mirrors https://pypi.tuna.tsinghua.edu.cn/simple/
poetry install --no-root
```
2. **Install the middleware**

```bash
cd docker
docker compose up -d mysql minio
```

3. **Configure MinIO**

- Access the MinIO console at http://localhost:19001/. The default account is admin and the password is 12345678.
- Create a bucket named filedata and configure an Access Key.
- Modify the MinIO-related keys in the .env.dev file.

4. **Initialize the data**

- If you are using a local MySQL environment, modify the database connection information in the initialize_mysql source code.

```bash
# For Mac or Linux users
cd docker
./init.sh

# For Windows users
cd common
python initialize_mysql.py
```
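For a local MySQL, the connection block inside initialize_mysql.py is what you would edit. A hedged sketch of such a block (host, credentials, and database name are placeholders, not the project's actual defaults):

```python
# Placeholder connection settings: point these at your local MySQL instance.
DB_SETTINGS = {
    "host": "127.0.0.1",
    "port": 3306,
    "user": "root",
    "password": "change-me",
    "database": "sanic_web",   # hypothetical schema name
    "charset": "utf8mb4",
}

def get_connection():
    # Import here so the settings can be inspected without pymysql installed.
    import pymysql  # pip install pymysql
    return pymysql.connect(**DB_SETTINGS)
```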
5. **Front-end dependencies**

- The front end is based on the open-source project [chatgpt-vue3-light-mvp](https://github.com/pdsuwwz/chatgpt-vue3-light-mvp).

```bash
# Install the front-end dependencies and start the service
cd web

# Install pnpm globally
npm install -g pnpm

# Install dependencies
pnpm i

# Start the dev server
pnpm dev
```

6. **Start the back-end service**

```bash
# Start the back-end service
python serv.py
```

7. **Access the service**

- Front-end service: http://localhost:2048
## 🐳 Build Images

Run the following commands to build the images:

```bash
# Build the front-end image
make web-build

# Build the back-end image
make server-build
```
## 🌹 Support

If you find this project useful, please give it a star on GitHub by clicking the [`Star`](https://github.com/apconw/sanic-web) button. Your support is our motivation to keep improving. Thank you! ^_^

## ⭐ Star History

[![Star History Chart](https://api.star-history.com/svg?repos=apconw/sanic-web&type=Date)](https://star-history.com/#apconw/sanic-web&Date)

## QA Community

- Join our large-model application community to discuss and share experiences.
- Follow the official WeChat account and click the "WeChat Group" menu to join.

| WeChat Group |
| :---------------------------------: |
| ![image](./images/wchat-search.png) |

## License

MIT License | Copyright © 2024-PRESENT AiAdventurer
