Commit ec61ea2

Authored by reidliu41

[Misc] add dify integration (#17895)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

1 parent c6798ba commit ec61ea2

File tree: 5 files changed, +57 −0 lines changed
Lines changed: 56 additions & 0 deletions
(deployment-dify)=

# Dify
[Dify](https://github.com/langgenius/dify) is an open-source LLM app development platform. Its intuitive interface combines agentic AI workflow, RAG pipeline, agent capabilities, model management, observability features, and more, allowing you to quickly move from prototype to production.

It supports vLLM as a model provider to efficiently serve large language models.

This guide walks you through deploying Dify using a vLLM backend.
## Prerequisites

- Set up a vLLM environment
- Install [Docker](https://docs.docker.com/engine/install/) and [Docker Compose](https://docs.docker.com/compose/install/)

## Deploy

- Start the vLLM server with a supported chat-completion model, e.g.:

```console
vllm serve Qwen/Qwen1.5-7B-Chat
```
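
Once the server is up, Dify talks to it through vLLM's OpenAI-compatible API. As a minimal sketch (not part of the Dify setup itself), this is the shape of a chat-completions request that would be sent to the server; the `localhost:8000` base URL assumes vLLM's default host and port:

```python
import json
import urllib.request

# Assumed default vLLM address; adjust host/port to your deployment.
base_url = "http://localhost:8000/v1"

# The model field must match the name passed to `vllm serve`.
payload = {
    "model": "Qwen/Qwen1.5-7B-Chat",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Build the request without sending it; urllib.request.urlopen(req)
# would submit it once the server is actually running.
req = urllib.request.Request(
    f"{base_url}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
```

This is the same endpoint Dify's vLLM provider calls under the hood, so a quick request like this is a handy smoke test before wiring up the UI.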

- Start the Dify server with Docker Compose ([details](https://github.com/langgenius/dify?tab=readme-ov-file#quick-start)):

```console
git clone https://github.com/langgenius/dify.git
cd dify/docker
cp .env.example .env
docker compose up -d
```

- In your browser, open `http://localhost/install`, configure the basic login information, and log in.

- In the top-right user menu (under the profile icon), go to `Settings`, click `Model Provider`, then locate the `vLLM` provider and install it.

- Fill in the model provider details as follows:
  - **Model Type**: `LLM`
  - **Model Name**: `Qwen/Qwen1.5-7B-Chat`
  - **API Endpoint URL**: `http://{vllm_server_host}:{vllm_server_port}/v1`
  - **Model Name for API Endpoint**: `Qwen/Qwen1.5-7B-Chat`
  - **Completion Mode**: `Completion`

:::{image} /assets/deployment/dify-settings.png
:::

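Note that Dify itself runs inside Docker containers, so `localhost` inside those containers does not reach a vLLM server running on the host. A small sketch of how the `{vllm_server_host}`/`{vllm_server_port}` placeholders resolve; `host.docker.internal` is an assumed example value (available on Docker Desktop; on Linux you may need the host's LAN IP or an `extra_hosts` entry instead):

```python
# Hypothetical values for illustration; substitute your own setup.
vllm_server_host = "host.docker.internal"  # host alias seen from inside Docker Desktop containers
vllm_server_port = 8000                    # vLLM's default serving port

# The value to paste into Dify's "API Endpoint URL" field.
api_endpoint_url = f"http://{vllm_server_host}:{vllm_server_port}/v1"
```

The URL must include the `/v1` suffix, since Dify appends the OpenAI-style paths (such as `/chat/completions`) to it.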
- To create a test chatbot, go to `Studio → Chatbot → Create from Blank`, then select `Chatbot` as the type:

:::{image} /assets/deployment/dify-create-chatbot.png
:::

- Click the chatbot you just created to open the chat interface and start interacting with the model:

:::{image} /assets/deployment/dify-chat.png
:::

docs/source/deployment/frameworks/index.md

Lines changed: 1 addition & 0 deletions

@@ -7,6 +7,7 @@ anything-llm
 bentoml
 cerebrium
 chatbox
+dify
 dstack
 helm
 lws
