@@ -158,10 +150,6 @@ Engage in intelligent conversations with your documents using our advanced **Ret

Summarize lengthy documents or articles, enabling you to grasp key takeaways quickly. Save time and effort with our intelligent summarization feature!

-### ❓ FAQ Generation
-
-Effortlessly create comprehensive FAQs based on your documents. Ensure your users have access to the information they need with minimal effort!
-
### 💻 Code Generation

Boost your coding productivity by providing a description of the functionality you require. Our application generates corresponding code snippets, saving you valuable time and effort!
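The code generation feature above is backed by the CodeGen LLM microservice whose verification step appears later in this diff; as a quick orientation, a request of that kind looks like the following sketch (the `${host_ip}` variable, port 9001, and payload are taken from the verification steps further down):

```bash
# Sketch of a code-generation request; endpoint and payload mirror the
# CodeGen LLM microservice verification step later in this README.
curl http://${host_ip}:9001/v1/chat/completions \
  -X POST \
  -d '{"query":"def print_hello_world():"}' \
  -H 'Content-Type: application/json'
```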
@@ -237,16 +206,7 @@ Please refer to **[keycloak_setup_guide](keycloak_setup_guide.md)** for more det
     -H 'Content-Type: application/json'
   ```

-5. Reranking Microservice
-
-   ```bash
-   curl http://${host_ip}:8000/v1/reranking\
-     -X POST \
-     -d '{"initial_query":"What is Deep Learning?", "retrieved_docs": [{"text":"Deep Learning is not..."}, {"text":"Deep learning is..."}]}' \
-     -H 'Content-Type: application/json'
-   ```
-
-6. LLM backend Service (ChatQnA, DocSum, FAQGen)
+4. LLM backend Service (ChatQnA, DocSum)

   ```bash
   curl http://${host_ip}:9009/generate \
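The hunk above cuts off before the request body for the ChatQnA/DocSum backend on port 9009. Assuming that backend is a HuggingFace Text Generation Inference (TGI) server, which the `/generate` route suggests, a complete verification call would look roughly like this sketch:

```bash
# Hedged sketch: full request to the LLM backend on ${host_ip}:9009,
# assuming a TGI-compatible /generate API (inputs + generation parameters).
curl http://${host_ip}:9009/generate \
  -X POST \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' \
  -H 'Content-Type: application/json'
```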
@@ -255,7 +215,7 @@ Please refer to **[keycloak_setup_guide](keycloak_setup_guide.md)** for more det
     -H 'Content-Type: application/json'
   ```

-7. LLM backend Service (CodeGen)
+5. LLM backend Service (CodeGen)

   ```bash
   curl http://${host_ip}:8028/generate \
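As with the previous backend, the hunk stops at the curl line for the CodeGen backend on port 8028. If it is likewise a TGI-style `/generate` endpoint, a code-completion check might look like the sketch below; the prompt is reused from the CodeGen microservice example later in this diff, and the generation parameters are illustrative assumptions:

```bash
# Hedged sketch: verification call for the CodeGen backend on ${host_ip}:8028,
# assuming a TGI-compatible /generate API.
curl http://${host_ip}:8028/generate \
  -X POST \
  -d '{"inputs":"def print_hello_world():","parameters":{"max_new_tokens":128, "do_sample": true}}' \
  -H 'Content-Type: application/json'
```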
@@ -264,67 +224,50 @@ Please refer to **[keycloak_setup_guide](keycloak_setup_guide.md)** for more det
     -H 'Content-Type: application/json'
   ```

-8. ChatQnA LLM Microservice
+6. CodeGen LLM Microservice

   ```bash
-   curl http://${host_ip}:9000/v1/chat/completions\
+   curl http://${host_ip}:9001/v1/chat/completions\
     -X POST \
-    -d '{"query":"What is Deep Learning?","max_tokens":17,"top_k":10,"top_p":0.95,"typical_p":0.95,"temperature":0.01,"repetition_penalty":1.03,"stream":true}' \
+    -d '{"query":"def print_hello_world():"}' \
     -H 'Content-Type: application/json'
   ```

-9. CodeGen LLM Microservice
+7. DocSum LLM Microservice

   ```bash
-   curl http://${host_ip}:9001/v1/chat/completions\
+   curl http://${host_ip}:9003/v1/docsum\
     -X POST \
-    -d '{"query":"def print_hello_world():"}' \
+    -d '{"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5", "type": "text"}' \
     -H 'Content-Type: application/json'
   ```

-10. DocSum LLM Microservice
-
-   ```bash
-   curl http://${host_ip}:9003/v1/docsum\
-     -X POST \
-     -d '{"query":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5"}' \
-     -H 'Content-Type: application/json'
-   ```
-
-11. FAQGen LLM Microservice
+8. ChatQnA MegaService

-   ```bash
-   curl http://${host_ip}:9002/v1/faqgen\
-     -X POST \
-     -d '{"query":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5"}' \
"messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."
"messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.",
-- **Generate FAQs from Text via Pasting**: Paste the text to into the text box, then click 'Generate FAQ' to produce a condensed FAQ of the content, which will be displayed in the 'FAQ' box below.
-
-- **Generate FAQs from Text via txt file Upload**: Upload the file in the Upload bar, then click 'Generate FAQ' to produce a condensed FAQ of the content, which will be displayed in the 'FAQ' box below.