
Commit 35a4fef

Update Translation README.md for workflow (#907)
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
1 parent a3f9811 commit 35a4fef

File tree: 1 file changed (+52 −0 lines)

Translation/README.md

Lines changed: 52 additions & 0 deletions
@@ -6,6 +6,58 @@ Translation architecture shows below:
![architecture](./assets/img/translation_architecture.png)

The Translation example is implemented using the component-level microservices defined in [GenAIComps](https://github.com/opea-project/GenAIComps). The flow chart below shows the information flow between the different microservices for this example.

```mermaid
---
config:
  flowchart:
    nodeSpacing: 400
    rankSpacing: 100
    curve: linear
  themeVariables:
    fontSize: 50px
---
flowchart LR
    %% Colors %%
    classDef blue fill:#ADD8E6,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5
    classDef orange fill:#FBAA60,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5
    classDef orchid fill:#C26DBC,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5
    classDef invisible fill:transparent,stroke:transparent;
    style Translation-MegaService stroke:#000000

    %% Subgraphs %%
    subgraph Translation-MegaService["Translation MegaService"]
        direction LR
        LLM([LLM MicroService]):::blue
    end
    subgraph UserInterface["User Interface"]
        direction LR
        a([User Input Query]):::orchid
        UI([UI server<br>]):::orchid
    end

    LLM_gen{{LLM Service<br>}}
    GW([Translation GateWay<br>]):::orange
    NG([Nginx MicroService]):::blue

    %% Questions interaction
    direction LR
    a[User Input Query] --> UI
    a[User Input Query] --> |Need Proxy Server|NG
    NG --> UI
    UI --> GW
    GW <==> Translation-MegaService

    %% Embedding service flow
    direction LR
    LLM <-.-> LLM_gen
```
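As the flow chart shows, a user query travels through the UI server (optionally via the Nginx proxy) to the Translation gateway, which forwards it to the LLM microservice. A minimal sketch of building such a request on the client side is shown below; the payload field names, the port `8888`, and the `/v1/translation` path are assumptions based on common OPEA conventions, not details confirmed by this commit.

```python
import json

def build_translation_request(text: str, language_from: str, language_to: str) -> dict:
    """Build the JSON payload a client would send to the Translation gateway.

    Field names are hypothetical, modeled on typical OPEA request schemas.
    """
    return {
        "language_from": language_from,
        "language_to": language_to,
        "source_language": text,
    }

payload = build_translation_request("I love machine translation.", "English", "Chinese")
print(json.dumps(payload))
# A real client would then POST this payload to the gateway, e.g. (endpoint assumed):
#   curl -X POST http://${host_ip}:8888/v1/translation \
#        -H "Content-Type: application/json" -d "$payload"
```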
This Translation use case performs language translation inference across multiple platforms. Currently, we provide examples for [Intel Gaudi2](https://www.intel.com/content/www/us/en/products/details/processors/ai-accelerators/gaudi-overview.html) and [Intel Xeon Scalable Processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon.html), and we invite contributions from other hardware vendors to expand the OPEA ecosystem.

## Deploy Translation Service
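A hedged sketch of how such an example is typically brought up with Docker Compose follows; the directory layout, compose file location, environment variable names, and gateway port are assumptions for illustration, not paths confirmed by this commit.

```shell
# Hypothetical deployment sketch -- all paths and variable names are assumptions.
# Determine the host IP and set the Hugging Face token for model downloads.
export host_ip=$(hostname -I | awk '{print $1}')
export HUGGINGFACEHUB_API_TOKEN="<your-hf-token>"

# Bring the Translation services up with Docker Compose (file location assumed).
cd Translation/docker_compose/intel/cpu/xeon
docker compose up -d

# Once running, the gateway (port and path assumed) could be queried with:
curl -s http://${host_ip}:8888/v1/translation \
  -H "Content-Type: application/json" \
  -d '{"language_from":"English","language_to":"Chinese","source_language":"Hello"}'
```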
