# Translation Engine
LLPlayer supports multiple translation engines. Each engine is explained below.
## Translation API

A normal translation API sends subtitles one by one, so accuracy is low, but it is fast and very stable. Google Translate V1, DeepL, and DeepLX are engines of this type.
### Google Translate V1

| Accuracy (A-E) | Speed (A-E) | Price | Local / Remote |
| --- | --- | --- | --- |
| D | A | Free | Remote |
Google Translation V1: https://translate.googleapis.com
Translation accuracy is not good, but it is free and very fast.
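For reference, the kind of request this engine makes can be sketched as below. This is not LLPlayer's actual code: the `translate_a/single` endpoint is unofficial and undocumented and may change at any time, and the helper name `build_google_v1_url` is hypothetical.

```python
from urllib.parse import urlencode

def build_google_v1_url(text: str, target_lang: str) -> str:
    """Build a GET URL for the unofficial Google Translate V1 endpoint.

    Note: this endpoint is undocumented and may break without notice.
    """
    params = {
        "client": "gtx",   # the "free" web client
        "sl": "auto",      # source language: auto-detect
        "tl": target_lang, # target language, e.g. "en"
        "dt": "t",         # request the translated text
        "q": text,         # text to translate (URL-encoded)
    }
    return "https://translate.googleapis.com/translate_a/single?" + urlencode(params)
```

Fetching this URL returns a nested JSON array; the translated text is typically found at index `[0][0][0]` of the response.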
### DeepL

| Accuracy (A-E) | Speed (A-E) | Price | Local / Remote |
| --- | --- | --- | --- |
| C | B | Free or Paid | Remote |
DeepL: https://www.deepl.com
An API key from a registered account is required. A certain amount of usage per month is free of charge.
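A sketch of what a request to DeepL's v2 translate API looks like, assuming the free-tier host (`api-free.deepl.com`; paid accounts use `api.deepl.com`). The helper name is hypothetical and this is not LLPlayer's implementation.

```python
import json

def build_deepl_request(text: str, target_lang: str, auth_key: str):
    """Build a POST request for DeepL's v2 translate API (free-tier host)."""
    url = "https://api-free.deepl.com/v2/translate"
    headers = {
        "Authorization": f"DeepL-Auth-Key {auth_key}",
        "Content-Type": "application/json",
    }
    # DeepL accepts a list of texts; the response mirrors it as "translations"
    body = json.dumps({"text": [text], "target_lang": target_lang})
    return url, headers, body
```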
### DeepLX

| Accuracy (A-E) | Speed (A-E) | Price | Local / Remote |
| --- | --- | --- | --- |
| C | C | Free | Remote |
DeepLX: https://github.com/OwO-Network/DeepLX
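A DeepLX server is usually self-hosted, so the request goes to a local endpoint. The sketch below assumes the default port 1188 from the DeepLX README; the helper name is hypothetical.

```python
import json

def build_deeplx_request(text: str, target_lang: str,
                         endpoint: str = "http://localhost:1188"):
    """Build a POST request for a locally running DeepLX server.

    Port 1188 is the DeepLX default; adjust the endpoint if you changed it.
    """
    url = f"{endpoint}/translate"
    body = json.dumps({
        "text": text,
        "source_lang": "auto",  # let the server detect the source language
        "target_lang": target_lang,
    })
    return url, body
```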
## LLM API

LLM engines use a chat-style API. Because they can retain the context of the subtitles, translation is more accurate than with normal translation APIs. However, they are not as stable.
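The context-retention idea can be sketched as follows: earlier subtitle lines are carried along in the chat history so the model translates each new line with context. This is a hypothetical illustration, not LLPlayer's actual prompt; a real player may window or cap the history.

```python
def build_context_messages(previous_lines, current_line, target_lang):
    """Build a chat-style message list that keeps earlier subtitles as context."""
    system = (
        f"You are a subtitle translator. Translate each line into {target_lang}, "
        "using the earlier lines only as context."
    )
    messages = [{"role": "system", "content": system}]
    for line in previous_lines:
        # prior subtitle lines give the model context for pronouns, tone, etc.
        messages.append({"role": "user", "content": line})
    messages.append({"role": "user", "content": current_line})
    return messages
```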
### Ollama

| Accuracy (A-E) | Speed (A-E) | Price | Local / Remote |
| --- | --- | --- | --- |
| A - D | B - E | Free | Local |
Ollama: https://ollama.com
It is an LLM engine that runs locally. Since the AI runs on your machine, a powerful CPU and GPU are required to use it comfortably; for the GPU, the amount of VRAM is especially important. The server must be up and running, and the model must be downloaded from the command line.
- Install Ollama
- Download a model from PowerShell:

  ```powershell
  ollama pull aya-expanse
  ollama pull gemma3
  ```

- Start the Ollama server from the GUI or from PowerShell:

  ```powershell
  # if you need detailed log information such as GPU usage
  $env:OLLAMA_DEBUG=1
  ollama serve
  ```
- (LLPlayer) Set the Ollama endpoint and model in the `Subtitles -> Translate` section
- (LLPlayer) Click the `Hello API` button to test the API
- (LLPlayer) Enable translation from the subtitle button in the seek bar
- Ollama's response time can be checked from the console when started with `ollama serve`.
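For reference, a translation request to a running Ollama server can be sketched as below. The port 11434 is Ollama's standard default; the helper name and prompt wording are hypothetical, not LLPlayer's actual code.

```python
import json

def build_ollama_request(text: str, target_lang: str, model: str = "gemma3",
                         endpoint: str = "http://localhost:11434"):
    """Build a POST request for Ollama's /api/generate endpoint.

    "stream": False asks for one JSON response instead of a streamed one.
    """
    url = f"{endpoint}/api/generate"
    body = json.dumps({
        "model": model,
        "prompt": f"Translate the following subtitle into {target_lang}:\n{text}",
        "stream": False,
    })
    return url, body
```

The server answers with a JSON object whose `response` field holds the model output.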
### LM Studio

| Accuracy (A-E) | Speed (A-E) | Price | Local / Remote |
| --- | --- | --- | --- |
| A - D | B - E | Free | Local |
LM Studio: https://lmstudio.ai
It is an LLM engine that runs locally. Since the AI runs on your machine, a powerful CPU and GPU are required to use it comfortably; for the GPU, the amount of VRAM is especially important. The server must be up and running, and the model must be downloaded in the GUI.
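LM Studio's local server exposes an OpenAI-compatible API, so a request can be sketched as follows. Port 1234 is LM Studio's default (check the server settings if you changed it); the helper name and system prompt are hypothetical.

```python
import json

def build_lmstudio_request(text: str, target_lang: str, model: str):
    """Build a POST request for LM Studio's OpenAI-compatible local server."""
    url = "http://localhost:1234/v1/chat/completions"
    body = json.dumps({
        "model": model,  # must match a model loaded in the LM Studio GUI
        "messages": [
            {"role": "system", "content": f"Translate subtitles into {target_lang}."},
            {"role": "user", "content": text},
        ],
    })
    return url, body
```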
### OpenAI API

| Accuracy (A-E) | Speed (A-E) | Price | Local / Remote |
| --- | --- | --- | --- |
| A | C | Paid | Remote |
OpenAI API: https://openai.com/index/openai-api
An API key from a registered account is required, and billing must be set up to use the service. It can translate with very high accuracy.
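A request to the OpenAI chat completions API can be sketched as below. The model name is only an example, and the helper name and system prompt are hypothetical, not LLPlayer's actual code.

```python
import json

def build_openai_request(text: str, target_lang: str, api_key: str,
                         model: str = "gpt-4o-mini"):
    """Build a POST request for the OpenAI chat completions API."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # standard Bearer-token auth
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": f"Translate subtitles into {target_lang}."},
            {"role": "user", "content": text},
        ],
    })
    return url, headers, body
```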
### Anthropic Claude API

| Accuracy (A-E) | Speed (A-E) | Price | Local / Remote |
| --- | --- | --- | --- |
| A | C | Paid | Remote |
Anthropic Claude API: https://www.anthropic.com/api
An API key from a registered account is required, and billing must be set up to use the service. It can translate with very high accuracy.
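Anthropic's Messages API has a slightly different shape from OpenAI's: auth goes in an `x-api-key` header, an `anthropic-version` header is required, and `max_tokens` is a mandatory body field. The sketch below uses an example model name and a hypothetical helper, not LLPlayer's actual code.

```python
import json

def build_claude_request(text: str, target_lang: str, api_key: str,
                         model: str = "claude-3-5-haiku-latest"):
    """Build a POST request for Anthropic's Messages API."""
    url = "https://api.anthropic.com/v1/messages"
    headers = {
        "x-api-key": api_key,                 # not a Bearer token
        "anthropic-version": "2023-06-01",    # required versioning header
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,  # required by the Messages API
        "system": f"Translate subtitles into {target_lang}.",
        "messages": [{"role": "user", "content": text}],
    })
    return url, headers, body
```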