
Translation Engine

umlx5h edited this page Apr 7, 2025 · 21 revisions

LLPlayer supports multiple translation engines. Each engine is explained below.

Normal Translation Engine

These engines use a conventional translation API, sending subtitles one by one. Because each line is translated without context, accuracy is lower, but they are fast and very stable.

GoogleV1

| Accuracy (A-E) | Speed (A-E) | Price | Local / Remote |
| --- | --- | --- | --- |
| D | A | Free | Remote |

Google Translation V1: https://translate.googleapis.com

Translation accuracy is not good, but it is free and very fast.
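As a sketch, the V1 endpoint is commonly reached through the undocumented `translate_a/single` path with `client=gtx`. This path is an assumption based on public usage, not official documentation, and it may change or be rate-limited at any time. The snippet below only builds the request URL:

```python
from urllib.parse import urlencode

def build_google_v1_url(text: str, source: str = "auto", target: str = "en") -> str:
    """Build a request URL for the unofficial Google Translate V1 endpoint.

    NOTE: translate_a/single with client=gtx is undocumented and unofficial;
    it may change or be blocked without notice.
    """
    params = {
        "client": "gtx",
        "sl": source,   # source language ("auto" = detect)
        "tl": target,   # target language
        "dt": "t",      # request translated text in the response
        "q": text,
    }
    return "https://translate.googleapis.com/translate_a/single?" + urlencode(params)

print(build_google_v1_url("こんにちは", target="en"))
```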

DeepL

| Accuracy | Speed | Price | Local / Remote |
| --- | --- | --- | --- |
| C | B | Free or Paid | Remote |

DeepL: https://www.deepl.com

An API key from a registered account is required. A certain amount of usage per month is free of charge.
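A minimal sketch of how a DeepL request is assembled, based on DeepL's documented `v2/translate` endpoint and its convention that free-tier keys end in `:fx` and use the `api-free` host. The function only prepares the request; no network call is made:

```python
def build_deepl_request(api_key: str, text: str, target_lang: str = "EN"):
    """Prepare a request for DeepL's v2/translate endpoint (no network call).

    Free-tier keys end in ":fx" and use the api-free host; paid keys use
    the regular host. This split follows DeepL's documented convention.
    """
    host = "api-free.deepl.com" if api_key.endswith(":fx") else "api.deepl.com"
    url = f"https://{host}/v2/translate"
    headers = {"Authorization": f"DeepL-Auth-Key {api_key}"}
    data = {"text": [text], "target_lang": target_lang}
    return url, headers, data
```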

DeepLX

| Accuracy | Speed | Price | Local / Remote |
| --- | --- | --- | --- |
| C | C | Free | Remote |

DeepLX: https://github.com/OwO-Network/DeepLX

LLM Translation Engine

This is a chat-style API.
Because it can retain the context of preceding subtitles, its translations are more accurate than those of the normal APIs.
However, it is less stable than the normal engines.
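A minimal sketch of how a chat-style engine can retain context: keep a rolling window of recent subtitle lines and replay them as prior chat turns with each new request. The class and prompt wording here are illustrative, not LLPlayer's actual implementation:

```python
from collections import deque

class ContextualTranslator:
    """Keep a rolling window of recent subtitle lines as chat context."""

    def __init__(self, target_lang: str = "English", window: int = 8):
        self.target_lang = target_lang
        self.history = deque(maxlen=window)  # (source, translation) pairs

    def build_messages(self, line: str) -> list[dict]:
        """Build the chat message list for translating one new subtitle line."""
        messages = [{"role": "system",
                     "content": f"Translate subtitles into {self.target_lang}. "
                                "Reply with the translation only."}]
        for src, dst in self.history:  # replay prior lines as context
            messages.append({"role": "user", "content": src})
            messages.append({"role": "assistant", "content": dst})
        messages.append({"role": "user", "content": line})
        return messages

    def record(self, line: str, translation: str) -> None:
        """Remember a completed translation; oldest pairs fall out of the window."""
        self.history.append((line, translation))
```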

Ollama

| Accuracy | Speed | Price | Local / Remote |
| --- | --- | --- | --- |
| A - D | B - E | Free | Local |

Ollama: https://ollama.com

Ollama is an LLM engine that runs locally.
Since the model runs on your machine, a capable CPU and GPU are required for comfortable use; for the GPU, the amount of memory (VRAM) is the most important factor.

The server must be up and running, and the model must be downloaded from the command line.

Recommended Models

Setup Overview

1. Install Ollama.
2. Download a model from PowerShell:
```
$ ollama pull aya-expanse
$ ollama pull gemma3
```
3. Start the Ollama server from the GUI or PowerShell:
```
# if you need detailed log information such as GPU usage
$ $env:OLLAMA_DEBUG=1
$ ollama serve
```
4. (LLPlayer) Set the Ollama endpoint and model in the Subtitles -> Translate section.
5. (LLPlayer) Click the Hello API button to test the API.
6. (LLPlayer) Enable translation from the subtitle button in the seek bar.

Ollama's response time can be checked from the console when it is started with `ollama serve`.
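Once the server is up, a translation is a single HTTP call. A minimal sketch, assuming Ollama's default port 11434 and its `/api/chat` endpoint; the prompt wording is illustrative, not LLPlayer's actual prompt:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default port

def build_ollama_payload(model: str, text: str, target: str = "English") -> bytes:
    """Build a JSON body for Ollama's /api/chat endpoint."""
    body = {
        "model": model,
        "stream": False,  # one complete response instead of chunks
        "messages": [
            {"role": "system", "content": f"Translate subtitles into {target}."},
            {"role": "user", "content": text},
        ],
    }
    return json.dumps(body).encode("utf-8")

def translate(model: str, text: str) -> str:
    """Send one subtitle line to a running Ollama server (requires `ollama serve`)."""
    req = request.Request(OLLAMA_URL, data=build_ollama_payload(model, text),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```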

LMStudio

| Accuracy | Speed | Price | Local / Remote |
| --- | --- | --- | --- |
| A - D | B - E | Free | Local |

LM Studio: https://lmstudio.ai

LM Studio is an LLM engine that runs locally.
Since the model runs on your machine, a capable CPU and GPU are required for comfortable use; for the GPU, the amount of memory (VRAM) is the most important factor.

The server must be up and running, and the model must be downloaded in the GUI.
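LM Studio exposes an OpenAI-compatible server, by default on port 1234, so the request shape differs from Ollama's native API. A minimal payload sketch; the prompt wording is illustrative:

```python
import json

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio default port

def build_lmstudio_payload(model: str, text: str, target: str = "English") -> bytes:
    """Build an OpenAI-compatible chat payload for LM Studio's local server."""
    return json.dumps({
        "model": model,  # identifier of the model loaded in the LM Studio GUI
        "messages": [
            {"role": "system", "content": f"Translate subtitles into {target}."},
            {"role": "user", "content": text},
        ],
    }).encode("utf-8")
```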

OpenAI

| Accuracy | Speed | Price | Local / Remote |
| --- | --- | --- | --- |
| A | C | Paid | Remote |

OpenAI API: https://openai.com/index/openai-api

An API key from a registered account is required, and billing must be set up to use the service.

It can translate with very high accuracy.
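A sketch of what such a request looks like, using the standard chat-completions endpoint with Bearer authentication. The model name and prompt wording below are illustrative only; the function only prepares the request:

```python
import json

def build_openai_request(api_key: str, model: str, text: str, target: str = "English"):
    """Prepare an OpenAI chat-completions request (no network call)."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # OpenAI uses Bearer-token auth
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # e.g. "gpt-4o-mini" (example only)
        "messages": [
            {"role": "system", "content": f"Translate subtitles into {target}."},
            {"role": "user", "content": text},
        ],
    })
    return url, headers, body
```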

Claude

| Accuracy | Speed | Price | Local / Remote |
| --- | --- | --- | --- |
| A | C | Paid | Remote |

Anthropic Claude API: https://www.anthropic.com/api

An API key from a registered account is required, and billing must be set up to use the service.

It can translate with very high accuracy.
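Claude uses the Anthropic Messages API, whose authentication differs from OpenAI's: an `x-api-key` header plus a mandatory `anthropic-version` header. A minimal request-building sketch; the prompt wording is illustrative:

```python
import json

def build_claude_request(api_key: str, model: str, text: str, target: str = "English"):
    """Prepare an Anthropic Messages API request (no network call)."""
    url = "https://api.anthropic.com/v1/messages"
    headers = {
        "x-api-key": api_key,                 # Anthropic's auth header
        "anthropic-version": "2023-06-01",    # required API-version header
        "content-type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,  # required field in the Messages API
        "system": f"Translate subtitles into {target}. Reply with the translation only.",
        "messages": [{"role": "user", "content": text}],
    })
    return url, headers, body
```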
