LLM Translate is an open-source translation tool that uses Large Language Models (LLMs) to deliver high-quality, nuanced translations across numerous languages. This full-stack project features a robust Python-based backend (FastAPI) and an intuitive Node/React-based frontend, working together to provide a seamless translation experience. It aims to offer advanced translation capabilities with a focus on accuracy, contextual understanding, and user accessibility.
Discover what makes LLM Translate a versatile and user-friendly translation solution:
Advanced Translation Capabilities:
- LLM-Powered Translations: Experience intelligent, context-aware translations in real-time.
- Multi-Provider Support: Choose from leading AI providers like OpenAI, Groq, and OpenRouter for translation tasks, ensuring flexibility and access to diverse models.
- Extensive Language Support: Translate between 21 different languages with ease.
- Automatic Source Language Detection: Simply start typing; the tool automatically identifies the input language.
- Text-to-Speech (TTS) Output: Listen to translations with our integrated spoken audio feature. (Please note: This is a beta feature and currently uses an English voice for all languages.)
Enhanced User Experience:
- Sleek Interface: Navigate a clean and modern UI with support for both Light and Dark modes.
- Responsive Design: Enjoy a consistent experience across desktops, tablets, and mobile devices.
- Efficient Workflow:
  - Copy to Clipboard: Easily copy source or translated text.
  - Swap Languages: Instantly switch between source and target languages and their corresponding text with a single click.
  - Keyboard Shortcuts: Speed up common actions using convenient keyboard commands.
  - Character Limit Indicators: Keep track of text length to stay within provider limits.
- Accessibility Focused: Designed with ARIA attributes and screen reader compatibility for an inclusive experience.
Translation Management:
- Translation History: Access and manage your recent translations. The last 50 translations are automatically saved for quick reference and reuse.
This combination of powerful translation technology and user-centric design makes LLM Translate an ideal tool for a wide range of translation needs.
A high-level overview of the project structure:
```
llm-translate/
├── backend/              # Python FastAPI backend application
│   ├── llm_translate/
│   ├── tests/
│   ├── .env.example
│   ├── Dockerfile
│   ├── main.py
│   └── README.md         # Backend specific documentation
├── frontend/             # React Vite frontend application
│   ├── public/
│   ├── src/
│   ├── .env.example
│   ├── Dockerfile
│   ├── index.html
│   └── README.md         # Frontend specific documentation
├── docker-compose.yml    # Docker Compose configuration
└── README.md             # This file (Project Root README)
```
(Note: The file tree is a representation and might not include all files or exact current structure. Depth is limited for brevity.)
- Git
- Python 3.9+ (for backend)
- Node.js (e.g., >=18.x.x) and npm/yarn (for frontend)
- Docker and Docker Compose (recommended for easiest setup)
```bash
git clone https://github.com/samestrin/llm-translate.git
cd llm-translate
```
Navigate to the `backend` directory and follow the instructions in its `README.md` file for specific package installations (typically using `pip`).

```bash
cd backend
# Follow instructions in backend/README.md
cd ..
```
Navigate to the `frontend` directory and follow the instructions in its `README.md` file for specific package installations (typically using `npm install` or `yarn install`).

```bash
cd frontend
# Follow instructions in frontend/README.md
cd ..
```
This is the easiest way to get both the backend and frontend services running together. Ensure Docker and Docker Compose are installed. The `docker-compose.yml` file defines the services.

From the project root directory (`llm-translate/`):

```bash
docker-compose up --build
```

To run in detached mode:

```bash
docker-compose up --build -d
```
- The backend API will typically be available at `http://localhost:8000`.
- The frontend application will typically be available at `http://localhost:3000`.
To stop the services:

```bash
docker-compose down
```
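For orientation, a two-service Compose file for this layout might look roughly like the sketch below. The service names, ports, and `env_file` paths are assumptions for illustration; the repository's actual `docker-compose.yml` is authoritative.

```yaml
# Illustrative sketch only — see the repository's docker-compose.yml
services:
  backend:
    build: ./backend          # uses backend/Dockerfile
    ports:
      - "8000:8000"           # backend API on http://localhost:8000
    env_file:
      - ./backend/.env        # LLM provider API keys, etc.

  frontend:
    build: ./frontend         # uses frontend/Dockerfile
    ports:
      - "3000:3000"           # frontend on http://localhost:3000
    env_file:
      - ./frontend/.env       # e.g., VITE_API_URL
    depends_on:
      - backend
```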
Once the backend and frontend are running (either individually or via Docker Compose), open the frontend URL (e.g., `http://localhost:3000`) in your browser. The frontend application provides the user interface to input text, select languages, and receive translations. It interacts with the backend API for translation and text-to-speech functionality.
- Backend: Configured using environment variables in a `.env` file within the `backend` directory. See `backend/.env.example` and `backend/README.md` for details on required variables such as API keys for LLM providers.
- Frontend: Configured using environment variables in a `.env` file within the `frontend` directory (e.g., `VITE_API_URL` to point to the backend). See `frontend/.env.example` and `frontend/README.md` for details. User-specific settings such as the dark/light mode theme preference are also available in the application UI.
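As an illustration, the two `.env` files might contain entries along these lines. The backend variable names here are assumptions (only `VITE_API_URL` is documented above); the `.env.example` files in each directory are authoritative.

```ini
# backend/.env — variable names are illustrative assumptions; see backend/.env.example
OPENAI_API_KEY=your-openai-key
GROQ_API_KEY=your-groq-key

# frontend/.env — points the frontend at the backend API
VITE_API_URL=http://localhost:8000
```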
The backend provides a RESTful API for translation and Text-to-Speech (TTS) services. Key endpoints include:

- `/translate/`: For text translation.
- `/speak/`: For generating speech from text.

For detailed API documentation, including request/response schemas and parameters, please refer to the API Documentation section in the backend `README.md`, or visit `/docs` on the running backend service (e.g., `http://localhost:8000/docs`). The frontend consumes these APIs.
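As a sketch of how a client might call the translation endpoint: the request field names below (`text`, `source_lang`, `target_lang`) are assumptions for illustration, not the confirmed contract — the authoritative schemas are in the backend `README.md` and at `/docs` on a running backend.

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"  # default backend address


def build_translate_payload(text, target_lang, source_lang="auto"):
    """Build a request body for POST /translate/.

    Field names here are illustrative assumptions; check the live
    schema at http://localhost:8000/docs for the actual contract.
    """
    return {"text": text, "source_lang": source_lang, "target_lang": target_lang}


def translate(text, target_lang, source_lang="auto"):
    """Send a translation request to the backend and return the parsed JSON response."""
    body = json.dumps(build_translate_payload(text, target_lang, source_lang)).encode()
    req = urllib.request.Request(
        f"{API_BASE}/translate/",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires the backend to be running):
# print(translate("Hello, world!", "fr"))
```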
- Backend README: Detailed information about the backend service.
- Frontend README: Detailed information about the frontend application.
- Deployment Guide: Deployment guide for various environments.
Contributions are welcome! Please follow these general guidelines:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Write clean, well-documented code.
- Add or update tests for your changes.
- Ensure all tests pass.
- Submit a pull request.
(Detailed contributing guidelines can be found in CONTRIBUTING.)
This project is licensed under the MIT License. See the LICENSE file for details.
- Thanks to the creators of FastAPI, and to OpenAI, Groq, and OpenRouter, whose services are used by the backend.
- Thanks to the creators of React, Vite, Tailwind CSS, Material UI, and other open-source libraries used in the frontend.