
NVIDIA Triton Inference Server Organization

NVIDIA Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs.

This top-level GitHub organization hosts repositories for officially supported backends, including TensorRT, TensorFlow, PyTorch, Python, ONNX Runtime, and OpenVINO. The organization also hosts several popular Triton tools, including:

  • Model Analyzer: A tool to analyze the runtime performance of a model and provide an optimized model configuration for Triton Inference Server.

  • Model Navigator: A tool that automates moving a model from its source format to an optimal format and configuration for deployment on Triton Inference Server.

Getting Started

To learn about NVIDIA Triton Inference Server, refer to the Triton developer page and read our Quickstart Guide. Official Triton Docker containers are available from NVIDIA NGC.
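As a minimal sketch of the Quickstart flow, assuming a Triton server is already running on the default HTTP port (for example, launched from an NGC container) and serving a model, the Python client library (`pip install tritonclient[http]`) can send an inference request. The model name `my_model`, the tensor names `INPUT0`/`OUTPUT0`, and the input shape below are placeholders; substitute the names and shapes from your own model's configuration.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_ready()

# Build a request for a placeholder model "my_model" that takes one
# FP32 tensor named INPUT0; the shape [1, 16] is an assumption.
input0 = httpclient.InferInput("INPUT0", [1, 16], "FP32")
input0.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

# Run inference and read back the (assumed) OUTPUT0 tensor.
response = client.infer(model_name="my_model", inputs=[input0])
print(response.as_numpy("OUTPUT0"))
```

Equivalent gRPC, C++, and Java clients are available in the client repository listed below.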

Product Documentation

User documentation on Triton features, APIs, and architecture is located in the server documents on GitHub. A table of contents for the user documentation is located in the server README file.

Release notes, the support matrix, and license information are available in the NVIDIA Triton Inference Server Documentation.
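Triton's HTTP/REST and gRPC APIs documented there follow the KServe community standard inference protocols. As a small sketch, assuming a server on the default HTTP port and a placeholder model name `my_model`, the health and metadata endpoints can be probed with plain HTTP:

```python
import requests

BASE = "http://localhost:8000"  # default Triton HTTP port (assumption)

# Liveness and readiness endpoints from the KServe v2 protocol.
print(requests.get(f"{BASE}/v2/health/live").status_code)   # 200 when live
print(requests.get(f"{BASE}/v2/health/ready").status_code)  # 200 when ready

# Server metadata, and metadata for the placeholder model "my_model".
print(requests.get(f"{BASE}/v2").json())
print(requests.get(f"{BASE}/v2/models/my_model").json())
```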

Examples

Specific end-to-end examples for popular models, such as ResNet, BERT, and DLRM, are located in the NVIDIA Deep Learning Examples page on GitHub. Additional generic examples can be found in the server documents.

FAQ

For technical questions about Triton Inference Server, please consult the Triton FAQ Guide. Information about future support and updates for Triton can be found in the Dynamo FAQ Guide.

Feedback

Share feedback or ask questions about NVIDIA Triton Inference Server by filing a GitHub issue.

Pinned Repositories

  1. server (Python): The Triton Inference Server provides an optimized cloud and edge inferencing solution.

  2. core (C++): The core library and APIs implementing the Triton Inference Server.

  3. backend (C++): Common source, scripts, and utilities for creating Triton backends.

  4. client (Python): Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.

  5. model_analyzer (Python): Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.

  6. model_navigator (Python): Triton Model Navigator is an inference toolkit for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs.

Repositories

The organization hosts 36 repositories in total. In addition to the pinned repositories above, recently updated ones include:

  • perf_analyzer (C++, BSD-3-Clause)

  • vllm_backend (Python, BSD-3-Clause)

  • triton_cli (Python): Triton CLI is an open-source command-line interface that enables users to create, deploy, and profile models served by the Triton Inference Server.

  • onnxruntime_backend (C++, BSD-3-Clause): The Triton backend for ONNX Runtime.

  • dali_backend (C++, MIT): The Triton backend for running GPU-accelerated data pre-processing pipelines implemented with DALI's Python API.

  • tensorrtllm_backend (Python, Apache-2.0): The Triton backend for TensorRT-LLM.
