robusta-dev/holmesgpt


Solve alerts faster with an AI Agent

How it Works | Installation | LLM Providers | YouTube Demo

Respond to alerts faster, using AI to automatically:

  • Fetch logs, traces, and metrics
  • Determine if issues are application or infrastructure related
  • Find upstream root-causes

Using HolmesGPT, you can transform your existing alerts from this 👇

[Screenshot: an alert before HolmesGPT]

To this 👇

[Screenshot: example HolmesGPT analysis]

How it Works

HolmesGPT connects AI models with live observability data and organizational knowledge. It uses an agentic loop to analyze data from multiple sources and identify possible root causes.

[Diagram: HolmesGPT architecture]

🔗 Data Sources

HolmesGPT integrates with popular observability and cloud platforms. The following data sources ("toolsets") are built-in. Add your own.

| Data Source | Status | Notes |
| --- | --- | --- |
| ArgoCD | | Get status, history, manifests and more for apps, projects and clusters |
| AWS RDS | | Fetch events, instances, slow query logs and more |
| Confluence | | Private runbooks and documentation |
| Coralogix Logs | | Retrieve logs for any resource |
| Datetime | | Date and time-related operations |
| Docker | | Get images, logs, events, history and more |
| GitHub | 🟡 Beta | Remediate alerts by opening pull requests with fixes |
| DataDog | 🟡 Beta | Fetch log data from Datadog |
| Grafana Loki | | Query logs for Kubernetes resources or any query |
| Grafana Tempo | | Fetch trace info and debug issues like high latency in applications |
| Helm | | Release status, chart metadata, and values |
| Internet | | Public runbooks, community docs, etc. |
| Kafka | | Fetch metadata, list consumers and topics, or find lagging consumer groups |
| Kubernetes | | Pod logs, K8s events, and resource status (kubectl describe) |
| NewRelic | 🟡 Beta | Investigate alerts, query tracing data |
| OpenSearch | | Query health, shard, and settings-related info for one or more clusters |
| Prometheus | | Investigate alerts, query metrics and generate PromQL queries |
| RabbitMQ | | Info about partitions, memory/disk alerts to troubleshoot split-brain scenarios and more |
| Robusta | | Multi-cluster monitoring, historical change data, user-configured runbooks, PromQL graphs and more |
| Slab | | Team knowledge base and runbooks on demand |

🔐 Data Privacy

By design, HolmesGPT has read-only access and respects RBAC permissions. It is safe to run in production environments.

We do not train HolmesGPT on your data. Data sent to Robusta SaaS is private to your account.

For extra privacy, bring an API key for your own AI model.

🚀 Bi-Directional Integrations With Your Tools

Robusta can investigate alerts - or just answer questions - from the following sources:

| Integration | Status | Notes |
| --- | --- | --- |
| Slack | 🟡 Beta | Demo. Tag HolmesGPT bot in any Slack message |
| Prometheus/AlertManager | | Robusta SaaS or HolmesGPT CLI |
| PagerDuty | | HolmesGPT CLI only |
| OpsGenie | | HolmesGPT CLI only |
| Jira | | HolmesGPT CLI only |

See it in Action

Installation

You can install HolmesGPT using one of the following three methods:

  1. Standalone: Run HolmesGPT from your terminal as a CLI tool. Typically installed with Homebrew or Pip/Pipx. Ideal for local use, embedding into shell scripts, or CI/CD pipelines. (E.g. to analyze why a pipeline deploying to Kubernetes failed.)
  2. Web UIs and TUIs: HolmesGPT is embedded in several third-party tools, like Robusta SaaS and K9s (as a plugin).
  3. API: Embed HolmesGPT in your own app to quickly add root-cause-analysis functionality and data correlation across multiple sources like logs, metrics, and events. HolmesGPT exposes an HTTP API and Python SDK, as well as a Helm chart to deploy the HTTP server on Kubernetes.

Standalone

  • Brew
  • Pipx
  • Docker
  • Docker Build
  • Python (Poetry)
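
For example, the Homebrew route typically looks like the sketch below. The tap and formula names are assumptions recalled from the installation docs, so verify them against the linked instructions:

```bash
# Hedged sketch: install the CLI via Homebrew.
# The tap and formula names below are assumptions; confirm them in the install docs.
brew tap robusta-dev/homebrew-holmesgpt
brew install holmesgpt

# Verify the CLI is on your PATH
holmes --help
```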

Web UIs and TUIs

  • Robusta SaaS
  • K9s Plugin

API

  • Helm Chart
  • Python API
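
As a rough illustration of the API option, once the Helm chart's HTTP server is reachable (for example via a port-forward) you can query it over plain HTTP. This is a sketch only: the service name, port, endpoint path, and payload shape below are assumptions for illustration, not the documented contract, so check the API reference before relying on them.

```bash
# Hedged sketch: reach the in-cluster HolmesGPT HTTP server locally.
# Service name and ports are assumptions; adjust to your Helm release.
kubectl port-forward svc/holmesgpt 5050:80 &

# Ask a question over HTTP. The /api/chat path and JSON payload shown here
# are illustrative placeholders, not the documented API contract.
curl -s http://localhost:5050/api/chat \
  -H "Content-Type: application/json" \
  -d '{"ask": "what pods are unhealthy and why?"}'
```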

Supported LLM Providers

Select your LLM provider to see how to set up your API Key.

  • OpenAI
  • Anthropic
  • AWS Bedrock
  • Google Vertex AI
  • Gemini
  • Ollama

You can also use any OpenAI-compatible model; read here for instructions.
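
For example, with OpenAI you typically export an API key before running Holmes. The environment variable below is the standard OpenAI one, and the --model flag is shown as a sketch; other providers use their own variables and model names, so follow the provider-specific instructions above.

```bash
# Hedged example: configure OpenAI as the LLM provider.
# OPENAI_API_KEY is the standard OpenAI variable; other providers differ.
export OPENAI_API_KEY="sk-..."

# Model selection via --model is shown as an assumption; check `holmes ask --help`
# for the exact flag and model-name format in your version.
holmes ask "what pods are unhealthy and why?" --model="gpt-4o"
```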

Using HolmesGPT

holmes ask "what pods are unhealthy and why?"

You can also provide files as context:

holmes ask "summarize the key points in this document" -f ./mydocument.txt

You can also load the prompt from a file using the --prompt-file option:

holmes ask --prompt-file ~/long-prompt.txt

Enter interactive mode to ask follow-up questions:
```bash
holmes ask "what pods are unhealthy and why?" --interactive
# or
holmes ask "what pods are unhealthy and why?" -i
```

Also supported:

HolmesGPT CLI: investigate Prometheus alerts

Pull alerts from AlertManager and investigate them with HolmesGPT:

holmes investigate alertmanager --alertmanager-url http://localhost:9093
# if on Mac OS and using the Holmes Docker image👇
#  holmes investigate alertmanager --alertmanager-url http://docker.for.mac.localhost:9093

To investigate alerts in your browser, sign up for a free trial of Robusta SaaS.

Optional: port-forward to AlertManager before running the command mentioned above (if running Prometheus inside Kubernetes)

kubectl port-forward alertmanager-robusta-kube-prometheus-st-alertmanager-0 9093:9093 &

HolmesGPT CLI: investigate PagerDuty and OpsGenie alerts
holmes investigate opsgenie --opsgenie-api-key <OPSGENIE_API_KEY>
holmes investigate pagerduty --pagerduty-api-key <PAGERDUTY_API_KEY>
# to write the analysis back to the incident as a comment
holmes investigate pagerduty --pagerduty-api-key <PAGERDUTY_API_KEY> --update

For more details, run holmes investigate <source> --help

Customizing HolmesGPT

HolmesGPT can investigate many issues out of the box, with no customization or training. Optionally, you can extend Holmes to improve results:

Custom Data Sources: Add data sources (toolsets) to improve investigations

Custom Runbooks: Give HolmesGPT instructions for known alerts:

  • If using Robusta SaaS: Use the Robusta UI to add runbooks
  • If using the CLI: Use -r flag with custom runbook files or add to ~/.holmes/config.yaml
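
As a sketch of the CLI route, you can keep plain-language instructions for a known alert in a file and pass it with -r. The file name, the example alert, and the instruction format below are illustrative assumptions; see the runbook docs for the exact format.

```bash
# Hedged example: write natural-language troubleshooting steps for a known issue.
# The file name and contents are illustrative placeholders.
cat > ./payment-service-runbook.txt <<'EOF'
When payment-service pods are crashing, check the last 50 log lines for
database connection errors, then verify that the db-credentials secret exists.
EOF

# Pass the runbook to Holmes with the -r flag mentioned above
holmes ask "why is payment-service crashing?" -r ./payment-service-runbook.txt
```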

You can save common settings and API Keys in a config file to avoid passing them from the CLI each time:

Reading settings from a config file

Place the config file at ~/.holmes/config.yaml or pass a custom path with the --config flag.

You can view an example config file with all available settings here.
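
A minimal sketch of such a config file is shown below; the key names (model, api_key, alertmanager_url) are assumptions for illustration, so treat the linked example config as the authoritative list of settings.

```bash
# Hedged sketch: create ~/.holmes/config.yaml with a few common settings.
# Key names are assumptions; consult the example config linked above.
mkdir -p ~/.holmes
cat > ~/.holmes/config.yaml <<'EOF'
model: "gpt-4o"                            # which LLM to use
api_key: "sk-..."                          # API key for that model's provider
alertmanager_url: "http://localhost:9093"  # used when investigating AlertManager alerts
EOF
```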

License

Distributed under the MIT License. See LICENSE.txt for more information.

Support

If you have any questions, feel free to message us on robustacommunity.slack.com

How to Contribute

Install HolmesGPT from source with Poetry. See Installation for details.

For help, contact us on Slack