Respond to alerts faster, using AI to automatically:
- Fetch logs, traces, and metrics
- Determine if issues are application or infrastructure related
- Find upstream root-causes
Using HolmesGPT, you can transform your existing alerts from raw notifications into investigated reports with likely root causes.

HolmesGPT connects AI models with live observability data and organizational knowledge. It uses an agentic loop to analyze data from multiple sources and identify possible root causes.

HolmesGPT integrates with popular observability and cloud platforms. The following data sources ("toolsets") are built-in. Add your own.
Data Source | Status | Notes |
---|---|---|
ArgoCD | ✅ | Get status, history and manifests and more of apps, projects and clusters |
AWS RDS | ✅ | Fetch events, instances, slow query logs and more |
Confluence | ✅ | Private runbooks and documentation |
Coralogix Logs | ✅ | Retrieve logs for any resource |
Datetime | ✅ | Date and time-related operations |
Docker | ✅ | Get images, logs, events, history and more |
GitHub | 🟡 Beta | Remediate alerts by opening pull requests with fixes |
DataDog | 🟡 Beta | Fetch log data from Datadog |
Grafana Loki | ✅ | Query logs for Kubernetes resources or any query |
Grafana Tempo | ✅ | Fetch trace info, debug issues like high latency in applications |
Helm | ✅ | Release status, chart metadata, and values |
Internet | ✅ | Public runbooks, community docs, etc. |
Kafka | ✅ | Fetch metadata, list consumers and topics, or find lagging consumer groups |
Kubernetes | ✅ | Pod logs, K8s events, and resource status (kubectl describe) |
New Relic | 🟡 Beta | Investigate alerts, query tracing data |
OpenSearch | ✅ | Query health, shard, and settings related info of one or more clusters |
Prometheus | ✅ | Investigate alerts, query metrics and generate PromQL queries |
RabbitMQ | ✅ | Info about partitions, memory/disk alerts to troubleshoot split-brain scenarios and more |
Robusta | ✅ | Multi-cluster monitoring, historical change data, user-configured runbooks, PromQL graphs and more |
Slab | ✅ | Team knowledge base and runbooks on demand |
By design, HolmesGPT has read-only access and respects RBAC permissions. It is safe to run in production environments.
We do not train HolmesGPT on your data. Data sent to Robusta SaaS is private to your account.
For extra privacy, bring an API key for your own AI model.
HolmesGPT can investigate alerts - or just answer questions - from the following sources:
Integration | Status | Notes |
---|---|---|
Slack | 🟡 Beta | Demo. Tag HolmesGPT bot in any Slack message |
Prometheus/AlertManager | ✅ | Robusta SaaS or HolmesGPT CLI |
PagerDuty | ✅ | HolmesGPT CLI only |
OpsGenie | ✅ | HolmesGPT CLI only |
Jira | ✅ | HolmesGPT CLI only |
You can install HolmesGPT using one of the following three methods:
- Standalone: Run HolmesGPT from your terminal as a CLI tool, typically installed with Homebrew or Pip/Pipx (see the sketch after this list). Ideal for local use, embedding into shell scripts, or CI/CD pipelines. (E.g. to analyze why a pipeline deploying to Kubernetes failed.)
- Web UIs and TUIs: HolmesGPT is embedded in several third-party tools, like Robusta SaaS and K9s (as a plugin).
- API: Embed HolmesGPT in your own app to quickly add root-cause analysis and data correlation across multiple sources like logs, metrics, and events. HolmesGPT exposes an HTTP API and a Python SDK, as well as a Helm chart to deploy the HTTP server on Kubernetes.
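For example, a standalone install with Homebrew typically looks like the sketch below; the tap and package names are assumptions based on the project's install instructions, so check the Installation guide if they don't resolve:

```bash
# Install the HolmesGPT CLI via the project's Homebrew tap (names assumed; verify in the docs)
brew tap robusta-dev/homebrew-holmesgpt
brew install holmesgpt

# Confirm the CLI is on your PATH
holmes --help
```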
Select your LLM provider to see how to set up your API Key.
- OpenAI
- Anthropic
- AWS Bedrock
- Google Vertex AI
- Gemini
- Ollama
You can also use any OpenAI-compatible model; read here for instructions.
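For example, with OpenAI the setup usually amounts to exporting the standard API-key environment variable before running Holmes; the variable name and the `--model` flag below are common-usage assumptions, so confirm them on your provider's setup page:

```bash
# Assumes the OpenAI provider; the key is read from the environment
export OPENAI_API_KEY="sk-..."

# Optionally pin a specific model (flag assumed; check `holmes ask --help`)
holmes ask "what pods are unhealthy and why?" --model=gpt-4o
```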
- In the Robusta SaaS: Go to platform.robusta.dev and use Holmes from your browser
- With HolmesGPT CLI: set up an LLM API key and ask Holmes a question 👇
```bash
holmes ask "what pods are unhealthy and why?"
```
You can also provide files as context:
```bash
holmes ask "summarize the key points in this document" -f ./mydocument.txt
```
You can also load the prompt from a file using the `--prompt-file` option:
```bash
holmes ask --prompt-file ~/long-prompt.txt
```
Enter interactive mode to ask follow-up questions:
```bash
holmes ask "what pods are unhealthy and why?" --interactive
# or
holmes ask "what pods are unhealthy and why?" -i
```
Also supported:
HolmesGPT CLI: investigate Prometheus alerts
Pull alerts from AlertManager and investigate them with HolmesGPT:
```bash
holmes investigate alertmanager --alertmanager-url http://localhost:9093

# if on Mac OS and using the Holmes Docker image 👇
# holmes investigate alertmanager --alertmanager-url http://docker.for.mac.localhost:9093
```
To investigate alerts in your browser, sign up for a free trial of Robusta SaaS.
Optional: port-forward to AlertManager before running the command mentioned above (if running Prometheus inside Kubernetes):
```bash
kubectl port-forward alertmanager-robusta-kube-prometheus-st-alertmanager-0 9093:9093 &
```
HolmesGPT CLI: investigate PagerDuty and OpsGenie alerts
```bash
holmes investigate opsgenie --opsgenie-api-key <OPSGENIE_API_KEY>
holmes investigate pagerduty --pagerduty-api-key <PAGERDUTY_API_KEY>

# to write the analysis back to the incident as a comment
holmes investigate pagerduty --pagerduty-api-key <PAGERDUTY_API_KEY> --update
```
For more details, run `holmes investigate <source> --help`
HolmesGPT can investigate many issues out of the box, with no customization or training. Optionally, you can extend Holmes to improve results:
Custom Data Sources: Add data sources (toolsets) to improve investigations
- If using Robusta SaaS: See Robusta's docs
- If using the CLI: Use the `-t` flag with custom toolset files, or add them to `~/.holmes/config.yaml`
Custom Runbooks: Give HolmesGPT instructions for known alerts:
- If using Robusta SaaS: Use the Robusta UI to add runbooks
- If using the CLI: Use the `-r` flag with custom runbook files, or add them to `~/.holmes/config.yaml` (see the sketch below for an example of both)
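To make this concrete, here is a rough sketch of a custom toolset file and a custom runbook file being passed to the CLI. The YAML field names (`toolsets`, `tools`, `command`, `runbooks`, `match`, `instructions`), the example alert name, and the internal URL are illustrative assumptions rather than the authoritative schema; check the toolset and runbook documentation for the exact format.

```bash
# Hypothetical custom toolset: lets Holmes query an internal status endpoint
cat > my_toolset.yaml <<'EOF'
toolsets:
  internal_status:
    description: "Query our internal service-status API"
    tools:
      - name: "fetch_service_status"
        description: "Fetch the current status page for our services"
        command: "curl -s https://status.internal.example.com/api/services"
EOF

# Hypothetical custom runbook: extra instructions for a specific alert
cat > my_runbooks.yaml <<'EOF'
runbooks:
  - match:
      issue_name: "KubePodCrashLooping"
    instructions: >
      Check recent deployments in the affected namespace and compare
      container resource limits against actual usage before concluding.
EOF

# Pass both to the CLI; the same content can live in ~/.holmes/config.yaml instead
holmes ask "why is the checkout pod crash looping?" -t my_toolset.yaml -r my_runbooks.yaml
```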
Reading settings from a config file
You can save common settings and API keys in a config file to avoid passing them on the CLI each time. Place the config file at `~/.holmes/config.yaml` or pass it with the `--config` flag.
You can view an example config file with all available settings here.
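As a rough illustration, a minimal `~/.holmes/config.yaml` might pin the model, API key, and AlertManager URL so they don't need to be passed on every invocation. The key names below are assumptions modeled on the example config file, so defer to that file for the authoritative list:

```bash
# Hypothetical minimal config; key names assumed from the example config file
mkdir -p ~/.holmes
cat > ~/.holmes/config.yaml <<'EOF'
model: "gpt-4o"
api_key: "sk-..."                          # or rely on your provider's environment variable
alertmanager_url: "http://localhost:9093"  # used by `holmes investigate alertmanager`
EOF

# Subsequent commands pick these settings up automatically
holmes ask "what pods are unhealthy and why?"
```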
Distributed under the MIT License. See LICENSE.txt for more information.
If you have any questions, feel free to message us on robustacommunity.slack.com
Install HolmesGPT from source with Poetry. See Installation for details.
For help, contact us on Slack