
Commit 4cdc836

Update manifest.json (#19028)
* Update manifest.json
* change overview text
* hyphen
1 parent 1583b10 commit 4cdc836

2 files changed: +13 −5 lines changed


databricks/README.md

Lines changed: 11 additions & 3 deletions

@@ -9,11 +9,15 @@ This page is limited to documentation for ingesting Databricks cluster utilizati
 
 ## Overview
 
-Monitor your [Databricks][1] clusters with the Datadog [Spark integration][2].
+Datadog offers several Databricks monitoring capabilities.
 
-This integration unifies logs, infrastructure metrics, and Spark performance metrics, providing real-time visibility into the health of your nodes and the performance of your jobs. It can help you debug errors, fine-tune performance, and identify issues such as inefficient data partitioning or clusters running out of memory.
+[Data Jobs Monitoring][25] provides monitoring for your Databricks jobs and clusters. You can detect problematic Databricks jobs and workflows anywhere in your data pipelines, remediate failed and long-running jobs faster, and optimize cluster resources to reduce costs.
 
-For feature details, see [Monitor Databricks with Datadog][22].
+[Cloud Cost Management][26] gives you a view to analyze all your Databricks DBU costs alongside the associated cloud spend.
+
+[Log Management][27] enables you to aggregate and analyze logs from your Databricks jobs and clusters. You can collect these logs as part of [Data Jobs Monitoring][25].
+
+[Infrastructure Monitoring][28] gives you a limited subset of the Data Jobs Monitoring functionality: visibility into the resource utilization of your Databricks clusters and Apache Spark performance metrics.
 
 ## Setup
 
@@ -492,3 +496,7 @@ Additional helpful documentation, links, and articles:
 [22]: https://www.datadoghq.com/blog/databricks-monitoring-datadog/
 [23]: https://app.datadoghq.com/integrations/spark
 [24]: https://docs.databricks.com/en/ingestion/add-data/upload-to-volume.html#upload-files-to-a-unity-catalog-volume
+[25]: https://www.datadoghq.com/product/data-jobs-monitoring/
+[26]: https://www.datadoghq.com/product/cloud-cost-management/
+[27]: https://www.datadoghq.com/product/log-management/
+[28]: https://docs.datadoghq.com/integrations/databricks/?tab=driveronly

databricks/manifest.json

Lines changed: 2 additions & 2 deletions

@@ -8,7 +8,7 @@
 "configuration": "README.md#Setup",
 "support": "README.md#Support",
 "changelog": "CHANGELOG.md",
-"description": "Monitor the performance, reliability, and cost of your Apache Spark and Databricks jobs.",
+"description": "Monitor the reliability and cost of your Databricks environment.",
 "title": "Databricks",
 "media": [],
 "classifier_tags": [
@@ -63,4 +63,4 @@
 "source": "spark"
 }
 }
-}
+}
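As a quick sanity check on a change like this, a short script can lint the manifest fields the commit touches. This is only an illustrative sketch: the required-key list and the 140-character description limit are assumptions for the example, not Datadog's actual manifest schema.

```python
import json

# Fragment mirroring the keys visible in this diff; the full manifest
# contains many more fields that are omitted here.
manifest = json.loads("""
{
  "configuration": "README.md#Setup",
  "support": "README.md#Support",
  "changelog": "CHANGELOG.md",
  "description": "Monitor the reliability and cost of your Databricks environment.",
  "title": "Databricks",
  "media": []
}
""")

def check_manifest(m: dict) -> list:
    """Return a list of problems found in the manifest fragment."""
    problems = []
    for key in ("description", "title", "changelog"):
        if not m.get(key):
            problems.append("missing or empty: " + key)
    # Assumed style rule: short descriptions render best in tile views.
    if len(m.get("description", "")) > 140:
        problems.append("description longer than 140 characters")
    return problems

print(check_manifest(manifest))
```

Running this against the fragment above reports no problems, since the new one-sentence description stays well under the assumed limit.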
