* add docs for model serving
* add docs for model serving
* add docs for model serving
* add docs for model serving
* revert to original
* revert to original
* cluster utilization tab
* close cluster utilization tab
* nested tabs aren't supported
* nested tabs aren't supported
* comment out spark data
* list of metrics
* try two lists of metrics
* try two lists of metrics
* post david review
* assert contents of set
* remove unused files
* Update databricks/README.md
Co-authored-by: Rosa Trieu <107086888+rtrieu@users.noreply.github.com>
---------
Co-authored-by: Rosa Trieu <107086888+rtrieu@users.noreply.github.com>
databricks/README.md: 26 additions & 5 deletions
```diff
@@ -1,8 +1,8 @@
 # Agent Check: Databricks
 
-<div class="alert alert-warning">
+<div class="alert alert-info">
 <a href="https://docs.datadoghq.com/data_jobs/">Data Jobs Monitoring</a> helps you observe, troubleshoot, and cost-optimize your Databricks jobs and clusters.<br/><br/>
-This page is limited to documentation for ingesting Databricks cluster utilization metrics and logs.
+This page is limited to documentation for ingesting Databricks model serving metrics and cluster utilization data.
 </div>
 
 ![Databricks default dashboard][21]
@@ -23,11 +23,27 @@ Model serving metrics provide insights into how your Databricks model serving i
 ## Setup
 
 ### Installation
+Gain insight into the health of your model serving infrastructure by following the [Model Serving Configuration](#model-serving-configuration) instructions.
 
-Monitor Databricks Spark applications with the [Datadog Spark integration][3]. Install the [Datadog Agent][4] on your clusters following the [configuration](#configuration) instructions for your appropriate cluster. After that, install the [Spark integration][23] on Datadog to autoinstall the Databricks Overview dashboard.
+Monitor Databricks Spark applications with the [Datadog Spark integration][3]. Install the [Datadog Agent][4] on your clusters following the [configuration](#spark-configuration) instructions for your appropriate cluster. Refer to the [Spark Configuration](#spark-configuration) instructions.
 
 ### Configuration
+#### Model Serving Configuration
+1. In your Databricks workspace, click on your profile in the top right corner and go to **Settings**. Select **Developer** in the left side bar. Next to **Access tokens**, click **Manage**.
+2. Click **Generate new token**, enter "Datadog Integration" in the **Comment** field, remove the default value in **Lifetime (days)**, and click **Generate**. Take note of your token.
 
+**Important:**
+* Make sure you delete the default value in **Lifetime (days)** so that the token doesn't expire and the integration doesn't break.
+* Ensure the account generating the token has [CAN VIEW access][30] for the Databricks jobs and clusters you want to monitor.
+
+As an alternative, follow the [official Databricks documentation][31] to generate an access token for a [service principal][31].
+
+3. In Datadog, open the Databricks integration tile.
+4. On the **Configure** tab, click **Add Databricks Workspace**.
+5. Enter a workspace name, your Databricks workspace URL, and the Databricks token you generated.
+6. In the **Select resources to set up collection** section, make sure **Metrics - Model Serving** is **Enabled**.
+
+#### Spark Configuration
 Configure the Spark integration to monitor your Apache Spark Cluster on Databricks and collect system and Spark metrics.
 
 Each script described below can be modified to suit your needs. For instance, you can:
```
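The token steps added in this diff go through the Databricks UI. As a sketch only, the same non-expiring token can be created programmatically with the Databricks Token API (`POST /api/2.0/token/create`); the host value, admin token, and helper function names below are illustrative and not part of the integration docs:

```python
import json
import urllib.request


def build_token_request(comment, lifetime_seconds=None):
    """Build the Token API payload.

    Omitting lifetime_seconds creates a token that never expires,
    matching the doc's instruction to remove the default value in
    **Lifetime (days)** so the integration doesn't break.
    """
    payload = {"comment": comment}
    if lifetime_seconds is not None:
        payload["lifetime_seconds"] = lifetime_seconds
    return payload


def create_datadog_token(host, admin_token, comment="Datadog Integration"):
    """Call POST /api/2.0/token/create and return the new token value.

    host is your workspace URL, e.g. "https://<workspace>.cloud.databricks.com"
    (a placeholder here); admin_token authenticates the request.
    """
    req = urllib.request.Request(
        f"{host}/api/2.0/token/create",
        data=json.dumps(build_token_request(comment)).encode(),
        headers={
            "Authorization": f"Bearer {admin_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # token_value is what you paste into the Datadog integration tile
        return json.load(resp)["token_value"]
```

The returned `token_value` is the value entered in step 5 of the Model Serving Configuration above; using a [service principal][31] token instead works the same way.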
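The Spark Configuration section this diff introduces ends with per-cluster Agent setup scripts. As a minimal sketch of the kind of `spark.d/conf.yaml` such a script might write on the driver node (the port and `cluster_name` values are placeholders to adjust for your cluster):

```yaml
init_config:

instances:
    # Spark UI endpoint on the Databricks driver; 40001 is a common
    # choice in setup scripts, adjust to your cluster's configuration.
  - spark_url: http://127.0.0.1:40001
    # Databricks drivers report in driver mode.
    spark_cluster_mode: spark_driver_mode
    # Placeholder: any name identifying this cluster in Datadog.
    cluster_name: my-databricks-cluster
```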