
Commit 0d20121

cswatt and aliciascott authored
DOCS-10432: add autodiscovery to temporal doc (#20049)
* update readme
* Update temporal/README.md

Co-authored-by: Alicia Scott <aliciascott@users.noreply.github.com>
1 parent 7fff41c commit 0d20121

File tree

1 file changed (+82, -4 lines)

temporal/README.md

Lines changed: 82 additions & 4 deletions
@@ -17,17 +17,28 @@ No additional installation is needed on your server.
### Configuration

<!-- xxx tabs xxx -->
<!-- xxx tab "Host" xxx -->

#### Host

##### Metric collection

1. Configure your Temporal services to expose metrics via a `prometheus` endpoint by following the [official Temporal documentation][10].

2. Edit the `temporal.d/conf.yaml` file located in the `conf.d/` folder at the root of your Agent's configuration directory to start collecting your Temporal performance data.

   Configure the `openmetrics_endpoint` option to match the `listenAddress` and `handlerPath` options from your Temporal server configuration:

   ```yaml
   init_config:

   instances:
     - openmetrics_endpoint: <LISTEN_ADDRESS>/<HANDLER_PATH>
   ```

   Note that when Temporal services in a cluster are deployed independently, every service exposes its own metrics. As a result, you need to configure the `prometheus` endpoint for every service that you want to monitor and define a separate `instance` in the integration's configuration for each of them.

   See the [sample temporal.d/conf.yaml][4] for all available configuration options.
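
   For example, if the frontend and history services in a cluster each expose their own `prometheus` endpoint, the check needs one `instance` per endpoint. A minimal sketch, assuming hypothetical hostnames, ports, and a `/metrics` handler path:

   ```yaml
   init_config:

   instances:
     # hypothetical frontend service metrics endpoint
     - openmetrics_endpoint: http://temporal-frontend:9090/metrics
     # hypothetical history service metrics endpoint
     - openmetrics_endpoint: http://temporal-history:9091/metrics
   ```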

##### Log collection

1. Collecting logs is disabled by default in the Datadog Agent. Enable it in your `datadog.yaml` file:
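
   The relevant `datadog.yaml` setting is the Agent's top-level `logs_enabled` flag; a minimal sketch:

   ```yaml
   # datadog.yaml
   logs_enabled: true
   ```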

@@ -48,6 +59,68 @@ See the [sample temporal.d/conf.yaml][4] for all available configuration options

4. [Restart the Agent][5].

<!-- xxz tab xxx -->

<!-- xxx tab "Containerized" xxx -->

#### Containerized

##### Metric collection

For containerized environments, refer to [Configure integrations with Autodiscovery on Kubernetes][14] or [Configure integrations with Autodiscovery on Docker][15] for instructions on using the parameters below. See the [sample temporal.d/conf.yaml][4] for a complete list of configuration options.

| Parameter            | Value |
| -------------------- | ----- |
| `<INTEGRATION_NAME>` | `temporal` |
| `<INIT_CONFIG>`      | blank or `{}` |
| `<INSTANCES_CONFIG>` | `{"openmetrics_endpoint": "<LISTEN_ADDRESS>/<HANDLER_PATH>"}`, where `<LISTEN_ADDRESS>` and `<HANDLER_PATH>` are replaced by the `listenAddress` and `handlerPath` from your Temporal server configuration. |

Note that when Temporal services in a cluster are deployed independently, every service exposes its own metrics. As a result, you need to configure the `prometheus` endpoint for every service that you want to monitor and define a separate `instance` in the integration's configuration for each of them.

**Example**

The following Kubernetes annotation is applied to a pod under `metadata`, where `<CONTAINER_NAME>` is the name of your Temporal container (or a [custom identifier][16]):

```
ad.datadoghq.com/<CONTAINER_NAME>.checks: |
  {
    "temporal": {
      "init_config": {},
      "instances": [{"openmetrics_endpoint": "<LISTEN_ADDRESS>/<HANDLER_PATH>"}]
    }
  }
```
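
On Docker, the same check can be passed to Autodiscovery as container labels; a minimal sketch, assuming a Compose file and the consolidated `com.datadoghq.ad.checks` label (the service name is hypothetical; see [Configure integrations with Autodiscovery on Docker][15] for the supported label formats):

```yaml
# docker-compose.yml (illustrative only)
services:
  temporal:
    labels:
      # same init_config/instances payload as the Kubernetes annotation above
      com.datadoghq.ad.checks: '{"temporal": {"init_config": {}, "instances": [{"openmetrics_endpoint": "<LISTEN_ADDRESS>/<HANDLER_PATH>"}]}}'
```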

##### Log collection

Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Docker Log Collection][18] or [Kubernetes Log Collection][17].

Apply the following configuration parameter to `logs`:

| Parameter      | Value |
| -------------- | ----- |
| `<LOG_CONFIG>` | `{"source": "temporal", "type": "file", "path": "/var/log/temporal/temporal-server.log"}` |

**Example**

The following Kubernetes annotation is applied to a pod under `metadata`, where `<CONTAINER_NAME>` is the name of your Temporal container (or a [custom identifier][16]):

```
ad.datadoghq.com/<CONTAINER_NAME>.logs: |
  [
    {
      "source": "temporal",
      "type": "file",
      "path": "/var/log/temporal/temporal-server.log"
    }
  ]
```
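
For Docker, the equivalent log configuration can be attached as a container label; a minimal sketch, assuming a Compose file and the `com.datadoghq.ad.logs` label (the service name is hypothetical):

```yaml
# docker-compose.yml (illustrative only)
services:
  temporal:
    labels:
      # same source/type/path payload as the Kubernetes annotation above
      com.datadoghq.ad.logs: '[{"source": "temporal", "type": "file", "path": "/var/log/temporal/temporal-server.log"}]'
```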

<!-- xxz tab xxx -->

<!-- xxz tabs xxx -->

### Validation

[Run the Agent's status subcommand][6] and look for `temporal` under the Checks section.
@@ -94,3 +167,8 @@ Additional helpful documentation, links, and articles:
[11]: https://docs.temporal.io/references/configuration#log
[12]: https://www.datadoghq.com/blog/temporal-server-integration/
[13]: https://docs.datadoghq.com/integrations/temporal_cloud/
[14]: https://docs.datadoghq.com/containers/kubernetes/integrations/
[15]: https://docs.datadoghq.com/containers/docker/integrations/
[16]: https://docs.datadoghq.com/containers/guide/ad_identifiers/
[17]: https://docs.datadoghq.com/agent/kubernetes/log/
[18]: https://docs.datadoghq.com/containers/docker/log/
