
Commit f418d94

merge master
1 parent 72d7453 commit f418d94


101 files changed: 2932 additions, 4461 deletions


config/_default/menus/main.en.yaml

Lines changed: 6 additions & 6 deletions
@@ -2765,13 +2765,13 @@ menu:
     - name: Cloud Resources Schema
       url: infrastructure/resource_catalog/schema/
       parent: infrastructure_resource_catalog
-      identifier: infrastructure_resource_catalog_schemat
-      weight: 20001
-    - name: Governance
-      url: infrastructure/resource_catalog/governance/
+      identifier: infrastructure_resource_catalog_schema
+      weight: 10001
+    - name: Policies
+      url: infrastructure/resource_catalog/policies/
       parent: infrastructure_resource_catalog
-      identifier: infrastructure_resource_catalog_governance
-      weight: 20002
+      identifier: infrastructure_resource_catalog_policies
+      weight: 10002
     - name: Universal Service Monitoring
       url: universal_service_monitoring/
       pre: usm

content/en/actions/datastore/_index.md

Lines changed: 0 additions & 4 deletions
@@ -13,10 +13,6 @@ further_reading:
   text: "Build Workflows"
 ---
 
-{{< callout url="https://docs.google.com/forms/d/1NvW3I0Ep-lQo4FbiSwOEjccoFsS9Ue2wYiYDmCxKDYg/viewform?edit_requested=true" btn_hidden="false" header="Try the Preview!">}}
-Datastore is in Preview. Use this form to request access today.
-{{< /callout >}}
-
 ## Overview
 
 The Actions Datastore offers a scalable, structured data storage solution within Datadog's App Builder and Workflow Automation products. It supports CRUD (Create, Read, Update, and Delete) operations and integrates seamlessly with Datadog's ecosystem to optimize persistent data storage without the need for external databases.

content/en/agent/logs/log_transport.md

Lines changed: 9 additions & 0 deletions
@@ -107,7 +107,9 @@ When logs are sent through HTTPS, use the same [set of proxy settings][3] as the
 [2]: /agent/basic_agent_usage/#agent-overhead
 [3]: /agent/configuration/proxy/
 {{% /tab %}}
+
 {{% tab "TCP" %}}
+{{< site-region region="us,eu,us3,us5,ap1" >}}
 
 To enforce TCP transport, update the Agent's [main configuration file][1] (`datadog.yaml`) with:
 
@@ -123,6 +125,13 @@ To send logs with environment variables, configure the following:
 
 By default, the Datadog Agent sends its logs to Datadog over TLS-encrypted TCP. This requires outbound communication (on port `10516` for Datadog US site and port `443`for Datadog EU site).
 
+{{< /site-region >}}
+
+{{< site-region region="gov" >}}
+The TCP endpoint is not supported for this site.
+
+{{< /site-region >}}
+
 [1]: /agent/configuration/agent-configuration-files/
 {{% /tab %}}
 {{< /tabs >}}
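
For context, a minimal sketch of enforcing TCP transport through environment variables rather than editing `datadog.yaml` directly; `DD_LOGS_CONFIG_FORCE_USE_TCP` is assumed here to mirror the `logs_config.force_use_tcp` setting, and the ports follow the text above:

```shell
# Sketch: force the Agent to ship logs over TLS-encrypted TCP (assumed env-var mapping)
export DD_LOGS_ENABLED=true
export DD_LOGS_CONFIG_FORCE_USE_TCP=true   # assumed equivalent of logs_config.force_use_tcp
# Outbound traffic must be allowed on port 10516 (US) or 443 (EU); not supported on the gov site
```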

content/en/continuous_delivery/deployments/argocd.md

Lines changed: 42 additions & 2 deletions
@@ -31,9 +31,11 @@ Datadog CD Visibility integrates with Argo CD by using [Argo CD Notifications][2]
 
 The setup below uses the [Webhook notification service][5] of Argo CD to send notifications to Datadog.
 
-First, add your [Datadog API Key][11] in the `argocd-notifications-secret` secret with the `dd-api-key` key. See [the Argo CD guide][2] for information on modifying the `argocd-notifications-secret`.
+First, add your [Datadog API Key][11] in the `argocd-notifications-secret` secret with the `dd-api-key` key. See [the Argo CD guide][2] for information on modifying the `argocd-notifications-secret`. For sending notifications, the setup is different depending on whether you installed Argo CD using Helm or the regular setup (using `kubectl apply`).
 
-Then, modify the `argocd-notifications-cm` ConfigMap to create the notification service, template, and trigger to send notifications to Datadog:
+{{< tabs >}}
+{{% tab "Regular setup (with kubectl apply)" %}}
+Modify the `argocd-notifications-cm` ConfigMap to create the notification service, template, and trigger to send notifications to Datadog:
 
 ```yaml
 apiVersion: v1
@@ -68,6 +70,44 @@ data:
     - when: app.status.operationState.phase == 'Running' and app.status.health.status in ['Healthy', 'Degraded']
       send: [cd-visibility-template]
 ```
+{{% /tab %}}
+{{% tab "Helm" %}}
+If you used Helm to install Argo CD, add the following configuration to your `values.yaml`:
+
+```yaml
+notifications:
+  notifiers:
+    service.webhook.cd-visibility-webhook: |
+      url: https://webhook-intake.{{< region-param key="dd_site" code="true" >}}/api/v2/webhook
+      headers:
+        - name: "DD-CD-PROVIDER-ARGOCD"
+          value: "true"
+        - name: "Content-Type"
+          value: "application/json"
+        - name: "DD-API-KEY"
+          value: $dd-api-key
+  templates:
+    template.cd-visibility-template: |
+      webhook:
+        cd-visibility-webhook:
+          method: POST
+          body: |
+            {
+              "app": {{toJson .app}},
+              "context": {{toJson .context}},
+              "service_type": {{toJson .serviceType}},
+              "recipient": {{toJson .recipient}},
+              "commit_metadata": {{toJson (call .repo.GetCommitMetadata .app.status.operationState.syncResult.revision)}}
+            }
+  triggers:
+    trigger.cd-visibility-trigger: |
+      - when: app.status.operationState.phase in ['Succeeded', 'Failed', 'Error'] and app.status.health.status in ['Healthy', 'Degraded']
+        send: [cd-visibility-template]
+      - when: app.status.operationState.phase == 'Running' and app.status.health.status in ['Healthy', 'Degraded']
+        send: [cd-visibility-template]
+```
+{{% /tab %}}
+{{< /tabs >}}
 
 The following resources have been added:
 1. The `cd-visibility-webhook` service targets the Datadog intake and configures the correct headers for the request. The `DD-API-KEY` header references the `dd-api-key` entry added previously in the `argocd-notifications-secret`.
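
As a usage sketch (not part of this commit), rolling out the Helm-based configuration above could look like the following; the release name, namespace, and chart reference `argo/argo-cd` are assumptions about a typical installation:

```shell
# Sketch: apply the notification settings to an existing Argo CD Helm release
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade argocd argo/argo-cd \
  --namespace argocd \
  --reuse-values \
  -f values.yaml   # the values.yaml containing the notifications block above
```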

content/en/data_jobs/airflow.md

Lines changed: 3 additions & 1 deletion
@@ -50,6 +50,7 @@ To get started, follow the instructions below.
 ```shell
 export OPENLINEAGE_URL=<DD_DATA_OBSERVABILITY_INTAKE>
 export OPENLINEAGE_API_KEY=<DD_API_KEY>
+# AIRFLOW__OPENLINEAGE__NAMESPACE sets the 'env' tag value in Datadog. You can hardcode this to a different value
 export AIRFLOW__OPENLINEAGE__NAMESPACE=${AIRFLOW_ENV_NAME}
 ```
 * Replace `<DD_DATA_OBSERVABILITY_INTAKE>` with `https://data-obs-intake.`{{< region-param key="dd_site" code="true" >}}.
@@ -113,6 +114,7 @@ To get started, follow the instructions below.
 #!/bin/sh
 export OPENLINEAGE_URL=<DD_DATA_OBSERVABILITY_INTAKE>
 export OPENLINEAGE_API_KEY=<DD_API_KEY>
+# AIRFLOW__OPENLINEAGE__NAMESPACE sets the 'env' tag value in Datadog. You can hardcode this to a different value
 export AIRFLOW__OPENLINEAGE__NAMESPACE=${AIRFLOW_ENV_NAME}
 ```
 
@@ -190,7 +192,7 @@ For Astronomer customers using Astro, <a href=https://www.astronomer.io/docs/lea
 * replace `<DD_API_KEY>` with your valid [Datadog API key][7].
 
 **Optional:**
-* Set `AIRFLOW__OPENLINEAGE__NAMESPACE` with a unique name for your Airflow deployment. This allows Datadog to logically separate this deployment's jobs from those of other Airflow deployments.
+* Set `AIRFLOW__OPENLINEAGE__NAMESPACE` with a unique name for the `env` tag on all DAGs in the Airflow deployment. This allows Datadog to logically separate this deployment's jobs from those of other Airflow deployments.
 * Set `OPENLINEAGE_CLIENT_LOGGING` to `DEBUG` for the OpenLineage client and its child modules to log at a `DEBUG` logging level. This can be useful for troubleshooting during the configuration of an OpenLineage provider.
 
 See the [Astronomer official guide][10] for managing environment variables for a deployment. See Apache Airflow's [OpenLineage Configuration Reference][6] for other supported configurations of the OpenLineage provider.
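
For illustration only, a concrete rendering of these exports for a deployment whose jobs should carry `env:prod` might look like this; the intake URL assumes the US1 site and the namespace value is a placeholder:

```shell
# Sketch: OpenLineage -> Datadog Data Jobs Monitoring, namespace hardcoded instead of ${AIRFLOW_ENV_NAME}
export OPENLINEAGE_URL=https://data-obs-intake.datadoghq.com   # assumes the US1 site
export OPENLINEAGE_API_KEY=<DD_API_KEY>
export AIRFLOW__OPENLINEAGE__NAMESPACE=prod                    # becomes the 'env' tag on all DAGs
export OPENLINEAGE_CLIENT_LOGGING=DEBUG                        # optional: verbose OpenLineage client logs
```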

content/en/data_jobs/databricks.md

Lines changed: 3 additions & 11 deletions
@@ -21,12 +21,11 @@ Follow these steps to enable Data Jobs Monitoring for Databricks.
 ### Configure the Datadog-Databricks integration
 
 1. In your Databricks workspace, click on your profile in the top right corner and go to **Settings**. Select **Developer** in the left side bar. Next to **Access tokens**, click **Manage**.
-1. Click **Generate new token**, enter "Datadog Integration" in the **Comment** field, remove the default value in **Lifetime (days)**, and click **Generate**. Take note of your token.
+1. Click **Generate new token**, enter "Datadog Integration" in the **Comment** field, set the **Lifetime (days)** value to the maximum allowed (730 days), and create a reminder to update the token before it expires. Then click **Generate**. Take note of your token.
 
    **Important:**
-   * For the [Datadog managed init script install (recommended)](?tab=datadogmanagedglobalinitscriptrecommended#install-the-datadog-agent), ensure the user or service principal linked to the token is a <strong>Workspace Admin</strong>.
-   * For manual init script installation, ensure the user or service principal linked to the token has [CAN VIEW access][9] for the Databricks jobs and clusters you want to monitor.
-   * Make sure you set the **Lifetime (days)** value to the maximum allowed (730 days) so that the token doesn't expire and the integration doesn't break.
+   * For the [Datadog managed init script install (recommended)](?tab=datadogmanagedglobalinitscriptrecommended#install-the-datadog-agent), ensure the token's Principal is a <strong>Workspace Admin</strong>.
+   * For manual init script installation, ensure the token's Principal has [CAN VIEW access][9] for the Databricks jobs and clusters you want to monitor.
 
   As an alternative, follow the [official Databricks documentation][10] to generate an access token for a [service principal][11]. The service principal must have the [<strong>Workspace access</strong> entitlement][17] enabled and the <strong>Workspace Admin</strong> or [CAN VIEW access][9] permissions as described above.
 1. In Datadog, open the Databricks integration tile.
@@ -221,13 +220,6 @@ In Datadog, view the [Data Jobs Monitoring][6] page to see a list of all your Da
 
 {{% djm-install-troubleshooting %}}
 
-If the Agent is not installed, view the installation logs located in `/tmp/datadog-djm-init.log`.
-
-If you need further assistance from Datadog support, add the following environment variable to the init script. This ensures that logs are sent to Datadog when a failure occurs.
-```shell
-export DD_DJM_ADD_LOGS_TO_FAILURE_REPORT=true
-```
-
 ## Advanced Configuration
 
 ### Tag spans at runtime
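
As a loosely related sketch (the commit describes the UI flow), a personal access token with the 730-day maximum lifetime can also be minted through the workspace Token API, assuming you already hold a credential to authenticate the call; the workspace URL and existing token are placeholders:

```shell
# Sketch: create a long-lived PAT via the Databricks Token API (730 days = 63072000 seconds)
curl -s -X POST "https://<WORKSPACE_URL>/api/2.0/token/create" \
  -H "Authorization: Bearer <EXISTING_TOKEN>" \
  -d '{"comment": "Datadog Integration", "lifetime_seconds": 63072000}'
```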

content/en/data_streams/dotnet.md

Lines changed: 2 additions & 0 deletions
@@ -38,6 +38,8 @@ Data Streams Monitoring uses message headers to propagate context through Kafka
 ### Monitoring SQS pipelines
 Data Streams Monitoring uses one [message attribute][2] to track a message's path through an SQS queue. As Amazon SQS has a maximum limit of 10 message attributes allowed per message, all messages streamed through the data pipelines must have 9 or fewer message attributes set, allowing the remaining attribute for Data Streams Monitoring.
 
+{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
+
 ### Monitoring SNS-to-SQS pipelines
 To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must enable [Amazon SNS raw message delivery][12].
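
As a side illustration of the nine-attribute budget described above (not part of the commit), a producer that leaves the tenth slot free for Data Streams Monitoring might look like this AWS CLI sketch; the queue URL and attribute names are placeholders:

```shell
# Sketch: send an SQS message using only 2 of the 10 attribute slots,
# leaving room for the attribute Data Streams Monitoring injects
aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/orders \
  --message-body '{"order_id": 42}' \
  --message-attributes '{
    "source":  {"DataType": "String", "StringValue": "checkout"},
    "version": {"DataType": "String", "StringValue": "2"}
  }'
```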

content/en/data_streams/go.md

Lines changed: 2 additions & 0 deletions
@@ -30,6 +30,8 @@ To start with Data Streams Monitoring, you need recent versions of the Datadog A
 ### Monitoring Kafka Pipelines
 Data Streams Monitoring uses message headers to propagate context through Kafka streams. If `log.message.format.version` is set in the Kafka broker configuration, it must be set to `0.11.0.0` or higher. Data Streams Monitoring is not supported for versions lower than this.
 
+{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
+
 #### Automatic Instrumentation
 
 Automatic instrumentation uses [Orchestrion][4] to install dd-trace-go and supports both the Sarama and Confluent Kafka libraries.

content/en/data_streams/java.md

Lines changed: 2 additions & 0 deletions
@@ -73,6 +73,8 @@ Use Datadog's Java tracer, [`dd-trace-java`][6], to collect information from you
 ### Monitoring SQS pipelines
 Data Streams Monitoring uses one [message attribute][3] to track a message's path through an SQS queue. As Amazon SQS has a maximum limit of 10 message attributes allowed per message, all messages streamed through the data pipelines must have 9 or fewer message attributes set, allowing the remaining attribute for Data Streams Monitoring.
 
+{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
+
 ### Monitoring SNS-to-SQS pipelines
 To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must perform the following additional configuration steps:

content/en/data_streams/nodejs.md

Lines changed: 3 additions & 1 deletion
@@ -42,6 +42,8 @@ Data Streams Monitoring uses message headers to propagate context through Kafka
 ### Monitoring SQS pipelines
 Data Streams Monitoring uses one [message attribute][4] to track a message's path through an SQS queue. As Amazon SQS has a maximum limit of 10 message attributes allowed per message, all messages streamed through the data pipelines must have 9 or fewer message attributes set, allowing the remaining attribute for Data Streams Monitoring.
 
+{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
+
 ### Monitoring SNS-to-SQS pipelines
 To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must enable [Amazon SNS raw message delivery][8].
 
@@ -67,4 +69,4 @@ Data Streams Monitoring propagates context through message headers. If you are u
 [5]: https://www.npmjs.com/package/amqplib
 [6]: https://www.npmjs.com/package/rhea
 [7]: /data_streams/manual_instrumentation/?tab=nodejs
-[8]: https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
+[8]: https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html

content/en/data_streams/python.md

Lines changed: 3 additions & 1 deletion
@@ -42,6 +42,8 @@ Data Streams Monitoring uses message headers to propagate context through Kafka
 ### Monitoring SQS Pipelines
 Data Streams Monitoring uses one [message attribute][4] to track a message's path through an SQS queue. As Amazon SQS has a maximum limit of 10 message attributes allowed per message, all messages streamed through the data pipelines must have 9 or fewer message attributes set, allowing the remaining attribute for Data Streams Monitoring.
 
+{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
+
 ### Monitoring Kinesis pipelines
 There are no message attributes in Kinesis to propagate context and track a message's full path through a Kinesis stream. As a result, Data Streams Monitoring's end-to-end latency metrics are approximated based on summing latency on segments of a message's path, from the producing service through a Kinesis Stream, to a consumer service. Throughput metrics are based on segments from the producing service through a Kinesis Stream, to the consumer service. The full topology of data streams can still be visualized through instrumenting services.
 
@@ -66,4 +68,4 @@ Data Streams Monitoring propagates context through message headers. If you are u
 [4]: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html
 [5]: https://pypi.org/project/kombu/
 [6]: /data_streams/manual_instrumentation/?tab=python
-[7]: https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
+[7]: https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html

content/en/developers/integrations/check_references.md

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@ You can find the complete list of mandatory and optional attributes for the `met
 | `integration` | Mandatory | The name of the integration that emits the metric. Must be the normalized version of the `tile.title` from the `manifest.json` file. Any character besides letters, underscores, dashes, and numbers are converted to underscores. For example: `Openstack Controller` -> `openstack_controller`, `ASP.NET` -> `asp_net`, and `CRI-o` -> `cri-o`. |
 | `short_name` | Mandatory | A human-readable, abbreviated version of the metric name. Do not repeat the integration name. For example, `postgresql.index_blocks_hit` should be shortened to `idx blks hit`. |
 | `curated_metric`| Optional | Marks which metrics for an integration are noteworthy for a given type (`cpu` and `memory` are both accepted). These are displayed in the UI above the other integration metrics. |
-| `sample_tags` | Optional | List of example tags associated with the metric. |
+| `sample_tags` | Optional | List of example tags associated with the metric, separated by commas without spaces. For example, `host,region,deployment`. |
 
 ## Further Reading

content/en/dora_metrics/setup/deployments.md

Lines changed: 2 additions & 1 deletion
@@ -161,7 +161,7 @@ For deployments identified through APM Deployment Tracking, the change lead time
 
 For service deployments tracked by APM to contribute to change lead time, ensure the following:
 
-### Requirements for calculating change lead time
+### Requirements for calculating change lead time
 - Your application telemetry is tagged with Git information. You can enable this [in APM][101] or see the [Source Code Integration documentation][102].
 - Your repository metadata is synchronized to Datadog through the [GitHub integration][103] or by the `datadog-ci git-metadata upload` command.
 
@@ -297,6 +297,7 @@ If the two metadata entries are defined for a service, only `extensions[datadogh
 
 - Change lead time stage breakdown metrics are only available for GitHub and GitLab.
 - Change lead time is not available for the first deployment of a service that includes Git information.
+- Change lead time is not available if the most recent deployment of a service was more than 60 days ago.
 - The Change Lead Time calculation includes a maximum of 5000 commits per deployment.
 - For rebased branches, *change lead time* calculations consider the new commits created during the rebase, not the original commits.
 - When using "Squash" to merge pull requests:
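
As an aside, synchronizing repository metadata with the `datadog-ci git-metadata upload` command mentioned above could look like this sketch; the site value and repository location are assumptions:

```shell
# Sketch: upload Git metadata so change lead time can link deployments back to commits
export DD_API_KEY=<DATADOG_API_KEY>
export DD_SITE=datadoghq.com        # adjust to your Datadog site
cd /path/to/your/repository         # run from inside the deployed repository
datadog-ci git-metadata upload
```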

content/en/dynamic_instrumentation/enabling/nodejs.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ Dynamic Instrumentation is a feature of supporting Datadog tracing libraries. If
 
 1. Install or upgrade your Agent to version [7.45.0][6] or higher.
 2. If you don't already have APM enabled, in your Agent configuration, set the `DD_APM_ENABLED` environment variable to `true` and listening to the port `8126/TCP`.
-3. Install or upgrade the Node.js tracing library to version 5.39.0 or higher, by following the [relevant instructions][2].
+3. Install or upgrade the Node.js tracing library to version 5.48.0 or higher, by following the [relevant instructions][2].
 4. If your source code is transpiled during deployment (for example, if using TypeScript), ensure that source maps are published along with the deployed Node.js application.
 5. Run your service with Dynamic Instrumentation enabled by setting the `DD_DYNAMIC_INSTRUMENTATION_ENABLED` environment variable to `true`. Specify `DD_SERVICE`, `DD_ENV`, and `DD_VERSION` Unified Service Tags so you can filter and group your instrumentations and target active clients across these dimensions.
 6. After starting your service with Dynamic Instrumentation enabled, you can start using Dynamic Instrumentation on the [APM > Dynamic Instrumentation page][3].
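
For illustration (service name, environment, and version are placeholders), launching a Node.js service with the settings from steps 3 through 5 might look like:

```shell
# Sketch: start a Node.js service with Dynamic Instrumentation enabled (dd-trace >= 5.48.0)
export DD_DYNAMIC_INSTRUMENTATION_ENABLED=true
export DD_SERVICE=my-service       # Unified Service Tags (placeholders)
export DD_ENV=staging
export DD_VERSION=1.4.2
node --require dd-trace/init server.js
```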
