diff --git a/config/_default/menus/main.en.yaml b/config/_default/menus/main.en.yaml
index ebc5a017c32b7..b92f3e9eeb0d0 100644
--- a/config/_default/menus/main.en.yaml
+++ b/config/_default/menus/main.en.yaml
@@ -4355,26 +4355,36 @@ menu:
identifier: data_streams
parent: apm_heading
weight: 50000
+ - name: Supported Languages
+ url: data_streams/#setup
+ identifier: data_streams_supported_languages
+ parent: data_streams
+ weight: 1
+ - name: Supported Technologies
+ url: data_streams/#setup
+ identifier: data_streams_supported_technologies
+ parent: data_streams
+ weight: 2
- name: Schema Tracking
url: data_streams/schema_tracking
identifier: data_streams_schema_tracking
parent: data_streams
- weight: 1
+ weight: 3
- name: Live Messages
url: data_streams/live_messages
identifier: data_streams_live_messages
parent: data_streams
- weight: 2
+ weight: 4
- name: Data Pipeline Lineage
url: data_streams/data_pipeline_lineage
identifier: data_streams_pipeline_lineage
parent: data_streams
- weight: 3
+ weight: 5
- name: Guide
url: data_streams/guide
identifier: data_streams_guide
parent: data_streams
- weight: 4
+ weight: 6
- name: Data Jobs Monitoring
url: data_jobs/
pre: data-jobs-monitoring
diff --git a/content/en/data_streams/_index.md b/content/en/data_streams/_index.md
index 61480795e0147..4bf784d138eb5 100644
--- a/content/en/data_streams/_index.md
+++ b/content/en/data_streams/_index.md
@@ -51,6 +51,10 @@ For installation instructions and lists of supported technologies, choose your l
{{< partial name="data_streams/setup-languages.html" >}}
+Or, choose your technology to see which languages and libraries are supported:
+
+{{< partial name="data_streams/setup-technologies.html" >}}
+
## Explore Data Streams Monitoring
@@ -86,7 +90,7 @@ Alternatively, click a service to open a detailed side panel and view the **Path
Slowdowns caused by high consumer lag or stale messages can lead to cascading failures and increase downtime. With out-of-the-box alerts, you can pinpoint where bottlenecks occur in your pipelines and respond to them right away. For supplementary metrics, Datadog provides additional integrations for message queue technologies like [Kafka][4] and [SQS][5].
-Through Data Stream Monitoring's out-of-the-box monitor templates, you can setup monitors on metrics like consumer lag, throughput, and latency in one click.
+Through Data Streams Monitoring's out-of-the-box monitor templates, you can set up monitors on metrics like consumer lag, throughput, and latency in one click.
{{< img src="data_streams/add_monitors_and_synthetic_tests.png" alt="Datadog Data Streams Monitoring Monitor Templates" style="width:100%;" caption="Click 'Add Monitors and Synthetic Tests' to view monitor templates" >}}
@@ -98,7 +102,7 @@ Click on the **Throughput** tab on any service or queue in Data Streams Monitori
By filtering to a single Kafka, RabbitMQ, or Amazon SQS cluster, you can detect changes in incoming or outgoing traffic for all detected topics or queues running on that cluster:
-### Quickly pivot to identify root causes in infrastructure, logs, or traces
+### Quickly pivot to identify root causes in infrastructure, logs, or traces
Datadog automatically links the infrastructure powering your services and related logs through [Unified Service Tagging][3], so you can easily localize bottlenecks. Click the **Infra**, **Logs** or **Traces** tabs to further troubleshoot why pathway latency or consumer lag has increased.
diff --git a/content/en/data_streams/dotnet.md b/content/en/data_streams/dotnet.md
index de1d2b2909a2b..b2dc1796bbe9f 100644
--- a/content/en/data_streams/dotnet.md
+++ b/content/en/data_streams/dotnet.md
@@ -32,35 +32,26 @@ environment:
- DD_DATA_STREAMS_ENABLED: "true"
```
-### Monitoring Kafka Pipelines
-Data Streams Monitoring uses message headers to propagate context through Kafka streams. If `log.message.format.version` is set in the Kafka broker configuration, it must be set to `0.11.0.0` or higher. Data Streams Monitoring is not supported for versions lower than this.
+{{% data_streams/monitoring-kafka-pipelines %}}
-### Monitoring SQS pipelines
-Data Streams Monitoring uses one [message attribute][2] to track a message's path through an SQS queue. As Amazon SQS has a maximum limit of 10 message attributes allowed per message, all messages streamed through the data pipelines must have 9 or fewer message attributes set, allowing the remaining attribute for Data Streams Monitoring.
+{{% data_streams/monitoring-sqs-pipelines %}}
-{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
+{{% data_streams/monitoring-rabbitmq-pipelines %}}
-### Monitoring SNS-to-SQS pipelines
-To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must enable [Amazon SNS raw message delivery][12].
+{{% data_streams/monitoring-sns-to-sqs-pipelines %}}
-### Monitoring Azure Service Bus
-
-Setting up Data Streams Monitoring for Azure Service Bus applications requires additional configuration for the instrumented application.
-
-1. Either set the environment variable `AZURE_EXPERIMENTAL_ENABLE_ACTIVITY_SOURCE` to `true`, or in your application code set the `Azure.Experimental.EnableActivitySource` context switch to `true`. This instructs the Azure Service Bus library to generate tracing information. See [Azure SDK documentation][11] for more details.
-2. Set the `DD_TRACE_OTEL_ENABLED` environment variable to `true`. This instructs the .NET auto-instrumentation to listen to the tracing information generated by the Azure Service Bus Library and enables the inject and extract operations required for Data Streams Monitoring.
+{{% data_streams/monitoring-azure-service-bus %}}
### Monitoring connectors
#### Confluent Cloud connectors
-{{% dsm_confluent_connectors %}}
+{{% data_streams/dsm-confluent-connectors %}}
## Further reading
{{< partial name="whats-next/whats-next.html" >}}
[1]: /agent
-[2]: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html
[3]: https://www.nuget.org/packages/Confluent.Kafka
[4]: https://www.nuget.org/packages/RabbitMQ.Client
[5]: https://www.nuget.org/packages/AWSSDK.SQS
@@ -70,4 +61,3 @@ Setting up Data Streams Monitoring for Azure Service Bus applications requires a
[9]: #monitoring-azure-service-bus
[10]: https://www.nuget.org/packages/Azure.Messaging.ServiceBus
[11]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/core/Azure.Core/samples/Diagnostics.md#enabling-experimental-tracing-features
-[12]: https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
diff --git a/content/en/data_streams/go.md b/content/en/data_streams/go.md
index e0266293ed58b..07b7213a8da43 100644
--- a/content/en/data_streams/go.md
+++ b/content/en/data_streams/go.md
@@ -27,8 +27,8 @@ To start with Data Streams Monitoring, you need recent versions of the Datadog A
### Installation
-### Monitoring Kafka Pipelines
-Data Streams Monitoring uses message headers to propagate context through Kafka streams. If `log.message.format.version` is set in the Kafka broker configuration, it must be set to `0.11.0.0` or higher. Data Streams Monitoring is not supported for versions lower than this.
+{{% data_streams/monitoring-kafka-pipelines %}}
+
+{{% data_streams/monitoring-rabbitmq-pipelines %}}
-{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
@@ -128,7 +129,7 @@ if ok {
### Monitoring connectors
#### Confluent Cloud connectors
-{{% dsm_confluent_connectors %}}
+{{% data_streams/dsm-confluent-connectors %}}
## Further reading
@@ -136,7 +137,7 @@ if ok {
[1]: /agent/
[2]: https://github.com/DataDog/dd-trace-go
-[3]: https://docs.datadoghq.com/tracing/trace_collection/library_config/go/
+[3]: /tracing/trace_collection/library_config/go/
[4]: https://datadoghq.dev/orchestrion/
[5]: https://datadoghq.dev/orchestrion/docs/getting-started/
[6]: https://github.com/DataDog/dd-trace-go/blob/main/datastreams/propagation.go#L37
diff --git a/content/en/data_streams/java.md b/content/en/data_streams/java.md
index a29a6363359b2..e1249861466f7 100644
--- a/content/en/data_streams/java.md
+++ b/content/en/data_streams/java.md
@@ -70,8 +70,8 @@ Use Datadog's Java tracer, [`dd-trace-java`][6], to collect information from you
1. [Add the `dd-java-agent.jar` file][7] to your Kafka Connect workers. Ensure that you are using `dd-trace-java` [v1.44+][8].
1. Modify your Java options to include the Datadog Java tracer on your worker nodes. For example, on Strimzi, modify `STRIMZI_JAVA_OPTS` to add `-javaagent:/path/to/dd-java-agent.jar`.
-### Monitoring SQS pipelines
-Data Streams Monitoring uses one [message attribute][3] to track a message's path through an SQS queue. As Amazon SQS has a maximum limit of 10 message attributes allowed per message, all messages streamed through the data pipelines must have 9 or fewer message attributes set, allowing the remaining attribute for Data Streams Monitoring.
+{{% data_streams/monitoring-sqs-pipelines %}}
+
+{{% data_streams/monitoring-rabbitmq-pipelines %}}
-{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
@@ -100,8 +101,7 @@ Enable [Amazon SNS raw message delivery][1].
{{% /tab %}}
{{< /tabs >}}
-### Monitoring Kinesis pipelines
-There are no message attributes in Kinesis to propagate context and track a message's full path through a Kinesis stream. As a result, Data Streams Monitoring's end-to-end latency metrics are approximated based on summing latency on segments of a message's path, from the producing service through a Kinesis Stream, to a consumer service. Throughput metrics are based on segments from the producing service through a Kinesis Stream, to the consumer service. The full topology of data streams can still be visualized through instrumenting services.
+{{% data_streams/monitoring-kinesis-pipelines %}}
### Manual instrumentation
Data Streams Monitoring propagates context through message headers. If you are using a message queue technology that is not supported by DSM, a technology without headers (such as Kinesis), or Lambdas, use [manual instrumentation to set up DSM][5].
@@ -109,7 +109,7 @@ Data Streams Monitoring propagates context through message headers. If you are u
### Monitoring connectors
#### Confluent Cloud connectors
-{{% dsm_confluent_connectors %}}
+{{% data_streams/dsm-confluent-connectors %}}
#### Self-hosted Kafka connectors
@@ -123,7 +123,6 @@ Data Streams Monitoring can collect information from your self-hosted Kafka conn
[1]: /agent
[2]: /tracing/trace_collection/dd_libraries/java/
-[3]: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html
[4]: /agent/remote_config/?tab=configurationyamlfile#enabling-remote-configuration
[5]: /data_streams/manual_instrumentation/?tab=java
[6]: https://github.com/DataDog/dd-trace-java
diff --git a/content/en/data_streams/nodejs.md b/content/en/data_streams/nodejs.md
index 238eb2acad16e..759c7e7645e19 100644
--- a/content/en/data_streams/nodejs.md
+++ b/content/en/data_streams/nodejs.md
@@ -36,19 +36,15 @@ environment:
- DD_DATA_STREAMS_ENABLED: "true"
```
-### Monitoring Kafka Pipelines
-Data Streams Monitoring uses message headers to propagate context through Kafka streams. If `log.message.format.version` is set in the Kafka broker configuration, it must be set to `0.11.0.0` or higher. Data Streams Monitoring is not supported for versions lower than this.
+{{% data_streams/monitoring-kafka-pipelines %}}
-### Monitoring SQS pipelines
-Data Streams Monitoring uses one [message attribute][4] to track a message's path through an SQS queue. As Amazon SQS has a maximum limit of 10 message attributes allowed per message, all messages streamed through the data pipelines must have 9 or fewer message attributes set, allowing the remaining attribute for Data Streams Monitoring.
+{{% data_streams/monitoring-sqs-pipelines %}}
-{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
+{{% data_streams/monitoring-rabbitmq-pipelines %}}
-### Monitoring SNS-to-SQS pipelines
-To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must enable [Amazon SNS raw message delivery][8].
+{{% data_streams/monitoring-sns-to-sqs-pipelines %}}
-### Monitoring Kinesis pipelines
-There are no message attributes in Kinesis to propagate context and track a message's full path through a Kinesis stream. As a result, Data Streams Monitoring's end-to-end latency metrics are approximated based on summing latency on segments of a message's path, from the producing service through a Kinesis Stream, to a consumer service. Throughput metrics are based on segments from the producing service through a Kinesis Stream, to the consumer service. The full topology of data streams can still be visualized through instrumenting services.
+{{% data_streams/monitoring-kinesis-pipelines %}}
### Manual instrumentation
Data Streams Monitoring propagates context through message headers. If you are using a message queue technology that is not supported by DSM, a technology without headers (such as Kinesis), or Lambdas, use [manual instrumentation to set up DSM][7].
@@ -56,7 +52,7 @@ Data Streams Monitoring propagates context through message headers. If you are u
### Monitoring connectors
#### Confluent Cloud connectors
-{{% dsm_confluent_connectors %}}
+{{% data_streams/dsm-confluent-connectors %}}
## Further reading
@@ -65,8 +61,6 @@ Data Streams Monitoring propagates context through message headers. If you are u
[1]: /agent
[2]: /tracing/trace_collection/dd_libraries/nodejs
[3]: https://pypi.org/project/confluent-kafka/
-[4]: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html
[5]: https://www.npmjs.com/package/amqplib
[6]: https://www.npmjs.com/package/rhea
[7]: /data_streams/manual_instrumentation/?tab=nodejs
-[8]: https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
diff --git a/content/en/data_streams/python.md b/content/en/data_streams/python.md
index 1630ba6c10b2c..4b03a4712bb23 100644
--- a/content/en/data_streams/python.md
+++ b/content/en/data_streams/python.md
@@ -36,19 +36,15 @@ environment:
- DD_DATA_STREAMS_ENABLED: "true"
```
-### Monitoring Kafka Pipelines
-Data Streams Monitoring uses message headers to propagate context through Kafka streams. If `log.message.format.version` is set in the Kafka broker configuration, it must be set to `0.11.0.0` or higher. Data Streams Monitoring is not supported for versions lower than this.
+{{% data_streams/monitoring-kafka-pipelines %}}
-### Monitoring SQS Pipelines
-Data Streams Monitoring uses one [message attribute][4] to track a message's path through an SQS queue. As Amazon SQS has a maximum limit of 10 message attributes allowed per message, all messages streamed through the data pipelines must have 9 or fewer message attributes set, allowing the remaining attribute for Data Streams Monitoring.
+{{% data_streams/monitoring-sqs-pipelines %}}
-{{% data-streams-monitoring/monitoring-rabbitmq-pipelines %}}
+{{% data_streams/monitoring-rabbitmq-pipelines %}}
-### Monitoring Kinesis pipelines
-There are no message attributes in Kinesis to propagate context and track a message's full path through a Kinesis stream. As a result, Data Streams Monitoring's end-to-end latency metrics are approximated based on summing latency on segments of a message's path, from the producing service through a Kinesis Stream, to a consumer service. Throughput metrics are based on segments from the producing service through a Kinesis Stream, to the consumer service. The full topology of data streams can still be visualized through instrumenting services.
+{{% data_streams/monitoring-kinesis-pipelines %}}
-### Monitoring SNS-to-SQS pipelines
-To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must enable [Amazon SNS raw message delivery][7].
+{{% data_streams/monitoring-sns-to-sqs-pipelines %}}
### Manual instrumentation
Data Streams Monitoring propagates context through message headers. If you are using a message queue technology that is not supported by DSM, a technology without headers (such as Kinesis), or Lambdas, use [manual instrumentation to set up DSM][6].
@@ -56,7 +52,7 @@ Data Streams Monitoring propagates context through message headers. If you are u
### Monitoring connectors
#### Confluent Cloud connectors
-{{% dsm_confluent_connectors %}}
+{{% data_streams/dsm-confluent-connectors %}}
## Further reading
@@ -65,7 +61,5 @@ Data Streams Monitoring propagates context through message headers. If you are u
[1]: /agent
[2]: /tracing/trace_collection/dd_libraries/python
[3]: https://pypi.org/project/confluent-kafka/
-[4]: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html
[5]: https://pypi.org/project/kombu/
[6]: /data_streams/manual_instrumentation/?tab=python
-[7]: https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
diff --git a/content/en/data_streams/technologies/azure_service_bus.md b/content/en/data_streams/technologies/azure_service_bus.md
new file mode 100644
index 0000000000000..7ed2042ef014d
--- /dev/null
+++ b/content/en/data_streams/technologies/azure_service_bus.md
@@ -0,0 +1,32 @@
+---
+title: Azure Service Bus for Data Streams Monitoring
+---
+
+### Prerequisites
+
+* [Datadog Agent v7.34.0 or later][1]
+
+{{% data_streams/monitoring-azure-service-bus %}}
+
+### Support for Azure Service Bus in Data Streams Monitoring
+
+<table>
+  <thead>
+    <tr>
+      <th>Language</th>
+      <th>Library</th>
+      <th>Minimal tracer version</th>
+      <th>Recommended tracer version</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>.NET</td>
+      <td>Azure.Messaging.ServiceBus</td>
+      <td>2.53.0</td>
+      <td>2.53.0 or later</td>
+    </tr>
+  </tbody>
+</table>
+
+[1]: /agent
\ No newline at end of file
diff --git a/content/en/data_streams/technologies/google_pubsub.md b/content/en/data_streams/technologies/google_pubsub.md
new file mode 100644
index 0000000000000..842b40e14af29
--- /dev/null
+++ b/content/en/data_streams/technologies/google_pubsub.md
@@ -0,0 +1,36 @@
+---
+title: Google Pub/Sub for Data Streams Monitoring
+---
+
+### Prerequisites
+
+* [Datadog Agent v7.34.0 or later][1]
+
+### Support for Google Pub/Sub in Data Streams Monitoring
+
+
+
+[1]: /agent
\ No newline at end of file
diff --git a/content/en/data_streams/technologies/ibm_mq.md b/content/en/data_streams/technologies/ibm_mq.md
new file mode 100644
index 0000000000000..0800488e23182
--- /dev/null
+++ b/content/en/data_streams/technologies/ibm_mq.md
@@ -0,0 +1,30 @@
+---
+title: IBM MQ for Data Streams Monitoring
+---
+
+### Prerequisites
+
+* [Datadog Agent v7.34.0 or later][1]
+
+### Support for IBM MQ in Data Streams Monitoring
+
+<table>
+  <thead>
+    <tr>
+      <th>Language</th>
+      <th>Library</th>
+      <th>Minimal tracer version</th>
+      <th>Recommended tracer version</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>.NET</td>
+      <td>IBMMQDotnetClient</td>
+      <td>2.49.0</td>
+      <td>2.49.0 or later</td>
+    </tr>
+  </tbody>
+</table>
+
+[1]: /agent
\ No newline at end of file
diff --git a/content/en/data_streams/technologies/kafka.md b/content/en/data_streams/technologies/kafka.md
new file mode 100644
index 0000000000000..f42e88e6c699d
--- /dev/null
+++ b/content/en/data_streams/technologies/kafka.md
@@ -0,0 +1,67 @@
+---
+title: Kafka for Data Streams Monitoring
+---
+
+### Prerequisites
+
+* [Datadog Agent v7.34.0 or later][1]
+
+{{% data_streams/monitoring-kafka-pipelines %}}
+
+### Support for Kafka in Data Streams Monitoring
+
+
+
+
+### Note
+- [Kafka Streams][2] is only partially supported for Java; in many cases, latency measurements can be missed.
+
+
+[1]: /agent
+[2]: https://kafka.apache.org/documentation/streams/
\ No newline at end of file
diff --git a/content/en/data_streams/technologies/kinesis.md b/content/en/data_streams/technologies/kinesis.md
new file mode 100644
index 0000000000000..9540f8753338f
--- /dev/null
+++ b/content/en/data_streams/technologies/kinesis.md
@@ -0,0 +1,55 @@
+---
+title: Amazon Kinesis for Data Streams Monitoring
+---
+
+### Prerequisites
+
+* [Datadog Agent v7.34.0 or later][1]
+
+{{% data_streams/monitoring-kinesis-pipelines %}}
+
+### Support for Amazon Kinesis in Data Streams Monitoring
+
+
+
+[1]: /agent
\ No newline at end of file
diff --git a/content/en/data_streams/technologies/rabbitmq.md b/content/en/data_streams/technologies/rabbitmq.md
new file mode 100644
index 0000000000000..0252d9dabe770
--- /dev/null
+++ b/content/en/data_streams/technologies/rabbitmq.md
@@ -0,0 +1,48 @@
+---
+title: RabbitMQ for Data Streams Monitoring
+---
+
+### Prerequisites
+
+* [Datadog Agent v7.34.0 or later][1]
+
+### Support for RabbitMQ in Data Streams Monitoring
+
+<table>
+  <thead>
+    <tr>
+      <th>Language</th>
+      <th>Library</th>
+      <th>Minimal tracer version</th>
+      <th>Recommended tracer version</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Java</td>
+      <td>amqp-client</td>
+      <td>1.9.0</td>
+      <td>1.42.2 or later</td>
+    </tr>
+    <tr>
+      <td>Node.js</td>
+      <td>amqplib</td>
+      <td>3.48.0 or 4.27.0 or 5.3.0</td>
+      <td>5.3.0 or later</td>
+    </tr>
+    <tr>
+      <td>Python</td>
+      <td>Kombu</td>
+      <td>2.6.0</td>
+      <td>2.6.0 or later</td>
+    </tr>
+    <tr>
+      <td>.NET</td>
+      <td>RabbitMQ.Client</td>
+      <td>2.28.0</td>
+      <td>2.37.0 or later</td>
+    </tr>
+  </tbody>
+</table>
+
+[1]: /agent
\ No newline at end of file
diff --git a/content/en/data_streams/technologies/sns.md b/content/en/data_streams/technologies/sns.md
new file mode 100644
index 0000000000000..db8b358473199
--- /dev/null
+++ b/content/en/data_streams/technologies/sns.md
@@ -0,0 +1,56 @@
+---
+title: Amazon SNS for Data Streams Monitoring
+---
+
+### Prerequisites
+
+* [Datadog Agent v7.34.0 or later][1]
+
+{{% data_streams/monitoring-sns-to-sqs-pipelines %}}
+**Note:** Java requires additional setup. See [Monitoring SNS-to-SQS pipelines for Java](/data_streams/java/?tab=environmentvariables#monitoring-sns-to-sqs-pipelines).
+
+### Support for Amazon SNS in Data Streams Monitoring
+
+<table>
+  <thead>
+    <tr>
+      <th>Language</th>
+      <th>Library</th>
+      <th>Minimal tracer version</th>
+      <th>Recommended tracer version</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td rowspan="2">Java</td>
+      <td>SNS (v1)</td>
+      <td>1.31.0</td>
+      <td>1.42.2 or later</td>
+    </tr>
+    <tr>
+      <td>SNS (v2)</td>
+      <td>1.31.0</td>
+      <td>1.42.2 or later</td>
+    </tr>
+    <tr>
+      <td>Node.js</td>
+      <td>client-sns</td>
+      <td>3.47.0 or 4.26.0 or 5.2.0</td>
+      <td>5.18.0 or later</td>
+    </tr>
+    <tr>
+      <td>Python</td>
+      <td>Botocore</td>
+      <td>1.20.0</td>
+      <td>2.8.0 or later</td>
+    </tr>
+    <tr>
+      <td>.NET</td>
+      <td>Amazon SNS SDK</td>
+      <td>3.6.0</td>
+      <td>3.6.0 or later</td>
+    </tr>
+  </tbody>
+</table>
+
+[1]: /agent
diff --git a/content/en/data_streams/technologies/sqs.md b/content/en/data_streams/technologies/sqs.md
new file mode 100644
index 0000000000000..7d57e65496e0e
--- /dev/null
+++ b/content/en/data_streams/technologies/sqs.md
@@ -0,0 +1,55 @@
+---
+title: Amazon SQS for Data Streams Monitoring
+---
+
+### Prerequisites
+
+* [Datadog Agent v7.34.0 or later][1]
+
+{{% data_streams/monitoring-sqs-pipelines %}}
+
+### Support for Amazon SQS in Data Streams Monitoring
+
+
+
+[1]: /agent
diff --git a/content/es/data_streams/go.md b/content/es/data_streams/go.md
index 65eb17fea9535..e4c5d12f1fd10 100644
--- a/content/es/data_streams/go.md
+++ b/content/es/data_streams/go.md
@@ -117,7 +117,7 @@ si ok {
```
[1]: /es/agent/
[2]: https://github.com/DataDog/dd-trace-go
-[3]: https://docs.datadoghq.com/es/tracing/trace_collection/library_config/go/
+[3]: /es/tracing/trace_collection/library_config/go/
[4]: https://datadoghq.dev/orchestrion/
[5]: https://datadoghq.dev/orchestrion/docs/getting-started/
[6]: https://github.com/DataDog/dd-trace-go/blob/main/datastreams/propagation.go#L37
diff --git a/content/fr/data_streams/go.md b/content/fr/data_streams/go.md
index d9491cc187fff..970a975ba41ce 100644
--- a/content/fr/data_streams/go.md
+++ b/content/fr/data_streams/go.md
@@ -86,4 +86,4 @@ if ok {
[1]: /fr/agent
[2]: https://github.com/DataDog/dd-trace-go
-[3]: https://docs.datadoghq.com/fr/tracing/trace_collection/library_config/go/
\ No newline at end of file
+[3]: /fr/tracing/trace_collection/library_config/go/
\ No newline at end of file
diff --git a/content/ja/data_streams/go.md b/content/ja/data_streams/go.md
index ff9fb3e5b3abb..8723654ef21f9 100644
--- a/content/ja/data_streams/go.md
+++ b/content/ja/data_streams/go.md
@@ -117,7 +117,7 @@ if ok {
```
[1]: /ja/agent/
[2]: https://github.com/DataDog/dd-trace-go
-[3]: https://docs.datadoghq.com/ja/tracing/trace_collection/library_config/go/
+[3]: /ja/tracing/trace_collection/library_config/go/
[4]: https://datadoghq.dev/orchestrion/
[5]: https://datadoghq.dev/orchestrion/docs/getting-started/
[6]: https://github.com/DataDog/dd-trace-go/blob/main/datastreams/propagation.go#L37
diff --git a/content/ko/data_streams/go.md b/content/ko/data_streams/go.md
index c42f509816345..062d72607dee8 100644
--- a/content/ko/data_streams/go.md
+++ b/content/ko/data_streams/go.md
@@ -123,7 +123,7 @@ if ok {
### 모니터링 커넥터
#### Confluent Cloud 커넥터
-{{% dsm_confluent_connectors %}}
+{{% data_streams/dsm-confluent-connectors %}}
## 참고 자료
@@ -131,7 +131,7 @@ if ok {
[1]: /ko/agent/
[2]: https://github.com/DataDog/dd-trace-go
-[3]: https://docs.datadoghq.com/ko/tracing/trace_collection/library_config/go/
+[3]: /ko/tracing/trace_collection/library_config/go/
[4]: https://datadoghq.dev/orchestrion/
[5]: https://datadoghq.dev/orchestrion/docs/getting-started/
[6]: https://github.com/DataDog/dd-trace-go/blob/main/datastreams/propagation.go#L37
diff --git a/content/ko/data_streams/java.md b/content/ko/data_streams/java.md
index 31f2974b3942d..4a75a0fd9daa8 100644
--- a/content/ko/data_streams/java.md
+++ b/content/ko/data_streams/java.md
@@ -107,7 +107,7 @@ Kinesis에는 컨텍스트를 전파하고 Kinesis 스트림을 통해 메시지
### 모니터링 커넥터
#### Confluent Cloud 커넥터
-{{% dsm_confluent_connectors %}}
+{{% data_streams/dsm-confluent-connectors %}}
#### 셀프호스팅 Kafka 커넥터
diff --git a/layouts/partials/data_streams/setup-technologies.html b/layouts/partials/data_streams/setup-technologies.html
new file mode 100644
index 0000000000000..e3563aeb92802
--- /dev/null
+++ b/layouts/partials/data_streams/setup-technologies.html
@@ -0,0 +1,71 @@
+{{ $dot := . }}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/layouts/shortcodes/dsm_confluent_connectors.md b/layouts/shortcodes/data_streams/dsm-confluent-connectors.md
similarity index 100%
rename from layouts/shortcodes/dsm_confluent_connectors.md
rename to layouts/shortcodes/data_streams/dsm-confluent-connectors.md
diff --git a/layouts/shortcodes/data_streams/monitoring-azure-service-bus.md b/layouts/shortcodes/data_streams/monitoring-azure-service-bus.md
new file mode 100644
index 0000000000000..ef8bf10b57854
--- /dev/null
+++ b/layouts/shortcodes/data_streams/monitoring-azure-service-bus.md
@@ -0,0 +1,8 @@
+### Monitoring Azure Service Bus
+
+Setting up Data Streams Monitoring for Azure Service Bus applications requires additional configuration for the instrumented application.
+
+1. Either set the environment variable `AZURE_EXPERIMENTAL_ENABLE_ACTIVITY_SOURCE` to `true`, or in your application code set the `Azure.Experimental.EnableActivitySource` context switch to `true`. This instructs the Azure Service Bus library to generate tracing information. See [Azure SDK documentation][1] for more details.
+2. Set the `DD_TRACE_OTEL_ENABLED` environment variable to `true`. This instructs the .NET auto-instrumentation to listen to the tracing information generated by the Azure Service Bus Library and enables the inject and extract operations required for Data Streams Monitoring.
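+
+If you configure your application through environment variables (as in a container definition), both settings from the steps above can sit alongside the existing Data Streams Monitoring variable. A minimal sketch:
+
+```yaml
+environment:
+  - DD_DATA_STREAMS_ENABLED: "true"
+  - DD_TRACE_OTEL_ENABLED: "true"
+  # Instructs the Azure Service Bus library to emit experimental tracing information
+  - AZURE_EXPERIMENTAL_ENABLE_ACTIVITY_SOURCE: "true"
+```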
+
+[1]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/core/Azure.Core/samples/Diagnostics.md#enabling-experimental-tracing-features
\ No newline at end of file
diff --git a/layouts/shortcodes/data_streams/monitoring-kafka-pipelines.md b/layouts/shortcodes/data_streams/monitoring-kafka-pipelines.md
new file mode 100644
index 0000000000000..f8a60fb48319c
--- /dev/null
+++ b/layouts/shortcodes/data_streams/monitoring-kafka-pipelines.md
@@ -0,0 +1,2 @@
+### Monitoring Kafka Pipelines
+Data Streams Monitoring uses message headers to propagate context through Kafka streams. If `log.message.format.version` is set in the Kafka broker configuration, it must be set to `0.11.0.0` or higher. Data Streams Monitoring is not supported for versions lower than this.
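+
+For example, a broker that pins the message format must use at least `0.11.0.0`, as in this `server.properties` excerpt:
+
+```
+# Message headers require message format 0.11.0.0 or higher
+log.message.format.version=0.11.0.0
+```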
\ No newline at end of file
diff --git a/layouts/shortcodes/data_streams/monitoring-kinesis-pipelines.md b/layouts/shortcodes/data_streams/monitoring-kinesis-pipelines.md
new file mode 100644
index 0000000000000..957b1a4341533
--- /dev/null
+++ b/layouts/shortcodes/data_streams/monitoring-kinesis-pipelines.md
@@ -0,0 +1,2 @@
+### Monitoring Kinesis pipelines
+There are no message attributes in Kinesis to propagate context and track a message's full path through a Kinesis stream. As a result, Data Streams Monitoring approximates end-to-end latency metrics by summing the latency measured on segments of a message's path, from the producing service, through a Kinesis stream, to a consumer service. Throughput metrics are based on the same segments. The full topology of data streams can still be visualized by instrumenting services.
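+
+The approximation described above amounts to summing per-segment latencies. A toy sketch (the segment values below are invented for illustration):
+
+```python
+def approx_end_to_end_latency(segment_latencies_ms):
+    """Approximate end-to-end latency by summing the latency measured on
+    each segment of a message's path (producer -> Kinesis stream -> consumer)."""
+    return sum(segment_latencies_ms)
+
+# Hypothetical per-segment latencies for one message path, in milliseconds
+print(approx_end_to_end_latency([12.5, 3.0, 8.5]))  # 24.0
+```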
\ No newline at end of file
diff --git a/layouts/shortcodes/data-streams-monitoring/monitoring-rabbitmq-pipelines.md b/layouts/shortcodes/data_streams/monitoring-rabbitmq-pipelines.md
similarity index 100%
rename from layouts/shortcodes/data-streams-monitoring/monitoring-rabbitmq-pipelines.md
rename to layouts/shortcodes/data_streams/monitoring-rabbitmq-pipelines.md
diff --git a/layouts/shortcodes/data_streams/monitoring-sns-to-sqs-pipelines.md b/layouts/shortcodes/data_streams/monitoring-sns-to-sqs-pipelines.md
new file mode 100644
index 0000000000000..2415515587b87
--- /dev/null
+++ b/layouts/shortcodes/data_streams/monitoring-sns-to-sqs-pipelines.md
@@ -0,0 +1,4 @@
+### Monitoring SNS-to-SQS pipelines
+To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must enable [Amazon SNS raw message delivery][1].
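+
+For example, raw message delivery can be enabled on an existing subscription with the AWS CLI (the subscription ARN below is a placeholder):
+
+```shell
+aws sns set-subscription-attributes \
+  --subscription-arn arn:aws:sns:us-east-1:123456789012:my-topic:subscription-id \
+  --attribute-name RawMessageDelivery \
+  --attribute-value true
+```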
+
+[1]: https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
\ No newline at end of file
diff --git a/layouts/shortcodes/data_streams/monitoring-sqs-pipelines.md b/layouts/shortcodes/data_streams/monitoring-sqs-pipelines.md
new file mode 100644
index 0000000000000..9b9d448006e55
--- /dev/null
+++ b/layouts/shortcodes/data_streams/monitoring-sqs-pipelines.md
@@ -0,0 +1,4 @@
+### Monitoring SQS pipelines
+Data Streams Monitoring uses one [message attribute][1] to track a message's path through an SQS queue. Because Amazon SQS allows a maximum of 10 message attributes per message, any message streamed through your data pipelines must have 9 or fewer message attributes set, leaving the remaining attribute for Data Streams Monitoring.
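+
+As an illustration (the helper below is hypothetical, not part of any Datadog library), a producer can check that a message leaves a free attribute slot before sending:
+
+```python
+MAX_SQS_MESSAGE_ATTRIBUTES = 10  # hard limit imposed by Amazon SQS
+DSM_RESERVED_ATTRIBUTES = 1      # Data Streams Monitoring attaches one attribute
+
+def leaves_room_for_dsm(message_attributes):
+    """Return True if the message still has a free attribute slot for DSM."""
+    return len(message_attributes) + DSM_RESERVED_ATTRIBUTES <= MAX_SQS_MESSAGE_ATTRIBUTES
+
+attrs = {f"attr-{i}": {"DataType": "String", "StringValue": "v"} for i in range(9)}
+print(leaves_room_for_dsm(attrs))  # True: 9 attributes leave one slot free
+```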
+
+[1]: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html
\ No newline at end of file