DOCS-9341: adding languages for manual instrumentation (#25941)
* adding languages for manual instrumentation

* formatting
cswatt authored Oct 28, 2024
1 parent db22b1a commit 1fc1df4
Showing 1 changed file with 49 additions and 7 deletions.
@@ -1,6 +1,8 @@
---
title: Set up Data Streams Monitoring for Java through Manual Instrumentation
private: true
aliases:
- /data_streams/java_manual_instrumentation
further_reading:
- link: '/integrations/kafka/'
tag: 'Documentation'
@@ -10,9 +12,12 @@ further_reading:
text: 'Service Catalog'
---

<div class="alert alert-info">Manual instrumentation is available for Java, Node.js, and Python. <br /><br />If you're interested in manual instrumentation for additional languages, reach out to [email protected].</div>

Data Streams Monitoring (DSM) propagates context through message headers. Use manual instrumentation to set up DSM if you are using:
- a message queue technology that is not supported by DSM
- a message queue technology without headers, such as Kinesis
- Lambdas
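Conceptually, each checkpoint pair injects pathway context into the message headers on produce and reads it back out on consume, which is how DSM stitches the two ends of a queue together. A minimal pure-Python sketch of that pattern (the function bodies and the `dd-pathway-ctx` header value below are illustrative placeholders, not the Datadog API):

```python
# Hypothetical sketch: shows how context rides along in message headers
# between the produce side and the consume side of a queue.

def sketch_produce_checkpoint(stream_type, topic, headers):
    # A real tracer would encode the current pathway context here.
    headers["dd-pathway-ctx"] = f"ctx:{stream_type}:{topic}"

def sketch_consume_checkpoint(stream_type, topic, headers):
    # A real tracer would decode the propagated context and record latency.
    return headers.get("dd-pathway-ctx")

headers = {}
sketch_produce_checkpoint("kinesis", "orders", headers)       # before sending
# ... the message (payload + headers) travels through the queue ...
ctx = sketch_consume_checkpoint("kinesis", "orders", headers)  # after receiving
```

Because the context lives in the headers, any transport that can carry a key-value map alongside the payload can participate, even when the queue technology itself has no native header support (you pack the map into the payload yourself).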

### Manual instrumentation installation

@@ -24,19 +29,20 @@ kinesis, kafka, rabbitmq, sqs, sns
{{< /code-block >}}

3. Call the Data Streams Monitoring checkpoints when messages are produced and when they are consumed, as shown in the example code below:

{{< tabs >}}
{{% tab "Java" %}}
{{< code-block lang="java" >}}
import datadog.trace.api.experimental.*;
Carrier headersAdapter = new Carrier(headers);
// Before calling database PUT
DataStreamsCheckpointer.get().setProduceCheckpoint("<database-type>", "<topic-name>", headersAdapter);
// After calling database GET
DataStreamsCheckpointer.get().setConsumeCheckpoint("<database-type>", "<topic-name>", headersAdapter);

// Replace headers with whatever you're using to pass the context
private class Carrier implements DataStreamsContextCarrier {
private Headers headers;

@@ -54,7 +60,43 @@ private class Carrier implements DataStreamsContextCarrier {
}
}
{{< /code-block >}}
{{% /tab %}}
{{% tab "Node.js" %}}
{{< code-block lang="javascript" >}}
const tracer = require('dd-trace').init({})

// before calling produce
const headers = {}
tracer.dataStreamsCheckpointer.setProduceCheckpoint(
"<datastream-type>", "<queue-name>", headers
)

// after calling consume
tracer.dataStreamsCheckpointer.setConsumeCheckpoint(
"<datastream-type>", "<queue-name>", headers
)

{{< /code-block >}}
{{% /tab %}}
{{% tab "Python" %}}
{{< code-block lang="python" >}}
from ddtrace.data_streams import set_consume_checkpoint
from ddtrace.data_streams import set_produce_checkpoint

# before calling produce
headers = {}
set_produce_checkpoint(
"<datastream-type>", "<datastream-name>", headers.setdefault
)

# after calling consume
set_consume_checkpoint(
"<datastream-type>", "<datastream-name>", headers.get
)

{{< /code-block >}}
{{% /tab %}}
{{< /tabs >}}
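Note the asymmetry in the Python tab above: `set_produce_checkpoint` is passed `headers.setdefault` while `set_consume_checkpoint` is passed `headers.get`. A plausible reading (an assumption, not confirmed by this page) is that the produce side needs a `(key, value)` setter callback to write the propagation header, and the consume side needs a `key -> value` getter callback to read it back, and plain `dict` methods happen to satisfy both shapes:

```python
# Sketch of the callback shapes the Python example relies on (assumption:
# produce wants a (key, value) setter, consume wants a key -> value getter).

headers = {}

# headers.setdefault(key, value) works as the setter: it stores the
# propagation header under the given key.
setter = headers.setdefault
setter("dd-pathway-ctx", "encoded-context")

# headers.get(key) works as the getter on the consume side.
getter = headers.get
ctx = getter("dd-pathway-ctx")
```

If your headers object is not a plain `dict` (for example, a Kafka `Headers`-style wrapper), you can pass any callables with those two shapes instead.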

## Further Reading

{{< partial name="whats-next/whats-next.html" >}}
