DOCS-8441 LLM Observability Documentation Navigation Update #24337

Merged: 14 commits, Aug 13, 2024
66 changes: 33 additions & 33 deletions config/_default/menus/main.en.yaml

```diff
@@ -3718,49 +3718,49 @@ menu:
     - name: LLM Observability
       url: llm_observability/
       pre: llm-observability
-      identifier: llm_observability
+      identifier: llm_obs
       parent: ai_observability_heading
       weight: 10000
-    - name: Core Concepts
-      url: llm_observability/core_concepts/
-      parent: llm_observability
-      identifier: tracing_llm_obs_core_concepts
-      weight: 1
     - name: Quickstart
       url: llm_observability/quickstart/
-      parent: llm_observability
-      identifier: tracing_llm_obs_quickstart
+      parent: llm_obs
+      identifier: llm_obs_quickstart
       weight: 1
+    - name: Terms and Concepts
+      url: llm_observability/terms/
+      parent: llm_obs
+      identifier: llm_obs_terms
+      weight: 2
-    - name: Trace an LLM Application
-      url: llm_observability/trace_an_llm_application/
-      parent: llm_observability
-      identifier: tracing_llm_obs_trace_an_application
+    - name: Setup
+      url: llm_observability/setup/
+      parent: llm_obs
+      identifier: llm_obs_setup
       weight: 3
-    - name: Span Kinds
-      url: llm_observability/span_kinds/
-      parent: llm_observability
-      identifier: tracing_llm_obs_span_kinds
-      weight: 4
-    - name: Auto Instrumentation
-      url: llm_observability/auto_instrumentation/
-      parent: llm_observability
-      identifier: tracing_llm_obs_auto_instrumentation
-      weight: 5
     - name: SDK
-      url: llm_observability/sdk/
-      parent: llm_observability
-      identifier: tracing_llm_obs_sdk
-      weight: 6
+      url: llm_observability/setup/sdk/
+      parent: llm_obs_setup
+      identifier: llm_obs_setup_sdk
+      weight: 301
+    - name: Auto Instrumentation
+      url: llm_observability/setup/auto_instrumentation
+      parent: llm_obs_setup
+      identifier: llm_obs_setup_auto_instrumentation
+      weight: 302
     - name: API
-      url: llm_observability/api/
-      parent: llm_observability
-      identifier: tracing_llm_obs_api
-      weight: 7
+      url: llm_observability/setup/api/
+      parent: llm_obs_setup
+      identifier: llm_obs_setup_api
+      weight: 303
     - name: Submit Evaluations
       url: llm_observability/submit_evaluations/
-      parent: llm_observability
-      identifier: tracing_llm_obs_submit_evaluations
-      weight: 8
+      parent: llm_obs
+      identifier: llm_obs_submit_evaluations
+      weight: 4
+    - name: Guides
+      url: llm_observability/guide/
+      parent: llm_obs
+      identifier: llm_obs_guide
+      weight: 5
     - name: CI Visibility
       url: continuous_integration/
       pre: ci
```
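In Hugo menu configuration like the file above, nesting is driven by the `identifier`/`parent` pairing rather than by indentation, and `weight` orders siblings under the same parent. A trimmed sketch using entries from this PR:

```yaml
menu:
  main:
    - name: Setup
      identifier: llm_obs_setup   # unique key that other entries can reference
      parent: llm_obs             # nests Setup under the LLM Observability entry
      weight: 3                   # position among llm_obs children
    - name: SDK
      identifier: llm_obs_setup_sdk
      parent: llm_obs_setup       # child of Setup, creating a third menu level
      weight: 301
```

This is why the rename from `llm_observability` to `llm_obs` has to be applied to every child's `parent` field in the same change, as the diff does.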
39 changes: 19 additions & 20 deletions content/en/llm_observability/_index.md

```diff
@@ -17,56 +17,55 @@ LLM Observability is not available in the US1-FED site.
 
 ## Overview
 
-{{< img src="llm_observability/llm-observability-landing.png" alt="LLM Observability overview page with record of all prompt-response pair traces" style="width:100%;" >}}
-
 With LLM Observability, you can monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots. You can investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
 
-Each request fulfilled by your application is represented as a trace on the [LLM Observability traces page][2] in Datadog. A trace can represent:
+Each request fulfilled by your application is represented as a trace on the [**LLM Observability** page][1] in Datadog. A trace can represent:
 
 - An individual LLM inference, including tokens, error information, and latency
 - A predetermined LLM workflow, which is a grouping of LLM calls and their contextual operations, such as tool calls or preprocessing steps
 - A dynamic LLM workflow executed by an LLM agent
 
-Each trace contains spans representing each choice made by an agent or each step of a given workflow. A given trace can also include input and output, latency, privacy issues, errors, and more.
-
-You can instrument your application with the LLM Observability SDK for Python, or by calling the LLM Observability API.
-
-## Getting started
-
-To get started with LLM Observability, you can build a simple example with the [Quickstart][3], or follow [the guide for instrumenting your LLM application][4].
+{{< img src="llm_observability/llm-observability-landing.png" alt="LLM Observability overview page with record of all prompt-response pair traces" style="width:100%;" >}}
+
-## Explore LLM Observability
+Each trace contains spans representing each choice made by an agent or each step of a given workflow. A given trace can also include input and output, latency, privacy issues, errors, and more. For more information, see [Terms and Concepts][2].
 
-### Troubleshoot with end-to-end tracing
+## Troubleshoot with end-to-end tracing
 
 View every step of your LLM application chains and calls to pinpoint problematic requests and identify the root cause of errors.
 
 {{< img src="llm_observability/llm-observability-overview.png" alt="An LLM Observability trace displaying each span of a request" style="width:100%;" >}}
 
-### Monitor operational metrics and optimize cost
+## Monitor operational metrics and optimize cost
 
 Monitor the throughput, latency, and token usage trends for all your LLM applications.
 
 {{< img src="llm_observability/dashboard.png" alt="The out-of-the-box LLM Observability dashboard" style="width:100%;" >}}
 
-### Evaluate the quality and effectiveness of your LLM applications
+## Evaluate the quality and effectiveness of your LLM applications
 
 Identify problematic clusters and monitor the quality of responses over time with topical clustering and checks like sentiment, failure to answer, and so on.
 
 {{< img src="llm_observability/clusters-page.png" alt="The clusters page in LLM Observability" style="width:100%;" >}}
 
-### Safeguard sensitive data and identify malicious users
+## Safeguard sensitive data and identify malicious users
 
 Automatically scan and redact any sensitive data in your AI applications and identify prompt injections.
 
 {{< img src="llm_observability/prompt-injection.png" alt="An example of a prompt-injection attempt" style="width:100%;" >}}
 
 By using LLM Observability, you acknowledge that Datadog is authorized to share your Company's data with OpenAI LLC for the purpose of providing and improving LLM Observability. OpenAI will not use your data for training or tuning purposes. If you have any questions or want to opt out of features that depend on OpenAI, reach out to your account representative.
 
-## Further reading
+## Ready to start?
+
+See the [Setup documentation][5] for instructions on instrumenting your LLM application or follow the [Trace an LLM Application guide][6] to generate a trace using the [LLM Observability SDK for Python][3].
+
+## Further Reading
 
 {{< partial name="whats-next/whats-next.html" >}}
 
-[1]: /llm_observability/spans/
-[2]: https://app.datadoghq.com/llm/traces
-[3]: /llm_observability/quickstart
-[4]: /llm_observability/trace_an_llm_application
+[1]: https://app.datadoghq.com/llm/traces
+[2]: /llm_observability/terms
+[3]: /llm_observability/setup/sdk
+[4]: /llm_observability/setup/api
+[5]: /llm_observability/setup
+[6]: /llm_observability/quickstart
```
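The page above notes that you can instrument an application with the LLM Observability SDK for Python or by calling the LLM Observability API directly. As a rough illustration of the API route, the sketch below assembles a minimal span payload without sending it. The field names (`ml_app`, `meta.kind`, and so on) are illustrative assumptions modeled on the terminology in these docs, not the authoritative intake schema; the Setup pages this PR adds document the real contract.

```python
import json
import time
import uuid


def build_llm_span(name: str, input_text: str, output_text: str,
                   ml_app: str = "my-chatbot") -> dict:
    """Assemble one span record in the *general* shape of an LLM
    Observability trace submission. Field names here are illustrative
    assumptions, not the official API schema."""
    return {
        "name": name,
        "span_id": uuid.uuid4().hex[:16],
        "trace_id": uuid.uuid4().hex[:16],
        "start_ns": time.time_ns(),
        "duration_ns": 0,
        "meta": {
            "kind": "llm",  # hypothetical kind label; see the Span Kinds docs
            "input": {"value": input_text},
            "output": {"value": output_text},
        },
        "ml_app": ml_app,
    }


span = build_llm_span("chat.completion",
                      "What can LLM Observability monitor?",
                      "Latency, token usage, errors, and more.")
# A real submission would POST a JSON body wrapping one or more spans;
# the HTTP call is deliberately omitted from this sketch.
payload = json.dumps({"data": {"type": "span",
                               "attributes": {"spans": [span]}}})
```

The SDK route wraps this bookkeeping for you (trace and span IDs, timing, transport), which is why the restructured navigation lists the SDK before the raw API under Setup.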
91 changes: 0 additions & 91 deletions content/en/llm_observability/core_concepts.md

This file was deleted.

15 changes: 15 additions & 0 deletions content/en/llm_observability/guide/_index.md

```diff
@@ -0,0 +1,15 @@
+---
+title: LLM Observability Guides
+private: true
+disable_toc: true
+cascade:
+  algolia:
+    rank: 20
+    category: Guide
+    subcategory: LLM Observability Guides
+---
+
+{{< whatsnext desc="LLM Observability Guides:" >}}
+  {{< nextlink href="/llm_observability/quickstart" >}}Trace an LLM Application{{< /nextlink >}}
+  {{< nextlink href="/llm_observability/submit_evaluations" >}}Submit Evaluations{{< /nextlink >}}
+{{< /whatsnext >}}
```