
Commit

Merge branch 'grafana:main' into main
callumau authored Dec 7, 2024
2 parents 0ef6be7 + ac1fae4 commit d6f61c4
Showing 217 changed files with 5,574 additions and 1,527 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/trivy.yml
@@ -26,7 +26,7 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Trivy vulnerability scanner
-uses: aquasecurity/trivy-action@915b19bbe73b92a6cf82a1bc12b087c9a19a5fe2
+uses: aquasecurity/trivy-action@18f2510ee396bbf400402947b394f2dd8c87dbb0
with:
image-ref: 'grafana/alloy-dev:latest'
format: 'template'
74 changes: 64 additions & 10 deletions CHANGELOG.md
@@ -22,6 +22,12 @@ Main (unreleased)

- Add `otelcol.receiver.solace` component to receive traces from a Solace broker. (@wildum)

+- Add `otelcol.exporter.syslog` component to export logs in syslog format. (@dehaansa)

+- (_Experimental_) Add a `database_observability.mysql` component to collect MySQL performance data. (@cristiangreco & @matthewnolf)

+- Add `otelcol.receiver.influxdb` to convert InfluxDB metrics into OTLP. (@EHSchmitt4395)
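
  For illustration, a minimal wiring sketch for the new receiver. This is a hedged example: the `endpoint` attribute and the exporter target are assumptions based on the upstream Collector component, not confirmed syntax.

  ```alloy
  // Sketch only: accept InfluxDB line-protocol writes and forward them as OTLP metrics.
  otelcol.receiver.influxdb "default" {
    endpoint = "0.0.0.0:8086" // assumed attribute name; check the component reference

    output {
      metrics = [otelcol.exporter.otlp.default.input]
    }
  }
  ```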

### Enhancements

- Add a second metrics sample to the support bundle to provide delta information (@dehaansa)
@@ -30,23 +36,67 @@ Main (unreleased)

- Add relevant golang environment variables to the support bundle (@dehaansa)

-### Bugfixes
+- Update `mysqld_exporter` from v0.15.0 to v0.16.0 (including 2ef168bf6). Most notable changes: (@cristiangreco)
+  - Support MySQL 8.4 replicas syntax
+  - Fetch lock time and CPU time from the performance schema
+  - Fix fetching tmpTables vs tmpDiskTables from performance_schema
+  - Skip the SPACE_TYPE column for MariaDB >= 10.5
+  - Fix parsing of timestamps with non-zero-padded days
+  - Fix auto_increment metric collection errors caused by using collation in INFORMATION_SCHEMA searches
+  - Change the processlist query to support ONLY_FULL_GROUP_BY sql_mode
+  - Add perf_schema quantile columns to the collector

-- Fixed an issue in the `prometheus.exporter.postgres` component that would leak goroutines when the target was not reachable (@dehaansa)
-- Fixed an issue in the `otelcol.exporter.prometheus` component that would set series value incorrectly for stale metrics (@YusifAghalar)
+- Add three new stdlib functions: `to_base64`, `from_URLbase64`, and `to_URLbase64`. (@ravishankar15)
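
  As an illustration of the new functions, a hedged sketch (it assumes the functions take and return strings, and that `sys.env` is available in this Alloy version; the URL and names are placeholders):

  ```alloy
  // Decode a URL-safe base64 secret supplied via the environment.
  prometheus.remote_write "default" {
    endpoint {
      url = "https://mimir.example.com/api/v1/push" // placeholder URL

      basic_auth {
        username = "tenant"
        password = from_URLbase64(sys.env("ENCODED_API_KEY"))
      }
    }
  }
  ```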

+### Bugfixes

+- Fixed an issue with reloading configuration and Prometheus metrics duplication in `prometheus.write.queue`. (@mattdurham)

-- Fixed an issue in the `otelcol.processor.attribute` component where the actions `delete` and `hash` could not be used with the `pattern` argument. (@wildum)
+- Updated `prometheus.write.queue` to fix an issue with TTL comparing different scales of time. (@mattdurham)

-- Fixed a race condition that could lead to a deadlock when using `import` statements, which could lead to a memory leak on `/metrics` endpoint of an Alloy instance. (@thampiotr)
+- Fixed an issue in `prometheus.operator.servicemonitors`, `prometheus.operator.podmonitors`, and `prometheus.operator.probes` to support capitalized actions. (@QuentinBisson)

+- Fixed an issue where the `otelcol.processor.interval` component could not be used because the debug metrics were not set to default. (@wildum)

### Other changes

+- Change the stability of the `livedebugging` feature from "experimental" to "generally available". (@wildum)

- Use Go 1.23.3 for builds. (@mattdurham)

+v1.5.1
+-----------------

+### Enhancements

+- Logs from the underlying clustering library `memberlist` are now surfaced with the correct level. (@thampiotr)

+- Allow setting `informer_sync_timeout` in `prometheus.operator.*` components. (@captncraig)

+- For sharding targets during clustering, `loki.source.podlogs` now takes into account only a subset of labels. (@ptodev)

+### Bugfixes

+- Fixed an issue in the `pyroscope.write` component to prevent TLS connection churn to Pyroscope when the `pyroscope.receive_http` clients don't request keepalive. (@madaraszg-tulip)

+- Fixed an issue in the `pyroscope.write` component with multiple endpoints not working correctly for forwarding profiles from `pyroscope.receive_http`. (@madaraszg-tulip)

+- Fixed a few race conditions that could lead to a deadlock when using `import` statements, which could lead to a memory leak on the `/metrics` endpoint of an Alloy instance. (@thampiotr)

+- Fixed a race condition where the UI service depended on starting after the remotecfg service, which isn't guaranteed. (@dehaansa & @erikbaranowski)

+- Fixed an issue in the `otelcol.exporter.prometheus` component that would set series values incorrectly for stale metrics. (@YusifAghalar)

+- `loki.source.podlogs`: Fixed a bug which prevented clustering from working and caused duplicate logs to be sent.
+  The bug only happened when no `selector` or `namespace_selector` blocks were specified in the Alloy configuration. (@ptodev)

+- Fixed an issue in the `pyroscope.write` component to allow slashes in application names, in the same way the Pyroscope push API does. (@marcsanmi)

+- Fixed a crash when updating the configuration of `remote.http`. (@kinolaev)

+- Fixed an issue in the `otelcol.processor.attribute` component where the actions `delete` and `hash` could not be used with the `pattern` argument. (@wildum)

+- Fixed an issue in the `prometheus.exporter.postgres` component that would leak goroutines when the target was not reachable. (@dehaansa)

v1.5.0
-----------------
@@ -95,7 +145,7 @@ v1.5.0
- Add support for relative paths to `import.file`. This new functionality allows users to use `import.file` blocks in modules
imported via `import.git` and other `import.file`. (@wildum)

-- `prometheus.exporter.cloudwatch`: The `discovery` block now has a `recently_active_only` configuration attribute
+- `prometheus.exporter.cloudwatch`: The `discovery` block now has a `recently_active_only` configuration attribute
to return only metrics which have been active in the last 3 hours.

- Add Prometheus bearer authentication to a `prometheus.write.queue` component (@freak12techno)
@@ -104,13 +154,15 @@

- Add `proxy_url` to `otelcol.exporter.otlphttp`. (@wildum)

+- Allow setting `informer_sync_timeout` in `prometheus.operator.*` components. (@captncraig)
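
  As an illustration of the `recently_active_only` attribute added above, a hedged sketch (region, namespace, and metric values are placeholders):

  ```alloy
  prometheus.exporter.cloudwatch "example" {
    sts_region = "us-east-2"

    discovery {
      type    = "AWS/EC2"
      regions = ["us-east-2"]
      // Only return metrics that have been active in the last 3 hours.
      recently_active_only = true

      metric {
        name       = "CPUUtilization"
        statistics = ["Average"]
        period     = "5m"
      }
    }
  }
  ```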

### Bugfixes

- Fixed a bug in `import.git` which caused a `"non-fast-forward update"` error message. (@ptodev)

-- Do not log error on clean shutdown of `loki.source.journal`. (@thampiotr)
+- Do not log error on clean shutdown of `loki.source.journal`. (@thampiotr)

-- `prometheus.operator.*` components: Fixed a bug which would sometimes cause a
+- `prometheus.operator.*` components: Fixed a bug which would sometimes cause a
"failed to create service discovery refresh metrics" error after a config reload. (@ptodev)

### Other changes
@@ -149,7 +201,7 @@ v1.4.3

- `pyroscope.scrape` no longer tries to scrape endpoints which aren't active targets. (@wildum @mattdurham @dehaansa @ptodev)

-- Fixed a bug with `loki.source.podlogs` not starting in large clusters due to short informer sync timeout. (@elburnetto-intapp)
+- Fixed a bug with `loki.source.podlogs` not starting in large clusters due to short informer sync timeout. (@elburnetto-intapp)

- `prometheus.exporter.windows`: Fixed bug with `exclude` regular expression config arguments which caused missing metrics. (@ptodev)

@@ -168,7 +220,7 @@ v1.4.2
- Fix parsing of the Level configuration attribute in debug_metrics config block
- Ensure "optional" debug_metrics config block really is optional

-- Fixed an issue with `loki.process` where `stage.luhn` and `stage.timestamp` would not apply
+- Fixed an issue with `loki.process` where `stage.luhn` and `stage.timestamp` would not apply
default configuration settings correctly (@thampiotr)

- Fixed an issue with `loki.process` where configuration could be reloaded even if there
@@ -242,6 +294,8 @@ v1.4.0

- Add the label `alloy_cluster` in the metric `alloy_config_hash` when the flag `cluster.name` is set to help differentiate between
configs from the same alloy cluster or different alloy clusters. (@wildum)

+- Add support for discovering the cgroup path(s) of a process in `process.discovery`. (@mahendrapaipuri)

### Bugfixes

5 changes: 3 additions & 2 deletions CODEOWNERS
@@ -20,5 +20,6 @@
/docs/sources/ @clayton-cornell

# Components:
-/internal/component/pyroscope/ @grafana/grafana-alloy-profiling-maintainers
-/internal/component/beyla/ @marctc
+/internal/component/pyroscope/ @grafana/grafana-alloy-profiling-maintainers
+/internal/component/beyla/ @marctc
+/internal/component/database_observability/ @cristiangreco @matthewnolf
2 changes: 1 addition & 1 deletion docs/sources/_index.md
@@ -57,7 +57,7 @@ cards:
In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do different tasks, such as configure alert rules in Loki and [Mimir][].
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and [Promtail][].
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents.
-You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
+You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
{{< param "PRODUCT_NAME" >}} is flexible, and you can easily configure it to fit your needs in on-prem, cloud-only, or a mix of both.

{{< admonition type="tip" >}}
2 changes: 1 addition & 1 deletion docs/sources/_index.md.t
@@ -57,7 +57,7 @@ cards:
In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do different tasks, such as configure alert rules in Loki and [Mimir][].
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and [Promtail][].
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents.
-You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
+You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
{{< param "PRODUCT_NAME" >}} is flexible, and you can easily configure it to fit your needs in on-prem, cloud-only, or a mix of both.

{{< admonition type="tip" >}}
2 changes: 1 addition & 1 deletion docs/sources/collect/_index.md
@@ -8,4 +8,4 @@ weight: 100

# Collect and forward data with {{% param "FULL_PRODUCT_NAME" %}}

-{{< section >}}
+{{< section >}}
13 changes: 6 additions & 7 deletions docs/sources/collect/choose-component.md
@@ -17,10 +17,9 @@ The components you select and configure depend on the telemetry signals you want
## Metrics for infrastructure

Use `prometheus.*` components to collect infrastructure metrics.
-This will give you the best experience with [Grafana Infrastructure Observability][].
+This gives you the best experience with [Grafana Infrastructure Observability][].

-For example, you can get metrics for a Linux host using `prometheus.exporter.unix`,
-and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.
+For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.

You can also scrape any Prometheus endpoint using `prometheus.scrape`.
Use `discovery.*` components to find targets for `prometheus.scrape`.
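
As a sketch of how these pieces fit together (the remote-write URL is a placeholder):

```alloy
// Expose host metrics, scrape them, and forward them to a Prometheus-compatible endpoint.
prometheus.exporter.unix "host" { }

prometheus.scrape "host_metrics" {
  targets    = prometheus.exporter.unix.host.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}
```
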
@@ -30,7 +29,7 @@ Use `discovery.*` components to find targets for `prometheus.scrape`.
## Metrics for applications

Use `otelcol.receiver.*` components to collect application metrics.
-This will give you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.
+This gives you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.

For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-instrumented applications.
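
For instance, a minimal sketch of an OTLP metrics pipeline (endpoints are placeholders):

```alloy
// Receive OTLP metrics over gRPC, batch them, and export them downstream.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }

  output {
    metrics = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.otlp.default.input]
  }
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "otlp-gateway.example.com:4317"
  }
}
```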

@@ -48,12 +47,12 @@ with logs collected by `loki.*` components.

For example, the label that both `prometheus.*` and `loki.*` components would use for a Kubernetes namespace is called `namespace`.
On the other hand, gathering logs using an `otelcol.*` component might use the [OpenTelemetry semantics][OTel-semantics] label called `k8s.namespace.name`,
-which wouldn't correspond to the `namespace` label that is common in the Prometheus ecosystem.
+which wouldn't correspond to the `namespace` label that's common in the Prometheus ecosystem.

## Logs from applications

Use `otelcol.receiver.*` components to collect application logs.
-This will gather the application logs in an OpenTelemetry-native way, making it easier to
+This gathers the application logs in an OpenTelemetry-native way, making it easier to
correlate the logs with OpenTelemetry metrics and traces coming from the application.
All application telemetry must follow the [OpenTelemetry semantic conventions][OTel-semantics], simplifying this correlation.
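
The wiring mirrors the metrics pipeline sketched earlier; a hedged example, assuming the same batch processor and exporter are defined and also forward `logs`:

```alloy
// Receive OTLP logs over HTTP and hand them to the shared batch processor.
otelcol.receiver.otlp "default" {
  http {
    endpoint = "0.0.0.0:4318"
  }

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}
```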

@@ -65,7 +64,7 @@ For example, if your application runs on Kubernetes, every trace, log, and metric

Use `otelcol.receiver.*` components to collect traces.

-If your application is not yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
+If your application isn't yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
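
A hedged sketch of automatic instrumentation; `open_port` is an assumption, so check the `beyla.ebpf` reference for the exact arguments:

```alloy
// Generate traces for an uninstrumented service via eBPF.
beyla.ebpf "default" {
  open_port = "8080" // instrument the process listening on this port (assumed attribute)

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}
```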

## Profiles

42 changes: 20 additions & 22 deletions docs/sources/collect/datadog-traces-metrics.md
@@ -20,9 +20,9 @@ This topic describes how to:

## Before you begin

-* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces.
-* Identify where you will write the collected telemetry.
-  Metrics can be written to [Prometheus]() or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
+* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
+* Identify where to write the collected telemetry.
+  Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
+* _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<USERNAME>`_: The basic authentication username.
-- _`<PASSWORD>`_: The basic authentication password or API key.
+* _`<USERNAME>`_: The basic authentication username.
+* _`<PASSWORD>`_: The basic authentication password or API key.

## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -78,7 +78,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

```alloy
otelcol.processor.deltatocumulative "default" {
-  max_stale = <MAX_STALE>
+  max_stale = "<MAX_STALE>"
max_streams = <MAX_STREAMS>
output {
metrics = [otelcol.processor.batch.default.input]
@@ -88,14 +88,14 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
-- _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
+* _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
+* _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.

1. Add the following `otelcol.receiver.datadog` component to your configuration file.

```alloy
otelcol.receiver.datadog "default" {
-  endpoint = <HOST>:<PORT>
+  endpoint = "<HOST>:<PORT>"
output {
metrics = [otelcol.processor.deltatocumulative.default.input]
traces = [otelcol.processor.batch.default.input]
@@ -105,8 +105,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<HOST>`_: The host address where the receiver will listen.
-- _`<PORT>`_: The port where the receiver will listen.
+* _`<HOST>`_: The host address where the receiver listens.
+* _`<PORT>`_: The port where the receiver listens.

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -119,8 +119,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<USERNAME>`_: The basic authentication username.
-- _`<PASSWORD>`_: The basic authentication password or API key.
+* _`<USERNAME>`_: The basic authentication username.
+* _`<PASSWORD>`_: The basic authentication password or API key.

## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -139,19 +139,19 @@ We recommend this approach for current Datadog users who want to try using {{< param "PRODUCT_NAME" >}}

Replace the following:

-- _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
-- _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
+* _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
+* _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.

-Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
+Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
You can do this by setting up your Datadog Agent in the following way:

1. Replace the DD_URL in the configuration YAML:

```yaml
dd_url: http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>
```
-   Or by setting an environment variable:
+   Or by setting an environment variable:
```bash
DD_DD_URL='{"http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>": ["datadog-receiver"]}'
@@ -169,7 +169,5 @@ To use this component, you need to start {{< param "PRODUCT_NAME" >}} with additional
[Datadog]: https://www.datadoghq.com/
[Datadog Agent]: https://docs.datadoghq.com/agent/
[Prometheus]: https://prometheus.io
-[OTLP]: https://opentelemetry.io/docs/specs/otlp/
-[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp
-[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp
-[Components]: ../../get-started/components
+[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp/
+[Components]: ../../get-started/components/
