Merge branch 'logcli-sandbox' of https://github.com/grafana/loki into logcli-sandbox
Jayclifford345 committed Dec 11, 2024
2 parents ad1f626 + 660f934 commit 8dff1b9
Showing 169 changed files with 23,291 additions and 19,846 deletions.
14 changes: 2 additions & 12 deletions .github/renovate.json5
@@ -48,17 +48,7 @@
"matchManagers": ["helm-requirements", "helm-values", "helmv3"],
"groupName": "helm-{{packageName}}",
"matchUpdateTypes": ["major", "minor", "patch"],
"autoApprove": false,
"automerge": false
},
{
// Separate out lambda-promtail updates from other dependencies
// Don't automatically merge lambda-promtail updates
// Updates to this require the nix SHA to be updated
"matchFileNames": ["tools/lambda-promtail/go.mod"],
"groupName": "lambdapromtail-{{packageName}}",
"enabled": true,
"matchUpdateTypes": ["major", "minor", "patch"],
"matchPackageNames": ["!grafana/loki"], // This is updated via a different job
"autoApprove": false,
"automerge": false
},
@@ -71,7 +61,7 @@
},
{
// Enable all other updates
"matchFileNames": ["!tools/lambda-promtail/go.mod", "!operator/go.mod", "!operator/api/loki/go.mod"],
"matchFileNames": ["!operator/go.mod", "!operator/api/loki/go.mod"],
"groupName": "{{packageName}}",
"enabled": true,
"matchUpdateTypes": ["major", "minor", "patch"],
3 changes: 3 additions & 0 deletions .github/workflows/nix-ci.yaml
@@ -1,6 +1,9 @@
---
name: "Lint And Build Nix Flake"
on:
push:
branches:
- main
pull_request:
paths:
- "flake.nix"
4 changes: 4 additions & 0 deletions CODEOWNERS
@@ -12,5 +12,9 @@
# The observability logs team is listed as co-codeowner for grammar file. This is to receive notifications about updates, so these can be implemented in https://github.com/grafana/lezer-logql
/pkg/logql/syntax/expr.y @grafana/observability-logs @grafana/loki-team

# Nix
/nix/ @trevorwhitney
flake.nix @trevorwhitney

# No owners - allows sub-maintainers to merge changes.
CHANGELOG.md
2 changes: 1 addition & 1 deletion Makefile
@@ -384,7 +384,7 @@ test: all ## run the unit tests
cd tools/lambda-promtail/ && $(GOTEST) -covermode=atomic -coverprofile=lambda-promtail-coverage.txt -p=4 ./... | tee lambda_promtail_test_results.txt

test-integration:
$(GOTEST) -count=1 -v -tags=integration -timeout 10m ./integration
$(GOTEST) -count=1 -v -tags=integration -timeout 15m ./integration

compare-coverage:
./tools/diff_coverage.sh $(old) $(new) $(packages)
4 changes: 2 additions & 2 deletions clients/cmd/docker-driver/Dockerfile
@@ -11,13 +11,13 @@ WORKDIR /src/loki
ARG GOARCH
RUN make clean && make BUILD_IN_CONTAINER=false GOARCH=${GOARCH} clients/cmd/docker-driver/docker-driver

FROM alpine:3.20.3 AS temp
FROM alpine:3.21.0 AS temp

ARG GOARCH

RUN apk add --update --no-cache --arch=${GOARCH} ca-certificates tzdata

FROM --platform=linux/${GOARCH} alpine:3.20.3
FROM --platform=linux/${GOARCH} alpine:3.21.0

COPY --from=temp /etc/ca-certificates.conf /etc/ca-certificates.conf
COPY --from=temp /usr/share/ca-certificates /usr/share/ca-certificates
2 changes: 1 addition & 1 deletion clients/cmd/promtail/Dockerfile.debug
@@ -9,7 +9,7 @@ WORKDIR /src/loki
RUN make clean && make BUILD_IN_CONTAINER=false PROMTAIL_JOURNAL_ENABLED=true promtail-debug


FROM alpine:3.20.3
FROM alpine:3.21.0
RUN apk add --update --no-cache ca-certificates tzdata
COPY --from=build /src/loki/clients/cmd/promtail/promtail-debug /usr/bin/promtail-debug
COPY --from=build /usr/bin/dlv /usr/bin/dlv
3 changes: 3 additions & 0 deletions cmd/loki/loki-local-config.yaml
@@ -25,6 +25,9 @@ query_range:
enabled: true
max_size_mb: 100

limits_config:
metric_aggregation_enabled: true

schema_config:
configs:
- from: 2020-10-24
11 changes: 11 additions & 0 deletions docs/sources/query/metric_queries.md
@@ -153,3 +153,14 @@ Examples:
or
vector(0) # will return 0
```
## Probabilistic aggregation
The `topk` keyword lets you find the largest 1,000 elements in a data stream by sample size. When `topk` hits the maximum series limit, LogQL also supports a probabilistic approximation: `approx_topk` is a drop-in replacement in that case.
```logql
approx_topk(k, <vector expression>)
```

It is only supported for instant queries and does not support grouping. It is useful when the cardinality of the inner
vector is too high, for example, when it uses an aggregation by a structured metadata label.
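
As a minimal sketch of what such a query might look like in practice (the `{job="api"}` selector and the `user_id` structured metadata label are illustrative assumptions, not taken from this commit):

```logql
# Approximate the 10 largest log volumes per user, where user_id is a
# high-cardinality structured metadata label used in the inner aggregation.
approx_topk(10, sum by (user_id) (count_over_time({job="api"}[5m])))
```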
57 changes: 37 additions & 20 deletions docs/sources/send-data/alloy/examples/alloy-kafka-logs.md
@@ -1,20 +1,21 @@
---
title: Sending Logs to Loki via Kafka using Alloy
menuTitle: Sending Logs to Loki via Kafka using Alloy
description: Configuring Grafana Alloy to recive logs via Kafka and send them to Loki.
description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki.
weight: 250
killercoda:
title: Sending Logs to Loki via Kafka using Alloy
description: Configuring Grafana Alloy to recive logs via Kafka and send them to Loki.
description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki.
backend:
imageid: ubuntu
---

<!-- vale Grafana.We = NO -->
<!-- INTERACTIVE page intro.md START -->

# Sending Logs to Loki via Kafka using Alloy

Alloy natively supports receiving logs via Kafka. In this example, we will configure Alloy to receive logs via Kafka using two different methods:

- [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka): reads messages from Kafka using a consumer group and forwards them to other `loki.*` components.
- [otelcol.receiver.kafka](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/): accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components.

@@ -38,9 +39,10 @@ Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repos
{{< /admonition >}}
<!-- INTERACTIVE ignore END -->


## Scenario

In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services:

- **User Service:** Manages user data and authentication for the application. Such as creating users and logging in.
- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created.
- **Simulation Service:** Generates sensor data for each plant.
Expand All @@ -50,7 +52,8 @@ In this scenario, we have a microservices application called the Carnivorous Gre
- **Database:** A database that stores user and plant data.

Each service generates logs that are sent to Alloy via Kafka. In this example, they are sent on two different topics:

- `loki`: This sends a structured log message formatted as JSON.
- `otlp`: This sends a serialized OpenTelemetry log message.

You would not typically do this within your own application, but for the purposes of this example we wanted to show how Alloy can handle different types of log messages over Kafka.
Expand All @@ -69,7 +72,8 @@ In this step, we will set up our environment by cloning the repository that cont
git clone -b microservice-kafka https://github.com/grafana/loki-fundamentals.git
```
<!-- INTERACTIVE exec END -->

1. Next we will spin up our observability stack using Docker Compose:

<!-- INTERACTIVE ignore START -->
```bash
Expand All @@ -80,14 +84,15 @@ In this step, we will set up our environment by cloning the repository that cont
{{< docs/ignore >}}

<!-- INTERACTIVE exec START -->
```bash
docker-compose -f loki-fundamentals/docker-compose.yml up -d
```
<!-- INTERACTIVE exec END -->

{{< /docs/ignore >}}

This will spin up the following services:

```console
✔ Container loki-fundamentals-grafana-1 Started
✔ Container loki-fundamentals-loki-1 Started
Expand All @@ -97,6 +102,7 @@ In this step, we will set up our environment by cloning the repository that cont
```

We will be accessing two UI interfaces:

- Alloy at [http://localhost:12345](http://localhost:12345)
- Grafana at [http://localhost:3000](http://localhost:3000)
<!-- INTERACTIVE page step1.md END -->
@@ -107,12 +113,13 @@ We will be access two UI interfaces:

In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the `config.alloy` file to include the Kafka logs configuration.

### Open your Code Editor and Locate the `config.alloy` file
### Open your code editor and locate the `config.alloy` file

Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, we will open the `config.alloy` file in the code editor:

{{< docs/ignore >}}
**Note: Killercoda has a built-in code editor which can be accessed via the `Editor` tab.**

1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab.
1. Locate the `config.alloy` file in the `loki-fundamentals` directory (Top level directory).
1. Click on the `config.alloy` file to open it in the code editor.
@@ -126,13 +133,14 @@ Grafana Alloy requires a configuration file to define the components and their r
You will copy all three of the following configuration snippets into the `config.alloy` file.
### Source logs from kafka
### Source logs from Kafka
First, we will configure the Loki Kafka source. `loki.source.kafka` reads messages from Kafka using a consumer group and forwards them to other `loki.*` components.
The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in `forward_to`.
Add the following configuration to the `config.alloy` file:
```alloy
loki.source.kafka "raw" {
brokers = ["kafka:9092"]
@@ -145,6 +153,7 @@ loki.source.kafka "raw" {
```
In this configuration:
- `brokers`: The Kafka brokers to connect to.
- `topics`: The Kafka topics to consume. In this case, we are consuming the `loki` topic.
- `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`.
@@ -159,6 +168,7 @@ For more information on the `loki.source.kafka` configuration, see the [Loki Kaf
Next, we will configure the Loki relabel rules. The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component’s arguments. In our case we are directly calling the rule from the `loki.source.kafka` component.
Now add the following configuration to the `config.alloy` file:
```alloy
loki.relabel "kafka" {
forward_to = [loki.write.http.receiver]
@@ -170,6 +180,7 @@ loki.relabel "kafka" {
```
In this configuration:
- `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`. Here, however, we call the rule directly from the `loki.source.kafka` component, so `forward_to` acts only as a placeholder required by the `loki.relabel` component.
- `rule`: The relabeling rule to apply to the incoming logs. In this case, we are renaming the `__meta_kafka_topic` label to `topic`.
@@ -180,6 +191,7 @@ For more information on the `loki.relabel` configuration, see the [Loki Relabel
Lastly, we will configure the Loki write component. `loki.write` receives log entries from other loki components and sends them over the network using the Loki logproto format.
And finally, add the following configuration to the `config.alloy` file:
```alloy
loki.write "http" {
endpoint {
@@ -189,6 +201,7 @@ loki.write "http" {
```
In this configuration:
- `endpoint`: The endpoint to send the logs to. In this case, we are sending the logs to the Loki HTTP endpoint.
For more information on the `loki.write` configuration, see the [Loki Write documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.write/).
@@ -209,7 +222,6 @@ The new configuration will be loaded. You can verify this by checking the Alloy
If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file:
<!-- INTERACTIVE exec START -->
```bash
cp loki-fundamentals/completed/config-raw.alloy loki-fundamentals/config.alloy
@@ -225,16 +237,16 @@ curl -X POST http://localhost:12345/-/reload
Next, we will configure Alloy to also ingest OpenTelemetry logs via Kafka. To do this, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy` file along with the existing components.
### Open your Code Editor and Locate the `config.alloy` file
### Open your code editor and locate the `config.alloy` file
Like before, we generate our next pipeline configuration within the same `config.alloy` file. You will add the following configuration snippets to the file **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file.
### Source OpenTelemetry logs from Kafka
First, we will configure the OpenTelemetry Kafaka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components.
First, we will configure the OpenTelemetry Kafka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components.
Now add the following configuration to the `config.alloy` file:
```alloy
otelcol.receiver.kafka "default" {
brokers = ["kafka:9092"]
@@ -249,6 +261,7 @@ otelcol.receiver.kafka "default" {
```
In this configuration:
- `brokers`: The Kafka brokers to connect to.
- `protocol_version`: The Kafka protocol version to use.
- `topic`: The Kafka topic to consume. In this case, we are consuming the `otlp` topic.
@@ -257,12 +270,12 @@ In this configuration:
For more information on the `otelcol.receiver.kafka` configuration, see the [OpenTelemetry Receiver Kafka documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/).
### Batch OpenTelemetry logs before sending
Next, we will configure an OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places it into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size- and time-based batching.
Now add the following configuration to the `config.alloy` file:
```alloy
otelcol.processor.batch "default" {
output {
@@ -272,6 +285,7 @@ otelcol.processor.batch "default" {
```
In this configuration:
- `output`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `otelcol.exporter.otlphttp.default.input`.
For more information on the `otelcol.processor.batch` configuration, see the [OpenTelemetry Processor Batch documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/).
@@ -281,6 +295,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op
Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint.
Finally, add the following configuration to the `config.alloy` file:
```alloy
otelcol.exporter.otlphttp "default" {
client {
@@ -290,6 +305,7 @@ otelcol.exporter.otlphttp "default" {
```
In this configuration:
- `client`: The client configuration for the exporter. In this case, we are sending the logs to the Loki OTLP endpoint.
For more information on the `otelcol.exporter.otlphttp` configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.otlphttp/).
@@ -341,7 +357,6 @@ docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --
```
<!-- INTERACTIVE ignore END -->
{{< docs/ignore >}}
<!-- INTERACTIVE exec START -->
@@ -353,6 +368,7 @@ docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --
{{< /docs/ignore >}}
This will start the following services:
```console
✔ Container greenhouse-db-1 Started
✔ Container greenhouse-websocket_service-1 Started
@@ -372,7 +388,6 @@ Once started, you can access the Carnivorous Greenhouse application at [http://l
Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore).
<!-- INTERACTIVE page step4.md END -->
<!-- INTERACTIVE page finish.md START -->
@@ -383,14 +398,16 @@ In this example, we configured Alloy to ingest logs via Kafka. We configured All
{{< docs/ignore >}}
### Back to Docs
### Back to docs
Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy)
{{< /docs/ignore >}}
## Further reading
For more information on Grafana Alloy, refer to the following resources:
- [Grafana Alloy getting started examples](https://grafana.com/docs/alloy/latest/tutorials/)
- [Grafana Alloy component reference](https://grafana.com/docs/alloy/latest/reference/components/)
@@ -400,5 +417,5 @@ If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, y
The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp` can also be pushed to Grafana Cloud.
<!-- INTERACTIVE page finish.md END -->
<!-- vale Grafana.We = YES -->