
Commit

Merge branch 'main' into loki-15206-helm-chart-fix-misconfigured-yaml-templates
mericks committed Dec 4, 2024
2 parents 52e7070 + 03fa28e commit adcc0c8
Showing 70 changed files with 8,525 additions and 4,792 deletions.
1 change: 1 addition & 0 deletions Makefile
@@ -865,6 +865,7 @@ trivy: loki-image build-image
snyk: loki-image build-image
snyk container test $(IMAGE_PREFIX)/loki:$(IMAGE_TAG) --file=cmd/loki/Dockerfile
snyk container test $(IMAGE_PREFIX)/loki-build-image:$(IMAGE_TAG) --file=loki-build-image/Dockerfile
+snyk container test $(IMAGE_PREFIX)/promtail:$(IMAGE_TAG) --file=clients/cmd/promtail/Dockerfile
snyk code test

.PHONY: scan-vulnerabilities
2 changes: 1 addition & 1 deletion clients/cmd/fluentd/Dockerfile
@@ -9,7 +9,7 @@ COPY . /src/loki
WORKDIR /src/loki
RUN make BUILD_IN_CONTAINER=false fluentd-plugin

-FROM fluent/fluentd:v1.17-debian-1
+FROM fluent/fluentd:v1.18-debian-1
ENV LOKI_URL="https://logs-prod-us-central1.grafana.net"

COPY --from=build /src/loki/clients/cmd/fluentd/lib/fluent/plugin/out_loki.rb /fluentd/plugins/out_loki.rb
2 changes: 1 addition & 1 deletion clients/cmd/fluentd/docker/Gemfile
@@ -2,5 +2,5 @@

source 'https://rubygems.org'

-gem 'fluentd', '1.17.1'
+gem 'fluentd', '1.18.0'
gem 'fluent-plugin-multi-format-parser', '~>1.1.0'
11 changes: 6 additions & 5 deletions clients/cmd/promtail/Dockerfile
@@ -6,12 +6,13 @@ WORKDIR /src/loki
RUN apt-get update && apt-get install -qy libsystemd-dev
RUN make clean && make BUILD_IN_CONTAINER=false PROMTAIL_JOURNAL_ENABLED=true promtail

-# Promtail requires debian as the base image to support systemd journal reading
-FROM debian:12.8-slim
+# Promtail requires debian or ubuntu as the base image to support systemd journal reading
+FROM public.ecr.aws/ubuntu/ubuntu:noble
# tzdata required for the timestamp stage to work
-RUN apt-get update && \
-  apt-get install -qy tzdata ca-certificates libsystemd-dev && \
-  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
+# Install dependencies needed at runtime.
+RUN apt-get update \
+  && apt-get install -qy libsystemd-dev tzdata ca-certificates \
+  && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY --from=build /src/loki/clients/cmd/promtail/promtail /usr/bin/promtail
COPY clients/cmd/promtail/promtail-docker-config.yaml /etc/promtail/config.yml
ENTRYPOINT ["/usr/bin/promtail"]
4 changes: 2 additions & 2 deletions clients/cmd/promtail/Dockerfile.arm32
@@ -5,8 +5,8 @@ WORKDIR /src/loki
RUN apt-get update && apt-get install -qy libsystemd-dev
RUN make clean && make BUILD_IN_CONTAINER=false PROMTAIL_JOURNAL_ENABLED=true promtail

-# Promtail requires debian as the base image to support systemd journal reading
-FROM debian:12.8-slim
+# Promtail requires debian or ubuntu as the base image to support systemd journal reading
+FROM public.ecr.aws/ubuntu/ubuntu:noble
# tzdata required for the timestamp stage to work
RUN apt-get update && \
apt-get install -qy tzdata ca-certificates wget libsystemd-dev && \
4 changes: 2 additions & 2 deletions clients/cmd/promtail/Dockerfile.cross
@@ -13,8 +13,8 @@ COPY . /src/loki
WORKDIR /src/loki
RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false PROMTAIL_JOURNAL_ENABLED=true promtail

-# Promtail requires debian as the base image to support systemd journal reading
-FROM debian:12.8-slim
+# Promtail requires debian or ubuntu as the base image to support systemd journal reading
+FROM public.ecr.aws/ubuntu/ubuntu:noble
# tzdata required for the timestamp stage to work
RUN apt-get update && \
apt-get install -qy tzdata ca-certificates wget libsystemd-dev && \
10 changes: 5 additions & 5 deletions docs/sources/configure/bp-configure.md
@@ -6,7 +6,7 @@ weight: 100
---
# Configuration best practices

-Grafana Loki is under active development, and we are constantly working to improve performance. But here are some of the most current best practices for configuration that will give you the best experience with Loki.
+Grafana Loki is under active development, and the Loki team is constantly working to improve performance. But here are some of the most current best practices for configuration that will give you the best experience with Loki.

## Configure caching

@@ -36,7 +36,7 @@ If Loki received these two lines which are for the same stream, everything would
{job="syslog"} 00:00:01 i'm a syslog! <- Rejected out of order!
```

-What can we do about this? What if this was because the sources of these logs were different systems? We can solve this with an additional label which is unique per system:
+What can you do about this? What if this was because the sources of these logs were different systems? You can solve this with an additional label which is unique per system:

```
{job="syslog", instance="host1"} 00:00:00 i'm a syslog!
```

@@ -56,9 +56,9 @@ Using `chunk_target_size` instructs Loki to try to fill all chunks to a target _

Other configuration variables affect how full a chunk can get. Loki has a default `max_chunk_age` of 2h and `chunk_idle_period` of 30m to limit the amount of memory used as well as the exposure of lost logs if the process crashes.
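For reference, a minimal sketch of those knobs in a Loki config file, using the default values named above (the `ingester` block placement is the standard one):

```yaml
ingester:
  chunk_target_size: 1572864   # ~1.5 MB target per compressed chunk
  max_chunk_age: 2h            # flush a chunk once it reaches this age
  chunk_idle_period: 30m       # flush a chunk that receives no new logs for this long
```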

-Depending on the compression used (we have been using snappy which has less compressibility but faster performance), you need 5-10x or 7.5-10MB of raw log data to fill a 1.5MB chunk. Remembering that a chunk is per stream, the more streams you break up your log files into, the more chunks that sit in memory, and the higher likelihood they get flushed by hitting one of those timeouts mentioned above before they are filled.
+Depending on the compression used (Loki has been using snappy which has less compressibility but faster performance), you need 5-10x or 7.5-10MB of raw log data to fill a 1.5MB chunk. Remembering that a chunk is per stream, the more streams you break up your log files into, the more chunks that sit in memory, and the higher likelihood they get flushed by hitting one of those timeouts mentioned above before they are filled.

-Lots of small, unfilled chunks negatively affect Loki. We are always working to improve this and may consider a compactor to improve this in some situations. But, in general, the guidance should stay about the same: try your best to fill chunks.
+Lots of small, unfilled chunks negatively affect Loki. The team is always working to improve this and may consider a compactor to improve this in some situations. But, in general, the guidance should stay about the same: try your best to fill chunks.

If you have an application that can log fast enough to fill these chunks quickly (much less than `max_chunk_age`), then it becomes more reasonable to use dynamic labels to break that up into separate streams.

@@ -68,4 +68,4 @@ Loki and Promtail have flags which will dump the entire config object to stderr

`-print-config-stderr` works well when invoking Loki from the command line, as you can get a quick output of the entire Loki configuration.

-`-log-config-reverse-order` is the flag we run Loki with in all our environments. The configuration entries are reversed, so that the order of the configuration reads correctly top to bottom when viewed in Grafana's Explore.
+`-log-config-reverse-order` is the flag Grafana runs Loki with in all our environments. The configuration entries are reversed, so that the order of the configuration reads correctly top to bottom when viewed in Grafana's Explore.
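A sketch of how these flags are passed on the command line (the config path is a placeholder):

```bash
# Dump the parsed config to stderr at startup, reversed so it reads
# top-to-bottom in log viewers such as Grafana Explore.
loki -config.file=/etc/loki/config.yaml -print-config-stderr -log-config-reverse-order
```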
6 changes: 3 additions & 3 deletions docs/sources/configure/storage.md
@@ -193,7 +193,7 @@ When a new schema is released and you want to gain the advantages it provides, y

First, you'll want to create a new [period_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#period_config) entry in your [schema_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#schema_config). The important thing to remember here is to set this at some point in the _future_ and then roll out the config file changes to Loki. This allows the table manager to create the required table in advance of writes and ensures that existing data isn't queried as if it adheres to the new schema.

-As an example, let's say it's 2023-07-14 and we want to start using the `v13` schema on the 20th:
+As an example, let's say it's 2023-07-14 and you want to start using the `v13` schema on the 20th:

```yaml
schema_config:
@@ -214,7 +214,7 @@ schema_config:
period: 24h
```

-It's that easy; we just created a new entry starting on the 20th.
+It's that easy; you just created a new entry starting on the 20th.
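A sketch of the full `schema_config` that paragraph describes, assuming a TSDB index stored in S3 (the `store`, `object_store`, and first `from` date are assumptions for illustration):

```yaml
schema_config:
  configs:
    - from: 2023-01-05        # the existing period (date assumed)
      store: tsdb
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h
    - from: 2023-07-20        # the new v13 period starting on the 20th
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h
```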

## Retention

@@ -485,7 +485,7 @@ schema_config:

### On premise deployment (MinIO Single Store)

-We configure MinIO by using the AWS config because MinIO implements the S3 API:
+You configure MinIO by using the AWS config because MinIO implements the S3 API:

```yaml
storage_config:
```
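A minimal sketch of such a MinIO-backed `storage_config` (the endpoint, bucket name, and credentials are placeholders):

```yaml
storage_config:
  aws:
    s3: http://minio:9000            # MinIO endpoint (placeholder)
    bucketnames: loki-data           # bucket name (placeholder)
    access_key_id: <MINIO_ACCESS_KEY>
    secret_access_key: <MINIO_SECRET_KEY>
    s3forcepathstyle: true           # MinIO requires path-style addressing
```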
1 change: 1 addition & 0 deletions docs/sources/send-data/_index.md
@@ -56,6 +56,7 @@ By adding our output plugin you can quickly try Loki without doing big configura
These third-party clients also enable sending logs to Loki:

- [Cribl Loki Destination](https://docs.cribl.io/stream/destinations-loki)
+- [GrafanaLokiLogger](https://github.com/antoniojmsjr/GrafanaLokiLogger) (Delphi/Lazarus)
- [ilogtail](https://github.com/alibaba/ilogtail) (Go)
- [Log4j2 appender for Loki](https://github.com/tkowalcz/tjahzi) (Java)
- [loki-logback-appender](https://github.com/loki4j/loki-logback-appender) (Java)
2 changes: 1 addition & 1 deletion docs/sources/send-data/otel/_index.md
@@ -18,7 +18,7 @@ For ingesting logs to Loki using the OpenTelemetry Collector, you must use the [

When logs are ingested by Loki using an OpenTelemetry protocol (OTLP) ingestion endpoint, some of the data is stored as [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}).

-You must set `allow_structured_metadata` to `true` within your Loki config file. Otherwise, Loki will reject the log payload as malformed.
+You must set `allow_structured_metadata` to `true` within your Loki config file. Otherwise, Loki will reject the log payload as malformed. Note that Structured Metadata is enabled by default in Loki 3.0 and later.

```yaml
limits_config:
```
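For reference, the setting described in the sentence above in full:

```yaml
limits_config:
  allow_structured_metadata: true
```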
8 changes: 4 additions & 4 deletions docs/sources/send-data/promtail/stages/metrics.md
@@ -51,8 +51,8 @@ type: Counter
[max_idle_duration: <string>]

config:
-# If present and true all log lines will be counted without
-# attempting to match the source to the extract map.
+# If present and true all log lines will be counted without attempting
+# to match the `value` to the field specified by `source` in the extracted map.
# It is an error to specify `match_all: true` and also specify a `value`
[match_all: <bool>]

@@ -231,7 +231,7 @@ This pipeline first tries to find text in the format `order_status=<value>` in
the log line, pulling out the `<value>` into the extracted map with the key
`order_status`.

-The metric stages creates `successful_orders_total` and `failed_orders_total`
+The metrics stage creates `successful_orders_total` and `failed_orders_total`
metrics that only increment when the value of `order_status` in the extracted
map is `success` or `fail` respectively.
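A sketch of the two counter definitions that paragraph describes (metric names taken from the prose; the layout follows the schema shown earlier):

```yaml
- metrics:
    successful_orders_total:
      type: Counter
      description: "log lines with order_status=success"
      source: order_status
      config:
        value: success   # increment only when order_status equals "success"
        action: inc
    failed_orders_total:
      type: Counter
      description: "log lines with order_status=fail"
      source: order_status
      config:
        value: fail      # increment only when order_status equals "fail"
        action: inc
```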

@@ -265,7 +265,7 @@ number in the `retries` field from the extracted map.
- metrics:
http_response_time_seconds:
type: Histogram
description: "length of each log line"
description: "distribution of log response time"
source: response_time
config:
buckets: [0.001,0.0025,0.005,0.010,0.025,0.050]
