docs: fix doc-validate errors due to config move (#12662) #12665

Merged · 1 commit · Apr 17, 2024
2 changes: 1 addition & 1 deletion docs/sources/alert/_index.md
@@ -167,7 +167,7 @@ ruler:
url: http://localhost:9090/api/v1/write
```

Further configuration options can be found under [ruler]({{< relref "../configure#ruler" >}}).
Further configuration options can be found under [ruler](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#ruler).

### Operations

2 changes: 1 addition & 1 deletion docs/sources/configure/bp-configure.md
@@ -14,7 +14,7 @@ Loki can cache data at many levels, which can drastically improve performance. D

## Time ordering of logs

Loki [accepts out-of-order writes]({{< relref "../configure#accept-out-of-order-writes" >}}) _by default_.
Loki [accepts out-of-order writes](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#accept-out-of-order-writes) _by default_.
This section identifies best practices when Loki is _not_ configured to accept out-of-order writes.
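For illustration, a minimal sketch of turning that behavior off — this assumes the `unordered_writes` limit in `limits_config` is the relevant switch for your Loki version:

```yaml
# Sketch: reject out-of-order writes (Loki accepts them by default).
limits_config:
  unordered_writes: false
```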

One issue many people have with Loki is their client receiving errors for out of order log entries. This happens because of this hard and fast rule within Loki:
6 changes: 4 additions & 2 deletions docs/sources/get-started/components.md
@@ -35,7 +35,7 @@ Currently the only way the distributor mutates incoming data is by normalizing l

The distributor can also rate limit incoming logs based on the maximum per-tenant bitrate. It does this by checking a per tenant limit and dividing it by the current number of distributors. This allows the rate limit to be specified per tenant at the cluster level and enables us to scale the distributors up or down and have the per-distributor limit adjust accordingly. For instance, say we have 10 distributors and tenant A has a 10MB rate limit. Each distributor will allow up to 1MB/second before limiting. Now, say another large tenant joins the cluster and we need to spin up 10 more distributors. The now 20 distributors will adjust their rate limits for tenant A to `(10MB / 20 distributors) = 500KB/s`! This is how global limits allow much simpler and safer operation of the Loki cluster.
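As a sketch of what that global strategy looks like in configuration — the option names below are assumed from the `limits_config` block and the values are purely illustrative:

```yaml
limits_config:
  ingestion_rate_strategy: global   # divide the per-tenant limit across active distributors
  ingestion_rate_mb: 10             # tenant-wide rate; with 20 distributors each enforces ~0.5 MB/s
  ingestion_burst_size_mb: 20
```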

**Note: The distributor uses the `ring` component under the hood to register itself amongst its peers and get the total number of active distributors. This is a different "key" than the ingesters use in the ring and comes from the distributor's own [ring configuration]({{< relref "../configure#distributor" >}}).**
{{% admonition type="note" %}}
The distributor uses the `ring` component under the hood to register itself amongst its peers and get the total number of active distributors. This is a different "key" than the ingesters use in the ring and comes from the distributor's own [ring configuration](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#distributor).
{{% /admonition %}}

### Forwarding

@@ -142,7 +144,7 @@ deduplicated.

### Timestamp Ordering

Loki is configured to [accept out-of-order writes]({{< relref "../configure#accept-out-of-order-writes" >}}) by default.
Loki is configured to [accept out-of-order writes](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#accept-out-of-order-writes) by default.

When not configured to accept out-of-order writes, the ingester validates that ingested log lines are in order. When an
ingester receives a log line that doesn't follow the expected order, the line
2 changes: 1 addition & 1 deletion docs/sources/get-started/deployment-modes.md
@@ -52,7 +52,7 @@ The simplest mode of operation is the monolithic deployment mode. You enable mon

Monolithic mode is useful for getting started quickly to experiment with Loki, as well as for small read/write volumes of up to approximately 20GB per day.

You can horizontally scale a monolithic mode deployment to more instances by using a shared object store, and by configuring the [`ring` section]({{< relref "../configure#common" >}}) of the `loki.yaml` file to share state between all instances, but the recommendation is to use simple scalable mode if you need to scale your deployment.
You can horizontally scale a monolithic mode deployment to more instances by using a shared object store, and by configuring the [`ring` section](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#common) of the `loki.yaml` file to share state between all instances, but the recommendation is to use simple scalable mode if you need to scale your deployment.

You can configure high availability by running two Loki instances using `memberlist_config` configuration and a shared object store and setting the `replication_factor` to `3`. You route traffic to all the Loki instances in a round robin fashion.
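A minimal sketch of that high-availability setup — hostnames and ports are placeholders, and the exact block layout should be checked against the configuration reference:

```yaml
common:
  replication_factor: 3
  ring:
    kvstore:
      store: memberlist
memberlist:
  join_members:
    - loki-1.example.internal:7946   # hypothetical peer addresses
    - loki-2.example.internal:7946
```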

2 changes: 1 addition & 1 deletion docs/sources/get-started/hash-rings.md
@@ -53,7 +53,7 @@ For each node, the key-value store holds:

## Configuring rings

Define [ring configuration]({{< relref "../configure#common" >}}) within the `common.ring_config` block.
Define [ring configuration](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#common) within the `common.ring_config` block.

Use the default `memberlist` key-value store type unless there is
a compelling reason to use a different key-value store type.
14 changes: 11 additions & 3 deletions docs/sources/operations/automatic-stream-sharding.md
@@ -12,20 +12,28 @@ existing streams. When properly tuned, this should eliminate issues where log pr
per-stream rate limit.

**To enable automatic stream sharding:**
1. Edit the global [limits_config]({{< relref "../configure#limits_config" >}}) of the Loki configuration file:
1. Edit the global [`limits_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config) of the Loki configuration file:

```yaml
limits_config:
shard_streams:
enabled: true
```
2. Optionally lower the `desired_rate` in bytes if you find that the system is still hitting the `per_stream_rate_limit`:

1. Optionally lower the `desired_rate` in bytes if you find that the system is still hitting the `per_stream_rate_limit`:

```yaml
limits_config:
shard_streams:
enabled: true
desired_rate: 2097152 #2MiB
```
3. Optionally enable `logging_enabled` for debugging stream sharding. **Note**: this may affect the ingestion performance of Loki.

1. Optionally enable `logging_enabled` for debugging stream sharding.
{{% admonition type="note" %}}
This may affect the ingestion performance of Loki.
{{% /admonition %}}

```yaml
limits_config:
shard_streams:
7 changes: 4 additions & 3 deletions docs/sources/operations/blocking-queries.md
@@ -9,7 +9,7 @@ In certain situations, you may not be able to control the queries being sent to
may be intentionally or unintentionally expensive to run, and they may affect the overall stability or cost of running
your service.

You can block queries using [per-tenant overrides]({{< relref "../configure#runtime-configuration-file" >}}), like so:
You can block queries using [per-tenant overrides](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#runtime-configuration-file), like so:

```yaml
overrides:
@@ -34,8 +34,9 @@ overrides:
- hash: 2943214005 # hash of {stream="stdout",pod="loki-canary-9w49x"}
types: filter,limited
```

NOTE: changes to these configurations **do not require a restart**; they are defined in the [runtime configuration file]({{< relref "../configure#runtime-configuration-file" >}}).
{{% admonition type="note" %}}
Changes to these configurations **do not require a restart**; they are defined in the [runtime configuration file](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#runtime-configuration-file).
{{% /admonition %}}

The available query types are:

2 changes: 1 addition & 1 deletion docs/sources/operations/overrides-exporter.md
@@ -10,7 +10,7 @@ Loki is a multi-tenant system that supports applying limits to each tenant as a

## Context

Configuration updates to tenant limits can be applied to Loki without restart via the [`runtime_config`]({{< relref "../configure#runtime_config" >}}) feature.
Configuration updates to tenant limits can be applied to Loki without restart via the [`runtime_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#runtime_config) feature.

## Example

6 changes: 3 additions & 3 deletions docs/sources/operations/recording-rules.md
@@ -7,7 +7,7 @@ description: Working with recording rules.

Recording rules are evaluated by the `ruler` component. Each `ruler` acts as its own `querier`, in the sense that it
executes queries against the store without using the `query-frontend` or `querier` components. It will respect all query
[limits]({{< relref "../configure#limits_config" >}}) put in place for the `querier`.
[limits](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config) put in place for the `querier`.

Loki's implementation of recording rules largely reuses Prometheus' code.
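For orientation, a minimal `ruler` block sketch with remote-write enabled — the directory paths and Prometheus URL are placeholders, and field names (for example `client` versus `clients`) should be verified against the ruler reference for your Loki version:

```yaml
ruler:
  wal:
    dir: /loki/ruler-wal            # hypothetical WAL directory
  storage:
    type: local
    local:
      directory: /etc/loki/rules    # hypothetical rules directory
  remote_write:
    enabled: true
    client:
      url: http://prometheus.example:9090/api/v1/write
```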

@@ -70,8 +70,8 @@ so a `Persistent Volume` should be utilised.
### Per-Tenant Limits

Remote-write can be configured at a global level in the base configuration, and certain parameters tuned specifically on
a per-tenant basis. Most of the configuration options [defined here]({{< relref "../configure#ruler" >}})
have [override options]({{< relref "../configure#limits_config" >}}) (which can be also applied at runtime!).
a per-tenant basis. Most of the configuration options [defined here](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#ruler)
have [override options](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config) (which can also be applied at runtime!).
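As a sketch, a per-tenant override in the runtime configuration file might look like the following — the tenant name is hypothetical and the exact override field names should be confirmed against the `limits_config` reference:

```yaml
overrides:
  tenant-a:                           # hypothetical tenant ID
    ruler_remote_write_disabled: true
    ruler_max_rules_per_rule_group: 20
```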

### Tuning

6 changes: 3 additions & 3 deletions docs/sources/operations/storage/boltdb-shipper.md
@@ -105,14 +105,14 @@ Within Kubernetes, if you are not using an Index Gateway, we recommend running Q
An Index Gateway downloads and synchronizes the BoltDB index from the Object Storage in order to serve index queries to the Queriers and Rulers over gRPC.
This avoids running Queriers and Rulers with a disk for persistence. Disks can become costly in a big cluster.

To run an Index Gateway, configure [StorageConfig]({{< relref "../../configure#storage_config" >}}) and set the `-target` CLI flag to `index-gateway`.
To connect Queriers and Rulers to the Index Gateway, set the address (with gRPC port) of the Index Gateway with the `-boltdb.shipper.index-gateway-client.server-address` CLI flag or its equivalent YAML value under [StorageConfig]({{< relref "../../configure#storage_config" >}}).
To run an Index Gateway, configure [StorageConfig](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#storage_config) and set the `-target` CLI flag to `index-gateway`.
To connect Queriers and Rulers to the Index Gateway, set the address (with gRPC port) of the Index Gateway with the `-boltdb.shipper.index-gateway-client.server-address` CLI flag or its equivalent YAML value under [StorageConfig](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#storage_config).
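A sketch of the querier/ruler side of that wiring, assuming the YAML equivalent of the flag lives under `boltdb_shipper.index_gateway_client` — the gateway address is a placeholder:

```yaml
storage_config:
  boltdb_shipper:
    index_gateway_client:
      server_address: index-gateway.loki.svc.cluster.local:9095   # gRPC address of the Index Gateway
```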

When using the Index Gateway within Kubernetes, we recommend using a StatefulSet with persistent storage for downloading and querying index files. This can obtain better read performance, avoids [noisy neighbor problems](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors) by not using the node disk, and avoids the time consuming index downloading step on startup after rescheduling to a new node.

### Write Deduplication disabled

Loki does write deduplication of chunks and index using Chunks and WriteDedupe cache respectively, configured with [ChunkStoreConfig]({{< relref "../../configure#chunk_store_config" >}}).
Loki does write deduplication of chunks and index using Chunks and WriteDedupe cache respectively, configured with [ChunkStoreConfig](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#chunk_store_config).
The problem with write deduplication when using `boltdb-shipper`, though, is that ingesters only upload boltdb files periodically to make them available to the other services, which means there is a brief window during which some services have not yet received the updated index.
The consequence is that if the ingester which first wrote the chunks and index goes down, and all the other ingesters in the replication set skipped writing those chunks and index because of deduplication, those logs would be missing from query responses, because the only ingester that had the index is the one that went down.
This problem can occur even during rollouts, which are quite common.
26 changes: 26 additions & 0 deletions docs/sources/operations/storage/schema/_index.md
@@ -12,6 +12,32 @@ Loki uses the defined schemas to determine which format to use when storing and

Use of a schema allows Loki to iterate over the storage layer without requiring migration of existing data.

## New Loki installs

For a new Loki install with no previous data, here is an example schema configuration with recommended values:

```
schema_config:
configs:
- from: 2024-04-01
object_store: s3
store: tsdb
schema: v13
index:
prefix: index_
period: 24h
```


| Property | Description |
|--------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
| from | For a new install, this must be a date in the past; use a recent date. Format is YYYY-MM-DD. |
| object_store | s3, azure, gcs, alibabacloud, bos, cos, swift, filesystem, or a named_store (see [StorageConfig](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#storage_config)). |
| store | `tsdb` is the current and only recommended value for store. |
| schema | `v13` is the most recent schema and the recommended value. |
| prefix | Any value without spaces is acceptable. |
| period | Must be `24h`. |


## Changing the schema

Here are items to consider when changing the schema; if schema changes are not done properly, a scenario can be created which prevents data from being read.
10 changes: 5 additions & 5 deletions docs/sources/operations/storage/table-manager/_index.md
@@ -39,7 +39,7 @@ to store chunks, are not managed by the Table Manager, and a custom bucket policy
should be set to delete old data.

For detailed information on configuring the Table Manager, refer to the
[`table_manager`]({{< relref "../../../configure#table_manager" >}})
[`table_manager`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#table_manager)
section in the Loki configuration document.


@@ -48,10 +48,10 @@ section in the Loki configuration document.
A periodic table stores the index or chunk data relative to a specific period
of time. The duration of the time range of the data stored in a single table and
its storage type is configured in the
[`schema_config`]({{< relref "../../../configure#schema_config" >}}) configuration
[`schema_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#schema_config) configuration
block.

The [`schema_config`]({{< relref "../../../configure#schema_config" >}}) can contain
The [`schema_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#schema_config) can contain
one or more `configs`. Each config, defines the storage used between the day
set in `from` (in the format `yyyy-mm-dd`) and the next config, or "now"
in the case of the last schema config entry.
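For example, a sketch with two entries, where the second schema takes over on its `from` date — the dates, stores, and schema versions are illustrative:

```yaml
schema_config:
  configs:
    - from: 2023-01-01        # applies until the next entry's from date
      store: boltdb-shipper
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h
    - from: 2024-04-01        # applies from this date until "now"
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h
```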
@@ -105,7 +105,7 @@ order to make sure that the new table is ready once the current table end
period is reached.

The `creation_grace_period` property - in the
[`table_manager`]({{< relref "../../../configure#table_manager" >}})
[`table_manager`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#table_manager)
configuration block - defines how long before a table should be created.
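A sketch of that property in context — the value shown is illustrative, not a recommendation:

```yaml
table_manager:
  creation_grace_period: 3h   # create the next period's table this far ahead of its start
```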


@@ -149,7 +149,7 @@ documentation.
A table can be active or inactive.

A table is considered **active** if the current time is within the range:
- Table start period - [`creation_grace_period`]({{< relref "../../../configure#table_manager" >}})
- Table start period - [`creation_grace_period`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#table_manager)
- Table end period + max chunk age (hardcoded to `12h`)

![active_vs_inactive_tables](./table-manager-active-vs-inactive-tables.png)
4 changes: 2 additions & 2 deletions docs/sources/operations/storage/tsdb.md
@@ -75,12 +75,12 @@ We've added a user per-tenant limit called `tsdb_max_query_parallelism` in the `

### Dynamic Query Sharding

Previously we would statically shard queries based on the index row shards configured [here]({{< relref "../../configure#period_config" >}}).
Previously we would statically shard queries based on the index row shards configured [here](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#period_config).
TSDB does Dynamic Query Sharding based on how much data a query is going to be processing.
We additionally store size(KB) and number of lines for each chunk in the TSDB index which is then used by the [Query Frontend]({{< relref "../../get-started/components#query-frontend" >}}) for planning the query.
Based on our experience from operating many Loki clusters, we have configured TSDB to aim for processing 300-600 MBs of data per query shard.
This means with TSDB we will be running more, smaller queries.

### Index Caching not required

TSDB is a compact and optimized format. Loki does not currently use an index cache for TSDB. If you are already using Loki with other index types, it is recommended to keep the index caching until all of your existing data falls out of [retention]({{< relref "./retention" >}}) or your configured `max_query_lookback` under [limits_config]({{< relref "../../configure#limits_config" >}}). After that, we suggest running without an index cache (it isn't used in TSDB).
TSDB is a compact and optimized format. Loki does not currently use an index cache for TSDB. If you are already using Loki with other index types, it is recommended to keep the index caching until all of your existing data falls out of [retention](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/retention/) or your configured `max_query_lookback` under [limits_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config). After that, we suggest running without an index cache (it isn't used in TSDB).
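As a sketch of the lookback limit referenced above (the value is illustrative):

```yaml
limits_config:
  max_query_lookback: 720h   # data older than this is not queried, so its index entries no longer need caching
```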