Commit 5a725d2
chore: [release-3.1.x] docs: Late review comments and linting (#13716)

Co-authored-by: J Stickler <[email protected]>
Authored by grafanabot and JStickler on Jul 30, 2024
1 parent 0406b75 commit 5a725d2
Showing 29 changed files with 45 additions and 45 deletions.
2 changes: 1 addition & 1 deletion docs/sources/alert/_index.md
@@ -179,7 +179,7 @@ The Ruler's Prometheus compatibility further accentuates the marriage between me

### Black box monitoring

We don't always control the source code of applications we run. Load balancers and a myriad of other components, both open source and closed third-party, support our applications while they don't expose the metrics we want. Some don't expose any metrics at all. Loki's alerting and recording rules can produce metrics and alert on the state of the system, bringing the components into our observability stack by using the logs. This is an incredibly powerful way to introduce advanced observability into legacy architectures.
We don't always control the source code of applications we run. Load balancers and a myriad of other components, both open source and closed third-party, support our applications while they don't expose the metrics we want. Some don't expose any metrics at all. The Loki alerting and recording rules can produce metrics and alert on the state of the system, bringing the components into our observability stack by using the logs. This is an incredibly powerful way to introduce advanced observability into legacy architectures.
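
For example, a minimal sketch of a ruler rule group that derives a request-rate metric and an alert from load balancer logs (the stream selector, filter, and threshold are illustrative):

```yaml
groups:
  - name: loadbalancer
    rules:
      # Recording rule: turn raw HAProxy logs into a per-cluster request-rate metric.
      - record: loadbalancer:requests:rate1m
        expr: sum by (cluster) (rate({job="haproxy"}[1m]))
      # Alerting rule: fire when the rate of HTTP 500 lines stays high for five minutes.
      - alert: LoadBalancerHighErrorRate
        expr: sum by (cluster) (rate({job="haproxy"} |= "status=500" [5m])) > 10
        for: 5m
        labels:
          severity: warning
```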

### Event alerting

4 changes: 2 additions & 2 deletions docs/sources/configure/storage.md
@@ -27,7 +27,7 @@ You can find more detailed information about all of the storage options in the [

## Single Store

Single Store refers to using object storage as the storage medium for both Loki's index as well as its data ("chunks"). There are two supported modes:
Single Store refers to using object storage as the storage medium for both the Loki index as well as its data ("chunks"). There are two supported modes:

### TSDB (recommended)

@@ -83,7 +83,7 @@ You may use any substitutable services, such as those that implement the S3 API

### Cassandra (deprecated)

Cassandra is a popular database and one of Loki's possible chunk stores and is production safe.
Cassandra is a popular database and one of the possible chunk stores for Loki and is production safe.

{{< collapse title="Title of hidden content" >}}
This storage type for chunks is deprecated and may be removed in future major versions of Loki.
8 changes: 4 additions & 4 deletions docs/sources/get-started/_index.md
@@ -9,7 +9,7 @@ description: Provides an overview of the steps for implementing Grafana Loki to

{{< youtube id="1uk8LtQqsZQ" >}}

Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

Because all Loki implementations are unique, the installation process is
different for every customer. But there are some steps in the process that
@@ -26,13 +26,13 @@ To collect logs and view your log data generally involves the following steps:
1. Deploy the [Grafana Agent](https://grafana.com/docs/agent/latest/flow/) to collect logs from your applications.
1. On Kubernetes, deploy the Grafana Agent using the Helm chart. Configure Grafana Agent to scrape logs from your Kubernetes cluster, and add your Loki endpoint details. See the following section for an example Grafana Agent Flow configuration file.
1. Add [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.).
1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki datasource](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/).
1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki data source](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/).
1. Select the [Explore feature](https://grafana.com/docs/grafana/latest/explore/) in the Grafana main menu. To [view logs in Explore](https://grafana.com/docs/grafana/latest/explore/logs-integration/):
1. Pick a time range.
1. Choose the Loki datasource.
1. Choose the Loki data source.
1. Use [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button.

**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).
**Next steps:** Learn more about the Loki query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).

## Example Grafana Agent configuration file to ship Kubernetes Pod logs to Loki

8 changes: 4 additions & 4 deletions docs/sources/get-started/architecture.md
@@ -1,7 +1,7 @@
---
title: Loki architecture
menutitle: Architecture
description: Describes Grafana Loki's architecture.
description: Describes the Grafana Loki architecture.
weight: 400
aliases:
- ../architecture/
@@ -10,8 +10,8 @@ aliases:
# Loki architecture

Grafana Loki has a microservices-based architecture and is designed to run as a horizontally scalable, distributed system.
The system has multiple components that can run separately and in parallel.
Grafana Loki's design compiles the code for all components into a single binary or Docker image.
The system has multiple components that can run separately and in parallel. The
Grafana Loki design compiles the code for all components into a single binary or Docker image.
The `-target` command-line flag controls which component(s) that binary will behave as.

To get started easily, run Grafana Loki in "single binary" mode with all components running simultaneously in one process, or in "simple scalable deployment" mode, which groups components into read, write, and backend parts.
@@ -20,7 +20,7 @@ Grafana Loki is designed to easily redeploy a cluster under a different mode as

For more information, refer to [Deployment modes]({{< relref "./deployment-modes" >}}) and [Components]({{< relref "./components" >}}).

![Loki's components](../loki_architecture_components.svg "Loki's components")
![Loki components](../loki_architecture_components.svg "Loki components")

## Storage

2 changes: 1 addition & 1 deletion docs/sources/get-started/labels/_index.md
@@ -123,7 +123,7 @@ Now instead of a regex, we could do this:

Hopefully now you are starting to see the power of labels. By using a single label, you can query many streams. By combining several different labels, you can create very flexible log queries.

Labels are the index to Loki's log data. They are used to find the compressed log content, which is stored separately as chunks. Every unique combination of label and values defines a stream, and logs for a stream are batched up, compressed, and stored as chunks.
Labels are the index to Loki log data. They are used to find the compressed log content, which is stored separately as chunks. Every unique combination of label and values defines a stream, and logs for a stream are batched up, compressed, and stored as chunks.
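
As a sketch, a Promtail-style scrape configuration such as the following (paths and label values are illustrative) yields two separate streams that differ only in the `env` label value:

```yaml
scrape_configs:
  - job_name: nginx
    static_configs:
      - targets: [localhost]
        labels:
          job: nginx
          env: prod              # stream 1: {job="nginx", env="prod"}
          __path__: /var/log/nginx/prod/*.log
      - targets: [localhost]
        labels:
          job: nginx
          env: dev               # stream 2: {job="nginx", env="dev"}
          __path__: /var/log/nginx/dev/*.log
```

Each of the two streams is batched, compressed, and stored as its own set of chunks.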

For Loki to be efficient and cost-effective, we have to use labels responsibly. The next section will explore this in more detail.

4 changes: 2 additions & 2 deletions docs/sources/get-started/overview.md
@@ -32,7 +32,7 @@ A typical Loki-based logging stack consists of 3 components:

- **Scalability** - Loki is designed for scalability, and can scale from as small as running on a Raspberry Pi to ingesting petabytes a day.
In its most common deployment, “simple scalable mode”, Loki decouples requests into separate read and write paths, so that you can independently scale them, which leads to flexible large-scale installations that can quickly adapt to meet your workload at any given time.
If needed, each of Loki's components can also be run as microservices designed to run natively within Kubernetes.
If needed, each of the Loki components can also be run as microservices designed to run natively within Kubernetes.

- **Multi-tenancy** - Loki allows multiple tenants to share a single Loki instance. With multi-tenancy, the data and requests of each tenant is completely isolated from the others.
Multi-tenancy is [configured]({{< relref "../operations/multi-tenancy" >}}) by assigning a tenant ID in the agent.
@@ -44,7 +44,7 @@ Similarly, the Loki index, because it indexes only the set of labels, is signifi
By leveraging object storage as the only data storage mechanism, Loki inherits the reliability and stability of the underlying object store. It also capitalizes on both the cost efficiency and operational simplicity of object storage over other storage mechanisms like locally attached solid state drives (SSD) and hard disk drives (HDD).
The compressed chunks, smaller index, and use of low-cost object storage, make Loki less expensive to operate.

- **LogQL, Loki's query language** - [LogQL]({{< relref "../query" >}}) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
- **LogQL, the Loki query language** - [LogQL]({{< relref "../query" >}}) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
The language also facilitates the generation of metrics from log data,
a powerful feature that goes well beyond log aggregation.

2 changes: 1 addition & 1 deletion docs/sources/get-started/quick-start.md
@@ -97,7 +97,7 @@ Once you have collected logs, you will want to view them. You can view your log
1. Use Grafana to query the Loki data source.
The test environment includes [Grafana](https://grafana.com/docs/grafana/latest/), which you can use to query and observe the sample logs generated by the flog application. You can access the Grafana cluster by navigating to [http://localhost:3000](http://localhost:3000). The Grafana instance provided with this demo has a Loki [datasource](https://grafana.com/docs/grafana/latest/datasources/loki/) already configured.
The test environment includes [Grafana](https://grafana.com/docs/grafana/latest/), which you can use to query and observe the sample logs generated by the flog application. You can access the Grafana cluster by navigating to [http://localhost:3000](http://localhost:3000). The Grafana instance provided with this demo has a Loki [data source](https://grafana.com/docs/grafana/latest/datasources/loki/) already configured.
{{< figure src="/media/docs/loki/grafana-query-builder-v2.png" caption="Grafana Explore" alt="Grafana Explore">}}
2 changes: 1 addition & 1 deletion docs/sources/operations/authentication.md
@@ -1,7 +1,7 @@
---
title: Authentication
menuTitle:
description: Describes Loki's authentication.
description: Describes Loki authentication.
weight:
---
# Authentication
2 changes: 1 addition & 1 deletion docs/sources/operations/meta-monitoring/_index.md
@@ -16,7 +16,7 @@ Loki exposes the following observability data about itself:
- **Metrics**: Loki provides a `/metrics` endpoint that exports information about Loki in Prometheus format. These metrics provide aggregated metrics of the health of your Loki cluster, allowing you to observe query response times, and so on.
- **Logs**: Loki emits a detailed log line `metrics.go` for every query, which shows query duration, number of lines returned, query throughput, the specific LogQL that was executed, chunks searched, and much more. You can use these log lines to improve and optimize your query performance.

You can also scrape Loki's logs and metrics and push them to separate instances of Loki and Mimir to provide information about the health of your Loki system (a process known as "meta-monitoring").
You can also scrape the Loki logs and metrics and push them to separate instances of Loki and Mimir to provide information about the health of your Loki system (a process known as "meta-monitoring").
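
As a sketch, a Prometheus scrape configuration for that `/metrics` endpoint could look like this (the target address assumes Loki serves its HTTP API on port 3100):

```yaml
scrape_configs:
  - job_name: loki
    static_configs:
      - targets: ['loki:3100']   # replace with your Loki host or Service address
```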

The Loki [mixin](https://github.com/grafana/loki/blob/main/production/loki-mixin) is an opinionated set of dashboards, alerts and recording rules to monitor your Loki cluster. The mixin provides a comprehensive package for monitoring Loki in production. You can install the mixin into a Grafana instance.

2 changes: 1 addition & 1 deletion docs/sources/operations/meta-monitoring/mixins.md
@@ -59,7 +59,7 @@ For an example, see [Collect and forward Prometheus metrics](https://grafana.com

## Configure Grafana

In your Grafana instance, you'll need to [create a Prometheus datasource](https://grafana.com/docs/grafana/latest/datasources/prometheus/configure-prometheus-data-source/) to visualize the metrics scraped from your Loki cluster.
In your Grafana instance, you'll need to [create a Prometheus data source](https://grafana.com/docs/grafana/latest/datasources/prometheus/configure-prometheus-data-source/) to visualize the metrics scraped from your Loki cluster.
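
If you provision data sources from files, a minimal sketch could look like this (the name and URL are illustrative and assume a Prometheus-compatible endpoint holding the scraped Loki metrics):

```yaml
apiVersion: 1
datasources:
  - name: Loki Metrics          # Prometheus-type data source holding metrics scraped from Loki
    type: prometheus
    access: proxy
    url: http://prometheus:9090
```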

## Install Loki dashboards in Grafana

2 changes: 1 addition & 1 deletion docs/sources/operations/query-fairness/_index.md
@@ -95,7 +95,7 @@ curl -s http://localhost:3100/loki/api/v1/query_range?xxx \
```

There is a limit to how deep a path and thus the queue tree can be. This is
controlled by Loki's `-query-scheduler.max-queue-hierarchy-levels` CLI argument
controlled by the Loki `-query-scheduler.max-queue-hierarchy-levels` CLI argument
or its respective YAML configuration block:

```yaml
2 changes: 1 addition & 1 deletion docs/sources/operations/recording-rules.md
@@ -11,7 +11,7 @@ Recording rules are evaluated by the `ruler` component. Each `ruler` acts as its
executes queries against the store without using the `query-frontend` or `querier` components. It will respect all query
[limits](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config) put in place for the `querier`.

Loki's implementation of recording rules largely reuses Prometheus' code.
The Loki implementation of recording rules largely reuses Prometheus' code.

Samples generated by recording rules are sent to Prometheus using Prometheus' **remote-write** feature.
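
A minimal sketch of a ruler block with remote-write enabled (the rule storage paths and remote-write URL are assumptions):

```yaml
ruler:
  storage:
    type: local
    local:
      directory: /loki/rules        # where the ruler loads rule files from
  rule_path: /tmp/loki/rules-temp   # scratch space for rule evaluation
  remote_write:
    enabled: true
    clients:
      default:
        url: http://prometheus:9090/api/v1/write
```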

2 changes: 1 addition & 1 deletion docs/sources/operations/request-validation-rate-limits.md
@@ -129,7 +129,7 @@ This validation error is returned when a stream is submitted without any labels.

The `too_far_behind` and `out_of_order` reasons are identical. Loki clusters with `unordered_writes=true` (the default value as of Loki v2.4) use `reason=too_far_behind`. Loki clusters with `unordered_writes=false` use `reason=out_of_order`.

This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki/<LOKI_VERSION>/configuration/#accept-out-of-order-writes) about Loki's ordering constraints.
This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki/<LOKI_VERSION>/configuration/#accept-out-of-order-writes) about the Loki ordering constraints.

The `unordered_writes` config value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file, whereas `max_chunk_age` is a global configuration.
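
For example, a sketch of the global setting and a per-tenant override (the tenant name is illustrative):

```yaml
# In the main Loki configuration:
limits_config:
  unordered_writes: true

# In the runtime overrides file, for a single tenant:
overrides:
  tenant-a:
    unordered_writes: false
```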

4 changes: 2 additions & 2 deletions docs/sources/operations/storage/_index.md
@@ -1,7 +1,7 @@
---
title: Manage storage
menuTitle: Storage
description: Describes Loki's storage needs and supported stores.
description: Describes the Loki storage needs and supported stores.
---
# Manage storage

@@ -17,7 +17,7 @@ they are compressed as **chunks** and saved in the chunks store. See [chunk
format](#chunk-format) for how chunks are stored internally.

The **index** stores each stream's label set and links them to the individual
chunks. Refer to Loki's [configuration](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/) for
chunks. Refer to the Loki [configuration](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/) for
details on how to configure the storage and the index.
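
As a sketch, a single-store TSDB setup on S3 combines `storage_config` and `schema_config` roughly like this (bucket, region, dates, and paths are illustrative):

```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
  aws:
    region: us-east-1
    bucketnames: my-loki-chunks

schema_config:
  configs:
    - from: 2024-04-01
      store: tsdb          # index implementation
      object_store: s3     # where both index files and chunks end up
      schema: v13
      index:
        prefix: index_
        period: 24h
```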

For more information:
2 changes: 1 addition & 1 deletion docs/sources/operations/storage/legacy-storage.md
@@ -12,7 +12,7 @@ The usage of legacy storage for new installations is highly discouraged and docu
purposes in case of upgrade to a single store.
{{% /admonition %}}

The **chunk store** is Loki's long-term data store, designed to support
The **chunk store** is the Loki long-term data store, designed to support
interactive querying and sustained writing without the need for background
maintenance tasks. It consists of:

6 changes: 3 additions & 3 deletions docs/sources/operations/storage/wal.md
@@ -32,7 +32,7 @@ You can use the Prometheus metric `loki_ingester_wal_disk_full_failures_total` t

### Backpressure

The WAL also includes a backpressure mechanism to allow a large WAL to be replayed within a smaller memory bound. This is helpful after bad scenarios (i.e. an outage) when a WAL has grown past the point it may be recovered in memory. In this case, the ingester will track the amount of data being replayed and once it's passed the `ingester.wal-replay-memory-ceiling` threshold, will flush to storage. When this happens, it's likely that Loki's attempt to deduplicate chunks via content addressable storage will suffer. We deemed this efficiency loss an acceptable tradeoff considering how it simplifies operation and that it should not occur during regular operation (rollouts, rescheduling) where the WAL can be replayed without triggering this threshold.
The WAL also includes a backpressure mechanism to allow a large WAL to be replayed within a smaller memory bound. This is helpful after bad scenarios (i.e. an outage) when a WAL has grown past the point it may be recovered in memory. In this case, the ingester will track the amount of data being replayed and once it's passed the `ingester.wal-replay-memory-ceiling` threshold, will flush to storage. When this happens, it's likely that the Loki attempt to deduplicate chunks via content addressable storage will suffer. We deemed this efficiency loss an acceptable tradeoff considering how it simplifies operation and that it should not occur during regular operation (rollouts, rescheduling) where the WAL can be replayed without triggering this threshold.
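
A minimal sketch of the related ingester WAL settings (the directory and ceiling values are illustrative):

```yaml
ingester:
  wal:
    enabled: true
    dir: /loki/wal
    replay_memory_ceiling: 4GB   # flush to storage once replay exceeds this bound
```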

### Metrics

@@ -106,15 +106,15 @@ Then you may recreate the (updated) StatefulSet and one-by-one start deleting th

#### Scaling Down Using `/flush_shutdown` Endpoint and Lifecycle Hook

1. **StatefulSets for Ordered Scaling Down**: Loki's ingesters should be scaled down one by one, which is efficiently handled by Kubernetes StatefulSets. This ensures an ordered and reliable scaling process, as described in the [Deployment and Scaling Guarantees](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) documentation.
1. **StatefulSets for Ordered Scaling Down**: The Loki ingesters should be scaled down one by one, which is efficiently handled by Kubernetes StatefulSets. This ensures an ordered and reliable scaling process, as described in the [Deployment and Scaling Guarantees](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) documentation.

2. **Using PreStop Lifecycle Hook**: During the Pod scaling down process, the PreStop [lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) triggers the `/flush_shutdown` endpoint on the ingester. This action flushes the chunks and removes the ingester from the ring, allowing it to register as unready and become eligible for deletion.

3. **Using terminationGracePeriodSeconds**: Provides time for the ingester to flush its data before being deleted. If flushing data takes more than 30 minutes, you may need to increase it.

4. **Cleaning Persistent Volumes**: Persistent volumes are automatically cleaned up by leveraging the [enableStatefulSetAutoDeletePVC](https://kubernetes.io/blog/2021/12/16/kubernetes-1-23-statefulset-pvc-auto-deletion/) feature in Kubernetes.

By following the above steps, you can ensure a smooth scaling down process for Loki's ingesters while maintaining data integrity and minimizing potential disruptions.
By following the above steps, you can ensure a smooth scaling down process for the Loki ingesters while maintaining data integrity and minimizing potential disruptions.
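
As a sketch, an ingester StatefulSet pod template could wire up the hook and grace period like this (the port and timeout value are assumptions; the path is the flush-and-shutdown endpoint referenced above):

```yaml
spec:
  terminationGracePeriodSeconds: 2400   # give the ingester time to flush before deletion
  containers:
    - name: ingester
      lifecycle:
        preStop:
          httpGet:
            path: /flush_shutdown
            port: 3100
```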

### Non-Kubernetes or baremetal deployments
