diff --git a/docs/sources/alert/_index.md b/docs/sources/alert/_index.md index 1d99e56d1a560..e12e073c3b889 100644 --- a/docs/sources/alert/_index.md +++ b/docs/sources/alert/_index.md @@ -179,7 +179,7 @@ The Ruler's Prometheus compatibility further accentuates the marriage between me ### Black box monitoring -We don't always control the source code of applications we run. Load balancers and a myriad of other components, both open source and closed third-party, support our applications while they don't expose the metrics we want. Some don't expose any metrics at all. Loki's alerting and recording rules can produce metrics and alert on the state of the system, bringing the components into our observability stack by using the logs. This is an incredibly powerful way to introduce advanced observability into legacy architectures. +We don't always control the source code of applications we run. Load balancers and a myriad of other components, both open source and closed third-party, support our applications while they don't expose the metrics we want. Some don't expose any metrics at all. The Loki alerting and recording rules can produce metrics and alert on the state of the system, bringing the components into our observability stack by using the logs. This is an incredibly powerful way to introduce advanced observability into legacy architectures. ### Event alerting diff --git a/docs/sources/configure/storage.md b/docs/sources/configure/storage.md index a815b98f98897..27466dbc6e50c 100644 --- a/docs/sources/configure/storage.md +++ b/docs/sources/configure/storage.md @@ -27,7 +27,7 @@ You can find more detailed information about all of the storage options in the [ ## Single Store -Single Store refers to using object storage as the storage medium for both Loki's index as well as its data ("chunks"). There are two supported modes: +Single Store refers to using object storage as the storage medium for both the Loki index and its data ("chunks"). There are two supported modes: ### TSDB (recommended) @@ -83,7 +83,7 @@ You may use any substitutable services, such as those that implement the S3 API ### Cassandra (deprecated) -Cassandra is a popular database and one of Loki's possible chunk stores and is production safe. +Cassandra is a popular database and one of the possible chunk stores for Loki; it is production safe. {{< collapse title="Title of hidden content" >}} This storage type for chunks is deprecated and may be removed in future major versions of Loki. diff --git a/docs/sources/get-started/_index.md b/docs/sources/get-started/_index.md index c85f383345fbb..27b808e6d3957 100644 --- a/docs/sources/get-started/_index.md +++ b/docs/sources/get-started/_index.md @@ -9,7 +9,7 @@ description: Provides an overview of the steps for implementing Grafana Loki to {{< youtube id="1uk8LtQqsZQ" >}} -Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream. +Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream. Because all Loki implementations are unique, the installation process is different for every customer.
But there are some steps in the process that @@ -26,13 +26,13 @@ To collect logs and view your log data generally involves the following steps: 1. Deploy the [Grafana Agent](https://grafana.com/docs/agent/latest/flow/) to collect logs from your applications. 1. On Kubernetes, deploy the Grafana Agent using the Helm chart. Configure Grafana Agent to scrape logs from your Kubernetes cluster, and add your Loki endpoint details. See the following section for an example Grafana Agent Flow configuration file. 1. Add [labels](https://grafana.com/docs/loki//get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki//get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.). -1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki datasource](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/). +1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki data source](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/). 1. Select the [Explore feature](https://grafana.com/docs/grafana/latest/explore/) in the Grafana main menu. To [view logs in Explore](https://grafana.com/docs/grafana/latest/explore/logs-integration/): 1. Pick a time range. - 1. Choose the Loki datasource. + 1. Choose the Loki data source. 1. Use [LogQL](https://grafana.com/docs/loki//query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button. -**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki//query/). +**Next steps:** Learn more about the Loki query language, [LogQL](https://grafana.com/docs/loki//query/). ## Example Grafana Agent configuration file to ship Kubernetes Pod logs to Loki diff --git a/docs/sources/get-started/architecture.md b/docs/sources/get-started/architecture.md index 9caeb717144bd..42b81232b9886 100644 --- a/docs/sources/get-started/architecture.md +++ b/docs/sources/get-started/architecture.md @@ -1,7 +1,7 @@ --- title: Loki architecture menutitle: Architecture -description: Describes Grafana Loki's architecture. +description: Describes the Grafana Loki architecture. weight: 400 aliases: - ../architecture/ @@ -10,8 +10,8 @@ aliases: # Loki architecture Grafana Loki has a microservices-based architecture and is designed to run as a horizontally scalable, distributed system. -The system has multiple components that can run separately and in parallel. -Grafana Loki's design compiles the code for all components into a single binary or Docker image. +The system has multiple components that can run separately and in parallel. The +Grafana Loki design compiles the code for all components into a single binary or Docker image. The `-target` command-line flag controls which component(s) that binary will behave as. To get started easily, run Grafana Loki in "single binary" mode with all components running simultaneously in one process, or in "simple scalable deployment" mode, which groups components into read, write, and backend parts. 
@@ -20,7 +20,7 @@ Grafana Loki is designed to easily redeploy a cluster under a different mode as For more information, refer to [Deployment modes]({{< relref "./deployment-modes" >}}) and [Components]({{< relref "./components" >}}). -![Loki's components](../loki_architecture_components.svg "Loki's components") +![Loki components](../loki_architecture_components.svg "Loki components") ## Storage diff --git a/docs/sources/get-started/labels/_index.md b/docs/sources/get-started/labels/_index.md index db918450bd9e1..96625b2b13ab6 100644 --- a/docs/sources/get-started/labels/_index.md +++ b/docs/sources/get-started/labels/_index.md @@ -123,7 +123,7 @@ Now instead of a regex, we could do this: Hopefully now you are starting to see the power of labels. By using a single label, you can query many streams. By combining several different labels, you can create very flexible log queries. -Labels are the index to Loki's log data. They are used to find the compressed log content, which is stored separately as chunks. Every unique combination of label and values defines a stream, and logs for a stream are batched up, compressed, and stored as chunks. +Labels are the index to Loki log data. They are used to find the compressed log content, which is stored separately as chunks. Every unique combination of label and values defines a stream, and logs for a stream are batched up, compressed, and stored as chunks. For Loki to be efficient and cost-effective, we have to use labels responsibly. The next section will explore this in more detail. diff --git a/docs/sources/get-started/overview.md b/docs/sources/get-started/overview.md index 1194398c38f0c..15fc20f330f2c 100644 --- a/docs/sources/get-started/overview.md +++ b/docs/sources/get-started/overview.md @@ -32,7 +32,7 @@ A typical Loki-based logging stack consists of 3 components: - **Scalability** - Loki is designed for scalability, and can scale from as small as running on a Raspberry Pi to ingesting petabytes a day. In its most common deployment, “simple scalable mode”, Loki decouples requests into separate read and write paths, so that you can independently scale them, which leads to flexible large-scale installations that can quickly adapt to meet your workload at any given time. -If needed, each of Loki's components can also be run as microservices designed to run natively within Kubernetes. +If needed, each of the Loki components can also be run as microservices designed to run natively within Kubernetes. - **Multi-tenancy** - Loki allows multiple tenants to share a single Loki instance. With multi-tenancy, the data and requests of each tenant is completely isolated from the others. Multi-tenancy is [configured]({{< relref "../operations/multi-tenancy" >}}) by assigning a tenant ID in the agent. @@ -44,7 +44,7 @@ Similarly, the Loki index, because it indexes only the set of labels, is signifi By leveraging object storage as the only data storage mechanism, Loki inherits the reliability and stability of the underlying object store. It also capitalizes on both the cost efficiency and operational simplicity of object storage over other storage mechanisms like locally attached solid state drives (SSD) and hard disk drives (HDD). The compressed chunks, smaller index, and use of low-cost object storage, make Loki less expensive to operate. -- **LogQL, Loki's query language** - [LogQL]({{< relref "../query" >}}) is the query language for Loki. 
Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs. +- **LogQL, the Loki query language** - [LogQL]({{< relref "../query" >}}) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs. The language also facilitates the generation of metrics from log data, a powerful feature that goes well beyond log aggregation. diff --git a/docs/sources/get-started/quick-start.md b/docs/sources/get-started/quick-start.md index f459e564092e1..c6f6dcf4d21d7 100644 --- a/docs/sources/get-started/quick-start.md +++ b/docs/sources/get-started/quick-start.md @@ -97,7 +97,7 @@ Once you have collected logs, you will want to view them. You can view your log 1. Use Grafana to query the Loki data source. - The test environment includes [Grafana](https://grafana.com/docs/grafana/latest/), which you can use to query and observe the sample logs generated by the flog application. You can access the Grafana cluster by navigating to [http://localhost:3000](http://localhost:3000). The Grafana instance provided with this demo has a Loki [datasource](https://grafana.com/docs/grafana/latest/datasources/loki/) already configured. + The test environment includes [Grafana](https://grafana.com/docs/grafana/latest/), which you can use to query and observe the sample logs generated by the flog application. You can access the Grafana cluster by navigating to [http://localhost:3000](http://localhost:3000). The Grafana instance provided with this demo has a Loki [data source](https://grafana.com/docs/grafana/latest/datasources/loki/) already configured. {{< figure src="/media/docs/loki/grafana-query-builder-v2.png" caption="Grafana Explore" alt="Grafana Explore">}} diff --git a/docs/sources/operations/authentication.md b/docs/sources/operations/authentication.md index 96081dbab52e7..11949a1a9811a 100644 --- a/docs/sources/operations/authentication.md +++ b/docs/sources/operations/authentication.md @@ -1,7 +1,7 @@ --- title: Authentication menuTitle: -description: Describes Loki's authentication. +description: Describes Loki authentication. weight: --- # Authentication diff --git a/docs/sources/operations/meta-monitoring/_index.md b/docs/sources/operations/meta-monitoring/_index.md index eceeab0e648f3..7b90955ef2ad4 100644 --- a/docs/sources/operations/meta-monitoring/_index.md +++ b/docs/sources/operations/meta-monitoring/_index.md @@ -16,7 +16,7 @@ Loki exposes the following observability data about itself: - **Metrics**: Loki provides a `/metrics` endpoint that exports information about Loki in Prometheus format. These metrics provide aggregated metrics of the health of your Loki cluster, allowing you to observe query response times, etc etc. - **Logs**: Loki emits a detailed log line `metrics.go` for every query, which shows query duration, number of lines returned, query throughput, the specific LogQL that was executed, chunks searched, and much more. You can use these log lines to improve and optimize your query performance. -You can also scrape Loki's logs and metrics and push them to separate instances of Loki and Mimir to provide information about the health of your Loki system (a process known as "meta-monitoring"). 
+You can also scrape the Loki logs and metrics and push them to separate instances of Loki and Mimir to provide information about the health of your Loki system (a process known as "meta-monitoring"). The Loki [mixin](https://github.com/grafana/loki/blob/main/production/loki-mixin) is an opinionated set of dashboards, alerts and recording rules to monitor your Loki cluster. The mixin provides a comprehensive package for monitoring Loki in production. You can install the mixin into a Grafana instance. diff --git a/docs/sources/operations/meta-monitoring/mixins.md b/docs/sources/operations/meta-monitoring/mixins.md index 166f2e97fea3a..a4a819c4e3d28 100644 --- a/docs/sources/operations/meta-monitoring/mixins.md +++ b/docs/sources/operations/meta-monitoring/mixins.md @@ -59,7 +59,7 @@ For an example, see [Collect and forward Prometheus metrics](https://grafana.com ## Configure Grafana -In your Grafana instance, you'll need to [create a Prometheus datasource](https://grafana.com/docs/grafana/latest/datasources/prometheus/configure-prometheus-data-source/) to visualize the metrics scraped from your Loki cluster. +In your Grafana instance, you'll need to [create a Prometheus data source](https://grafana.com/docs/grafana/latest/datasources/prometheus/configure-prometheus-data-source/) to visualize the metrics scraped from your Loki cluster. ## Install Loki dashboards in Grafana diff --git a/docs/sources/operations/query-fairness/_index.md b/docs/sources/operations/query-fairness/_index.md index 44b3c15f8f9ad..79c569d5de723 100644 --- a/docs/sources/operations/query-fairness/_index.md +++ b/docs/sources/operations/query-fairness/_index.md @@ -95,7 +95,7 @@ curl -s http://localhost:3100/loki/api/v1/query_range?xxx \ ``` There is a limit to how deep a path and thus the queue tree can be. This is -controlled by Loki's `-query-scheduler.max-queue-hierarchy-levels` CLI argument +controlled by the Loki `-query-scheduler.max-queue-hierarchy-levels` CLI argument or its respective YAML configuration block: ```yaml diff --git a/docs/sources/operations/recording-rules.md b/docs/sources/operations/recording-rules.md index 2254510daf7ee..fd5b8ae6cd5b5 100644 --- a/docs/sources/operations/recording-rules.md +++ b/docs/sources/operations/recording-rules.md @@ -11,7 +11,7 @@ Recording rules are evaluated by the `ruler` component. Each `ruler` acts as its executes queries against the store without using the `query-frontend` or `querier` components. It will respect all query [limits](https://grafana.com/docs/loki//configure/#limits_config) put in place for the `querier`. -Loki's implementation of recording rules largely reuses Prometheus' code. +The Loki implementation of recording rules largely reuses Prometheus' code. Samples generated by recording rules are sent to Prometheus using Prometheus' **remote-write** feature. diff --git a/docs/sources/operations/request-validation-rate-limits.md b/docs/sources/operations/request-validation-rate-limits.md index c5472beac3757..6d67b3d26d2c0 100644 --- a/docs/sources/operations/request-validation-rate-limits.md +++ b/docs/sources/operations/request-validation-rate-limits.md @@ -129,7 +129,7 @@ This validation error is returned when a stream is submitted without any labels. The `too_far_behind` and `out_of_order` reasons are identical. Loki clusters with `unordered_writes=true` (the default value as of Loki v2.4) use `reason=too_far_behind`. Loki clusters with `unordered_writes=false` use `reason=out_of_order`. 
-This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki//configuration/#accept-out-of-order-writes) about Loki's ordering constraints. +This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki//configuration/#accept-out-of-order-writes) about the Loki ordering constraints. The `unordered_writes` config value can be modified globally in the [`limits_config`](/docs/loki//configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki//configuration/#runtime-configuration-file) file, whereas `max_chunk_age` is a global configuration. diff --git a/docs/sources/operations/storage/_index.md b/docs/sources/operations/storage/_index.md index b0cea23bd43d7..74fa7620d9085 100644 --- a/docs/sources/operations/storage/_index.md +++ b/docs/sources/operations/storage/_index.md @@ -1,7 +1,7 @@ --- title: Manage storage menuTitle: Storage -description: Describes Loki's storage needs and supported stores. +description: Describes the Loki storage needs and supported stores. --- # Manage storage @@ -17,7 +17,7 @@ they are compressed as **chunks** and saved in the chunks store. See [chunk format](#chunk-format) for how chunks are stored internally. The **index** stores each stream's label set and links them to the individual -chunks. Refer to Loki's [configuration](https://grafana.com/docs/loki//configure/) for +chunks. Refer to the Loki [configuration](https://grafana.com/docs/loki//configure/) for details on how to configure the storage and the index. For more information: diff --git a/docs/sources/operations/storage/legacy-storage.md b/docs/sources/operations/storage/legacy-storage.md index 66a3f76075f13..5ec0859833655 100644 --- a/docs/sources/operations/storage/legacy-storage.md +++ b/docs/sources/operations/storage/legacy-storage.md @@ -12,7 +12,7 @@ The usage of legacy storage for new installations is highly discouraged and docu purposes in case of upgrade to a single store. {{% /admonition %}} -The **chunk store** is Loki's long-term data store, designed to support +The **chunk store** is the Loki long-term data store, designed to support interactive querying and sustained writing without the need for background maintenance tasks. It consists of: diff --git a/docs/sources/operations/storage/wal.md b/docs/sources/operations/storage/wal.md index 2bf9010c948bc..be3761eff02f2 100644 --- a/docs/sources/operations/storage/wal.md +++ b/docs/sources/operations/storage/wal.md @@ -32,7 +32,7 @@ You can use the Prometheus metric `loki_ingester_wal_disk_full_failures_total` t ### Backpressure -The WAL also includes a backpressure mechanism to allow a large WAL to be replayed within a smaller memory bound. This is helpful after bad scenarios (i.e. an outage) when a WAL has grown past the point it may be recovered in memory. In this case, the ingester will track the amount of data being replayed and once it's passed the `ingester.wal-replay-memory-ceiling` threshold, will flush to storage. When this happens, it's likely that Loki's attempt to deduplicate chunks via content addressable storage will suffer. We deemed this efficiency loss an acceptable tradeoff considering how it simplifies operation and that it should not occur during regular operation (rollouts, rescheduling) where the WAL can be replayed without triggering this threshold. +The WAL also includes a backpressure mechanism to allow a large WAL to be replayed within a smaller memory bound. 
This is helpful after bad scenarios (for example, an outage) when a WAL has grown past the point it may be recovered in memory. In this case, the ingester will track the amount of data being replayed and, once it has passed the `ingester.wal-replay-memory-ceiling` threshold, will flush to storage. When this happens, it's likely that the ability of Loki to deduplicate chunks via content addressable storage will suffer. We deemed this efficiency loss an acceptable tradeoff considering how it simplifies operation and that it should not occur during regular operation (rollouts, rescheduling) where the WAL can be replayed without triggering this threshold. ### Metrics @@ -106,7 +106,7 @@ Then you may recreate the (updated) StatefulSet and one-by-one start deleting th #### Scaling Down Using `/flush_shutdown` Endpoint and Lifecycle Hook -1. **StatefulSets for Ordered Scaling Down**: Loki's ingesters should be scaled down one by one, which is efficiently handled by Kubernetes StatefulSets. This ensures an ordered and reliable scaling process, as described in the [Deployment and Scaling Guarantees](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) documentation. +1. **StatefulSets for Ordered Scaling Down**: The Loki ingesters should be scaled down one by one, which is efficiently handled by Kubernetes StatefulSets. This ensures an ordered and reliable scaling process, as described in the [Deployment and Scaling Guarantees](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) documentation. 2. **Using PreStop Lifecycle Hook**: During the Pod scaling down process, the PreStop [lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) triggers the `/flush_shutdown` endpoint on the ingester. This action flushes the chunks and removes the ingester from the ring, allowing it to register as unready and become eligible for deletion. @@ -114,7 +114,7 @@ Then you may recreate the (updated) StatefulSet and one-by-one start deleting th 4. **Cleaning Persistent Volumes**: Persistent volumes are automatically cleaned up by leveraging the [enableStatefulSetAutoDeletePVC](https://kubernetes.io/blog/2021/12/16/kubernetes-1-23-statefulset-pvc-auto-deletion/) feature in Kubernetes. -By following the above steps, you can ensure a smooth scaling down process for Loki's ingesters while maintaining data integrity and minimizing potential disruptions. +By following the above steps, you can ensure a smooth scaling down process for the Loki ingesters while maintaining data integrity and minimizing potential disruptions. ### Non-Kubernetes or baremetal deployments diff --git a/docs/sources/operations/zone-ingesters.md b/docs/sources/operations/zone-ingesters.md index ded92065b2255..7467f16ca09f3 100644 --- a/docs/sources/operations/zone-ingesters.md +++ b/docs/sources/operations/zone-ingesters.md @@ -7,7 +7,7 @@ weight: # Zone aware ingesters -Loki's zone aware ingesters are used by Grafana Labs in order to allow for easier rollouts of large Loki deployments. You can think of them as three logical zones, however with some extra Kubernetes configuration you could deploy them in separate zones. +The Loki zone aware ingesters are used by Grafana Labs to allow for easier rollouts of large Loki deployments. You can think of them as three logical zones; however, with some extra Kubernetes configuration you could deploy them in separate zones.
By default, an incoming log stream's logs are replicated to 3 random ingesters. Except in the case of some replica scaling up or down, a given stream will always be replicated to the same 3 ingesters. This means that if one of those ingesters is restarted no data is lost. However two or more ingesters restarting can result in data loss and also impacts the systems ability to ingest logs because of an unhealthy ring status. diff --git a/docs/sources/query/logcli.md b/docs/sources/query/logcli.md index 0ab4deae5c586..9a7d5b18a6d09 100644 --- a/docs/sources/query/logcli.md +++ b/docs/sources/query/logcli.md @@ -1,7 +1,7 @@ --- title: LogCLI menuTItle: -description: Describes LogCLI, Grafana Loki's command-line interface. +description: Describes LogCLI, the Grafana Loki command-line interface. aliases: - ../getting-started/logcli/ - ../tools/logcli/ diff --git a/docs/sources/release-notes/v3-1.md b/docs/sources/release-notes/v3-1.md index d67370a4acae2..ab4f0f7c3c999 100644 --- a/docs/sources/release-notes/v3-1.md +++ b/docs/sources/release-notes/v3-1.md @@ -146,7 +146,7 @@ Out of an abundance of caution, we advise that users with Loki or Grafana Enterp - **mixins:** Fix compactor matcher in the loki-deletion dashboard ([#12790](https://github.com/grafana/loki/issues/12790)) ([a03846b](https://github.com/grafana/loki/commit/a03846b4367cbb5a0aa445e539d92ae41e3f481a)). - **mixin:** Mixin generation when cluster label is changed ([#12613](https://github.com/grafana/loki/issues/12613)) ([1ba7a30](https://github.com/grafana/loki/commit/1ba7a303566610363c0c36c87e7bc6bb492dfc93)). - **mixin:** dashboards $__auto fix ([#12707](https://github.com/grafana/loki/issues/12707)) ([91ef72f](https://github.com/grafana/loki/commit/91ef72f742fe1f8621af15d8190c5c0d4d613ab9)). -- **mixins:** Add missing log datasource on loki-deletion ([#13011](https://github.com/grafana/loki/issues/13011)) ([1948899](https://github.com/grafana/loki/commit/1948899999107e7f27f4b9faace64942abcdb41f)). +- **mixins:** Add missing log data source on loki-deletion ([#13011](https://github.com/grafana/loki/issues/13011)) ([1948899](https://github.com/grafana/loki/commit/1948899999107e7f27f4b9faace64942abcdb41f)). - **mixins:** Align loki-writes mixins with loki-reads ([#13022](https://github.com/grafana/loki/issues/13022)) ([757b776](https://github.com/grafana/loki/commit/757b776de39bf0fc0c6d1dd74e4a245d7a99023a)). - **mixins:** Remove unnecessary disk panels for SSD read path ([#13014](https://github.com/grafana/loki/issues/13014)) ([8d9fb68](https://github.com/grafana/loki/commit/8d9fb68ae5d4f26ddc2ae184a1cb6a3b2a2c2127)). - **mixins:** Upgrade old plugin for the loki-operational dashboard. ([#13016](https://github.com/grafana/loki/issues/13016)) ([d3c9cec](https://github.com/grafana/loki/commit/d3c9cec22891b45ed1cb93a9eacc5dad6a117fc5)). diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index f75cfcc72ac74..3221f5489f917 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -278,7 +278,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op ### Write OpenTelemetry logs to Loki -Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. 
We will use this exporter to send the logs to Loki's native OTLP endpoint. +Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint. Finally, add the following configuration to the `config.alloy` file: ```alloy diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 5435dc57435eb..1161ba160a3d5 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -17,7 +17,7 @@ killercoda: Alloy natively supports receiving logs in the OpenTelemetry format. This allows you to send logs from applications instrumented with OpenTelemetry to Alloy, which can then be sent to Loki for storage and visualization in Grafana. In this example, we will make use of 3 Alloy components to achieve this: - **OpenTelemetry Receiver:** This component will receive logs in the OpenTelemetry format via HTTP and gRPC. - **OpenTelemetry Processor:** This component will accept telemetry data from other `otelcol.*` components and place them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. -- **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*` components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. +- **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*` components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint. @@ -167,7 +167,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op ### Export logs to Loki using a OpenTelemetry Exporter -Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other `otelcol` components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. +Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other `otelcol` components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint. Now add the following configuration to the `config.alloy` file: ```alloy diff --git a/docs/sources/send-data/fluentd/_index.md b/docs/sources/send-data/fluentd/_index.md index a42cf4d49b142..59ee61efae175 100644 --- a/docs/sources/send-data/fluentd/_index.md +++ b/docs/sources/send-data/fluentd/_index.md @@ -30,7 +30,7 @@ fluent-gem install fluent-plugin-grafana-loki The Docker image `grafana/fluent-plugin-loki:main` contains [default configuration files](https://github.com/grafana/loki/tree/main/clients/cmd/fluentd/docker/conf). By default, fluentd containers use that default configuration. You can instead specify your `fluentd.conf` configuration file with a `FLUENTD_CONF` environment variable. -This image also uses `LOKI_URL`, `LOKI_USERNAME`, and `LOKI_PASSWORD` environment variables to specify the Loki's endpoint, user, and password (you can leave the USERNAME and PASSWORD blank if they're not used). 
+This image also uses `LOKI_URL`, `LOKI_USERNAME`, and `LOKI_PASSWORD` environment variables to specify the Loki endpoint, user, and password (you can leave the USERNAME and PASSWORD blank if they're not used). This image starts an instance of Fluentd that forwards incoming logs to the specified Loki URL. As an alternate, containerized applications can also use [docker driver plugin]({{< relref "../docker-driver" >}}) to ship logs without needing Fluentd. diff --git a/docs/sources/send-data/otel/native_otlp_vs_loki_exporter.md b/docs/sources/send-data/otel/native_otlp_vs_loki_exporter.md index caad222ba27d9..f2b5fa9f70a65 100644 --- a/docs/sources/send-data/otel/native_otlp_vs_loki_exporter.md +++ b/docs/sources/send-data/otel/native_otlp_vs_loki_exporter.md @@ -103,5 +103,5 @@ Taking the above-ingested log line, let us look at how the querying experience w ## What do you need to do to switch from LokiExporter to native OTel ingestion format? -- Point your OpenTelemetry Collector to Loki's native OTel ingestion endpoint as explained [here](https://grafana.com/docs/loki//send-data/otel/#loki-configuration). +- Point your OpenTelemetry Collector to the Loki native OTel ingestion endpoint as explained [here](https://grafana.com/docs/loki//send-data/otel/#loki-configuration). - Rewrite your LogQL queries in various places, including dashboards, alerts, starred queries in Grafana Explore, etc. to query OTel logs as per the new format. diff --git a/docs/sources/send-data/promtail/stages/limit.md b/docs/sources/send-data/promtail/stages/limit.md index e7a85f13bcd3a..c4612431b0f2a 100644 --- a/docs/sources/send-data/promtail/stages/limit.md +++ b/docs/sources/send-data/promtail/stages/limit.md @@ -14,7 +14,7 @@ The `limit` stage is a rate-limiting stage that throttles logs based on several ## Limit stage schema This pipeline stage places limits on the rate or burst quantity of log lines that Promtail pushes to Loki. -The concept of having distinct burst and rate limits mirrors the approach to limits that can be set for Loki's distributor component: `ingestion_rate_mb` and `ingestion_burst_size_mb`, as defined in [limits_config](https://grafana.com/docs/loki//configure/#limits_config). +The concept of having distinct burst and rate limits mirrors the approach to limits that can be set for the Loki distributor component: `ingestion_rate_mb` and `ingestion_burst_size_mb`, as defined in [limits_config](https://grafana.com/docs/loki//configure/#limits_config). ```yaml limit: diff --git a/docs/sources/setup/install/helm/concepts.md b/docs/sources/setup/install/helm/concepts.md index ca2a626421867..16c383d4feada 100644 --- a/docs/sources/setup/install/helm/concepts.md +++ b/docs/sources/setup/install/helm/concepts.md @@ -38,7 +38,7 @@ This chart installs the [canary]({{< relref "../../../operations/loki-canary" >}} ## Gateway By default and inspired by Grafana's [Tanka setup](https://github.com/grafana/loki/blob/main/production/ksonnet/loki), the chart -installs the gateway component which is an NGINX that exposes Loki's API and automatically proxies requests to the correct +installs the gateway component which is an NGINX that exposes the Loki API and automatically proxies requests to the correct Loki components (read or write, or single instance in the case of filesystem storage). The gateway must be enabled if an Ingress is required, since the Ingress exposes the gateway only. If the gateway is enabled, Grafana and log shipping agents, such as Promtail, should be configured to use the gateway.
diff --git a/docs/sources/setup/install/istio.md b/docs/sources/setup/install/istio.md index 74febca169460..38584be41a1a6 100644 --- a/docs/sources/setup/install/istio.md +++ b/docs/sources/setup/install/istio.md @@ -29,7 +29,7 @@ When you enable istio-injection on the namespace where Loki is running, you need ### Query frontend service -Make the following modifications to the file for Loki's Query Frontend service. +Make the following modifications to the file for the Loki Query Frontend service. 1. Change the name of `grpc` port to `grpclb`. This is used by the grpc load balancing strategy which relies on SRV records. Otherwise the `querier` will not be able to reach the `query-frontend`. See https://github.com/grafana/loki/blob/0116aa61c86fa983ddcbbd5e30a2141d2e89081a/production/ksonnet/loki/common.libsonnet#L19 and @@ -67,7 +67,7 @@ spec: ### Querier service -Make the following modifications to the file for Loki's Querier service. +Make the following modifications to the file for the Loki Querier service. Set the `appProtocol` of the `grpc` service to `tcp` @@ -103,7 +103,7 @@ spec: ### Ingester service and Ingester headless service -Make the following modifications to the file for Loki's Query Ingester and Ingester Headless service. +Make the following modifications to the file for the Loki Ingester service and Ingester headless service. Set the `appProtocol` of the `grpc` port to `tcp` @@ -137,7 +137,7 @@ spec: ### Distributor service -Make the following modifications to the file for Loki's Distributor service. +Make the following modifications to the file for the Loki Distributor service. Set the `appProtocol` of the `grpc` port to `tcp` diff --git a/docs/sources/setup/install/tanka.md b/docs/sources/setup/install/tanka.md index baccd14f3a9f7..043a3895892be 100644 --- a/docs/sources/setup/install/tanka.md +++ b/docs/sources/setup/install/tanka.md @@ -49,7 +49,7 @@ Revise the YAML contents of `environments/loki/main.jsonnet`, updating these var - Update the S3 or GCS variable values, depending on your object storage type. See [storage_config](/docs/loki//configuration/#storage_config) for more configuration details. - Remove from the configuration the S3 or GCS object storage variables that are not part of your setup. - Update the Promtail configuration `container_root_path` variable's value to reflect your root path for the Docker daemon. Run `docker info | grep "Root Dir"` to acquire your root path. -- Update the `from` value in the Loki `schema_config` section to no more than 14 days prior to the current date. The `from` date represents the first day for which the `schema_config` section is valid. For example, if today is `2021-01-15`, set `from` to `2021-01-01`. This recommendation is based on Loki's default acceptance of log lines up to 14 days in the past. The `reject_old_samples_max_age` configuration variable controls the acceptance range. +- Update the `from` value in the Loki `schema_config` section to no more than 14 days prior to the current date. The `from` date represents the first day for which the `schema_config` section is valid. For example, if today is `2021-01-15`, set `from` to `2021-01-01`. This recommendation is based on the Loki default acceptance of log lines up to 14 days in the past. The `reject_old_samples_max_age` configuration variable controls the acceptance range.
```jsonnet diff --git a/docs/sources/setup/upgrade/_index.md b/docs/sources/setup/upgrade/_index.md index 2d9b8e2a1a030..53298863a1878 100644 --- a/docs/sources/setup/upgrade/_index.md +++ b/docs/sources/setup/upgrade/_index.md @@ -494,7 +494,7 @@ only in 2.8 and forward releases does the zero value disable retention. The metrics.go log line emitted for every query had an entry called `subqueries` which was intended to represent the amount a query was parallelized on execution. -In the current form it only displayed the count of subqueries generated with Loki's split by time logic and did not include counts for shards. +In the current form it only displayed the count of subqueries generated with the Loki split by time logic and did not include counts for shards. There wasn't a clean way to update subqueries to include sharding information and there is value in knowing the difference between the subqueries generated when we split by time vs sharding factors, especially now that TSDB can do dynamic sharding. diff --git a/docs/sources/visualize/grafana.md b/docs/sources/visualize/grafana.md index 9a1ba98c8fd7f..3495afc24aac5 100644 --- a/docs/sources/visualize/grafana.md +++ b/docs/sources/visualize/grafana.md @@ -33,7 +33,7 @@ Modern Grafana versions after 6.3 have built-in support for Grafana Loki and [Lo 1. To see the logs, click Explore on the sidebar, select the Loki data source in the top-left dropdown, and then choose a log stream using the Log labels button. -1. Learn more about querying by reading about Loki's query language [LogQL]({{< relref "../query/_index.md" >}}). +1. Learn more about querying by reading about the Loki query language [LogQL]({{< relref "../query/_index.md" >}}). If you would like to see an example of this live, you can try [Grafana Play's Explore feature](https://play.grafana.org/explore?schemaVersion=1&panes=%7B%22v1d%22:%7B%22datasource%22:%22ac4000ca-1959-45f5-aa45-2bd0898f7026%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22%7Bagent%3D%5C%22promtail%5C%22%7D%20%7C%3D%20%60%60%22,%22queryType%22:%22range%22,%22datasource%22:%7B%22type%22:%22loki%22,%22uid%22:%22ac4000ca-1959-45f5-aa45-2bd0898f7026%22%7D,%22editorMode%22:%22builder%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D%7D&orgId=1) @@ -43,7 +43,7 @@ search and filter for logs with Loki. ## Using Grafana Dashboards -Because Loki can be used as a built-in data source above, we can use LogQL queries based on that datasource +Because Loki can be used as a built-in data source above, we can use LogQL queries based on that data source to build complex visualizations that persist on Grafana dashboards. {{< docs/play title="Loki Example Grafana Dashboard" url="https://play.grafana.org/d/T512JVH7z/" >}}