From a834fa5ce6116d06ada51f6bd76ae77d14c7ab61 Mon Sep 17 00:00:00 2001
From: Clayton Cornell <131809008+clayton-cornell@users.noreply.github.com>
Date: Wed, 12 Jun 2024 07:25:06 -0700
Subject: [PATCH] Add alt text and fix broken image links (#6955)

* Add alt text and fix broken image links

* Match case to page title
---
 docs/sources/flow/tasks/debug.md               | 12 +++++-------
 .../flow/tasks/opentelemetry-to-lgtm-stack.md  | 11 +++++------
 .../integrations/cloudwatch-exporter-config.md | 12 ++++++------
 3 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/docs/sources/flow/tasks/debug.md b/docs/sources/flow/tasks/debug.md
index 9b5dfb2cab90..69d4090f057b 100644
--- a/docs/sources/flow/tasks/debug.md
+++ b/docs/sources/flow/tasks/debug.md
@@ -65,7 +65,7 @@ Follow these steps to debug issues with {{< param "PRODUCT_NAME" >}}:
 
 ### Home page
 
-![](../../assets/ui_home_page.png)
+![The Agent UI home page showing a table of components.](/media/docs/agent/ui_home_page.png)
 
 The home page shows a table of components defined in the configuration file and their health.
 
@@ -75,14 +75,14 @@ Click the {{< param "PRODUCT_ROOT_NAME" >}} logo to navigate back to the home pa
 
 ### Graph page
 
-![](../../assets/ui_graph_page.png)
+![The Graph page showing a graph view of components.](/media/docs/agent/ui_graph_page.png)
 
 The **Graph** page shows a graph view of components defined in the configuration file and their health.
 Clicking a component in the graph navigates to the [Component detail page](#component-detail-page) for that component.
 
 ### Component detail page
 
-![](../../assets/ui_component_detail_page.png)
+![The component detail page showing detailed information about the components.](/media/docs/agent/ui_component_detail_page.png)
 
 The component detail page shows the following information for each component:
 
@@ -95,9 +95,9 @@ The component detail page shows the following information for each component:
 
 ### Clustering page
 
-![](../../assets/ui_clustering_page.png)
+![The Clustering page showing detailed information about each cluster node.](/media/docs/agent/ui_clustering_page.png)
 
-The clustering page shows the following information for each cluster node:
+The Clustering page shows the following information for each cluster node:
 
 * The node's name.
 * The node's advertised address.
@@ -144,5 +144,3 @@ Some issues that appear to be clustering issues may be symptoms of other issues,
 for example, problems with scraping or service discovery can result in missing metrics
 for an agent that can be interpreted as a node not joining the cluster.
 {{< /admonition >}}
-
-
diff --git a/docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md b/docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md
index 031690fa1e95..632b5475c1df 100644
--- a/docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md
+++ b/docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md
@@ -165,7 +165,7 @@ loki.write "default" {
 To use Loki with basic-auth, which is required with Grafana Cloud Loki, you must configure the [loki.write](ref:loki.write) component.
 You can get the Loki configuration from the Loki **Details** page in the [Grafana Cloud Portal][]:
 
-![](../../../assets/tasks/loki-config.png)
+![The Loki Details page showing information about the Loki configuration.](/media/docs/agent/loki-config.png)
 
 ```river
 otelcol.exporter.loki "grafana_cloud_loki" {
@@ -200,7 +200,7 @@ otelcol.exporter.otlp "default" {
 To use Tempo with basic-auth, which is required with Grafana Cloud Tempo, you must use the [otelcol.auth.basic](ref:otelcol.auth.basic) component.
 You can get the Tempo configuration from the Tempo **Details** page in the [Grafana Cloud Portal][]:
 
-![](../../../assets/tasks/tempo-config.png)
+![The Tempo Details page showing information about the Tempo configuration.](/media/docs/agent/tempo-config.png)
 
 ```river
 otelcol.exporter.otlp "grafana_cloud_tempo" {
@@ -237,7 +237,7 @@ prometheus.remote_write "default" {
 To use Prometheus with basic-auth, which is required with Grafana Cloud Prometheus, you must configure the [prometheus.remote_write](ref:prometheus.remote_write) component.
 You can get the Prometheus configuration from the Prometheus **Details** page in the [Grafana Cloud Portal][]:
 
-![](../../../assets/tasks/prometheus-config.png)
+![The Prometheus Details page showing information about the Prometheus configuration.](/media/docs/agent/prometheus-config.png)
 
 ```river
 otelcol.exporter.prometheus "grafana_cloud_prometheus" {
@@ -361,9 +361,9 @@ ts=2023-05-09T09:37:15.304109Z component=otelcol.receiver.otlp.default level=inf
 ts=2023-05-09T09:37:15.304234Z component=otelcol.receiver.otlp.default level=info msg="Starting HTTP server" endpoint=0.0.0.0:4318
 ```
 
-You can now check the pipeline graphically by visiting http://localhost:12345/graph
+You can now check the pipeline graphically by visiting <http://localhost:12345/graph>
 
-![](../../../assets/tasks/otlp-lgtm-graph.png)
+![The Graph page showing a graphical representation of the pipeline.](/media/docs/agent/otlp-lgtm-graph.png)
 
 [OpenTelemetry]: https://opentelemetry.io
 [Grafana Loki]: https://grafana.com/oss/loki/
@@ -371,4 +371,3 @@ You can now check the pipeline graphically by visiting http://localhost:12345/gr
 [Grafana Cloud Portal]: https://grafana.com/docs/grafana-cloud/account-management/cloud-portal#your-grafana-cloud-stack
 [Prometheus Remote Write]: https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage
 [Grafana Mimir]: https://grafana.com/oss/mimir/
-
diff --git a/docs/sources/static/configuration/integrations/cloudwatch-exporter-config.md b/docs/sources/static/configuration/integrations/cloudwatch-exporter-config.md
index 6495625b76c8..7379895146cd 100644
--- a/docs/sources/static/configuration/integrations/cloudwatch-exporter-config.md
+++ b/docs/sources/static/configuration/integrations/cloudwatch-exporter-config.md
@@ -355,19 +355,19 @@ pick the ones you need.
 `length` controls how far back in time CloudWatch metrics are considered during each agent scrape.
 If both settings are configured, the time parameters when calling CloudWatch APIs work as follows:
 
-![](https://grafana.com/media/docs/agent/cloudwatch-period-and-length-time-model-2.png)
+![A diagram showing how the time parameters work when both period and length are configured.](/media/docs/agent/cloudwatch-period-and-length-time-model-2.png)
 
-As noted above, if there is a different `period` or `length` across multiple metrics under the same static or discovery job, 
+As noted above, if there is a different `period` or `length` across multiple metrics under the same static or discovery job,
 the minimum of all periods, and maximum of all lengths is configured.
 
-On the other hand, if `length` is not configured, both period and length settings are calculated based on
+On the other hand, if `length` isn't configured, both period and length settings are calculated based on
 the required `period` configuration attribute.
 
 If all metrics within a job (discovery or static) have the same `period` value configured, CloudWatch APIs will be
-requested for metrics from the scrape time, to `period`s seconds in the past. 
+requested for metrics from the scrape time, to `period`s seconds in the past.
 The values of these metrics are exported to Prometheus.
 
-![](https://grafana.com/media/docs/agent/cloudwatch-single-period-time-model.png)
+![A diagram showing how the time parameters work when a single period is configured.](/media/docs/agent/cloudwatch-single-period-time-model.png)
 
 On the other hand, if metrics with different `period`s are configured under an individual job, this works differently.
 First, two variables are calculated aggregating all periods: `length`, taking the maximum value of all periods, and
@@ -375,7 +375,7 @@ the new `period` value, taking the minimum of all periods. Then, CloudWatch APIs
 `now - length` to `now`, aggregating each in samples for `period` seconds. For each metric, the most recent sample is
 exported to CloudWatch.
 
-![](https://grafana.com/media/docs/agent/cloudwatch-multiple-period-time-model.png)
+![A diagram showing how the time parameters work when multiple periods are configured.](/media/docs/agent/cloudwatch-multiple-period-time-model.png)
 
 ## Supported services in discovery jobs
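
The `period` and `length` behavior described in the CloudWatch section above can be illustrated with a minimal static-mode configuration sketch. The job type, region, metric names, and the specific `period`/`length` values below are assumptions chosen for the example, not values taken from the patch; the point is only where the two attributes sit and how they interact within one discovery job.

```yaml
# Illustrative sketch only: AWS/SQS, us-east-2, and the metric names are
# assumed example values. Each metric in a job can set its own period and
# length; differing values within the same job are reconciled as described
# above (the minimum of all periods and the maximum of all lengths is used).
integrations:
  cloudwatch_exporter:
    enabled: true
    sts_region: us-east-2
    discovery:
      jobs:
        - type: AWS/SQS
          regions:
            - us-east-2
          metrics:
            - name: NumberOfMessagesSent
              period: 5m    # width of the aggregation bucket queried from CloudWatch
              length: 1h    # how far back in time each agent scrape looks
              statistics:
                - Sum
            - name: ApproximateNumberOfMessagesVisible
              period: 1m    # a different period in the same job
              length: 10m   # a different length in the same job
              statistics:
                - Average
```

With both metrics configured as above, the effective job-level settings would be a 1m period and a 1h length, matching the "minimum of all periods, maximum of all lengths" rule in the patched text.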