
Query error Live tailing was stopped due to following error: undefined #11915

Open · justintaylor9 opened this issue Feb 9, 2024 · 3 comments
Labels: type/bug (Something is not working as expected), upgrade

justintaylor9 commented Feb 9, 2024

Describe the bug
We are running into a bug with Loki not allowing us to follow the logs live. This was working prior to upgrading to Loki v2.9.

Grafana v10.2.2
Loki v2.9.4

To Reproduce
Steps to reproduce the behavior:

  1. Connected Loki data source to Grafana with basic auth.
  2. Began viewing logs in "Explore" and tried to live tail them.

Expected behavior
Loki should follow the logs in realtime.

Environment:

  • Infrastructure: bare-metal VMs
  • Deployment tool: Terraform/Ansible

We are running a bare-metal simple scalable deployment with nginx load-balancers being used for tenant authentication. After upgrading to Loki v2.9 we noticed this error when tailing the logs.

Nginx configuration:

client_body_buffer_size 100M;
client_max_body_size 100m;

ssl_certificate /etc/nginx/ssl/cert.crt;
ssl_certificate_key /etc/nginx/ssl/cert.key;

auth_basic 'Tenant login';
auth_basic_user_file /etc/nginx/.htpasswd;

map $remote_user $tenant {
    "~^grafana_read$" tenant1|tenant2|tenant3;
    "~^(.+?)_(read|write)$" $1;
    default $remote_user;
}

map $remote_user $target {
    "~^.+?_(read|write)$" $1.test.loki.it.ufl.edu:443;
    default read;
}

server {
    large_client_header_buffers 8 128k;
    listen 443 ssl;
    resolver 128.227.30.254 ipv6=off;

    access_log /var/log/nginx/access.log main;

    location / {
        proxy_buffer_size 32k;
        proxy_buffering on;
        proxy_buffers 4 32k;
        proxy_connect_timeout 5;
        proxy_pass https://$target;
        proxy_read_timeout 610;
        proxy_send_timeout 610;
        proxy_set_header X-Scope-OrgID $tenant;
        proxy_set_header Connection Upgrade;
        proxy_set_header Upgrade websocket;
    }
}
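
For comparison, the WebSocket proxy settings recommended by the nginx documentation forward the client's own Upgrade header and force HTTP/1.1, rather than hard-coding Connection Upgrade / Upgrade websocket as above. A sketch only, untested against this deployment:

location / {
    proxy_pass https://$target;
    # proxied WebSockets require HTTP/1.1; nginx defaults to 1.0 upstream
    proxy_http_version 1.1;
    # pass through whatever Upgrade header the client actually sent
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Scope-OrgID $tenant;
    # keep long-lived tail connections from being cut off
    proxy_read_timeout 610;
    proxy_send_timeout 610;
}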

Any help is greatly appreciated.

@JStickler added the type/bug (Something is not working as expected) and upgrade labels on Feb 12, 2024
orlovmyk commented

+1 on that, fresh install

orlovmyk commented Feb 24, 2024

@justintaylor9 I've managed to fix the issue.

TL;DR: Check the WebSocket configuration for Grafana and look for any errors in the browser console.

More detailed answer:
I am using Grafana in a k8s cluster as part of the kube-prometheus-stack Helm chart:

apiVersion: v2
name: monitoring
version: 0.0.0
dependencies:
  - name: kube-prometheus-stack
    version: 56.9.0
    repository: https://prometheus-community.github.io/helm-charts

As the ingress controller I am using the official NGINX one, which uses nginx.org/ annotations:

apiVersion: v2
name: nginx-ingress
version: 0.0.0
dependencies:
  - name: nginx-ingress
    version: 1.1.3
    repository: https://helm.nginx.com/stable

So in my particular case, to fix this I added the following line to the ingress annotations in values.yaml of the kube-prometheus-stack Helm chart:

kube-prometheus-stack:
  grafana:
    ingress:
      enabled: true
      ingressClassName: nginx
      annotations:
        nginx.org/websocket-services: "monitoring-grafana"
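
The nginx.org/websocket-services annotation takes the name of the Kubernetes Service that must accept WebSocket traffic (here the Grafana service created by the release). If you run the community kubernetes/ingress-nginx controller instead, WebSockets are proxied by default, but its 60s default proxy read timeout can still drop long-lived tail connections; a sketch of the equivalent values.yaml, assuming the same release layout:

kube-prometheus-stack:
  grafana:
    ingress:
      enabled: true
      ingressClassName: nginx
      annotations:
        # community ingress-nginx annotations (kubernetes.io controller, not nginx.org):
        # raise proxy timeouts so live-tail sockets are not closed after 60s
        nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"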

valyala commented Nov 2, 2024

This looks like a duplicate of #7153.

P.S. If you are struggling with live tailing issues in Grafana Loki, then try live tailing in VictoriaLogs - https://docs.victoriametrics.com/victorialogs/querying/vlogscli/#live-tailing
