Describe the bug
I deployed the loki helm chart in my existing monitoring namespace. The monitoring namespace already contains the loki-distributed and mimir-distributed helm charts, so there are already quite a few (50) pods running.
After deploying the loki helm chart, I noticed that the gateway pods won't schedule:
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.
After a quick debug I noticed that the affinity rule only matches the component label:
loki/production/helm/loki/values.yaml
Lines 946 to 952 in 91a3486
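The effective selector ends up looking roughly like the sketch below (a rough reconstruction for illustration, not the exact contents of the referenced lines; the label key and topology key are assumptions based on the chart's standard Kubernetes labels):

```yaml
# Assumed shape of the rendered gateway podAntiAffinity: it selects pods
# only by the component label, so mimir-distributed's gateway pods match too.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/component: gateway
        topologyKey: kubernetes.io/hostname
```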
The issue is that this is not enough. The mimir-distributed chart has a component with a gateway label as well, resulting in "no free nodes".

To Reproduce
Steps to reproduce the behavior:
Expected behavior
The default matchLabels in podAntiAffinity should be more restrictive, e.g. similar to the serviceMatchLabels.
The gateway service has stricter labels, as you can see:
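For illustration only, the proposed restriction could look like the sketch below (the exact label keys and values are assumptions based on the common app.kubernetes.io labels, not the chart's actual templates):

```yaml
# Assumed sketch: include name/instance labels so only this release's
# gateway pods repel each other, not mimir-distributed's gateway pods.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: loki          # assumed
            app.kubernetes.io/instance: loki      # assumed release name
            app.kubernetes.io/component: gateway
        topologyKey: kubernetes.io/hostname
```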
Environment:
Screenshots, Promtail config, or terminal output
If applicable, add any output to help explain your problem.