Describe the bug
When multiple cards are present on the system and pods request full cards (i.e. `millicores: 1000`), GAS places pods on the same node even when `podAntiAffinity` is set to prefer nodes that do not already host such pods.
To Reproduce
Configure GAS and the GPU plugin with `sharedDevNum` > 1 and resource management enabled.
Create pods requesting `millicores: 1000` and `i915: 1`, with the secondary scheduler enabled and a `podAntiAffinity` rule set.
GAS schedules the pods to cards on the same node, ignoring the `podAntiAffinity` rule.
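A minimal pod spec illustrating the reproduction setup is sketched below. The resource names follow the Intel GPU plugin conventions; the scheduler name, label selector, and image are placeholders, not the reporter's actual manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
  labels:
    app: gpu-workload        # placeholder label used by the anti-affinity term
spec:
  schedulerName: my-secondary-scheduler   # placeholder secondary scheduler name
  affinity:
    podAntiAffinity:
      # "preferred" rule: spread pods across nodes when possible
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: gpu-workload
          topologyKey: kubernetes.io/hostname
  containers:
  - name: workload
    image: busybox           # placeholder image
    resources:
      limits:
        gpu.intel.com/i915: 1
        gpu.intel.com/millicores: 1000   # requests a full card
```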
Expected behavior
GAS should honor the `podAntiAffinity` rule and schedule pods to other nodes before choosing cards on the same node. With the default scheduler, this behavior is observed.
Logs
If applicable, add the relevant logs to help explain your problem.
Please consider adding at least the logs from the kube-scheduler and telemetry-aware-scheduling pods (if installed).
kube-scheduler logs
telemetry-aware-scheduling logs
Environment (please complete the following information):
Let us know what K8s version, distribution, and if you are deploying in BM, VM, or within a Cloud provider.
Baremetal OpenShift 4.13.11
DevicePlugins v0.28
GAS v0.5.5-0-g50d1879
Additional context
If relevant, add any other context about the problem here.
GAS doesn't take pod affinity rules into consideration, and it shouldn't: a separate scheduler component (the inter-pod affinity plugin) is responsible for filtering out nodes that would violate pod affinity rules.
You mention that you are running a secondary scheduler. May I ask how you have configured your secondary scheduler plugins? The config is typically given as a command line parameter via a mounted file. If the scheduler is running with a sufficiently high log level, it will also print the config at startup.
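For comparison, a secondary scheduler that respects pod affinity would normally keep the default `InterPodAffinity` plugin enabled and add GAS as an extender. The sketch below is illustrative only; the scheduler name, extender URL, and verbs are assumptions to be checked against the actual deployment:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: my-secondary-scheduler   # placeholder; must match the pods' schedulerName
  # InterPodAffinity is part of the default plugin set; explicitly
  # disabling it here would make this scheduler ignore podAntiAffinity.
extenders:
- urlPrefix: "https://gas-service.default.svc:9001/scheduler"  # placeholder GAS endpoint
  filterVerb: filter        # illustrative verb names; verify against the GAS deployment docs
  enableHTTPS: true
  ignorable: false
```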