Add options to configure nodeSelector and tolerations for KRR pod on k8s #1442
Comments
Hi @AlexLov,
Oh, I somehow overlooked this page :(
All good! Any idea where you looked in the docs/github? I'll make sure we add a link so it is more discoverable.
I looked first into
Is your feature request related to a problem?
KRR pods in some of my big clusters (3000+ pods) are often killed by OOM. While I can adjust the memory request/limit of that pod, it also gets scheduled on quite packed nodes dedicated to the main workload, so raising its memory might interfere with that workload. For side workloads like monitoring and related stuff (such as robusta) I have dedicated nodes with enough resources, so they won't interfere with the main workload even if they consume all of a node's resources. I use nodeSelectors and tolerations to run all my services on these dedicated nodes and to prevent the main cluster's workload from being scheduled there.
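For context, the dedicated-node pattern described above typically combines a node label with a taint: the pod opts in with a matching nodeSelector and toleration. A minimal sketch of the pod-side settings, assuming a hypothetical `dedicated=monitoring` label and taint on those nodes (nothing Robusta or KRR defines):

```yaml
# Sketch only: the "dedicated=monitoring" label/taint and pod name are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: krr-example             # hypothetical pod name
spec:
  nodeSelector:
    dedicated: monitoring       # schedule only onto the labelled dedicated nodes
  tolerations:
    - key: dedicated            # tolerate the taint that keeps other pods off those nodes
      operator: Equal
      value: monitoring
      effect: NoSchedule
  containers:
    - name: krr
      image: example/krr:latest # placeholder image
```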
Describe the solution you'd like
Please add options to configure nodeSelector and tolerations for the KRR job, or at least let them be inherited from the robusta-runner pod itself.
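One possible shape for such an option, sketched as Helm values; the `krr` key and its fields here are hypothetical and not confirmed to exist in Robusta's chart:

```yaml
# Hypothetical values.yaml keys -- illustrative only, not Robusta's actual chart schema.
krr:
  nodeSelector:
    dedicated: monitoring
  tolerations:
    - key: dedicated
      operator: Equal
      value: monitoring
      effect: NoSchedule
```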
Describe alternatives you've considered
There are none. I also couldn't find a way to disable the KRR pod from running at all.