🌱 Reduce reconcilements #1359
Conversation
Force-pushed from 3cc9b66 to 28c641c
Two notes about the predicates; otherwise this looks good and is a step in the right direction. Would love to have some better options for finding/debugging this from controller-runtime.
Force-pushed from cc156c7 to 0cd4ab0
- Filter HCloudMachine events so that status updates of the resource don't trigger another reconcilement of HCloudMachine (see the predicate/requeue sketch after this list)
- Filter HetznerCluster events so that status updates of the resource don't trigger another reconcilement of HetznerCluster
- Filter HetznerCluster events so that unimportant status updates don't trigger a reconcilement of HCloudMachine
- Filter Machine events so that status updates of the resource don't trigger another reconcilement of HCloudMachine
- Filter Cluster events so that status updates of the resource don't trigger another reconcilement of HetznerCluster
- Increase the requeue period if a server is starting
- Increase the requeue period if a server has no status yet
- Increase the requeue period if the kube-apiserver is not yet reachable before adding it as a target to the load balancer
- Don't trigger multiple shutdown calls for one server, and increase the requeue timeout after a shutdown is triggered
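The exact predicates and durations are in the diff; as a minimal sketch of the general pattern (the function names and the 30-second period below are illustrative, not taken from the PR): status writes never bump `metadata.generation`, so an update predicate that compares generations drops status-only events, and a longer `RequeueAfter` slows down polling loops.

```go
package controllers

import (
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// IgnoreStatusOnlyUpdates drops update events in which metadata.generation did
// not change. Status updates never bump the generation, so they are filtered
// out (note that label/annotation-only changes are filtered out as well).
func IgnoreStatusOnlyUpdates() predicate.Funcs {
	return predicate.Funcs{
		UpdateFunc: func(e event.UpdateEvent) bool {
			return e.ObjectOld.GetGeneration() != e.ObjectNew.GetGeneration()
		},
	}
}

// Illustrative requeue period while a server is still starting.
const requeueServerStarting = 30 * time.Second

// Instead of requeueing almost immediately while the server boots, the
// reconciler backs off for a fixed, longer period.
func requeueWhileStarting() (ctrl.Result, error) {
	return ctrl.Result{RequeueAfter: requeueServerStarting}, nil
}
```

Such a predicate would be attached when building the controller, e.g. `.For(&infrav1.HCloudMachine{}, builder.WithPredicates(IgnoreStatusOnlyUpdates()))` (the `infrav1` alias is assumed here).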
Force-pushed from 0cd4ab0 to be23ff4
@apricote what kind of debugging do you mean here? BTW, you can get the total reconcile counts per CRD:
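(The original snippet is not preserved above.) controller-runtime exports a `controller_runtime_reconcile_total` counter per controller on the manager's metrics endpoint; a small sketch for reading it locally, assuming the endpoint has been made reachable on localhost:8080 (e.g. via `kubectl port-forward`):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Assumes the manager's metrics endpoint is reachable locally.
	resp, err := http.Get("http://localhost:8080/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print one counter line per controller, e.g.:
	// controller_runtime_reconcile_total{controller="hcloudmachine",result="success"} 42
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		if strings.HasPrefix(scanner.Text(), "controller_runtime_reconcile_total") {
			fmt.Println(scanner.Text())
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```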
I would love to have a flag in controller-runtime that gives some transparent insight into what triggers each reconcile. Not something that you want running in our prod clusters, but something that can be enabled locally (or through some diagnostics API) to figure out why it reconciles more often than expected.
One alternative would be distributed tracing for the Kubernetes API that includes any actions taken by controllers because of watches. But I guess that would be way harder and would span many more components.
@apricote good questions. I have no answer to them, so I asked here to get more insights: https://kubernetes.slack.com/archives/C02MRBMN00Z/p1720524329698439
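Until controller-runtime grows such a flag, one local workaround is a pass-through predicate that logs every event before allowing it, attached only while debugging. A sketch (the name and log fields are made up; this is not an existing controller-runtime option):

```go
package predicates

import (
	"github.com/go-logr/logr"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// LogEvents allows every event but logs it first, so a local run shows
// exactly which create/update/delete/generic events trigger reconciles.
func LogEvents(log logr.Logger) predicate.Funcs {
	return predicate.Funcs{
		CreateFunc: func(e event.CreateEvent) bool {
			log.Info("create event", "name", e.Object.GetName())
			return true
		},
		UpdateFunc: func(e event.UpdateEvent) bool {
			log.Info("update event", "name", e.ObjectNew.GetName(),
				"oldRV", e.ObjectOld.GetResourceVersion(),
				"newRV", e.ObjectNew.GetResourceVersion())
			return true
		},
		DeleteFunc: func(e event.DeleteEvent) bool {
			log.Info("delete event", "name", e.Object.GetName())
			return true
		},
		GenericFunc: func(e event.GenericEvent) bool {
			log.Info("generic event", "name", e.Object.GetName())
			return true
		},
	}
}
```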
What this PR does / why we need it:
TODOs:
Side note: I will add an issue for unit testing the functions I have added once this PR is merged.
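For reference, a unit test for a generation-comparing predicate like the hypothetical IgnoreStatusOnlyUpdates sketched earlier could look roughly like this (a ConfigMap is used only because it is a convenient client.Object to construct):

```go
package controllers

import (
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/event"
)

func TestIgnoreStatusOnlyUpdates(t *testing.T) {
	oldObj := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "demo", Generation: 1}}
	newObj := oldObj.DeepCopy()
	p := IgnoreStatusOnlyUpdates()

	// Same generation (e.g. a status-only write): the event should be dropped.
	if p.Update(event.UpdateEvent{ObjectOld: oldObj, ObjectNew: newObj}) {
		t.Error("expected update without a generation change to be filtered out")
	}

	// Bumped generation (a spec change): the event should trigger a reconcile.
	newObj.Generation = 2
	if !p.Update(event.UpdateEvent{ObjectOld: oldObj, ObjectNew: newObj}) {
		t.Error("expected update with a generation change to pass the predicate")
	}
}
```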