Question: run-time registration of handlers? #378
Comments
I am not sure that I got the idea right, so the answer may be wrong or irrelevant. But this is how I would approach it: since there are pods involved, there is a need for a pod handler. Since not all pods should be involved, we have to filter them. And since the filtering criteria are quite sophisticated, I would use callbacks:

```python
import kopf

def does_it_match(**_) -> bool:
    return True

@kopf.on.event('', 'v1', 'pods', when=does_it_match)
def pod_event(**_):
    pass
```

So, at this moment, all pods in the cluster/namespace will be intercepted. Now, we need to narrow the criteria. Since there is a selector in a CR, I would keep a global state of all selectors in memory, mapped to the original CRs they came from:

```python
from typing import Mapping, MutableMapping, Tuple

import kopf

SelectorKey = Tuple[str, str]  # (namespace, name)
SelectorLabels = Mapping[str, str]
SELECTORS: MutableMapping[SelectorKey, SelectorLabels] = {}

@kopf.on.create('zalando.org', 'v1', 'kopfexamples')
@kopf.on.resume('zalando.org', 'v1', 'kopfexamples')
@kopf.on.update('zalando.org', 'v1', 'kopfexamples')  # optionally
def cr_appears(namespace, name, spec, **_):
    key = (namespace, name)
    SELECTORS[key] = spec.get('selector', {})

@kopf.on.delete('zalando.org', 'v1', 'kopfexamples')
def cr_disappears(namespace, name, **_):
    key = (namespace, name)
    try:
        del SELECTORS[key]
    except KeyError:
        pass
```

So, at this point, we would have the data for filtering the pods. Now, I would actually do the filtering in that function from above:

```python
def does_it_match(labels: Mapping[str, str], **_) -> bool:
    # A pod matches if every label of at least one known selector matches.
    # Note: an empty selector ({}) matches every pod, since all() over an
    # empty sequence is True.
    for (namespace, name), selector_labels in SELECTORS.items():
        if all(labels.get(key) == val for key, val in selector_labels.items()):
            return True
    return False
```

Now, the pods that do not match any known selector will be silently ignored. Notice: they will still get into the sight of the operator itself, in one and only one watch-stream, but they will be filtered out at the earliest stages, with no logs produced (hence "silently").

This is the difference from your suggested approach: instead of having N watch-streams with label selectors in the URL (where N is the number of CRs with selectors), there will be one and only one watch-stream (and therefore one TCP/HTTP/API connection), seeing all the pods, picking those of our interest, and ignoring the others. This will ease the load on the API side, but will put some CPU load on the operator. The RAM footprint will be minimal, though not zero: every pod will spawn its own worker task (an asyncio.Task), to which the pod events will be routed and almost instantly ignored; but the tasks are objects too, and on a cluster with thousands of pods this can become noticeable.

The downside here is that you have to keep some state in memory: for all the CRs, or all the pods, or all of something, depending on which of them you expect to be the least memory-consuming. I am not yet sure if it is possible to solve the cross-resource communication in any other way: when an event happens on a pod, no events happen on the CRs, so we have nothing to "join". You either scan your own in-memory state, or K8s's in-memory state via the K8s API on every pod event (costly!). But the up-to-date state must be somewhere.

PS: The typing annotations are fully optional and are ignored at runtime. I just got into the habit of using them for clarity.
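As a continuation, the same `SELECTORS` mapping could also be used to find out *which* CRs a given pod event belongs to, not just whether it matches at all. A rough sketch elaborating the `pod_event` stub from above (the `find_affected_crs` helper and the handler body are illustrative, not part of kopf):

```python
from typing import List, Mapping, Tuple

import kopf

# Reuses SELECTORS and does_it_match from the snippets above.

def find_affected_crs(labels: Mapping[str, str]) -> List[Tuple[str, str]]:
    # All (namespace, name) keys of CRs whose selector matches these labels.
    return [
        key
        for key, selector_labels in SELECTORS.items()
        if all(labels.get(k) == v for k, v in selector_labels.items())
    ]

@kopf.on.event('', 'v1', 'pods', when=does_it_match)
def pod_event(namespace, name, labels, logger, **_):
    for cr_namespace, cr_name in find_affected_crs(labels):
        logger.info(f"Pod {namespace}/{name} is covered by CR {cr_namespace}/{cr_name}")
```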
I still think the dynamic handler registration is a bit more convenient, but you are right that it is less scalable than the approach you outlined. However, with your approach, it might happen that a pod is created (and generates no events afterwards) before the corresponding CR appears in the system, or vice versa, so I think the operator should store all the relevant info for both CRs and pods. This is doable, but it gets complicated when the CRD contains a namespace field in addition to the selector field. At any rate, I'm going to ignore the namespace field in my operator and go with your idea. Thank you for enlightening me.
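For completeness, here is roughly what I mean by storing the state for both sides, so that the order in which pods and CRs appear does not matter (a sketch with names of my own, ignoring the namespace field as mentioned):

```python
from typing import Mapping, MutableMapping, Tuple

import kopf

Key = Tuple[str, str]  # (namespace, name)
SELECTORS: MutableMapping[Key, Mapping[str, str]] = {}   # CR -> its selector
POD_LABELS: MutableMapping[Key, Mapping[str, str]] = {}  # pod -> its labels

def matches(selector: Mapping[str, str], labels: Mapping[str, str]) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

@kopf.on.event('', 'v1', 'pods')
def remember_pod(namespace, name, labels, **_):
    POD_LABELS[(namespace, name)] = dict(labels)
    # Configure the pod for any CRs that appeared earlier:
    for cr_key, selector in SELECTORS.items():
        if matches(selector, labels):
            ...  # configure this pod for the CR at cr_key

@kopf.on.create('zalando.org', 'v1', 'kopfexamples')
def remember_cr(namespace, name, spec, **_):
    selector = spec.get('selector', {})
    SELECTORS[(namespace, name)] = selector
    # Catch up on pods that appeared before this CR did:
    for pod_key, labels in POD_LABELS.items():
        if matches(selector, labels):
            ...  # configure the pod at pod_key for this CR
```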
Question
I have a CRD with a selector field that defines a set of pods on which my operator should configure something. Additionally, it is possible to have two custom resources configuring different parts of the same pod; that is, the intersection of the selectors can be non-empty.
Without using kopf, I would create a watch on pods for each custom resource and write an event handler for the watchers. The handler would somehow receive the event and the name of the custom resource it belongs to (see the sketch below). It seems kopf does not support this approach.
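For illustration, the non-kopf approach I have in mind looks roughly like this (a sketch using the official kubernetes Python client; `handle_pod_event` and the CR names are placeholders of mine):

```python
import threading

from kubernetes import client, config, watch

def handle_pod_event(cr_name: str, event_type: str, pod) -> None:
    ...  # configure the pod, knowing which custom resource it belongs to

def watch_pods_for_cr(cr_name: str, namespace: str, label_selector: str) -> None:
    v1 = client.CoreV1Api()
    # Stream pod events matching this CR's selector only:
    for event in watch.Watch().stream(v1.list_namespaced_pod,
                                      namespace=namespace,
                                      label_selector=label_selector):
        handle_pod_event(cr_name, event['type'], event['object'])

config.load_kube_config()
# One watcher per custom resource, each with its own label selector:
threading.Thread(target=watch_pods_for_cr,
                 args=('example-cr', 'default', 'app=example'),
                 daemon=True).start()
```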
So can you please tell me how I can implement an operator for this problem with kopf? Thank you.
(I think #317 is somewhat similar, but not applicable here.)
Checklist
Keywords
I basically read all the titles of open issues, pull requests and the documentation from cover to cover.