
[processor/k8sattributes] update internally stored pods with replicaset/deployment information #37088

Draft · wants to merge 3 commits into base: main
Conversation

@bacherfl (Contributor) commented Jan 8, 2025

Description

This PR adapts the k8sattributes processor to update its internally stored pods with information about their respective replica sets and deployments once the replica set information is received via the informer.
This situation arises during the processor's initial sync, when the replica set information for existing pods may only arrive after the pods themselves have already been received and stored internally.
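
The backfill described above can be sketched as follows. This is a minimal illustration with simplified stand-in types, not the processor's actual structs; `store`, `Pod`, `ReplicaSet`, and the field names are hypothetical:

```go
package main

import "fmt"

// Simplified stand-ins for the processor's internal records (hypothetical,
// not the real k8sattributesprocessor types).
type ReplicaSet struct {
	UID        string
	Name       string
	Deployment string // owning Deployment name, if any
}

type Pod struct {
	Name          string
	ReplicaSetUID string // owner reference captured when the pod was stored
	Deployment    string // filled in once the owning ReplicaSet is known
}

type store struct {
	ReplicaSets map[string]*ReplicaSet
	Pods        map[string]*Pod
}

// addOrUpdateReplicaSet stores the ReplicaSet and then backfills deployment
// information on any pods that arrived before their owning ReplicaSet did.
func (s *store) addOrUpdateReplicaSet(rs *ReplicaSet) {
	s.ReplicaSets[rs.UID] = rs
	for _, pod := range s.Pods {
		if pod.ReplicaSetUID == rs.UID && pod.Deployment == "" {
			pod.Deployment = rs.Deployment
		}
	}
}

func main() {
	s := &store{ReplicaSets: map[string]*ReplicaSet{}, Pods: map[string]*Pod{}}
	// Pod observed first, before its ReplicaSet: deployment still unknown.
	s.Pods["pod-1"] = &Pod{Name: "pod-1", ReplicaSetUID: "rs-uid-1"}
	// ReplicaSet arrives later via the informer; the pod gets backfilled.
	s.addOrUpdateReplicaSet(&ReplicaSet{UID: "rs-uid-1", Name: "my-app-abc", Deployment: "my-app"})
	fmt.Println(s.Pods["pod-1"].Deployment) // prints "my-app"
}
```

The trade-off discussed below is that this iterates over the whole pod cache on every ReplicaSet event, which is why the review suggests ordering the initial informer syncs instead.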

Link to tracking issue

Fixes #37056

Testing

Added unit test

Documentation

@github-actions github-actions bot added the processor/k8sattributes k8s Attributes processor label Jan 8, 2025
@bacherfl bacherfl changed the title [receiver/k8sattributes] update internally stored pods with replicaset/deployment information [processor/k8sattributes] update internally stored pods with replicaset/deployment information Jan 8, 2025
Signed-off-by: Florian Bacher <[email protected]>
@bacherfl bacherfl marked this pull request as ready for review January 8, 2025 09:50
@bacherfl bacherfl requested a review from a team as a code owner January 8, 2025 09:50
@@ -1095,6 +1096,26 @@ func (c *WatchClient) addOrUpdateReplicaSet(replicaset *apps_v1.ReplicaSet) {
c.ReplicaSets[string(replicaset.UID)] = newReplicaSet
}
c.m.Unlock()

for _, pod := range c.Pods {
@ChrsMark (Member) commented Jan 8, 2025

If we can avoid looping over the Pods cache by waiting for all of the syncs to happen first, that would be great. I left a comment back at the issue: #37056 (comment)

@bacherfl (Contributor, Author) replied

Thanks for the initial review @ChrsMark! The issue, even with wait_for_metadata, is that during the initial sync the pods already present in the cluster can arrive at the informer before their replica sets. However, there may be a way to first wait for the initial ReplicaSet informer sync to complete before proceeding with the pod informer. I will try to see if this can be done, as I'm also not quite happy with the current solution of iterating over all pods.
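
The sync ordering suggested here can be sketched with a plain synchronization primitive. This is a hypothetical simplification; the real processor would rely on the informers' own sync mechanism (e.g. client-go's WaitForCacheSync) rather than a hand-rolled channel:

```go
package main

import "fmt"

// handlePodsAfterSync blocks until the ReplicaSet sync signal fires, then
// reports that pods can be processed with complete owner metadata.
// Hypothetical sketch, not the processor's actual implementation.
func handlePodsAfterSync(rsSynced <-chan struct{}) string {
	<-rsSynced // wait for the initial ReplicaSet informer sync to complete
	return "pods processed with replicaset info"
}

func main() {
	rsSynced := make(chan struct{})
	go close(rsSynced) // simulate the ReplicaSet informer completing its initial list
	fmt.Println(handlePodsAfterSync(rsSynced))
}
```

With this ordering, every pod's owner chain can be resolved at the moment the pod is stored, so no backfill loop over the pod cache is needed.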

@bacherfl bacherfl marked this pull request as draft January 9, 2025 05:53
Successfully merging this pull request may close these issues.

Prometheus metrics missing k8s_deployment_name attribute for short period after agent restart