Redis memory leak risk in PhysicalEntityCacheUpdater #342
Comments
This is one of the RCAs of sonic-net/sonic-buildimage#20680.
The detailed RCA of this issue is in the comments of #343.
SuvarnaMeenakshi pushed a commit that referenced this issue on Dec 18, 2024
- What I did
Fixes the redis memory leak bug: #342

There is a chance that the physical_entity_updater creates subscriptions to redis and then never consumes the messages because of exceptions. The memory buffer (omem) of the redis client then grows without bound, and redis memory leaks.

The reason: all 5 physical entity cache updaters inherit from PhysicalEntityCacheUpdater. In the first update_data, they initialize the psubscription to the redis database:

    self.pub_sub_dict[db_index] = mibs.get_redis_pubsub(db, db.STATE_DB, self.get_key_pattern())

Every time update_data is called again, it gets a message from the psub and processes it:

    msg = pubsub.get_message()

Outside, in the MIBUpdater logic, update_data is called more frequently than reinit_data. A side effect is that if reinit_data fails forever, the update_counter is never reset, so update_data is never called again:

    self.update_counter = 0

So the problem is: at the beginning, the psub is created in the first update_data and everything works well, until an unrecoverable issue happens, such as PHYSICAL_ENTITY_INFO|PSU * missing from the database (a pmon issue). This makes both reinit_data and update_data fail, because both ultimately call _update_per_namespace_data, which tries to cast an empty string '' to int and raises ValueError. After that, update_data is never called again, because reinit_data never succeeds. But the previously established psubscription is still there and nobody consumes it (update_data is blocked), so the redis database memory slowly leaks.

- How I did it
Catch the exception inside the loop of reinit_data, making sure the reinit_data of every physical_entity_updater is called.
Clear pending messages and cancel the subscription in reinit_data, so messages do not accumulate in the redis subscription queue.

- How to verify it
Tested on a Cisco chassis; the memory is not leaking anymore.
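For clarity, here is a minimal sketch of the two-part fix described above, written against the redis-py PubSub API; apart from pub_sub_dict and reinit_data, the class and attribute names are illustrative placeholders, not the actual sonic-snmpagent code:

```python
import logging

import redis


class PhysicalEntityMIBSketch:
    """Illustrative stand-in for the MIB class that owns the 5 updaters."""

    def __init__(self, updaters):
        self.physical_entity_updaters = updaters

    def reinit_data(self):
        # Part 1: one failing updater must not abort the loop, so the
        # reinit_data of every physical_entity_updater still gets called.
        for updater in self.physical_entity_updaters:
            try:
                updater.reinit_data()
            except Exception:
                logging.exception("reinit_data failed for %s", type(updater).__name__)


class PhysicalEntityCacheUpdaterSketch:
    """Illustrative stand-in for PhysicalEntityCacheUpdater."""

    def __init__(self):
        # db_index -> redis-py PubSub object, created lazily by update_data
        self.pub_sub_dict = {}

    def reinit_data(self):
        # Part 2: drain any queued messages and cancel the psubscription,
        # so a stalled update_data cannot leave data accumulating in the
        # redis client output buffer (omem).
        for pubsub in self.pub_sub_dict.values():
            try:
                while pubsub.get_message():
                    pass
                pubsub.punsubscribe()
                pubsub.close()
            except redis.RedisError:
                pass
        self.pub_sub_dict.clear()
```

Since the psubscription is created lazily by update_data, cancelling it here presumably just forces a fresh subscription on the next successful update cycle; the sketch only shows the tear-down side.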
mssonicbld pushed a commit to mssonicbld/sonic-snmpagent that referenced this issue on Dec 18, 2024
mssonicbld pushed a commit that referenced this issue on Dec 18, 2024
Description
Steps to reproduce the issue:
The PHYSICAL_ENTITY_INFO|PSU 2 entry is missing or empty in STATE_DB (a pmon issue); this causes a ValueError, because the updater tries to cast an empty string to an int value.

Describe the results you received:
The redis memory gradually increases; the snmp docker container has to be restarted to clear it.
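One way to see the leak from the redis side is to watch the omem value (client output buffer) that CLIENT LIST reports for the snmp agent's pubsub connection. A small hedged sketch using redis-py, with connection parameters left as illustrative defaults:

```python
import redis

# Connection parameters are illustrative; point this at the SONiC redis instance.
r = redis.Redis(host="127.0.0.1", port=6379)

# CLIENT LIST reports omem, the output-buffer memory redis holds per client.
# A pubsub client whose omem keeps growing and never drops back indicates
# published messages that are never consumed.
for client in r.client_list():
    if int(client.get("omem", 0)) > 0:
        print(client["addr"], client["cmd"], "omem =", client["omem"])
```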
Describe the results you expected:
Redis memory should not leak.
Additional information you deem important (e.g. issue happens only occasionally):