Some users have reported that the daemons are consuming too much memory, even when there are no jobs (memory is not freed).
I think the problem is worse in Broker and PipelineManager, but we don't know yet what is causing it. We need to do some benchmarking and code optimization. Related to #39.
Worker processes on Broker probably need to be "refreshed" (killed and restarted) after executing X jobs. Since Python does not regularly release memory back to the operating system when objects are destroyed, the simplest way to reclaim it is to kill the process.
Some time ago, all jobs were executed in fresh worker processes (see NAMD/pypln.backend@19aa104); then we shifted to the current approach: long-running worker processes (they start when Broker starts and are killed when Broker is killed). Maybe we need a solution somewhere between these two, as sketched below.
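A minimal sketch of what that middle ground could look like, assuming a plain job queue: each worker handles at most a fixed number of jobs and then exits, and a supervisor starts a fresh process in its place. `process_job`, `worker_loop`, `supervise`, and `MAX_JOBS_PER_WORKER` are hypothetical names for illustration, not the actual Broker code, and shutdown handling is omitted.

```python
from multiprocessing import Process, Queue

MAX_JOBS_PER_WORKER = 100  # hypothetical limit; tune with benchmarks


def process_job(job):
    # Placeholder for the real per-job work done by a worker.
    return job


def worker_loop(jobs):
    # Handle at most MAX_JOBS_PER_WORKER jobs, then exit so the OS can
    # reclaim whatever memory this process accumulated.
    handled = 0
    while handled < MAX_JOBS_PER_WORKER:
        job = jobs.get()  # blocks until a job is available
        process_job(job)
        handled += 1


def supervise(jobs, num_workers=4):
    # Keep num_workers processes alive, replacing each one whenever it
    # exits after reaching its job limit.
    workers = [Process(target=worker_loop, args=(jobs,))
               for _ in range(num_workers)]
    for w in workers:
        w.start()
    while True:
        for i, w in enumerate(workers):
            w.join(timeout=0.5)
            if not w.is_alive():
                workers[i] = Process(target=worker_loop, args=(jobs,))
                workers[i].start()


if __name__ == "__main__":
    job_queue = Queue()
    for n in range(1000):
        job_queue.put(n)
    supervise(job_queue)
```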
New in version 2.7: maxtasksperchild is the number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process, to enable unused resources to be freed. The default maxtasksperchild is None, which means worker processes will live as long as the pool.
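For comparison, this is roughly how maxtasksperchild would be used if the custom pool were swapped for multiprocessing.Pool; `process_job` and the limit of 100 tasks per worker are illustrative assumptions, not values from the project.

```python
from multiprocessing import Pool


def process_job(job):
    # Placeholder for the real per-job work.
    return job


if __name__ == "__main__":
    # Each worker process is retired and replaced after completing 100
    # tasks, releasing any memory it accumulated along the way.
    pool = Pool(processes=4, maxtasksperchild=100)
    results = pool.map(process_job, range(1000))
    pool.close()
    pool.join()
```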
@fccoelho, thanks! Currently I'm not using multiprocessing.Pool (I've created my own Pool class), but I'll read the documentation carefully to decide whether it's better to switch to it or not.