
Reduce memory consumption of daemons and client #41

Open
turicas opened this issue Feb 26, 2013 · 5 comments

turicas commented Feb 26, 2013

Some users have reported that the daemons are consuming too much memory, even when there are no jobs (memory is not freed).

I think the problem is worse in Broker and PipelineManager, but we don't know yet what is causing it. We need to benchmark and optimize the code.

Related to #39.

turicas commented Feb 28, 2013

Probably the worker processes on Broker need to be "refreshed" (killed and restarted) after executing X jobs. Since CPython does not reliably return memory to the operating system when objects are destroyed, the surest way to free it is to kill the process.

Some time ago, every job was executed in a fresh worker process (see NAMD/pypln.backend@19aa104); then we shifted to the current approach: long-running worker processes (started when Broker starts and killed when Broker is killed). Maybe we need a solution somewhere between these two.

turicas commented Mar 8, 2013

These projects may help:

fccoelho commented Mar 9, 2013

@turicas, see this, straight from the multiprocessing documentation:

New in version 2.7: maxtasksperchild is the number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process, to enable unused resources to be freed. The default maxtasksperchild is None, which means worker processes will live as long as the pool.

turicas commented Mar 9, 2013

@fccoelho, thanks! I'm not currently using multiprocessing.Pool (I wrote my own Pool class), but I'll read the documentation carefully to decide whether it's worth switching to it.
