How to update globals on all nodes? #203
I don't quite understand what the question is, so this may not work: you could run another function (e.g., by creating another cluster, as done in the MapReduce example) and have jobs on it update the nodes.
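A minimal sketch of that two-cluster idea: one cluster's jobs write a new value on a node, and the main cluster's jobs read it. The file path and function names here are illustrative assumptions, not part of dispy; note that a plain submit runs the update on only one node, so with several nodes the update would have to be dispatched to each of them (recent dispy releases have submit_node for that).

```python
import dispy

def update_node(value):
    # Illustrative assumption: persist the new "global" where compute
    # jobs on this node can find it (file path chosen for the sketch).
    with open('/tmp/dispy_shared_value', 'w') as f:
        f.write(str(value))
    return 0

def compute(n):
    # Each compute job reads the value last pushed by the update cluster.
    with open('/tmp/dispy_shared_value') as f:
        value = int(f.read())
    return value + n

if __name__ == '__main__':
    update_cluster = dispy.JobCluster(update_node)
    compute_cluster = dispy.JobCluster(compute)

    job = update_cluster.submit(42)
    job()  # wait until the update job has written the value

    jobs = [compute_cluster.submit(i) for i in range(4)]
    for job in jobs:
        print(job())  # 42 + i, if the job ran on an updated node

    update_cluster.close()
    compute_cluster.close()
```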
@pgiri Thanks for your reply. I have read the MapReduce example in the docs, but I think it may not be what I want. For example, I have a variable (maybe a counter) in the client; could any node read/write it simultaneously (with a lock)? My code was roughly:

```python
def compute():
    ...

if __name__ == '__main__':
    ...
```
See the node_shvars.py example.
@pgiri Thanks for your reply. I think the example in node_shvars.py can only share variables among jobs on the same node (not with jobs on OTHER nodes). So how can variables be shared across all nodes in the cluster? Thanks again.
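For reference, the node_shvars.py pattern under discussion looks roughly like the sketch below (reconstructed here, so details may differ from the shipped example): a setup function creates a multiprocessing shared value once per node, and jobs forked on that node update it under its lock. As noted above, this shares state only among jobs on the same node.

```python
import dispy

def setup():
    # Runs once per node; the shared value is inherited by jobs
    # forked on that node (POSIX). It is NOT visible to other nodes.
    import multiprocessing
    global shvar
    shvar = multiprocessing.Value('i', 0)  # synchronized, has its own lock
    return 0

def cleanup():
    global shvar
    del shvar

def compute(n):
    # Each job on a node atomically increments that node's counter.
    with shvar.get_lock():
        shvar.value += n
        return shvar.value

if __name__ == '__main__':
    cluster = dispy.JobCluster(compute, setup=setup, cleanup=cleanup)
    jobs = [cluster.submit(1) for _ in range(10)]
    for job in jobs:
        print(job())  # per-node running totals, not one cluster-wide total
    cluster.close()
```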
I am not sure I understand your question, but in case you are asking about replacing in-memory data, see the latest release and its example.
I know that dispy can update globals by using the multiprocessing module on one node, but I want to share globals that can be updated from any node in the cluster. Any ideas?
Thanks a lot.
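Since nothing in this thread shows a built-in dispy mechanism for a cluster-wide variable, one workaround (not a dispy feature) is to serve the variable from the client with the standard library's multiprocessing.managers and let every job, on any node, connect to it over TCP. The address, authkey, and client_host parameter below are illustrative assumptions:

```python
# On the client machine: serve a shared, lock-protected counter.
import threading
from multiprocessing.managers import BaseManager

class Counter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def incr(self, n=1):
        with self._lock:
            self._value += n
            return self._value

    def get(self):
        with self._lock:
            return self._value

counter = Counter()

class CounterManager(BaseManager):
    pass

CounterManager.register('get_counter', callable=lambda: counter)

if __name__ == '__main__':
    # Illustrative address/authkey; pick ones reachable from the nodes.
    mgr = CounterManager(address=('', 51000), authkey=b'dispy-demo')
    mgr.get_server().serve_forever()
```

Jobs then connect back to the client and operate on the same object through a proxy:

```python
# In the dispy computation: 'client_host' is an assumed parameter
# giving the address of the machine running the manager above.
def compute(client_host):
    from multiprocessing.managers import BaseManager

    class CounterManager(BaseManager):
        pass

    CounterManager.register('get_counter')
    mgr = CounterManager(address=(client_host, 51000), authkey=b'dispy-demo')
    mgr.connect()
    counter = mgr.get_counter()
    return counter.incr()  # atomic across ALL nodes
```

All updates run on the client under a single lock, so increments are atomic across the whole cluster; the trade-off is a network round trip per update.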