How to update globals on all nodes? #203

Open
tigermask1978 opened this issue Dec 5, 2019 · 5 comments

tigermask1978 commented Dec 5, 2019

I know that dispy can update globals by using the multiprocessing module on one node, but I want to share globals that can be updated from any node in the cluster. Any ideas?
Thanks a lot.

pgiri (Owner) commented Dec 10, 2019

I don't quite understand the question, so this may not be what you want: you could run another function (e.g., by creating another cluster, as done in the MapReduce example) and have its jobs update the nodes (use submit_node to run the update on each node).
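A minimal sketch of that suggestion, with placeholder node IPs and a hypothetical node_update function; note that a plain global assigned inside a job is lost when the job's process exits, so in practice this would be combined with a shared variable created in setup (as in the node_shvars.py example):

def node_update(value):
    # Runs as a regular job on whichever node it is submitted to.
    # A plain global set here does not persist for later jobs on the
    # node; pair this with a setup-created shared variable in practice.
    global COUNTER
    COUNTER = value
    return COUNTER

if __name__ == '__main__':
    import dispy
    update_cluster = dispy.JobCluster(node_update)
    for ip in ('192.168.1.10', '192.168.1.11'):  # placeholder node IPs
        job = update_cluster.submit_node(ip, 100)
        if job:  # guard in case the node is not (yet) available
            print(job())  # wait for the update job on that node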

tigermask1978 (Author) commented Dec 11, 2019

@pgiri Thanks for your reply. I have read the MapReduce example in the docs, but I think it may not be what I want. For example, if I have a variable (maybe a counter) in the client, could any node read/write it simultaneously (with a lock)?

def compute():
    # How to read and update COUNTER here (maybe with a lock)?
    global COUNTER
    COUNTER += 1
    return 0

if __name__ == '__main__':
    import dispy
    # Here is a COUNTER I want to share with all nodes.
    COUNTER = 100
    cluster = dispy.JobCluster(compute)
    jobs = []
    for i in range(20):
        job = cluster.submit()
        jobs.append(job)
    for job in jobs:
        job()  # waits for job to finish and returns results
        stdout = job.stdout
        print(stdout)
    cluster.print_status()  # shows which nodes executed how many jobs etc.

pgiri (Owner) commented Dec 15, 2019

See node_shvars.py in examples.
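For reference, the pattern in that example is roughly the following (a sketch from memory, not the verbatim file): setup runs once on each node, and the multiprocessing value it creates is shared by all jobs that run on that node.

def setup():
    # Runs once on each node; globals created here are visible to
    # every job that node runs (where dispynode forks job processes).
    import multiprocessing
    global shvar
    shvar = multiprocessing.Value('i', 100)  # comes with its own lock
    return 0  # 0 indicates setup succeeded

def cleanup():
    global shvar
    del shvar

def compute():
    # Every job on this node sees the same 'shvar' and can update it
    # atomically under its lock.
    global shvar
    with shvar.get_lock():
        shvar.value += 1
        return shvar.value

if __name__ == '__main__':
    import dispy
    cluster = dispy.JobCluster(compute, setup=setup, cleanup=cleanup)
    jobs = [cluster.submit() for _ in range(20)]
    for job in jobs:
        print(job())  # counter value on whichever node ran the job
    cluster.print_status()

Each node keeps its own counter here, which is exactly the limitation raised in the next comment.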

tigermask1978 (Author) commented

@pgiri Thanks for your reply. I think the example in node_shvars.py can only share variables among jobs on one node (but not with jobs on OTHER NODES). So how can variables be shared across all nodes in the cluster? Thanks again.

pgiri (Owner) commented Mar 15, 2020

I am not sure I understand your question, but in case you are asking about replacing in-memory data, see the latest release and the replace_inmem.py example.
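As I understand that example, the pattern is roughly the sketch below (the file name and details are placeholders, not the verbatim example): setup loads data into each node's memory, and resetup_node re-runs cleanup and setup so subsequent jobs on that node see the replacement data.

def setup():
    # Loads data once per node; jobs read it as a global.
    import pickle
    global data
    with open('data.pkl', 'rb') as fd:  # placeholder file, sent via 'depends'
        data = pickle.load(fd)
    return 0

def cleanup():
    global data
    del data

def compute():
    return len(data)

if __name__ == '__main__':
    import dispy
    cluster = dispy.JobCluster(compute, depends=['data.pkl'],
                               setup=setup, cleanup=cleanup)
    job = cluster.submit()
    print(job())
    # After transferring a new data.pkl to a node, cluster.resetup_node(node)
    # re-runs cleanup/setup there so new jobs see the replaced data; see
    # replace_inmem.py for the full flow.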
