Parallel simulation of multiple neuron groups inside brainpy.DSRunner #490
Comments
Well, parallel simulation involves many considerations. Maybe we need a tutorial on how to run one BrainPy model in parallel.
A tutorial will come out, maybe next month?
Hi, thank you for the reply. I understand that implementing parallel simulation of one BrainPy model is nontrivial. Could you let me know which function (or module) I should use? I can look into it by myself. Or does it require a major revision of BrainPy itself? Thanks!
Dear @CloudyDory, we are still working on the examples for parallel simulations. Tutorials are on the way. We will release this support next week.
Hi @chaoming0625, is there any progress on the parallel issue? In the documentation, I can only find how to use parallel simulation for parameter exploration (multi-task), but how can I use it for a single task, for example via spatial decomposition?
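For reference, the multi-task (parameter-exploration) style of parallelism mentioned above runs one independent simulation per parameter set, each in its own process. Below is a minimal sketch of that pattern using only the standard library; the `run_trial` function and its model are hypothetical, and BrainPy's own helpers in `brainpy.running` (if present in your installed version) serve the same purpose:

```python
import multiprocessing as mp

import brainpy as bp


def run_trial(input_current):
  """Run one independent simulation for a given input current (hypothetical model)."""
  group = bp.neurons.LIF(100, V_rest=-60., V_th=-50., V_reset=-60., tau=20.)
  runner = bp.DSRunner(group, monitors=['spike'], inputs=('input', input_current))
  runner.run(200.)  # simulate 200 ms
  return runner.mon['spike'].sum()  # total spike count for this parameter


if __name__ == '__main__':
  currents = [15., 20., 25., 30.]
  # 'spawn' avoids forking a JAX runtime that may already be initialized in the parent
  with mp.get_context('spawn').Pool(processes=4) as pool:
    print(pool.map(run_trial, currents))
```

This distributes whole simulations across cores, which is different from the single-task (spatial-decomposition) parallelism asked about in the comment above.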
I have written a biological neural network that contains several groups of neurons and synapses in BrainPy. However, during simulation, it seems that the code cannot fully utilize all CPU cores. The code looks like this:
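(What follows is a minimal sketch of such a model, several LIF neuron groups connected by exponential synapses and run with `bp.DSRunner`, based on BrainPy's documented E/I network example; it is an illustrative reconstruction, not the code from the original post.)

```python
import brainpy as bp


class Net(bp.Network):
  def __init__(self):
    super().__init__()
    # two neuron groups
    pars = dict(V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.)
    self.E = bp.neurons.LIF(3200, **pars)
    self.I = bp.neurons.LIF(800, **pars)
    # four synapse groups connecting them
    self.E2E = bp.synapses.Exponential(self.E, self.E, bp.conn.FixedProb(0.02),
                                       output=bp.synouts.COBA(E=0.), g_max=0.6, tau=5.)
    self.E2I = bp.synapses.Exponential(self.E, self.I, bp.conn.FixedProb(0.02),
                                       output=bp.synouts.COBA(E=0.), g_max=0.6, tau=5.)
    self.I2E = bp.synapses.Exponential(self.I, self.E, bp.conn.FixedProb(0.02),
                                       output=bp.synouts.COBA(E=-80.), g_max=6.7, tau=10.)
    self.I2I = bp.synapses.Exponential(self.I, self.I, bp.conn.FixedProb(0.02),
                                       output=bp.synouts.COBA(E=-80.), g_max=6.7, tau=10.)


net = Net()
runner = bp.DSRunner(net,
                     monitors=['E.spike'],
                     inputs=[(net.E.input, 20.), (net.I.input, 20.)])
runner.run(1000.)  # simulate 1000 ms
```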
I think updating each neuron group and synapse group in a Python for-loop is slow. In theory, the neuron and synapse groups could be updated in parallel, so I have tried modifying the `update` function using the Python `threading` module (a reconstruction of this kind of attempt is sketched below).
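(A hypothetical reconstruction of such a thread-based `update`, assuming the `Net` class sketched above; the `ThreadedNet` subclass and the per-group thread launch are illustrative assumptions, and as noted below this did not produce multi-core execution.)

```python
import threading


class ThreadedNet(Net):  # `Net` as sketched above; this subclass is hypothetical
  def update(self, *args, **kwargs):
    # Launch one Python thread per synapse/neuron group and wait for all of them.
    # (The exact `update` signature depends on the installed BrainPy version.)
    children = [self.E2E, self.E2I, self.I2E, self.I2I, self.E, self.I]
    threads = [threading.Thread(target=child.update, args=args, kwargs=kwargs)
               for child in children]
    for t in threads:
      t.start()
    for t in threads:
      t.join()
```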
But according to the task manager (on Ubuntu 22.04), the process is still running on a single core. Is there a way to run multiple neuron groups of a network in parallel in BrainPy?
Thanks!