Replies: 1 comment 1 reply
-
All MPI documentation seems to assume that every job runs the same binary, all started with a launcher utility such as `mpirun`. This is, of course, not ideal for our use case, since the orchestrator, runners, and individual processors are all completely separate binaries. We would need to figure out a way to start two processes independently while still facilitating MPI communication between them. I would assume this has to be possible, since some MPI clusters must span different CPU architectures, and therefore already require separate binaries.
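For what it's worth, both Open MPI and MPICH can launch different executables into a single job via `mpirun`'s MPMD (multiple program, multiple data) colon syntax, after which all processes share one `MPI_COMM_WORLD`. Below is a minimal sketch of that approach; the file names (`orchestrator.c`, `runner.c`) and the work-dispatch logic are illustrative assumptions, not anything from this project.

```c
/* Hypothetical sketch: two separate binaries launched as one MPI job.
 *
 * Build (assuming Open MPI or MPICH is installed):
 *   mpicc orchestrator.c -o orchestrator
 *   mpicc runner.c       -o runner
 * Launch both binaries into a single MPI_COMM_WORLD with MPMD syntax:
 *   mpirun -np 1 ./orchestrator : -np 4 ./runner
 */

/* ---- orchestrator.c (runs as rank 0) ---- */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Send one (made-up) work item to every runner rank. */
    for (int dest = 1; dest < size; dest++) {
        int work = dest * 10;
        MPI_Send(&work, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
    }
    printf("orchestrator (rank %d) dispatched work to %d runner(s)\n", rank, size - 1);

    MPI_Finalize();
    return 0;
}

/* ---- runner.c (runs as ranks 1..N, a different binary) ---- */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, work;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("runner (rank %d) received work item %d\n", rank, work);

    MPI_Finalize();
    return 0;
}
```

For processes that must be started fully independently (outside a single `mpirun` invocation), the MPI standard also defines `MPI_Comm_spawn` and `MPI_Comm_connect`/`MPI_Comm_accept`, which may map more closely onto the orchestrator/runner model described above.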
-
The Message Passing Interface (MPI) is a candidate for future communication protocol support. It is a low-level, high-performance specification used mainly in high-performance computing (HPC) settings. Its most popular implementation is Open MPI for C/C++, but other implementations exist.
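For reference, the basic shape of an MPI program is small: initialize, query rank and size, communicate, finalize. A minimal sketch in C, assuming an implementation such as Open MPI is installed (`hello.c` is just an illustrative file name):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* Initialize the MPI runtime. */
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes in the job */

    printf("hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Built and run with the standard tooling, e.g. `mpicc hello.c -o hello` followed by `mpirun -np 4 ./hello`.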