Equal distribution of particles to MPI processors #1231
Closed
Commits on Aug 31, 2022
- be039cb
- 8b21bc9: Equal distribution of particles to MPI processors
When running simulations in MPI mode involving the repeated release of small quantities of particles (e.g. 100 particles across 10 MPI processors), we would occasionally receive the error 'Cannot initialise with fewer particles than MPI processors'. The cause of this appeared to be the way that K-means was distributing the particles among the MPI processors. Specifically, K-means does not return clusters of a fixed or minimum size, so it is quite possible that some MPI processors will be allocated fewer than the minimum number of required particles, especially if the number of particles is small and the number of MPI processors approaches the maximum for a given number of particles (e.g. 10 MPI processors is the maximum for 100 particles, as 100 particles / 10 processors = 10 particles per processor). To overcome this, we distribute the available particles equally among the MPI processors using NumPy's linspace method. We create a linearly spaced array that is the same length as the number of particles. The array contains decimals ranging from 0 to the number of MPI processors, and that array is then rounded down so that the values represent integer indices of MPI processors. As before, this change requires that the user has requested at least the minimum required number of MPI processors, but it ensures that the minimum number is always sufficient.
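A rough sketch of the idea (illustrative variable names, not the actual Parcels code; excluding the endpoint is an assumption made here to keep the floored indices in range):

```python
import numpy as np

# Illustrative values: 100 particles spread over 10 MPI processors.
nparticles = 100
mpi_size = 10

# One linearly spaced value per particle, running from 0 up to (but not
# including) mpi_size; flooring gives integer processor indices, so every
# processor receives either floor(nparticles / mpi_size) particles or one more.
proc_index = np.floor(
    np.linspace(0, mpi_size, nparticles, endpoint=False)
).astype(np.int32)

# Particles per processor: [10 10 10 10 10 10 10 10 10 10]
print(np.bincount(proc_index, minlength=mpi_size))
```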
- e3b100d: Equal distribution of particles to MPI processors
Related to pull request OceanParcels#1231. This change has only been tested for collectionsoa.py, not collectionaos.py. When running simulations in MPI mode involving the repeated release of small quantities of particles (e.g. 100 particles across 10 MPI processors), we would occasionally receive the error 'Cannot initialise with fewer particles than MPI processors'. The cause of this appeared to be the way that K-means was distributing the particles among the MPI processors. Specifically, K-means does not return clusters of a fixed or minimum size, so it is quite possible that some MPI processors will be allocated fewer than the minimum number of required particles, especially if the number of particles is small and the number of MPI processors approaches the maximum for a given number of particles (e.g. 10 MPI processors is the maximum for 100 particles, as 100 particles / 10 processors = 10 particles per processor). To overcome this, we distribute the available particles equally among the MPI processors using NumPy's linspace method. We create a linearly spaced array that is the same length as the number of particles. The array contains decimals ranging from 0 to the number of MPI processors, and that array is then rounded down so that the values represent integer indices of MPI processors. As before, this change requires that the user has requested at least the minimum required number of MPI processors, but it ensures that the minimum number is always sufficient.
Commits on Sep 1, 2022
- b44e59c: Equal distribution of particles to MPI processors
Following pull request OceanParcels#1231: Reinstated K-Means as the method of grouping particle locations. Rather than using the cluster assignments directly, we find the X nearest coordinates to each cluster centroid, where X is the minimum number of particles required to distribute the particles equally among MPI processors. The result is an (approximately) equal number of particles assigned to each MPI processor, with particles grouped according to their spatial distribution.
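A rough sketch of how such an assignment could look, assuming scikit-learn's KMeans; the helper name and the handling of leftover particles are illustrative, not the actual Parcels implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_particles_to_ranks(lon, lat, mpi_size):
    """Group particles spatially with K-Means, then give each MPI rank an
    (approximately) equal share by taking the nearest still-unassigned
    particles to each cluster centroid. Hypothetical helper, not Parcels API."""
    coords = np.column_stack((lon, lat))
    npart = coords.shape[0]
    per_rank = npart // mpi_size  # minimum number of particles per rank

    centroids = KMeans(n_clusters=mpi_size, n_init=10).fit(coords).cluster_centers_

    ranks = np.full(npart, -1, dtype=np.int32)
    for rank, centroid in enumerate(centroids):
        dist = np.linalg.norm(coords - centroid, axis=1)
        dist[ranks >= 0] = np.inf             # skip particles already assigned
        nearest = np.argsort(dist)[:per_rank]  # X nearest coordinates to this centroid
        ranks[nearest] = rank

    # Leftover particles (when npart is not a multiple of mpi_size)
    # go to their nearest centroid.
    for i in np.where(ranks < 0)[0]:
        ranks[i] = np.argmin(np.linalg.norm(centroids - coords[i], axis=1))
    return ranks

# Example: 100 random particle positions over 10 ranks -> 10 particles per rank.
rng = np.random.default_rng(0)
print(np.bincount(assign_particles_to_ranks(rng.random(100), rng.random(100), 10)))
```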
- 11fdf50: Equal distribution of particles to MPI processors
Following pull request OceanParcels#1231: Reinstated K-Means as the method of grouping particle locations. Rather than using the cluster assignments directly, we find the X nearest coordinates to each cluster centroid, where X is the minimum number of particles required to distribute the particles equally among MPI processors. The result is an (approximately) equal number of particles assigned to each MPI processor, with particles grouped according to their spatial distribution.