Parallelisation and optimisation #6
There may also be operations that could be significantly optimized by using methods from shapely (https://pypi.org/project/Shapely/) or matplotlib. Finding intersections with the wall is one place. If we added something to check whether all generated points are within the wall, that would be another (e.g. using a point-in-polygon test).
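As an illustration, here is a minimal sketch of the kind of checks shapely could provide, assuming the wall is available as a closed polygon of (R, Z) points; the `wall_points`, `contour_points`, and `leg_line` names are made up for the example:

```python
# Sketch only: shapely-based wall checks, assuming the wall is a closed
# polygon of (R, Z) points. All variable names here are hypothetical.
from shapely.geometry import LineString, Point, Polygon

wall_points = [(1.0, -2.0), (3.0, -2.0), (3.0, 2.0), (1.0, 2.0)]
wall = Polygon(wall_points)

# Check whether all generated grid points lie inside the wall
contour_points = [(1.5, 0.0), (2.0, 0.5), (2.5, 1.0)]
all_inside = all(wall.contains(Point(p)) for p in contour_points)

# Find where a line of points (e.g. a divertor leg) crosses the wall
leg_line = LineString([(2.0, 0.0), (2.0, 3.0)])
intersection = leg_line.intersection(wall.boundary)
print(all_inside, intersection)
```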
In the workflow for choosing input options for the grid (e.g. using the gui #17), non-orthogonal grids are likely to be slow. It should in principle be possible to cache some of the slower parts of those calculations (e.g. intersections with the wall) and not repeat them when options are changed that do not affect them: for example, the poloidal spacing options should not change the wall intersections. It would be very nice to guarantee that when using cached results, the final output is identical to that produced when building a grid directly from an input file with the final options; possibly when writing out the gridfile, we should delete the cache.

Edit: the possibility to rebuild the non-orthogonal grid after changing some spacing options, while reusing as much as possible, was implemented in #26. The issue of how to guarantee that the final output is the same as it would be for a run straight from an input file with the final options is still open.
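A minimal sketch of one possible caching scheme, keyed only by the subset of options that affect each result, so that changing unrelated (e.g. poloidal spacing) options reuses the cached result; all names here (`WALL_OPTIONS`, `cached_wall_intersections`, `compute`) are hypothetical:

```python
# Sketch only: cache expensive results keyed by the subset of options
# that actually affect them. All names here are hypothetical.
_cache = {}

# Options that change the wall intersections; spacing options are
# deliberately excluded, so changing them reuses the cached result.
WALL_OPTIONS = ("wall_file", "finecontour_Nfine")

def cached_wall_intersections(options, compute):
    key = tuple((name, options[name]) for name in WALL_OPTIONS)
    if key not in _cache:
        _cache[key] = compute(options)
    return _cache[key]

def clear_cache():
    # Could be called before writing the gridfile, to guarantee the
    # final output is recomputed directly from the final options.
    _cache.clear()
```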
joblib might be a handy tool, as it tends to work quite well for parallelising tasks and also offers some form of automatic caching (although I've not used that part of it).
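A minimal sketch of how joblib could combine both, with a hypothetical `refine_contour` function standing in for the expensive refinement step:

```python
# Sketch only: joblib for both parallelism and on-disk caching.
# refine_contour and contours are hypothetical stand-ins.
import numpy as np
from joblib import Memory, Parallel, delayed

memory = Memory("./joblib_cache", verbose=0)

@memory.cache  # results cached on disk, keyed by the arguments
def refine_contour(points, Nfine):
    # Stand-in for the expensive step: interpolate the contour
    # onto Nfine points along a normalised parameter s.
    s = np.linspace(0.0, 1.0, len(points))
    s_fine = np.linspace(0.0, 1.0, Nfine)
    return np.column_stack(
        [np.interp(s_fine, s, points[:, i]) for i in range(2)]
    )

contours = [np.random.rand(10, 2) for _ in range(8)]
refined = Parallel(n_jobs=-1)(
    delayed(refine_contour)(c, 100) for c in contours
)
```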
I suspect the thing using the most time is refining the `FineContour`s.
In principle hypnotoad2 is embarrassingly parallelisable. The most expensive part tends to be refining the `FineContour`s (at least when `finecontour_Nfine` is large), and every `FineContour` is independent. However, functions are not pickle-able, which I think makes using `multiprocessing` tricky. There might be workarounds for this though: https://medium.com/@emlynoregan/serialising-all-the-functions-in-python-cd880a63b591
Other parts of the code could be parallelised too, though it might not be worth it, as they probably do not typically take long anyway. For example, most if not all of the output fields (metric coefficients, etc.) are independent and could be calculated simultaneously. `dask` might be able to do this with minimal changes (possibly convert arrays to dask arrays, and then call `.compute()` just before writing them out).
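A minimal sketch of what that might look like; the field expressions here are made-up stand-ins, not hypnotoad2's actual metric calculations:

```python
# Sketch only: compute independent output fields lazily with dask and
# evaluate them together just before writing. The field names and
# formulas (g11, g22, ...) are hypothetical stand-ins.
import dask
import dask.array as da
import numpy as np

Rxy = da.from_array(np.random.rand(64, 64), chunks=(32, 32))
Bpxy = da.from_array(np.random.rand(64, 64), chunks=(32, 32))

# Each output field is an independent, lazy computation
g11 = (Rxy * Bpxy) ** 2
g22 = 1.0 / Rxy**2

# Evaluate everything in one parallel pass just before writing out
g11, g22 = dask.compute(g11, g22)
```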