
Parallelisation and optimisation #6

Open
johnomotani opened this issue Apr 15, 2020 · 4 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)
Milestone: Release 1.0

Comments

@johnomotani (Collaborator)

In principle hypnotoad2 is embarrassingly parallelisable. The most expensive part tends to be refining the FineContours (at least when finecontour_Nfine is large), and every FineContour is independent. However, functions are not pickle-able, which I think makes using multiprocessing tricky. There might be workarounds for this, though: https://medium.com/@emlynoregan/serialising-all-the-functions-in-python-cd880a63b591.
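
One workaround that avoids pickling functions altogether is to keep the worker at module level and pass only plain data between processes (dill-based serialisation, as in the linked article, would be the other route). A minimal sketch of the pattern, with hypothetical names rather than hypnotoad's actual API:

    # Minimal sketch: module-level functions are picklable even when bound
    # methods and closures are not, so the refinement is expressed as a
    # top-level worker acting on plain arrays. `refine_contour_points`,
    # `fine_contours` and `fc.positions` are illustrative names only.
    from concurrent.futures import ProcessPoolExecutor

    def _refine_worker(args):
        points, psi_data, options = args
        return refine_contour_points(points, psi_data, options)  # hypothetical

    def refine_all_contours(fine_contours, psi_data, options, nproc=None):
        jobs = [(fc.positions, psi_data, options) for fc in fine_contours]
        with ProcessPoolExecutor(max_workers=nproc) as pool:
            refined = list(pool.map(_refine_worker, jobs))
        for fc, new_points in zip(fine_contours, refined):
            fc.positions = new_points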

Other parts of the code could also be parallelised, though it might not be worth it since they probably do not take long anyway. For example, most if not all of the output fields (metric coefficients, etc.) are independent and could be calculated simultaneously. dask might be able to do this with minimal changes: possibly convert the arrays to dask arrays, and then do

    g11, g22, g33, ... = dask.compute(g11, g22, ...)

just before writing them out.
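
A rough sketch of what that might look like, with stand-in arrays and placeholder formulas rather than hypnotoad's actual metric calculation:

    # Illustrative only: wrap arrays as dask arrays so the output fields are
    # built lazily, then evaluate everything in one dask.compute() call just
    # before writing the grid file.
    import numpy as np
    import dask
    import dask.array as da

    # Stand-ins for arrays hypnotoad already has
    Rxy = da.from_array(np.ones((64, 64)), chunks="auto")
    Bpxy = da.from_array(np.ones((64, 64)), chunks="auto")
    hy = da.from_array(np.ones((64, 64)), chunks="auto")

    # Placeholder expressions: these only build task graphs, nothing is computed yet
    g11 = (Rxy * Bpxy) ** 2
    g22 = 1.0 / hy**2
    g33 = 1.0 / Rxy**2

    # Evaluate all the outputs at once; independent fields can run concurrently
    g11, g22, g33 = dask.compute(g11, g22, g33)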

@johnomotani johnomotani added this to the Release 1.0 milestone Apr 15, 2020
@johnomotani johnomotani added the enhancement New feature or request label Apr 16, 2020
@johnomotani (Collaborator, Author)

There may also be operations that could be significantly optimised by using methods from shapely (https://pypi.org/project/Shapely/) or matplotlib. Finding intersections with the wall is one place. Checking whether all generated points are within the wall would be another (e.g. using matplotlib.path.Path.contains_points).
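
For the contains_points idea, a small sketch; the wall polygon and grid-point arrays here (wall_vertices, Rxy, Zxy) are hypothetical placeholders:

    # Sketch: build a matplotlib Path from the wall polygon once, then test all
    # generated grid points in a single vectorised call.
    import numpy as np
    from matplotlib.path import Path

    wall_path = Path(np.asarray(wall_vertices))      # list of (R, Z) vertices
    points = np.column_stack([Rxy.ravel(), Zxy.ravel()])
    inside = wall_path.contains_points(points)

    if not inside.all():
        n_bad = int((~inside).sum())
        raise ValueError(f"{n_bad} grid points lie outside the wall")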

@johnomotani johnomotani changed the title Parallelisation Parallelisation and optimisation Apr 17, 2020
@johnomotani (Collaborator, Author) commented Apr 21, 2020

In the workflow for choosing input options for the grid (e.g. using the gui #17), non-orthogonal grids are likely to be slow. In principle it should be possible to cache some of the slower parts of those calculations (e.g. intersections with the wall) and not repeat them when options that do not affect them are changed. For example, the poloidal spacing options should not change the FineContour objects, so rebuilding the grid after changing poloidal spacing options should actually be pretty quick if there were an interface to start from the existing objects.
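
One possible shape for that caching, as a rough sketch only (not an interface hypnotoad actually has): key each expensive intermediate on just the options that affect it, so changing unrelated options reuses the cached object.

    # Rough sketch of option-aware caching (hypothetical interface): each cached
    # result is keyed on its name plus only the options it depends on, so e.g.
    # changing poloidal spacing options would not invalidate the FineContours.
    class GridCache:
        def __init__(self):
            self._store = {}

        def get(self, name, relevant_options, build):
            # `relevant_options`: dict of only the options this result depends on
            # `build`: zero-argument callable that computes the result on a miss
            key = (name, tuple(sorted(relevant_options.items())))
            if key not in self._store:
                self._store[key] = build()
            return self._store[key]

    # Hypothetical usage:
    # fine_contours = cache.get(
    #     "fine_contours",
    #     {"finecontour_Nfine": options.finecontour_Nfine},
    #     build=make_fine_contours,
    # )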

It would be very nice to guarantee that, when using cached results, the final output is identical to what would be produced by generating a grid directly from an input file with the final options. Possibly, when writing out the grid file, we should delete the Mesh and create a new one from scratch?

Edit: the possibility of rebuilding the non-orthogonal grid after changing some spacing options, while reusing as much as possible, was implemented in #26. The question of how to guarantee that the final output is the same as it would be for a run straight from an input file with the final options is still open.

@d7919 (Member) commented Apr 21, 2020

joblib might be a handy tool as it tends to work quite well for parallelising tasks and also offers some form of automatic caching (although I've not used that part of it).
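
A hedged sketch of both joblib features; the refinement function and contour objects here are hypothetical placeholders, not hypnotoad's actual API:

    # Sketch: joblib.Parallel/delayed to run independent refinements across
    # processes, and joblib.Memory to cache results on disk keyed by the
    # function's arguments. `refine_contour` and `fine_contours` are placeholders.
    from joblib import Parallel, delayed, Memory

    memory = Memory("./.hypnotoad_cache", verbose=0)

    @memory.cache
    def refine_contour(points, options):
        ...  # expensive refinement; recomputed only when points/options change

    results = Parallel(n_jobs=-1)(
        delayed(refine_contour)(fc.positions, options) for fc in fine_contours
    )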

@johnomotani (Collaborator, Author)

I suspect the thing using the most time is PsiContour.refinePoint(). At least some of the methods could be Cythonized, e.g. PsiContour.refinePointNewton(), although I'm not sure whether methods that rely on scipy.integrate.solve_ivp (PsiContour.refinePointIntegrate()) or scipy.optimize.brentq (PsiContour.refinePointLinesearch()) would Cythonize directly, or whether a replacement for the scipy methods would be needed. If that can be done, it might also be worth Cythonizing (parts of) FineContour, especially FineContour.equaliseSpacing().
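
For reference, a Newton-style refinement is the kind of small, self-contained numerical kernel that Cython handles well. A pure-Python sketch of the general idea (hypothetical, not hypnotoad's actual refinePointNewton):

    # Sketch of a Newton-style point refinement: move (R, Z) along grad(psi)
    # until psi reaches the target value. A tight scalar loop like this, with no
    # scipy calls, is the sort of code Cython can speed up substantially once
    # the variables are given C types.
    def refine_point_newton(R, Z, psi, dpsidR, dpsidZ, psi_target,
                            atol=1.0e-12, maxits=50):
        for _ in range(maxits):
            f = psi(R, Z) - psi_target
            if abs(f) < atol:
                break
            gR = dpsidR(R, Z)
            gZ = dpsidZ(R, Z)
            norm2 = gR * gR + gZ * gZ
            # Newton step projected along the gradient of psi
            R -= f * gR / norm2
            Z -= f * gZ / norm2
        return R, Z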

@johnomotani johnomotani added the help wanted Extra attention is needed label Jan 26, 2021
@johnomotani johnomotani mentioned this issue May 11, 2021