[FIX] removes memory leak issue by removing c code and using numpy/scipy functions #9
Conversation
Is this ready for consideration? Has the patch been tested?
This is still a draft/WIP; I still need to write tests (including regression tests to check that the results are consistent with what was previously generated).
This has merge conflicts, likely due to all the style changes. It looks like nothing similar has been integrated via another PR; can you tidy this PR by not including the style changes?
Force-pushed from afca423 to eb2d988.
I rebased, removing the commit with the aggressive black formatting. There are still whitespace deletions that my editor performed automatically, but that should be more readable.
Seems like a pretty straightforward improvement. Do you have benchmarks? Or would you just time the regression test before/after the refactor?
@coalsont If this is accepted and extensions no longer need to be compiled, I will open a PR to update the CI to only build a single, pure Python wheel before you cut a new release.
Looks like the test may be looking in the wrong folder for the coefficients file when run as a GitHub Action.
The test data doesn't get packaged by default. You should be able to add this to setup.cfg under `[options.package_data]`: `gradunwarp.core.tests = data/*`. Note that this will increase the package size from about 16k to 1.1M. We could split out the …
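Spelled out as a setup.cfg fragment (section name and glob exactly as suggested above; whether the test data should instead be split into a separate package is left open):

```ini
[options.package_data]
# ship the regression-test coefficient data with the installed package
gradunwarp.core.tests = data/*
```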
Thanks for the in-depth code review; there are indeed a lot of suboptimal parts in the original code, which is itself somewhat hard to read.
I had timed it interactively and it was equivalent if not faster, but I have no actual numbers. It seems like the C extensions were created a long time ago, when the scipy/numpy equivalents did not exist. We can expect the numpy/scipy code to be of better quality/performance/robustness.
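For reference, the kind of replacement being described can be sketched as follows; the degree/order values and sample array are illustrative, not taken from gradunwarp. `scipy.special.lpmv` evaluates associated Legendre functions over a whole array in one vectorized call, which is what makes a hand-written C loop unnecessary:

```python
import numpy as np
from scipy.special import lpmv

# Illustrative only: evaluate the associated Legendre function P_n^m
# over an array of points in one vectorized call, instead of looping
# per point in a C extension. Note scipy's argument order:
# lpmv(order m, degree n, x).
n, m = 2, 0                      # example degree and order
x = np.linspace(-1.0, 1.0, 5)
vals = lpmv(m, n, x)             # P_2(x) = (3x^2 - 1) / 2 when m = 0
print(vals)
```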
Some packaging notes.
If you have more suggestions on making the regression test better, or on understanding why using scipy's legendre gives differences at the 5th decimal, I am all ears. Ideally we should try to decrease that discrepancy, but maybe that's impossible.
float32 has about 7.22 decimal digits of precision. If your values are >10, the 7th significant digit will be the fifth digit after the decimal, which is what … What if you try …
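The precision argument can be checked directly with numpy; a small sketch (the sample value 12.3456789 is arbitrary):

```python
import numpy as np

# float32 stores 23 explicit significand bits (24 with the implicit
# leading bit), i.e. 24 * log10(2) ~= 7.22 decimal digits.
print(np.finfo(np.float32).nmant)        # 23

# For a value > 10, the 7th significant digit lands at the 5th place
# after the decimal point, so rounding to float32 perturbs roughly
# the 5th decimal.
x = 12.3456789
err = abs(float(np.float32(x)) - x)
print(err)                               # nonzero, but well below 1e-5
```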
Thanks for your input.
Just the fact of not inefficiently computing … I guess it depends on what we want to achieve: I would say we want close results, but how close do we want them? Also, that displacement value in mm is to be considered in the context of MRI resolution; when interpolating data using such a field, it will likely result in very minor differences too.
This is not my project, so I can't say what the threshold should be for an acceptable change. If I were a maintainer, I think my response would be:
Given the suggestion that changes in displacements of ~1e-5 mm are going to have a negligible impact on the actual interpolation, perhaps an additional regression test would be to correct some image. A 1mm^3-resolution MNI template, for example, would allow you to both visually assess and quantify the change in distortion. If that image is … But all this is the call of the maintainers. @coalsont @glasserm WDYT?
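As a concrete shape for such a tolerance check, a sketch; the reference values and the 1e-5 mm tolerance are illustrative, taken from the discussion above rather than from the actual test suite:

```python
import numpy as np
from numpy.testing import assert_allclose

# Simulated displacement field in mm (float64 reference) and a
# refactored result that differs only at float32-level precision.
reference = np.array([12.34567, -8.76543, 0.00123])
refactored = reference + np.array([2e-6, -8e-6, 1e-8])

# Absolute tolerance of 1e-5 mm, i.e. 10 nm -- negligible next to
# ~1 mm MRI voxels.
assert_allclose(refactored, reference, rtol=0, atol=1e-5)
print("within tolerance")
```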
@mharms thoughts on simplifying the gradunwarp code in future releases in a way that slightly changes the produced warpfield displacements by up to 30 nanometers? I think that should be fine, but I'm more worried that Siemens may not be satisfied with how the dummy test coefficients file was constructed (if it is insufficiently different from the real one used as a starting point, such that it might still reveal things about the construction of their coils).
Yes, I agree that the tolerances being discussed here seem fine, and the differences being discussed are not relevant from a practical perspective. I can't comment on the "test" coefficient file: neither whether it is "insufficiently different", nor conversely whether it is so different from the specification of a real gradient coil that the resulting regression test is actually not meaningful.
Regarding the coefficient file, I guess we could try to make one from scratch. While I was trying to reimplement the coeff parser (https://github.com/Washington-University/gradunwarp/blob/master/gradunwarp/core/coeffs.py#L119) to be more robust (using regexps to read lines), I was really puzzled by these lines, https://github.com/Washington-University/gradunwarp/blob/master/gradunwarp/core/coeffs.py#L138-L144, which skip a number of coefficients in the dummy file (which has the same structure as the one from our scanner). Thanks for your insight!
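As an illustration of the regexp-based parsing idea, a minimal sketch; the line format assumed here (an `A(degree, order) value` entry) is hypothetical and may not match the real Siemens coefficient layout:

```python
import re

# Hypothetical line format assumed for illustration, e.g.:
#   "  1 A( 3, 2)  -0.05283  x"
# (index, table A/B, degree, order, coefficient value). The real
# Siemens coefficient layout may differ; this only shows the regexp
# approach being described.
COEF_RE = re.compile(
    r"^\s*\d+\s+([AB])\s*\(\s*(\d+)\s*,\s*(\d+)\s*\)\s+(-?\d+\.\d+)"
)

def parse_line(line):
    """Return (table, degree, order, value) or None for non-matching lines."""
    m = COEF_RE.match(line)
    if m is None:
        return None
    table, degree, order, value = m.groups()
    return table, int(degree), int(order), float(value)

print(parse_line("  1 A( 3, 2)  -0.05283  x"))
```

A regexp keeps malformed lines from being silently misread: anything that does not match is rejected instead of being split on whitespace and misinterpreted.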
Force-pushed from 1ae6650 to ccfcead.
I wrote a dummier grad file with far fewer coefficients (a lower spherical-harmonic order, I guess); it is faster to test as well.
That does look much less related to an actual scanner, thanks. The history rewrite appears to have diverged at 1.2.1 and made copies of many master commits, while current master does not appear to have the coefficients file in question (and no other commit in any branch appears to create any …
Co-authored-by: Chris Markiewicz <[email protected]>
…unction when possible
Force-pushed from ccfcead to 4a3c420.
rebase done.
Rebase looks proper, and the code looks good at a glance. Looks like setup.py is still unhappy; it seems to contain remnants of the C code modules.
Looks ready to merge, but I'll give it a day for any other opinions.
Fixes #6
The "optimized" C code was not performing any faster than the numpy/scipy functions.
TODO: