
Raytracer does not use scipy.integrate.quad for attenuation #722

Open · wants to merge 4 commits into base: develop

Conversation

christophwelling (Collaborator)

Using scipy.integrate.quad to calculate the attenuation can occasionally cause problems.
With this PR, the integral is by default calculated using a simple Riemann sum over a fixed number of points. While this is a bit less precise, the difference is far smaller than the uncertainty on the attenuation length itself.
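
A minimal sketch of the idea, assuming a hypothetical attenuation-length profile and straight path parametrization (neither is the actual NuRadioMC code), comparing scipy.integrate.quad with a fixed-step Riemann sum:

    import numpy as np
    from scipy import integrate

    def attenuation_length(z):
        # hypothetical stand-in for an ice-model attenuation length in meters (z < 0 below the surface)
        return 1000. + 0.4 * z

    def path_depth(s):
        # hypothetical straight, down-going path: depth as a function of path length s
        return -0.5 * s

    def attenuation_quad(s_max):
        # integral of ds / L_att(z(s)) along the path using scipy.integrate.quad
        integral, _ = integrate.quad(lambda s: 1. / attenuation_length(path_depth(s)), 0., s_max)
        return np.exp(-integral)

    def attenuation_riemann(s_max, n_points=100):
        # same integral as a midpoint Riemann sum over a fixed number of points
        ds = s_max / n_points
        s = (np.arange(n_points) + 0.5) * ds
        integral = np.sum(1. / attenuation_length(path_depth(s))) * ds
        return np.exp(-integral)

    # for a smooth integrand like this the two methods agree to well below 1%
    print(attenuation_quad(2000.), attenuation_riemann(2000.))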

Comment on lines 130 to 131
(True or False) if you want to use the optimization. (Default: None, i.e., use optimization if ice model is
listed in speedup_attenuation_models)
Collaborator

The docstring needs to be adjusted.

Comment on lines +150 to 153
if overwrite_speedup is None:
    self._use_optimized_calculation = True
else:
    self._use_optimized_calculation = overwrite_speedup
Collaborator

This could be written as a one-liner: self._use_optimized_calculation = True if overwrite_speedup is None else overwrite_speedup

@fschlueter (Collaborator)

@christophwelling can you finish that?

@sjoerd-bouma (Collaborator)

I don't want to get too involved with this. Simplifying and unifying the way the attenuation length is calculated in Python is probably a good idea, and I think this should get merged once the docstring is updated, but I'm not aware of any cases where the current implementation causes errors.

The 'bigger' problem is in the C++ backend (#541), where the integration fails silently because of much stricter convergence limits than in the Python case. I believe @EnriqueHuesca has a solution already (other than just relaxing the convergence requirement), which I hope he will contribute soon : )

Base automatically changed from refactor to develop November 18, 2024 09:58
@cg-laser (Collaborator)

Hi everyone, I just merged #660, which makes the Python version (using numba) as fast as the C++ implementation (except for the attenuation length calculation).
My suggestion is that we make Python/numba plus the numerical attenuation length calculation (this PR) the new default.
We could even retire the C++ implementation, which has always had more problems finding solutions than the Python implementation.

However, there is one important thing that we need to check for! @christophwelling argued that an accuracy of 10% would be acceptable because the uncertainty on the attenuation length is not better. This is true, BUT we need a much, much smaller relative error. When we calculate the attenuation for two rays to two nearby antennas, the results should be identical. In general, for one event, the relative error between the paths to different antennas needs to be at the <1% level, because we use the amplitude differences in a reconstruction to determine the viewing angle and vertex distance.

@christophwelling can you merge the recent develop into your branch and implement such a test?
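
Roughly what such a check could look like; the function and argument names below are hypothetical placeholders rather than the actual raytracer API or test code:

    import numpy as np

    def check_channel_ratio(att_quad_ch1, att_quad_ch2, att_sum_ch1, att_sum_ch2, rtol=0.01):
        # att_*_ch1/ch2: attenuation factors (per frequency) for the paths to two nearby
        # antennas, computed once with scipy.integrate.quad and once with the Riemann sum.
        # The reconstruction uses amplitude differences between channels, so the
        # channel-to-channel ratio must agree between the two methods to better than 1%.
        ratio_quad = np.asarray(att_quad_ch1) / np.asarray(att_quad_ch2)
        ratio_sum = np.asarray(att_sum_ch1) / np.asarray(att_sum_ch2)
        np.testing.assert_allclose(ratio_sum, ratio_quad, rtol=rtol)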

@christophwelling (Collaborator, Author)

[Figure: attenuation_integration]
I already checked this when I implemented it. The difference between the integration methods is much smaller than 1%. The MB model looks a bit weird, but that is just because the attenuation length parametrization approaches 0 when you go too deep. Since this is much deeper than the MB ice, it should never happen anyway.

@christophwelling (Collaborator, Author)

I committed a fix for the error in the raytracing test. The test still fails, but I think that is just because it now defaults to a different way of calculating the integral.
If I remove the part where the raytracer integrates the attenuation around the path inflection point using quad, the fraction of attenuation lengths deviating from the reference actually drops to 0.5%, so I think this is just because the C++ raytracer does not use quad to calculate the attenuation integral.
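
For reference, the kind of comparison described here, sketched with hypothetical input arrays (not the actual test code):

    import numpy as np

    def fraction_deviating(attenuation, reference, rtol=1e-3):
        # fraction of attenuation values that deviate from the reference
        # by more than the relative tolerance rtol
        attenuation = np.asarray(attenuation)
        reference = np.asarray(reference)
        rel_dev = np.abs(attenuation - reference) / reference
        return np.mean(rel_dev > rtol)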
