adjust tolerances for coron registration #8717
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##           master    #8717   +/-  ##
=======================================
  Coverage   61.75%   61.76%
=======================================
  Files         377      377
  Lines       38749    38743       -6
=======================================
- Hits        23931    23929       -2
+ Misses      14818    14814       -4

☔ View full report in Codecov by Sentry.
Force-pushed from 9fa9d38 to ac82d57
I wonder if it might be preferable to select an image region for the comparison that excludes the very noisy (and scientifically unimportant) edges; that way we can keep the tolerances lower.
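As a hedged sketch of that suggestion (the `trim` width, tolerance values, and arrays here are illustrative, not values from this PR), excluding the edges could look like:

```python
import numpy as np

# Hypothetical sketch: compare two images while ignoring a noisy border.
# `trim` and the tolerances are illustrative choices, not PR values.
def compare_interior(a, b, trim=16, rtol=1e-5, atol=1e-7):
    interior_a = a[trim:-trim, trim:-trim]
    interior_b = b[trim:-trim, trim:-trim]
    return np.allclose(interior_a, interior_b, rtol=rtol, atol=atol)

rng = np.random.default_rng(0)
img = rng.normal(size=(128, 128)).astype(np.float32)
other = img.copy()
other[0, 0] += 10.0  # a large difference at the edge is excluded

print(compare_interior(img, other))  # True: edge pixel is outside the region
```

A full-frame `np.allclose(img, other)` would fail here, which is the point: trimming the border lets the interior comparison pass at tighter tolerances.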
I think there are multiple issues here:
The psfmask documentation describes the
However, looking at one of the nircam reference files: https://jwst-crds.stsci.edu/browse/jwst_nircam_psfmask_0212.fits (see jwst/jwst/coron/imageregistration.py, line 78 at be36c54)

For the above reasons I attempted to avoid algorithmic changes in this PR.
Force-pushed from ac82d57 to 26f40c7
calwebb_coron3
--------------

- Tighten tolerance of psf alignment. [#8717]
The tolerances for the fit are tighter: `ftol` and `xtol` for `leastsq` default to 1.49012e-08, and this PR tightens them to 1e-15.
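As a minimal sketch of what tightening `ftol`/`xtol` means (the residual function and data here are hypothetical, not the actual image-registration fit):

```python
import numpy as np
from scipy.optimize import leastsq

# Illustrative data: fit a line y = a*x + b to stand in for the real
# image-registration objective.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0

def residuals(params):
    a, b = params
    return a * x + b - y

# leastsq defaults to ftol=xtol=1.49012e-08; tightening to 1e-15 makes the
# optimizer iterate closer to machine precision, which can reduce small
# platform-dependent differences in the fitted parameters.
params, ier = leastsq(residuals, x0=[0.0, 0.0], ftol=1e-15, xtol=1e-15)
print(params)  # approximately [2.0, 1.0]
```

The trade-off is extra iterations near convergence, which is cheap here relative to the cross-platform reproducibility it buys.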
The tolerances for the tests are loosened, but as that's a test change I didn't note it in the changelog.

Yes, my mistake. I was thinking of the tolerances for the comparisons.

Would it be helpful to reword the changelog? Maybe something like:

Probably not worth the effort, unless you feel the original entry was inadequate.

This will need an okified regtest run. I see @tapastro is running one now and I'll try to queue one up after that finishes.

Regtest run for okifying started here: https://plwishmaster.stsci.edu:8081/job/RT/job/JWST/3038/

The coron3 differences in the above linked run were okified. The unrelated differences were skipped.
coron3 differences are no longer present on github actions regtest run: |
This PR attempts to reduce the differences for coron3 results when run on different systems.
The changes are:

- conversion of `float32` to `float64` (and back from `float64` to `float32`) in image registration
- tighter tolerances for the `leastsq` fit in image registration

I ran a (limited) jenkins run:
https://plwishmaster.stsci.edu:8081/blue/organizations/jenkins/RT%2FJWST-Developers-Pull-Requests/detail/JWST-Developers-Pull-Requests/1666/pipeline
and a (limited) github actions run:
https://github.com/spacetelescope/RegressionTests/actions/runs/10511902948/attempts/1
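The float32/float64 round trip in image registration can be sketched as follows (the array and variable names are hypothetical stand-ins for the PSF image planes, not code from this PR):

```python
import numpy as np

# Hypothetical stand-in for a coron image plane stored as float32.
image32 = np.arange(16, dtype=np.float32).reshape(4, 4)

# Promote to float64 so the registration arithmetic and leastsq fit run
# at double precision, which behaves more consistently across platforms...
image64 = image32.astype(np.float64)
# ... (alignment / fitting would operate on image64 here) ...

# ...then cast the result back to float32 for the output product.
aligned32 = image64.astype(np.float32)
assert aligned32.dtype == np.float32
```

Doing the intermediate work in double precision avoids accumulating single-precision rounding that can differ between compilers and CPUs, while the on-disk product stays float32.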
comparisons of the results using the above tolerances pass. However, the regtests will fail comparison against the current truth files. I suggest that we:
I believe these should pass, although it is possible that follow-up changes will be needed, as it's difficult to fully test this PR without generating new truth files. If it's preferable, I could generate new truth files, put them in a different location on artifactory, and run the jenkins and github actions jobs using those new truth files to judge the changes in this PR before the merge.
Full regtest runs for:

Ignoring all the `miri_lrs` and `mtimage` failures and the `miri_image` failure (which all look unrelated), the coron3 tests that fail on jenkins also fail on github actions.

Pulling down these files and comparing them with tolerances matching the ones in this PR shows no differences between jenkins and github actions.
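A sketch of that kind of cross-system comparison (the arrays, the simulated noise, and the rtol/atol values are illustrative, not the exact tolerances used by the regtests):

```python
import numpy as np

rng = np.random.default_rng(1)
jenkins_result = rng.normal(size=(64, 64)).astype(np.float32)
# Simulate tiny cross-platform numerical noise on the other system.
gha_result = jenkins_result + np.float32(1e-6)

# Illustrative tolerances; the regtest comparisons take similar
# relative/absolute tolerance parameters.
print(np.allclose(jenkins_result, gha_result, rtol=1e-4, atol=1e-5))  # True
```

With the perturbation well under `atol`, the two "systems" compare equal; shrinking `atol` below the injected noise would make the same comparison fail.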
Most differences with the truth files are small (<2%), except for coron3_product[i2d], which shows 46.80% different. Looking at the miri i2d file compared to the truth, the differences are mostly small, with the largest differences concentrated in the corner of the image (see the few pixels near the upper left), which corresponds with extreme values present in both the truth and the jenkins results (with this PR).

Looking at only the central 50-200 x 50-200 pixels, the differences are much smaller but do show some structure.
Checklist for PR authors (skip items if you don't have permissions or they are not applicable)

- added an entry to CHANGES.rst within the relevant release section

How to run regression tests on a PR