We are using swarp as part of the gleam-x processing pipeline to mosaic a large collection of observations together. Each individual image is 8000x8000 pixels and covers a sizeable fraction of the observable sky. We apply an inverse-variance weighting scheme while coadding to ensure that we reach optimal sensitivity in the final mosaic, i.e. we are using the weighted co-add mode. The weight maps are ultimately a realisation of the MWA primary beam and can have very small values towards the edge of some images. These turn into incredibly large numbers when we compute the 1/(RMS*PB)**2 weights that we provide to swarp.
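To make the scale of the problem concrete, here is a minimal, standalone C sketch (not pipeline code; the RMS and primary-beam values are purely illustrative) of how the 1/(RMS*PB)**2 weight blows up where the primary-beam response is tiny:

```c
#include <stdio.h>

/* Illustrative only: inverse-variance weight 1/(RMS*PB)^2 for a pixel
   near the pointing centre versus one far out in the primary beam. */
int main(void)
{
    double rms = 0.01;        /* assumed image RMS (arbitrary units) */
    double pb_centre = 0.9;   /* primary-beam response near the centre */
    double pb_edge = 1e-4;    /* primary-beam response towards the edge */

    double w_centre = 1.0 / ((rms * pb_centre) * (rms * pb_centre));
    double w_edge   = 1.0 / ((rms * pb_edge) * (rms * pb_edge));

    printf("weight near centre: %g\n", w_centre);  /* ~1.2e+04 */
    printf("weight near edge:   %g\n", w_edge);    /* 1e+12, and growing as PB -> 0 */
    return 0;
}
```

With primary-beam values a few orders of magnitude smaller still, the weight easily climbs past 1e+30.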
It looks like, in the weighted co-add mode, an entire pixel in the final co-added image is flagged as invalid if a single image out of the entire set of to-be-co-added images has a weight value at that pixel above some threshold. The exact line is swarp/src/coadd.c, line 1295 (commit 5c927e8). Following the chain to find where coadd_wthresh is defined, I believe it is ultimately set to BIG in swarp/src/fits/fitscat_defs.h, line 56 (commit 5c927e8).
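To paraphrase the behaviour described above (this is my reading of the logic, not the actual SWarp source), the test effectively amounts to something like:

```c
/* Sketch of my understanding of the weighted co-add threshold test in
   src/coadd.c, not the real SWarp code: a single input weight above
   coadd_wthresh (ultimately BIG) invalidates the output pixel. */
#define BIG 1e+30   /* value taken from fits/fitscat_defs.h */

static int output_pixel_is_valid(const float *weights, int nimages,
                                 float coadd_wthresh)
{
    for (int i = 0; i < nimages; i++)
        if (weights[i] > coadd_wthresh)  /* one huge weight is enough */
            return 0;                    /* whole stacked pixel flagged */
    return 1;
}
/* e.g. output_pixel_is_valid(pixel_weights, nimages, BIG) */
```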
In my little tweaked version I have bumped BIG up to 1e+35, and the flagged pixels in our co-added images are as expected.
Curious to know whether there is any magic behind the original 1e+30 value of BIG, and whether there are any negative consequences to such a simple change, such as overflows and the like. I don't see anything myself, but I am not terribly familiar with the code base.
Thank you for the heads up. This BIG value is actually a lazy proxy for "infinite". I guess it should work as long as BIG does not exceed 1e+38. Clearly this needs to be refactored to be more reliable with very large dynamic ranges.
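For context on the 1e+38 figure: if BIG (and the weights compared against it) live in single-precision floats, which I am assuming here rather than asserting about SWarp's internal pixel type, then anything above FLT_MAX (about 3.4e+38) is not representable and overflows to infinity. A quick standalone check:

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Assumes the weights/threshold are held as 32-bit floats; in that case
   the practical ceiling for BIG is FLT_MAX (~3.4e+38). */
int main(void)
{
    float big_old = 1e+30f;   /* original BIG */
    float big_new = 1e+35f;   /* bumped-up BIG from this issue */

    printf("FLT_MAX       = %g\n", (double)FLT_MAX);           /* ~3.40282e+38 */
    printf("1e+30 finite? = %d\n", isfinite(big_old));          /* 1 */
    printf("1e+35 finite? = %d\n", isfinite(big_new));          /* 1 */
    printf("FLT_MAX * 10  = %g\n", (double)(FLT_MAX * 10.0f));  /* inf */
    return 0;
}
```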