
Added support for stacked images #392

Merged
merged 14 commits into main on Aug 28, 2024

Conversation

cmccully
Collaborator

This PR adds support for images that were stacked at the site. This observing mode was introduced to limit our exposure times so that the 0.4m cameras can self-guide (using the short exposures) but only ship back a single image. The images are assumed to be 32-bit integers and are only added pixel by pixel (no other processing is done going from a single frame to a stacked frame).

Nearly all of the processing happens at the readout level. Therefore only the bias frames and noise propagation need to be updated.
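
For intuition, the stacking model assumed here can be sketched as a toy example (hypothetical bias and read-noise values, not the banzai code itself): sub-exposures are summed pixel by pixel, so the stacked frame carries n times the per-frame bias level but only sqrt(n) times the per-frame read noise.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sub_exposures = 4
bias_level = 100.0  # per-sub-exposure bias level in ADU (hypothetical)
read_noise = 10.0   # per-sub-exposure read noise in ADU (hypothetical)

# Each sub-exposure is bias plus read noise; the site software only
# sums them pixel by pixel into a single 32-bit integer frame.
subs = rng.normal(bias_level, read_noise, size=(n_sub_exposures, 500, 500))
stacked = subs.sum(axis=0).astype(np.int32)

# The stacked frame has n times the bias level of a single frame ...
print(stacked.mean())  # close to n_sub_exposures * bias_level = 400
# ... but only sqrt(n) times the read noise, because independent
# noise adds in quadrature.
print(stacked.std())   # close to sqrt(n_sub_exposures) * read_noise = 20
```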

@cmccully cmccully requested review from mgdaily, drhaz and jchate6 August 16, 2024 17:51
@cmccully
Collaborator Author

This turned out to open a huge infrastructure can of worms so that is included here.

@@ -65,7 +65,7 @@ def __init__(self, runtime_context):
     def do_stage(self, image):
         bias_level = stats.sigma_clipped_mean(image.data, 3.5, mask=image.mask)
         image -= bias_level
-        image.meta['BIASLVL'] = bias_level, 'Bias level that was removed after overscan'
+        image.meta['BIASLVL'] = bias_level / image.n_sub_exposures, 'Bias level that was removed after overscan'
Contributor
Here it is divided by image.n_sub_exposures, but in line 34 the amount subtracted is multiplied by n_sub_exposures.
Is this correct?
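
One way to check the two directions of the scaling (a numeric sketch with hypothetical values, not the banzai code): the overscan stage measures the bias level on the stacked frame itself, where it is already n times the per-sub-exposure level, so dividing by n records the per-sub-exposure value; the master-bias stage stores a per-sub-exposure level, so removing it from a stacked frame multiplies by n.

```python
n = 4        # sub-exposures per stacked frame (hypothetical)
b = 100.0    # true per-sub-exposure bias level in ADU (hypothetical)

# Overscan stage: the sigma-clipped mean is taken on the stacked frame,
# so the measured level already contains n copies of the single-frame bias.
measured_on_stack = n * b
biaslvl_recorded = measured_on_stack / n  # divide to record per-sub-exposure
assert biaslvl_recorded == b

# Master-bias stage: the master stores the per-sub-exposure level, so
# applying it to a stacked frame scales it back up by n.
subtracted = b * n                        # multiply when applying
assert subtracted == measured_on_stack
```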

Contributor

@mgdaily mgdaily left a comment

Mostly looks good, just one question. Also we might want to write some unit tests with some simple synthetic data to make sure the math works out as we want when dealing with multiple exposures.

@@ -31,9 +31,9 @@ def calibration_type(self):
         return 'bias'

     def apply_master_calibration(self, image, master_calibration_image):
-        image -= master_calibration_image.bias_level
+        image -= master_calibration_image.bias_level * image.n_sub_exposures
         image.meta['BIASLVL'] = master_calibration_image.bias_level, 'Bias level that was removed after overscan'
Contributor

Should the BIASLVL header be changed to reflect the multiplicative factor of n_sub_exposures?

Contributor

Also, regardless of n_sub_exposures, why are we subtracting both bias.level and bias.image, when in line 19, in make_master_calibration, the master_image is apparently not level-subtracted?

@jchate6 jchate6 left a comment

just a couple things.

    @property
    def n_sub_exposures(self):
        n_exposures = self.meta.get('NSUBREAD', 1)
        if str(n_exposures).lower() in ['n/a', 'unknown', 'none', '']:

I'm not sure what the full list of possible NSUBREAD values can be, but should this have a test for values that cannot be converted to int?

Contributor

Good idea to put in a float like 1.5. Now what? It should be rejected as an invalid value.

Collaborator Author

Right now, if there is something pathological in the data like NSUBREAD=1.5, we will crash out and raise an error, which I think is the right behavior. The cases handled here are the ones where we assume we are OK and can safely treat things as 1 sub-read.
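
The behavior described above can be sketched as a standalone function (an illustration, not the banzai implementation; a plain dict stands in for the FITS header in image.meta):

```python
def n_sub_exposures(meta):
    # Placeholder values from the site software are safely treated as a
    # single sub-exposure ...
    n_exposures = meta.get('NSUBREAD', 1)
    if str(n_exposures).lower() in ['n/a', 'unknown', 'none', '']:
        n_exposures = 1
    # ... while anything else must parse as an integer: a pathological
    # value like '1.5' raises ValueError, crashing the pipeline on purpose.
    return int(n_exposures)

print(n_sub_exposures({}))                   # 1 (keyword missing)
print(n_sub_exposures({'NSUBREAD': 'N/A'}))  # 1 (placeholder)
print(n_sub_exposures({'NSUBREAD': 4}))      # 4
```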

@@ -159,6 +159,9 @@ def run_check_if_stacked_calibrations_are_in_db(raw_file_pattern, calibration_ty


 def observation_portal_side_effect(*args, **kwargs):
     # To produce the mock observation portal resonse, we need to modify the response from

Typo: "resonse" should be "response".

Contributor

@drhaz drhaz left a comment

I have some issues marked where I am not 100% sure the code is correct; happy to discuss or be proven wrong.


banzai/data.py Outdated
@@ -132,7 +132,7 @@ def __init__(self, data: Union[np.array, Table], meta: fits.Header,
                  mask: np.array = None, name: str = '', uncertainty: np.array = None, memmap=True):
         super().__init__(data=data, meta=meta, mask=mask, name=name, memmap=memmap)
         if uncertainty is None:
-            uncertainty = self.read_noise * np.ones(data.shape, dtype=data.dtype) / self.gain
+            uncertainty = self.read_noise * self.n_sub_exposures * np.ones(data.shape, dtype=data.dtype) / self.gain
Contributor

It seems that we are assuming that the site software, after co-adding images, is not messing with the read-noise keyword. Thinking more about this: this is a possibly messy interdependence between data acquisition and processing that needs future coordination between those two processes.

If I read the code correctly:

  • If there is no per-pixel noise mask, we initialize the per-pixel noise map with the read noise.
  • When e.g. c = a + b, then error(c) = sqrt(error(a)**2 + error(b)**2). I.e., should we not multiply by sqrt(self.n_sub_exposures)?
  • Then also, if a per-pixel noise image exists, should that not also be multiplied here?
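
The quadrature argument in the second bullet can be checked numerically (a sketch with hypothetical numbers): summing n independent reads, each with read noise sigma, gives a combined noise of sqrt(n) * sigma rather than n * sigma.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0   # per-read noise in ADU (hypothetical)
n = 4          # number of co-added sub-exposures

# Draw n independent zero-mean reads per pixel and sum them.
reads = rng.normal(0.0, sigma, size=(n, 200_000))
summed = reads.sum(axis=0)

# The spread of the sum matches sqrt(n) * sigma, not n * sigma.
print(summed.std())          # close to 20.0
print(math.sqrt(n) * sigma)  # 20.0
print(n * sigma)             # 40.0
```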

Collaborator Author

#392 (comment) @drhaz Yes, this is indeed messy. This is due to legacy cameras that did not have an overscan region. For old kb cameras we estimate the mean bias level using the whole image. For cameras that have one, we include the overscan. The original thinking was that if there is an overscan, then bias_level is going to be very close to zero, so just don't worry about it. Things get messier when thinking about stacked images though, so I can think about that a little more.

Contributor

@drhaz drhaz left a comment

The science logic seems right; the infrastructure I cannot comment on.

@cmccully cmccully merged commit 96f113b into main Aug 28, 2024
7 checks passed