Commit

Merge branch 'master' into JP-3566_pixel_replace
hbushouse authored Apr 22, 2024
2 parents bd6a22d + d57f633 commit 200eb35
Showing 28 changed files with 244 additions and 248 deletions.
68 changes: 65 additions & 3 deletions CHANGES.rst
@@ -1,12 +1,61 @@
1.14.1 (unreleased)
===================

ami
---

- Replaced deprecated ``np.mat()`` with ``np.asmatrix()``. [#8415]

assign_wcs
----------

- Changed the MIRI LRS WCS code to handle the tilted trace via a centroid shift as a
function of pixel row rather than a rotation of the pixel coordinates, so that
iso-lambda now lies along pixel rows (see the sketch below). [#8411]
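
A minimal sketch of the new approach (mirroring the ``lrs_distortion`` changes to
``jwst/assign_wcs/miri.py`` later in this diff, but with made-up trace arrays): the trace
centroid shift is fit as a smooth function of detector row and tabulated as a model::

    import numpy as np
    from astropy.modeling import models
    from scipy.interpolate import UnivariateSpline

    # Hypothetical trace centroids in subarray pixels; the real values come from
    # the LRS specwcs reference file (CDP table).
    ycen = np.linspace(10.0, 380.0, 40)
    xcen = 30.0 + 0.01 * (ycen - 200.0)   # slightly tilted trace
    x_ref = 30.0                          # trace x at the reference point

    # Fit dx(y) with a smoothing spline; inputs must be sorted by increasing y.
    order = np.argsort(ycen)
    spl = UnivariateSpline(ycen[order], (xcen - x_ref)[order], s=0.002)

    # Tabulate the shift so it can be chained into the WCS pipeline.  Because the
    # correction is a pure x shift, pixel rows (iso-lambda) are left untouched.
    dxmodel = models.Tabular1D(lookup_table=spl(ycen), points=ycen,
                               bounds_error=False, fill_value=np.nan)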

associations
------------

- Ensure NRS IFU exposures don't make a spec2 association for grating/filter combinations
where the nrs2 detector isn't illuminated. Remove duplicate entries in ``mkpool``. [#8395]

- Match NIRSpec imprint observations to science exposures on mosaic tile location
and dither pointing, ``MOSTILNO`` and ``DITHPTIN`` (see the sketch below). [#8410]
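
A toy illustration of the imprint-matching rule described above (plain dictionaries stand
in for the real exposure items; the actual logic lives in ``DMSAttrConstraint`` objects in
``rules_level2b.py``, shown later in this diff)::

    # An imprint exposure is only associated with a science exposure when the
    # MOSTILNO (mosaic tile) and DITHPTIN (dither pointing) keywords agree.
    science = {"MOSTILNO": 3, "DITHPTIN": 1}
    imprints = [
        {"MOSTILNO": 3, "DITHPTIN": 1},   # same tile and pointing -> kept
        {"MOSTILNO": 2, "DITHPTIN": 1},   # different tile -> dropped
    ]

    matched = [
        imp for imp in imprints
        if imp["MOSTILNO"] == science["MOSTILNO"]
        and imp["DITHPTIN"] == science["DITHPTIN"]
    ]
    print(matched)   # [{'MOSTILNO': 3, 'DITHPTIN': 1}]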

documentation
-------------

- Added docs for the NIRSpec MSA metadata file to the data products area of RTD.
[#8399]

extract_1d
----------

- Added a hook to bypass the ``extract_1d`` step, with a warning, for NIRISS SOSS data
taken in the F277W filter (see the sketch after this list). [#8275]

- Replaced deprecated ``np.trapz`` with ``np.trapezoid()``. [#8415]
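
A schematic of the bypass pattern for the F277W hook (a simplified sketch, not the step's
actual code; attribute names follow the ``jwst`` datamodel conventions seen in the
``extract_1d_step.py`` diff later on)::

    import logging

    log = logging.getLogger(__name__)

    def maybe_skip_soss_extraction(input_model):
        """Warn and mark extract_1d as SKIPPED for unsupported SOSS filters."""
        filt = input_model.meta.instrument.filter
        if filt != 'CLEAR':
            log.warning('SOSS extraction is not supported for filter %s; '
                        'extract_1d will be skipped.', filt)
            input_model.meta.cal_step.extract_1d = 'SKIPPED'
            return True    # caller should return the model unchanged
        return False       # proceed with the normal extraction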

flat_field
----------

- Updated the flat-field code for NIRSpec IFU data to ensure that SCI and ERR are set to NaN
and DQ has the DO_NOT_USE flag set outside the footprint of the IFU slices
(see the sketch below). [#8385]
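
A minimal NumPy sketch of that behavior; the footprint mask here is hypothetical, while in
the pipeline it comes from the IFU slice WCS maps::

    import numpy as np

    DO_NOT_USE = 1   # value of the DO_NOT_USE bit in the JWST DQ definitions

    sci = np.ones((2048, 2048), dtype=np.float32)
    err = np.ones((2048, 2048), dtype=np.float32)
    dq = np.zeros((2048, 2048), dtype=np.uint32)

    # Hypothetical footprint: True inside the illuminated IFU slices.
    in_footprint = np.zeros_like(dq, dtype=bool)
    in_footprint[:, 500:1500] = True

    outside = ~in_footprint
    sci[outside] = np.nan
    err[outside] = np.nan
    dq[outside] |= DO_NOT_USE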

general
-------

- Removed deprecated stdatamodels model types ``DrizProductModel``,
``MIRIRampModel``, and ``MultiProductModel``. [#8388]

outlier_detection
-----------------

- Added the association ID to ``outlier_i2d`` intermediate filenames (see the naming sketch
below). [#8418]

- Pass the ``weight_type`` parameter to all resampling function calls so that
the default weighting can be overridden by the input step parameter. [#8290]
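
A small sketch of the naming behavior described above (the helper is hypothetical; the docs
change to ``outlier_detection.rst`` in this diff describes the actual rule)::

    def outlier_i2d_name(rootname, asn_id=None):
        """Insert the association ID into the intermediate resampled filename when present."""
        if asn_id:
            return f"{rootname}_{asn_id}_outlier_i2d.fits"
        return f"{rootname}_outlier_i2d.fits"

    print(outlier_i2d_name("jw01234-o001_t001_nircam_clear-f200w", "o001"))
    # jw01234-o001_t001_nircam_clear-f200w_o001_outlier_i2d.fits
    print(outlier_i2d_name("jw01234-o001_t001_nircam_clear-f200w"))
    # jw01234-o001_t001_nircam_clear-f200w_outlier_i2d.fits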

pipeline
--------

@@ -20,11 +69,24 @@ ramp_fitting
to use uint16 instead of uint8, in order to avoid potential
overflow/wraparound problems, as illustrated below. [#8377]
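
A quick NumPy illustration of the wraparound problem the entry above guards against::

    import numpy as np

    counts8 = np.array([250, 251, 252], dtype=np.uint8)
    counts8 += 10                 # uint8 silently wraps at 256
    print(counts8)                # [4 5 6]  -- wrong

    counts16 = np.array([250, 251, 252], dtype=np.uint16)
    counts16 += 10
    print(counts16)               # [260 261 262] -- safe up to 65535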

resample
--------

- Removed the sleep call in median combination that was added in #8305, since it did not
address the issue in operations. [#8419]

residual_fringe
---------------

- Use DQ plane to exclude pixels marked as DO_NOT_USE in correction. [#8381]

tweakreg
--------

- Output source catalog file now respects ``output_dir`` parameter. [#8386]

- Improved how an image group name is determined. [#8426]


1.14.0 (2024-03-29)
===================
@@ -40,7 +102,7 @@ ami
- Additional optional input arguments for greater user processing flexibility.
See documentation for details. [#7862]

- Bad pixel correction applied to data using new NRM reference file to calculate
- Bad pixel correction applied to data using new NRM reference file to calculate
complex visibility support (M. Ireland method implemented by J. Kammerer). [#7862]

- Make ``AmiAnalyze`` and ``AmiNormalize`` output conform to the OIFITS standard. [#7862]
@@ -75,7 +137,7 @@ charge_migration
as DO_NOT_USE. This group, and all subsequent groups, are then flagged as
CHARGELOSS and DO_NOT_USE. The four nearest pixel neighbors are then flagged
in the same group. [#8336]

- Added warning handler for expected NaN and inf clipping in the
``sigma_clip`` function. [#8320]

@@ -361,7 +423,7 @@ tweakreg

- Fixed a bug that caused failures instead of warnings when no GAIA sources
were found within the bounding box of the input image. [#8334]

- Suppress AstropyUserWarnings regarding NaNs in the input data. [#8320]

wfs_combine
4 changes: 3 additions & 1 deletion docs/jwst/outlier_detection/outlier_detection.rst
@@ -58,7 +58,9 @@ Specifically, this routine performs the following operations:
should be used when resampling to create the output mosaic. Any pixel with a
DQ value not included in this value (or list of values) will be ignored when
resampling.
* Resampled images will be written out to disk as `_outlier_i2d.fits` by default.
* Resampled images will be written out to disk with the suffix ``_<asn_id>_outlier_i2d.fits``
if the input model container has an association ID; otherwise the default suffix
``_outlier_i2d.fits`` will be used.
* **If resampling is turned off** through the use of the ``resample_data`` parameter,
a copy of the unrectified input images (as a ModelContainer)
will be used for subsequent processing.
6 changes: 3 additions & 3 deletions jwst/ami/leastsqnrm.py
@@ -615,7 +615,7 @@ def matrix_operations(img, model, flux=None, linfit=False, dqm=None):
from linearfit import linearfit

# dependent variables
M = np.mat(flatimg)
M = np.asmatrix(flatimg)

# photon noise
noise = np.sqrt(np.abs(flatimg))
@@ -625,9 +625,9 @@

# uniform weight
wy = weights
S = np.mat(np.diag(wy))
S = np.asmatrix(np.diag(wy))
# matrix of independent variables
C = np.mat(flatmodeltransp)
C = np.asmatrix(flatmodeltransp)

# initialize object
result = linearfit.LinearFit(M, S, C)
103 changes: 46 additions & 57 deletions jwst/assign_wcs/miri.py
@@ -289,57 +289,29 @@ def lrs_distortion(input_model, reference_files):

# Now deal with the fact that the spectral trace isn't perfectly up and down along detector.
# This information is contained in the xcenter/ycenter values in the CDP table, but we'll handle it
# as a simple rotation using a linear fit to this relation provided by the CDP.

z = np.polyfit(xcen, ycen, 1)
slope = 1. / z[0]
traceangle = np.arctan(slope) * 180. / np.pi # trace angle in degrees
rot = models.Rotation2D(traceangle) # Rotation model

# Now include this rotation in our overall transform
# First shift to a frame relative to the trace zeropoint, then apply the rotation
# to correct for the curved trace. End in a rotated frame relative to zero at the reference point
# and where yrot is aligned with the spectral trace)
xysubtoxyrot = models.Shift(-zero_point[0]) & models.Shift(-zero_point[1]) | rot

# Next shift back to the subarray frame, and then map to v2v3
xyrottov2v3 = models.Shift(zero_point[0]) & models.Shift(zero_point[1]) | det_to_v2v3

# The two models together
xysubtov2v3 = xysubtoxyrot | xyrottov2v3

# Work out the spectral component of the transform
# First compute the reference trace in the rotated-Y frame
xcenrot, ycenrot = rot(xcen, ycen)
# The input table of wavelengths isn't perfect, and the delta-wavelength
# steps show some unphysical behaviour
# Therefore fit with a spline for the ycenrot->wavelength transform
# Reverse vectors so that yinv is increasing (needed for spline fitting function)
yrev = ycenrot[::-1]
wrev = wavetab[::-1]
# as a simple x shift using a linear fit to this relation provided by the CDP.
# First convert the values in CDP table to subarray x/y
xcen_subarray = xcen + zero_point[0]
ycen_subarray = ycen + zero_point[1]

# Fit for X shift as a function of Y
# Spline fit with enforced smoothness
spl = UnivariateSpline(yrev, wrev, s=0.002)
# Evaluate the fit at the rotated-y reference points
wavereference = spl(yrev)
# wavereference now contains the wavelengths corresponding to regularly-sampled ycenrot, create the model
wavemodel = models.Tabular1D(lookup_table=wavereference, points=yrev, name='waveref',
spl = UnivariateSpline(ycen_subarray[::-1], xcen_subarray[::-1] - zero_point[0], s=0.002)
# Evaluate the fit at the y reference points
xshiftref = spl(ycen_subarray)
# This function will give slit dX as a function of Y subarray pixel value
dxmodel = models.Tabular1D(lookup_table=xshiftref, points=ycen_subarray, name='xshiftref',
bounds_error=False, fill_value=np.nan)

# Now construct the inverse spectral transform.
# First we need to create a spline going from wavereference -> ycenrot
spl2 = UnivariateSpline(wavereference[::-1], ycenrot, s=0.002)
# Make a uniform grid of wavelength points from min to max, sampled according
# to the minimum delta in the input table
dw = np.amin(np.absolute(np.diff(wavereference)))
wmin = np.amin(wavereference)
wmax = np.amax(wavereference)
wgrid = np.arange(wmin, wmax, dw)
# Evaluate the rotated y locations of the grid
ygrid = spl2(wgrid)
# ygrid now contains the rotated y pixel locations corresponding to
# regularly-sampled wavelengths, create the model
wavemodel.inverse = models.Tabular1D(lookup_table=ygrid, points=wgrid, name='waverefinv',
bounds_error=False, fill_value=np.nan)
# Fit for the wavelength as a function of Y
# Reverse the vectors so that yinv is increasing (needed for spline fitting function)
# Spline fit with enforced smoothness
spl = UnivariateSpline(ycen_subarray[::-1], wavetab[::-1], s=0.002)
# Evaluate the fit at the y reference points
wavereference = spl(ycen_subarray)
# This model will now give the wavelength corresponding to a given Y subarray pixel value
wavemodel = models.Tabular1D(lookup_table=wavereference, points=ycen_subarray, name='waveref',
bounds_error=False, fill_value=np.nan)

# Wavelength barycentric correction
try:
@@ -352,24 +324,41 @@ def lrs_distortion(input_model, reference_files):
wavemodel = wavemodel | velocity_corr
log.info("Applied Barycentric velocity correction : {}".format(velocity_corr[1].amplitude.value))

# What is the effective slit X as a function of subarray x,y?
xmodel = models.Mapping([0], n_inputs=2) - (models.Mapping([1], n_inputs=2) | dxmodel)
# What is the effective Y as a function of subarray x,y?
ymodel = models.Mapping([1], n_inputs=2)
# What is the effective XY as a function of subarray x,y?
xymodel = models.Mapping((0, 1, 0, 1)) | xmodel & ymodel
# Define a shift by the reference point and immediately back again
# This doesn't do anything effectively, but it stores the reference point for later use in pathloss
reftransform = models.Shift(-zero_point[0]) & models.Shift(-zero_point[1]) | models.Shift(+zero_point[0]) & models.Shift(+zero_point[1])
# Put the transforms together
xytov2v3 = reftransform | xymodel | det_to_v2v3

# Construct the full distortion model (xsub,ysub -> v2,v3,wavelength)
lrs_wav_model = xysubtoxyrot | models.Mapping([1], n_inputs=2) | wavemodel
dettotel = models.Mapping((0, 1, 0, 1)) | xysubtov2v3 & lrs_wav_model
lrs_wav_model = models.Mapping([1], n_inputs=2) | wavemodel
dettotel = models.Mapping((0, 1, 0, 1)) | xytov2v3 & lrs_wav_model

# Construct the inverse distortion model (v2,v3,wavelength -> xsub,ysub)
# Transform to get xrot from v2,v3
v2v3toxrot = subarray_dist.inverse | xysubtoxyrot | models.Mapping([0], n_inputs=2)
# wavemodel.inverse gives yrot from wavelength
# v2,v3,lambda -> xrot,yrot
xform1 = v2v3toxrot & wavemodel.inverse
dettotel.inverse = xform1 | xysubtoxyrot.inverse
# Go from v2,v3 to slit-x
v2v3_to_xdet = det_to_v2v3.inverse | models.Mapping([0], n_inputs=2)
# Go from lambda to real y
lam_to_y = wavemodel.inverse
# Go from slit-x and real y to real-x
backwards = models.Mapping([0], n_inputs=2) + (models.Mapping([1], n_inputs=2) | dxmodel)
# Go from v2,v3,lam to real x
aa = v2v3_to_xdet & lam_to_y | backwards
# Go from v2,v3,lam to real y
bb = models.Mapping([2], n_inputs=3) | lam_to_y
# Go from v2,v3,lam, to real x,y
dettotel.inverse = models.Mapping((0, 1, 2, 0, 1, 2)) | aa & bb

# Bounding box is the subarray bounding box, because we're assuming subarray coordinates passed in
dettotel.bounding_box = bb_sub[::-1]

return dettotel


def ifu(input_model, reference_files):
"""
The MIRI MRS WCS pipeline.
6 changes: 1 addition & 5 deletions jwst/associations/lib/rules_level2_base.py
@@ -774,11 +774,7 @@ def __init__(self):
DMSAttrConstraint(
name='imprint',
sources=['is_imprt']
),
DMSAttrConstraint(
name='mosaic_tile',
sources=['mostilno'],
),
)
],
reprocess_on_match=True,
work_over=ListCategory.EXISTING,
31 changes: 28 additions & 3 deletions jwst/associations/lib/rules_level2b.py
@@ -304,9 +304,29 @@ def __init__(self, *args, **kwargs):
),
Constraint(
[
# Allow either any background, or ensure imprint and science members
# match on mosaic tile number and dither pointing position.
Constraint_Background(),
Constraint_Imprint(),
Constraint_Single_Science(self.has_science, self.get_exposure_type),
Constraint(
[
Constraint(
[
Constraint_Imprint(),
Constraint_Single_Science(self.has_science, self.get_exposure_type),
],
reduce=Constraint.any
),
DMSAttrConstraint(
name='mostilno',
sources=['mostilno']
),
DMSAttrConstraint(
name='dithptin',
sources=['dithptin']
)
],
reduce=Constraint.all
),
],
reduce=Constraint.any
),
@@ -320,7 +340,12 @@
)
],
reduce=Constraint.notany
)
),
SimpleConstraint(
value=True,
test=lambda value, item: nrsifu_valid_detector(item),
force_unique=False
),
])

# Now check and continue initialization.
3 changes: 2 additions & 1 deletion jwst/associations/mkpool.py
@@ -106,9 +106,10 @@ def mkpool(data,

params = params.difference(IGNORE_KEYS)
params = [item.lower() for item in params]
# Make sure there are no duplicates
params = list(set(params))
params.sort()
defaults = {param: 'null' for param in params}

pool = AssociationPool(names=params, dtype=[object] * len(params))

# Set default values for user-settable non-header parameters
2 changes: 1 addition & 1 deletion jwst/associations/tests/test_exposerr.py
@@ -21,7 +21,7 @@ def test_exposerr():
pool=pool
)
asns = generated.associations
assert len(asns) > 1
assert len(asns) == 1
for asn in asns:
any_degraded = False
for product in asn['products']:
4 changes: 2 additions & 2 deletions jwst/datamodels/__init__.py
@@ -29,10 +29,10 @@
_jwst_models = ["ModelContainer", "SourceModelContainer"]

# Deprecated modules in stdatamodels
_deprecated_modules = ['drizproduct', 'multiprod', 'schema']
_deprecated_modules = ['schema']

# Deprecated models in stdatamodels
_deprecated_models = ['DrizProductModel', 'MultiProductModel', 'MIRIRampModel']
_deprecated_models = []

# Import all submodules from stdatamodels.jwst.datamodels
for attr in dir(stdatamodels.jwst.datamodels):
6 changes: 2 additions & 4 deletions jwst/extract_1d/extract_1d_step.py
@@ -398,11 +398,9 @@ def process(self, input):
if input_model.meta.instrument.filter == 'CLEAR':
self.log.info('Exposure is through the GR700XD + CLEAR (science).')
soss_filter = 'CLEAR'
elif input_model.meta.instrument.filter == 'F277W':
self.log.info('Exposure is through the GR700XD + F277W (calibration).')
soss_filter = 'F277W'
else:
self.log.error('The SOSS extraction is implemented for the CLEAR or F277W filters only.')
self.log.error('The SOSS extraction is implemented for the CLEAR filter only. '
f'Requested filter is {input_model.meta.instrument.filter}.')
self.log.error('extract_1d will be skipped.')
input_model.meta.cal_step.extract_1d = 'SKIPPED'
return input_model
2 changes: 1 addition & 1 deletion jwst/extract_1d/soss_extract/atoca.py
@@ -1485,7 +1485,7 @@ def bin_to_pixel(self, i_order=0, grid_pix=None, grid_f_k=None, convolved_spectr

# Integrate
integrand = fct_f_k(x_grid) * x_grid
bin_val.append(np.trapz(integrand, x_grid))
bin_val.append(np.trapezoid(integrand, x_grid))

# Convert to array and return with the pixel centers.
return pix_center, np.array(bin_val)