\n",
- "jupyter notebook\n",
- "```"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "***"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## Imports\n",
- "- *numpy* to handle array functions\n",
- "- *astropy.io fits* for accessing FITS files\n",
- "- *matplotlib.pyplot* for plotting data\n",
- "- *zipfile* for accessing zip files\n",
- "- *urllib.request* to access URL\n",
- "- *yaml* to create yaml files\n",
- "- *mirage* to simulate JWST data\n",
- "- *IPython.display Image* to display png files"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "slideshow": {
- "slide_type": "fragment"
- }
- },
- "outputs": [],
- "source": [
- "import glob\n",
- "import io\n",
- "import os\n",
- "import sys\n",
- "\n",
- "import numpy as np\n",
- "from astropy.io import fits\n",
- "import yaml\n",
- "import zipfile\n",
- "import urllib.request\n",
- "from IPython.display import Image\n",
- "\n",
- "from mirage import imaging_simulator\n",
- "from mirage.yaml import yaml_generator\n",
- "\n",
- "%matplotlib inline\n",
- "import matplotlib.pyplot as plt"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "\n",
- "\n",
- "**Note:** DO NOT UPGRADE PYSIAF AS INSTRUCTED BY THE ABOVE WARNING. \n",
- "Pysiaf 0.10.0 uses PRD release PRDOPSSOC-031, while APT 2020.4.1, which was used to create the xml and pointing files for this notebook, uses PRDOPSSOC-030. The mismatch between these two PRD versions causes incorrect placement of the PSF. Although the version of Mirage used in this notebook comes with pysiaf 0.10.0, it is downgraded to 0.9.0 via requirements.txt to resolve the placement issue.\n",
- "\n",
- ""
- ]
- },
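Since PSF placement depends on the installed pysiaf version, a defensive check near the top of the notebook can catch an accidental upgrade early. This is an illustrative sketch, not part of the original notebook; the 0.10.0 threshold comes from the note above.

```python
def parse_version(version_string):
    """Parse 'major.minor.patch' into a comparable tuple of ints."""
    return tuple(int(part) for part in version_string.split('.')[:3])


try:
    import pysiaf
    if parse_version(pysiaf.__version__) >= parse_version('0.10.0'):
        print('WARNING: pysiaf >= 0.10.0 detected; '
              'PSF placement may be incorrect (see note above).')
    else:
        print('pysiaf', pysiaf.__version__, 'OK')
except ImportError:
    print('pysiaf is not installed')
```

The numeric tuple comparison matters here: a plain string comparison would wrongly rank '0.10.0' below '0.9.0'.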
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Mirage is accompanied by a set of reference files that are used to construct simulated data. Their location is specified by the MIRAGE_DATA environment variable. These files include dark current ramps and cosmic ray and PSF libraries."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "editable": true,
- "slideshow": {
- "slide_type": ""
- },
- "tags": [
- "remove-cell"
- ]
- },
- "source": [
- "*Developer Note:*\n",
- "If you are outside STScI, install the Mirage reference data by following the instructions at https://mirage-data-simulator.readthedocs.io/en/latest/reference_files.html and set the MIRAGE_DATA environment variable to point to its location."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "if os.environ.get('MIRAGE_DATA', None) is None:\n",
- " os.environ['MIRAGE_DATA'] = '/path/to/mirage_data/'\n",
- "print(os.environ.get('MIRAGE_DATA'))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "### Loading input files and reference files"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "slideshow": {
- "slide_type": "fragment"
- }
- },
- "outputs": [],
- "source": [
- "boxlink = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/niriss_ami_binary/niriss_ami_binary1.zip'\n",
- "boxfile = './niriss_ami_binary1.zip'\n",
- "\n",
- "# Download zip file\n",
- "if not os.path.exists(boxfile):\n",
- " print(\"Downloading input files needed to run the notebook. This may take some time\")\n",
- " urllib.request.urlretrieve(boxlink, boxfile)\n",
- "\n",
- " zf = zipfile.ZipFile(boxfile, 'r')\n",
- " zf.extractall()\n",
- "else:\n",
- " print(\"Input files exist. You may want to check if there is an updated version.\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Create output directory to store simulated data"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "odir = './mirage_sim_data'\n",
- "if not os.path.exists(odir):\n",
- " os.mkdir(odir)\n",
- "simdata_output_directory = odir"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Generating input yaml files\n",
- "\n",
- "Begin working on the simulation starting with the APT file. The xml and pointing files must be exported from APT using the 'File ---> Export' option. These files are then used as input to the yaml_generator, which generates a yaml input file for each exposure.\n",
- "\n",
- "The xml and pointing files are included, along with other input files, in the zip file downloaded above.\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Write down the name of the APT file in comments for quick reference and also store it in the same folder as xml and \n",
- "# pointing files for easy access and to remember which APT file was used for simulations.\n",
- "# APT file used niriss_ami_binary_2022.25coords.aptx \n",
- "xml_name = './mirage_input_files/niriss_ami_binary_2022.25coords.xml'\n",
- "pointing_name = './mirage_input_files/niriss_ami_binary_2022.25coords.pointing'"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We will generate NIRISS AMI simulations of a binary point source using the catalogue \"stars_field19_20_combined_allfilters_new.list\". This catalogue contains the coordinates and magnitudes of AB Dor and HD37093 along with other fainter sources in the field.\n",
- "\n",
- "***\n",
- "NOTE: Mirage currently does not apply the proper motion that is entered in the APT file. It is therefore important to enter coordinates at the epoch of observation in the APT file. AB Dor is a high proper motion star, so we are using 2022.25 coordinates in both the APT file and the input source list file.\n",
- "***"
- ]
- },
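Because Mirage ignores the proper motion entered in APT, coordinates must be propagated to the observation epoch by hand. The sketch below shows a simple linear (small-angle) propagation; the 100 mas/yr proper-motion values are placeholders for illustration, not AB Dor's measured values.

```python
import math


def propagate_coords(ra_deg, dec_deg, pm_ra_cosdec_mas, pm_dec_mas, years):
    """Linearly propagate RA/Dec (degrees) by proper motion (mas/yr) over `years`."""
    mas_to_deg = 1.0 / 3.6e6
    # pm_ra_cosdec is the on-sky motion; divide by cos(dec) to get the RA change
    ra_new = ra_deg + pm_ra_cosdec_mas * years * mas_to_deg / math.cos(math.radians(dec_deg))
    dec_new = dec_deg + pm_dec_mas * years * mas_to_deg
    return ra_new, dec_new


# Hypothetical proper motion of 100 mas/yr in each axis, epoch 2016.0 -> 2022.25
ra, dec = propagate_coords(82.18740518, -65.44767541, 100.0, 100.0, 2022.25 - 2016.0)
print(ra, dec)
```

For high-precision work, astropy's SkyCoord offers full space-motion propagation, but the linear approximation above is adequate for placing sources at the sub-pixel level over a few years.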
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The first line in this file is the AB-Dor primary star and the second line is the faint companion that we are trying to detect. Several lines below is the reference (calibrator) star HD37093.\n",
- "```\n",
- "# \n",
- "# vegamag\n",
- "# \n",
- "# \n",
- "x_or_RA y_or_Dec niriss_f090w_magnitude niriss_f115w_magnitude niriss_f140m_magnitude niriss_f150w_magnitude niriss_f158m_magnitude niriss_f200w_magnitude niriss_f277w_magnitude niriss_f356w_magnitude niriss_f380m_magnitude niriss_f430m_magnitude niriss_f444w_magnitude niriss_f480m_magnitude\n",
- " 82.18740518 -65.44767541 5.88850 5.49770 5.07560 4.95800 4.83650 4.72940 4.72220 4.61000 4.61000 4.61000 4.61000 4.61000\n",
- " 82.18717120 -65.44764863 12.06621 10.73581 10.47740 10.15193 9.84442 9.76398 9.75940 8.99121 8.69367 8.76689 8.81310 8.81310\n",
- " 82.18534722 -65.44612065 8.88930 8.38010 7.86780 7.70440 7.54010 7.38740 7.38650 7.29090 7.28470 7.33890 7.38820 7.49960\n",
- " 82.19501500 -65.44532800 12.13360 12.89690 12.47030 12.40400 12.28660 13.19490 14.01950 14.60890 14.67080 15.09080 15.20730 15.56880\n",
- " 82.19343600 -65.45299500 13.54130 13.63920 13.68400 13.69520 13.70530 13.74310 13.76640 13.78850 13.78900 13.79520 13.79330 13.79180\n",
- " \n",
- "...more sources...\n",
- "\n",
- "# \n",
- "# vegamag\n",
- "# \n",
- "# \n",
- "#x_or_RA y_or_Dec niriss_f090w_magnitude niriss_f115w_magnitude niriss_f140m_magnitude niriss_f150w_magnitude niriss_f158m_magnitude niriss_f200w_magnitude niriss_f277w_magnitude niriss_f356w_magnitude niriss_f380m_magnitude niriss_f430m_magnitude niriss_f444w_magnitude niriss_f480m_magnitude\n",
- " \t\n",
- " 82.78450088 -65.12833088 7.33570 6.74320 6.18410 5.99130 5.80110 5.61850 5.62170 5.49700 5.49700 5.53100 5.53100 5.53100\n",
- "\n",
- "```"
- ]
- },
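The catalogue is a plain whitespace-separated table with `#` comment lines, so it can be read directly with numpy. The sketch below parses a two-row excerpt of the listing above from an in-memory buffer; to read the real file, pass its path instead of the StringIO object (column names here are abbreviated for the example).

```python
import io

import numpy as np

sample = """\
# vegamag
# x_or_RA  y_or_Dec  niriss_f480m_magnitude
 82.18740518 -65.44767541 4.61000
 82.18717120 -65.44764863 8.81310
"""

# '#' lines are treated as comments; the remaining columns parse as floats
catalogue = np.genfromtxt(io.StringIO(sample), comments='#')
print(catalogue.shape)   # (2, 3): two sources, three columns
print(catalogue[0, 0])   # RA of the first source
```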
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "catalogues = {'AB-DOR': {'point_source': './mirage_input_files/stars_field19_20_combined_allfilters_new.list'\n",
- " },\n",
- " 'HD-37093': {'point_source': './mirage_input_files/stars_field19_20_combined_allfilters_new.list'\n",
- " }\n",
- " }"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Set the telescope roll angle PAV3 for each observation\n",
- "Another way to get PAV3 is from the APT file's Reports tool."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "obs1 = 1\n",
- "pav3_obs1 = yaml_generator.default_obs_v3pa_on_date(pointing_name, obs1, date='2022-04-01')\n",
- "obs2 = 2\n",
- "pav3_obs2 = yaml_generator.default_obs_v3pa_on_date(pointing_name, obs2, date='2022-04-01')\n",
- "\n",
- "roll_angle = {'001': pav3_obs1, '002': pav3_obs2}\n",
- "\n",
- "dates = '2022-04-01'\n",
- "reffile_defaults = 'crds'\n",
- "datatype = 'raw'\n",
- "print(\"PAV3 for observation 1\", pav3_obs1, \"PAV3 for observation 2\", pav3_obs2)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Run the yaml generator "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This will create two yaml files that will be used as inputs when creating the simulated data. There will be one yaml file for each exposure in an observation. In our case we have one F480M exposure per observation.\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Delete yaml files created in an earlier run\n",
- "old_yaml_files = glob.glob(os.path.join(odir, 'jw*.yaml'))\n",
- "for oldf in old_yaml_files:\n",
- " os.remove(oldf)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "yam = yaml_generator.SimInput(input_xml=xml_name, pointing_file=pointing_name,\n",
- " catalogs=catalogues, roll_angle=roll_angle,\n",
- " dates=dates, reffile_defaults=reffile_defaults,\n",
- " verbose=True, output_dir=odir,\n",
- " simdata_output_dir=simdata_output_directory,\n",
- " datatype=datatype)\n",
- "\n",
- "yam.create_inputs()\n",
- "print(\"Created yaml files\")\n",
- "\n",
- "# Create yaml files for all observations.\n",
- "yaml_files = sorted(glob.glob(os.path.join(odir, 'jw*.yaml')))\n",
- "\n",
- "print(yaml_files)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Update the contents of yaml files and generate raw data"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Updating the yaml files is not always required. We are doing it here to generate data without bad pixels and to make a few other modifications."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "for file in yaml_files:\n",
- "\n",
- " # set astrometric reference file to None to use pysiaf\n",
- " # To create data without bad pixels use the following reference files.\n",
- " with open(file, 'r') as infile:\n",
- " yaml_content = yaml.safe_load(infile)\n",
- " yaml_content['Reffiles']['astrometric'] = 'None'\n",
- " yaml_content['simSignals']['psf_wing_threshold_file'] = 'config'\n",
- " yaml_content['Reffiles']['linearized_darkfile'] = 'None'\n",
- " yaml_content['simSignals']['psfpath'] = './ref_files_non_default/niriss_gridded_psf_library_newmask'\n",
- " yaml_content['Reffiles']['gain'] = './ref_files_non_default/jwst_niriss_gain_general.fits'\n",
- " yaml_content['Reffiles']['pixelflat'] = './ref_files_non_default/jwst_niriss_flat_general.fits'\n",
- " yaml_content['Reffiles']['superbias'] = './ref_files_non_default/jwst_niriss_superbias_sim.fits'\n",
- "\n",
- " if \"jw01093001\" in file:\n",
- " yaml_content['Reffiles']['dark'] = './ref_files_non_default/simdarks/dark000001/dark000001_uncal.fits'\n",
- " elif \"jw01093002\" in file:\n",
- " yaml_content['Reffiles']['dark'] = './ref_files_non_default/simdarks/dark000005/dark000005_uncal.fits'\n",
- "\n",
- " modified_file = file.replace('.yaml', '_mod.yaml')\n",
- " with io.open(modified_file, 'w') as outfile:\n",
- " yaml.dump(yaml_content, outfile, default_flow_style=False)\n",
- "\n",
- " print(\"Updated yaml files. The jw*_mod.yaml files will be used to create data\")\n",
- "\n",
- " # create data\n",
- " t1 = imaging_simulator.ImgSim()\n",
- " t1.paramfile = str(modified_file)\n",
- " t1.create()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Useful output products generated by Mirage\n",
- "\n",
- "```\n",
- "- jw01093001001_01101_00001_nis_uncal_pointsources.list and \n",
- " jw01093002001_01101_00001_nis_uncal_pointsources.list. The first line of \n",
- " jw01093001001_01101_00001_nis_uncal_pointsources.list shows coordinates of AB Dor, the\n",
- " pixel coordinates at which the data is simulated, magnitude and total count rate.\n",
- " \n",
- "- jw01093001001_01101_00001_nis_uncal_F480M_NRM_final_seed_image.fits and \n",
- " jw01093002001_01101_00001_nis_uncal_F480M_NRM_final_seed_image.fits\n",
- " A seed image is a noiseless image that contains signal only from simulated \n",
- " astronomical sources. It can be used for quality checks on the final output data.\n",
- "\n",
- "- jw01093001001_01101_00001_nis_uncal.fits and jw01093002001_01101_00001_nis_uncal.fits\n",
- " AB Dor and HD37093 raw data\n",
- "```"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Examine the seed images"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "data_seed = []\n",
- "seed_images = sorted(glob.glob('mirage_sim_data/jw*final_seed_image.fits'))\n",
- "for i, df in enumerate(seed_images):\n",
- " seed_im = fits.open(df)\n",
- " seed_im.info()\n",
- " im = seed_im[1].data\n",
- " print(im.shape)\n",
- " data_seed.append(im)\n",
- "print(data_seed[0].shape, data_seed[1].shape)\n",
- "\n",
- "#Temporarily adding the two lines below. \n",
- "#Plots are rendered only when cell 1 is run twice.\n",
- "import matplotlib\n",
- "%matplotlib inline\n",
- "\n",
- "f = plt.figure(figsize=(12, 6))\n",
- "plt.subplot(1, 2, 1)\n",
- "plt.title(\"AB-Dor seed image\")\n",
- "plt.imshow(data_seed[0], origin='lower')\n",
- "plt.draw()\n",
- "plt.subplot(1, 2, 2)\n",
- "plt.title(\"HD37093 seed image\")\n",
- "plt.imshow(data_seed[1], origin='lower')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def find_location_of_peak(image):\n",
- " \"\"\" Find the location of PSF peak when bad pixels are not present\"\"\"\n",
- " peak_location = np.where(image == image.max())\n",
- " y_peak = peak_location[0][0]\n",
- " x_peak = peak_location[1][0]\n",
- " print(\"x_peak_python,y_peak_python\", x_peak, y_peak)\n",
- " y_peak_ds9 = y_peak + 1\n",
- " x_peak_ds9 = x_peak + 1\n",
- " print(\"x_peak_ds9,y_peak_ds9\", x_peak_ds9, y_peak_ds9)\n",
- " return x_peak, y_peak\n",
- "\n",
- "POS1 = find_location_of_peak(data_seed[0])\n",
- "print(POS1)\n",
- "\n",
- "if POS1 != (45, 40):\n",
- " print('****************WARNING: PSF placement is not correct****************')"
- ]
- },
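An equivalent and slightly more idiomatic way to locate the peak is `np.unravel_index` on `np.argmax`, which returns a single (row, column) pair and, like the `np.where` version above, picks the first occurrence if the maximum value appears more than once. This is an optional alternative, not part of the original notebook.

```python
import numpy as np


def find_peak(image):
    """Return (x, y) of the brightest pixel (first occurrence, C order)."""
    y_peak, x_peak = np.unravel_index(np.argmax(image), image.shape)
    return int(x_peak), int(y_peak)


# Synthetic check: plant a peak at x=45, y=40 in an 80x80 frame
im = np.zeros((80, 80))
im[40, 45] = 1.0
print(find_peak(im))  # (45, 40)
```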
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Compare peak pixel count rate for AB-Dor seed image with peak count rate in equivalent JWST ETC calculation\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "print(\"Pixel count rate for AB-Dor simulated image\", data_seed[0].max(), \"ADU/s\")\n",
- "print(\"Pixel count rate for AB-Dor simulated image\", data_seed[0].max() * 1.61, \"electrons/s\")\n",
- "# Upload screenshot from the ETC workbook.\n",
- "from IPython.display import Image\n",
- "Image(\"AB_Dor_ngroup5_nint65_F480M_jwetc_calc.png\", width=500, height=500)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Comparison of the peak pixel count rate for the AB-Dor simulated data with an equivalent JWST ETC calculation shows that the ETC value of 71388.43 electrons/s closely matches the Mirage simulation."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Examine output point source list file 'jw01093001001_01101_00001_nis_uncal_pointsources.list'\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "```\n",
- "# Field center (degrees): 82.18741002 -65.44767991 y axis rotation angle (degrees): 108.118419 image size: 0080 0080\n",
- "#\n",
- "# Index RA_(hh:mm:ss) DEC_(dd:mm:ss) RA_degrees DEC_degrees pixel_x pixel_y magnitude counts/sec counts/frame TSO_lightcurve_catalog\n",
- "1 05:28:44.9772 -65:26:51.6315 82.18740518 -65.44767541 45.198 39.816 4.610 1.797615e+07 1.356121e+06 None\n",
- "2 05:28:44.9211 -65:26:51.5351 82.18717120 -65.44764863 44.842 34.305 8.813 3.745042e+05 2.825260e+04 None\n",
- "3 05:28:44.4833 -65:26:46.0343 82.18534722 -65.44612065 110.711 -31.926 7.500 1.255616e+06 9.472364e+04 None\n",
- "5 05:28:46.4246 -65:27:10.7820 82.19343600 -65.45299500 -186.533 263.381 13.792 3.819238e+03 2.881233e+02 None\n",
- "```"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Note that the expected magnitude contrast in F480M is 4.2 magnitudes, and that the pixel offset of the faint companion from the primary is (-0.356, -5.511) pixels in x and y, based on the output point source list."
- ]
- },
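The 4.2 mag contrast quoted above follows directly from the F480M magnitudes in the point source list (4.610 for the primary, 8.813 for the companion). As a quick check, the magnitude difference and the implied flux ratio can be computed as:

```python
m_primary = 4.610    # AB Dor, F480M (from the point source list)
m_companion = 8.813  # faint companion, F480M

delta_m = m_companion - m_primary
flux_ratio = 10 ** (-0.4 * delta_m)  # companion/primary flux ratio
print(f"contrast: {delta_m:.3f} mag, flux ratio: {flux_ratio:.5f}")
```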
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Compare total count rate in the seed image with total count rate in the output pointsource list file.\n",
- "```\n",
- "counts/sec in jw01093001001_01101_00001_nis_uncal_pointsources.list \n",
- "is 1.797615e+07 ADU/sec\n",
- "\n",
- "Throughput of NRM is ~0.155\n",
- "counts/sec with non redundant mask (NRM) = 1.797615e+07 ADU/sec * 0.155 \n",
- " = 2786303.25 ADU/sec\n",
- " \n",
- "The total count rate in the seed image is 2662913.1485110847 ADU/sec. The seed image is 80x80 pixels and \n",
- "therefore SUB80 observations are subject to aperture losses that cause this discrepancy.\n",
- "```\n"
- ]
- },
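The arithmetic in the cell above can be reproduced, and the size of the aperture-loss discrepancy quantified, with a few lines. The ~0.155 NRM throughput and the measured seed-image total are taken from the text above.

```python
counts_full = 1.797615e7   # ADU/s from the point source list (no mask)
nrm_throughput = 0.155     # approximate non-redundant mask transmission
counts_expected = counts_full * nrm_throughput

counts_seed = 2662913.1485110847  # ADU/s summed over the 80x80 seed image
loss_fraction = 1.0 - counts_seed / counts_expected
print(f"expected {counts_expected:.1f} ADU/s, "
      f"seed image {counts_seed:.1f} ADU/s, "
      f"aperture loss ~{100 * loss_fraction:.1f}%")
```

The few-percent shortfall is consistent with flux falling outside the small SUB80 field of view.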
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Examine the raw data"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "datafiles = sorted(glob.glob('mirage_sim_data/jw*uncal.fits'))\n",
- "print(datafiles)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "data = []\n",
- "for i, df in enumerate(datafiles):\n",
- " file = fits.open(df)\n",
- " file.info()\n",
- " im = file[1].data\n",
- " print(im[0].shape)\n",
- " data.append(im[0])\n",
- "print(data[0].shape, data[1].shape)\n",
- "f = plt.figure(figsize=(12, 6))\n",
- "plt.subplot(1, 2, 1)\n",
- "plt.title(\"AB-Dor\")\n",
- "plt.imshow(data[0][4], origin='lower')\n",
- "plt.subplot(1, 2, 2)\n",
- "plt.title(\"HD37093\")\n",
- "plt.imshow(data[1][11], origin='lower')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Calibrate raw data (_uncal.fits files) with the JWST pipeline"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Use 2_niriss_ami_binary.ipynb to calibrate the data with the JWST pipeline."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Additional Resources\n",
- "\n",
- "- [NIRISS AMI JDox](https://jwst-docs.stsci.edu/near-infrared-imager-and-slitless-spectrograph/niriss-observing-modes/niriss-aperture-masking-interferometry)\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## About this notebook\n",
- "\n",
- "**Authors:** Deepashri Thatte, Kevin Volk  \n",
- "**Updated On:** 2020-12-18"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "***"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#top)\n",
- " "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.11.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/notebooks/niriss_ami_binary/2_niriss_ami_binary.ipynb b/notebooks/niriss_ami_binary/2_niriss_ami_binary.ipynb
deleted file mode 100644
index 4b7d91dc8..000000000
--- a/notebooks/niriss_ami_binary/2_niriss_ami_binary.ipynb
+++ /dev/null
@@ -1,844 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "editable": true,
- "slideshow": {
- "slide_type": ""
- },
- "tags": []
- },
- "source": [
- "# NIRISS AMI: Pipeline"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Use case:** Run the pipeline and the stand-alone tool ImPlaneIA on NIRISS AMI data.  \n",
- "**Data:** JWST data from commissioning.  \n",
- "**Tools:** jwst, astropy.  \n",
- "**Cross-instrument:**  \n",
- "**Documentation:** This notebook is part of STScI's larger [post-pipeline Data Analysis Tools Ecosystem](https://jwst-docs.stsci.edu/jwst-post-pipeline-data-analysis).  \n",
- "\n",
- "**Latest update**: December 2022"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Introduction\n",
- "This notebook runs the JWST pipeline on Aperture Masking Interferometry (AMI) data of the binary point source AB Dor and the calibrator HD37093 observed during AMI commissioning. We use only two NRM + F480M exposures at dither position POS1, one for the target and one for the calibrator.\n",
- "\n",
- "Steps:\n",
- "\n",
- "[1] Run Detector1 pipeline on all _uncal.fits files to create _rate.fits and _rateints.fits files.\n",
- "\n",
- "[2] Run Image2 pipeline on all _rate.fits files to create _cal.fits and on _rateints.fits files to\n",
- " create _calints.fits files.\n",
- "\n",
- " \n",
- "[3] Run ImPlaneIA ([Greenbaum, A. et al. 2015](https://ui.adsabs.harvard.edu/abs/2015ApJ...798...68G/abstract)) to extract observables in oifits format. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "***"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## Imports\n",
- "- *numpy* to handle array functions\n",
- "- *astropy.io fits* for accessing FITS files\n",
- "- *matplotlib.pyplot* for plotting data\n",
- "- *zipfile* for accessing zip file\n",
- "- *urllib.request* to access URL\n",
- "- *jwst.pipeline Detector1Pipeline, Image2Pipeline* for calibrating raw data\n",
- "- *ImPlaneIA* to extract interferometric observables from calibrated data"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This notebook requires several sets of reference data to run. Their locations must be set at the very beginning of the notebook, before importing the relevant packages. In some cases these locations are already provided in shell configuration files; if not, they can be set here. Follow the instructions below to download the required data:\n",
- "- PYSYN_CDBS: https://pysynphot.readthedocs.io/en/latest/#installation-and-setup\n",
- "- WEBBPSF_PATH: https://webbpsf.readthedocs.io/en/latest/installation.html#installing-the-required-data-files\n",
- "- CRDS_PATH and CRDS_SERVER_URL: https://hst-crds.stsci.edu/static/users_guide/environment.html"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Modify the path to a directory on your machine\n",
- "import os\n",
- "# os.environ[\"CRDS_PATH\"] = \"\"\n",
- "os.environ[\"CRDS_SERVER_URL\"] = \"https://jwst-crds.stsci.edu\"\n",
- "\n",
- "# WEBBPSF and STSYNPHOT\n",
- "# os.environ['WEBBPSF_PATH'] = \"\"\n",
- "# os.environ['PYSYN_CDBS'] = \"\""
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "slideshow": {
- "slide_type": "fragment"
- }
- },
- "outputs": [],
- "source": [
- "# flake8-ignore: E402\n",
- "# imports\n",
- "%matplotlib inline\n",
- "import glob\n",
- "import os\n",
- "from pathlib import Path\n",
- "\n",
- "import numpy as np\n",
- "from astropy.io import fits\n",
- "import matplotlib.pyplot as plt\n",
- "import zipfile\n",
- "import urllib.request\n",
- "\n",
- "from jwst.pipeline import Detector1Pipeline, Image2Pipeline\n",
- "\n",
- "from nrm_analysis.misctools import utils\n",
- "from nrm_analysis import nrm_core, InstrumentData\n",
- "from nrm_analysis.misctools.implane2oifits import calibrate_oifits\n",
- "\n",
- "from run_bp_fix import correct_fitsfiles"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## Loading data\n",
- "Download the data of the AMI commissioning activity: \n",
- "AB Dor: jw01093012001_03102_00001_nis_uncal.fits  \n",
- "HD37093: jw01093015001_03102_00001_nis_uncal.fits"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "slideshow": {
- "slide_type": "fragment"
- }
- },
- "outputs": [],
- "source": [
- "# Download the data of the AMI commissioning activity\n",
- "boxlink = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/niriss_ami_binary/niriss_ami_binary2_inflight.zip'\n",
- "boxfile = Path('./niriss_ami_binary2_inflight.zip')\n",
- "\n",
- "# Download zip file\n",
- "if not os.path.exists(boxfile):\n",
- " urllib.request.urlretrieve(boxlink, boxfile)\n",
- "\n",
- " zf = zipfile.ZipFile(boxfile, 'r')\n",
- " zf.extractall()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Define directory that has commissioning data. NRM + F480M exposures of AB Dor, HD 37093 at POS 1\n",
- "currentdir = Path('.')\n",
- "inflightdata = currentdir / 'niriss_ami_binary2_inflight'\n",
- "datafiles = list(sorted(inflightdata.glob('jw*uncal.fits')))\n",
- "for i in datafiles:\n",
- " print(i)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Examine the input raw files"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Look at the last group of the first integration of each uncal.fits file."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "data = []\n",
- "for i, df in enumerate(datafiles):\n",
- " file = fits.open(df)\n",
- " file.info()\n",
- " im = file[1].data\n",
- " print(im[0].shape)\n",
- " header = file[0].header\n",
- " print(header['TARGPROP'])\n",
- " data.append(im[0])\n",
- "print(data[0].shape, data[1].shape)\n",
- "f = plt.figure(figsize=(12, 6))\n",
- "plt.suptitle(\"NRM + F480M raw exposures (last group of integration 1) at POS1\", fontsize=18, fontweight='bold')\n",
- "# display the last group of the first integration of each file\n",
- "plt.subplot(1, 2, 1)\n",
- "plt.title(\"AB Dor\")\n",
- "plt.imshow(data[0][4], origin='lower')\n",
- "plt.subplot(1, 2, 2)\n",
- "plt.title(\"HD37093\")\n",
- "plt.imshow(data[1][11], origin='lower')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Define output directory"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Define output directory to save pipeline output products.\n",
- "outdir = Path('./pipeline_calibrated_data/')\n",
- "if not os.path.exists(outdir):\n",
- " os.mkdir(outdir)\n",
- " print(\"Created\", outdir)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Run Detector1 and Image2 pipelines"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Run Detector1, Image 2 pipelines\n",
- "for df in datafiles:\n",
- " result1 = Detector1Pipeline()\n",
- " # Example code to override reference files\n",
- " # superbiasfile = refdir + 'jwst_niriss_superbias_sim.fits'\n",
- " # darkfile = refdir + 'jwst_niriss_dark_sub80_sim.fits'\n",
- " # result1.superbias.override_superbias = superbiasfile\n",
- " # result1.dark_current.override_dark = darkfile\n",
- " result1.ipc.skip = True\n",
- " result1.save_results = True\n",
- " result1.save_calibrated_ramp = True\n",
- " result1.output_dir = str(outdir)\n",
- " result1.run(str(df))\n",
- "\n",
- " df_rate = outdir / df.name.replace('uncal', 'rate')\n",
- " result2 = Image2Pipeline()\n",
- " # Example code to override reference files\n",
- " # flatfieldfile = refdir + \"jwst_niriss_flat_general.fits\"\n",
- " # result2.flat_field.override_flat = flatfieldfile\n",
- " result2.photom.skip = True\n",
- " result2.resample.skip = True\n",
- " result2.save_results = True\n",
- " result2.output_dir = str(outdir)\n",
- " result2.run(str(df_rate))\n",
- "\n",
- " df_rateints = outdir / df.name.replace('uncal', 'rateints')\n",
- " result3 = Image2Pipeline()\n",
- " # Example code to override reference files\n",
- " # result3.flat_field.override_flat = flatfieldfile\n",
- " result3.photom.skip = True\n",
- " result3.resample.skip = True\n",
- " result3.save_results = True\n",
- " result3.output_dir = str(outdir)\n",
- " result3.run(str(df_rateints))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Examine the output files"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "rampfiles = sorted(glob.glob(str(outdir / 'jw*ramp.fits')))\n",
- "print(\"\\n\".join(rampfiles))\n",
- "ratefiles = sorted(glob.glob(str(outdir / 'jw*rate.fits')))\n",
- "print(\"\\n\".join(ratefiles))\n",
- "rateintsfiles = sorted(glob.glob(str(outdir / 'jw*rateints.fits')))\n",
- "print(\"\\n\".join(rateintsfiles))\n",
- "calfiles = sorted(glob.glob(str(outdir / 'jw*cal.fits')))\n",
- "print(\"\\n\".join(calfiles))\n",
- "calintsfiles = sorted(glob.glob(str(outdir / 'jw*calints.fits')))\n",
- "print(\"\\n\".join(calintsfiles))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### rate and rateints files"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "for i, rateintf in enumerate(rateintsfiles):\n",
- " fits.info(rateintf)\n",
- "for i, ratef in enumerate(ratefiles):\n",
- " fits.info(ratef)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### cal and calints files "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "for i, calintf in enumerate(calintsfiles):\n",
- " fits.info(calintf)\n",
- "for i, calf in enumerate(calfiles):\n",
- " fits.info(calf)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Display calibrated data"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "data = []\n",
- "for df in calintsfiles:\n",
- " print(df)\n",
- " im = fits.getdata(df, ext=1) \n",
- " print(im.shape)\n",
- " data.append(im)\n",
- "# print(data[0].shape, data[1].shape)\n",
- "f = plt.figure(figsize=(12, 6))\n",
- "plt.suptitle(\"NRM + F480M calibrated exposures (first integration) at POS1\", fontweight='bold', fontsize=20)\n",
- "# Look at the first integration from each calints.fits file\n",
- "plt.subplot(1, 2, 1)\n",
- "plt.title(\"AB Dor\")\n",
- "plt.imshow(data[0][0], clim=(-6000, 40000), origin='lower')\n",
- "plt.subplot(1, 2, 2)\n",
- "plt.title(\"HD37093\")\n",
- "plt.imshow(data[1][0], clim=(-6000, 40000), origin='lower')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Fix bad pixels"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "datasuperdir = Path('./pipeline_calibrated_data_corr/')\n",
- "\n",
- "correct_fitsfiles(indir=outdir,\n",
- " odir=datasuperdir)\n",
- "\n",
- "calintfiles_corr = sorted(glob.glob(str(datasuperdir / '*calints.fits')))\n",
- "print(\"\\n\".join(calintfiles_corr))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Display calibrated data after fixing bad pixels"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "data = []\n",
- "for df in calintfiles_corr:\n",
- " file = fits.open(df)\n",
- " im = file[1].data\n",
- " print(im.shape)\n",
- " data.append(im[0])\n",
- "# print(data[0].shape, data[1].shape)\n",
- "f = plt.figure(figsize=(12, 6))\n",
- "# plt.tight_layout()\n",
- "plt.subplot(1, 2, 1)\n",
- "plt.suptitle(\"NRM + F480M calibrated exposures at POS1 (after fixing bad pixels)\", fontweight='bold', fontsize=20)\n",
- "plt.title(\"AB Dor\")\n",
- "plt.imshow(data[0], clim=(-6000, 40000), origin='lower')\n",
- "plt.subplot(1, 2, 2)\n",
- "plt.title(\"HD 37093\")\n",
- "plt.imshow(data[1], clim=(-6000, 40000), origin='lower')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Run ImPlaneIA to reduce calibrated images to fringe observables"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Define functions"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "np.set_printoptions(precision=4, linewidth=160)\n",
- "\n",
- "\n",
- "def examine_observables(ff, trim=36):\n",
- " \"\"\" input: FringeFitter instance after fringes are fit \"\"\"\n",
- " \n",
- "    print(\"\\nExamine_observables, standard deviations & variances of *independent* CPs and CAs:\")\n",
- " print(\" Closure phase mean {:+.4f} std dev {:.2e} var {:.2e}\".format(ff.nrm.redundant_cps.mean(),\n",
- " np.sqrt(utils.cp_var(ff.nrm.N, ff.nrm.redundant_cps)), utils.cp_var(ff.nrm.N, ff.nrm.redundant_cps)))\n",
- "\n",
- " print(\" Closure amp mean {:+.4f} std dev {:.2e} var {:.2e}\".format(ff.nrm.redundant_cas.mean(),\n",
- " np.sqrt(utils.cp_var(ff.nrm.N, ff.nrm.redundant_cas)), utils.cp_var(ff.nrm.N, ff.nrm.redundant_cas)))\n",
- "\n",
- " print(\" Fringe amp mean {:+.4f} std dev {:.2e} var {:.2e}\".format(ff.nrm.fringeamp.mean(),\n",
- " ff.nrm.fringeamp.std(), \n",
- " ff.nrm.fringeamp.var()))\n",
- "\n",
- " np.set_printoptions(precision=3, formatter={'float': lambda x: '{:+.1e}'.format(x)}, linewidth=80)\n",
- " print(\" Normalized residuals central 6 pixels\")\n",
- " tlo, thi = (ff.nrm.residual.shape[0]//2 - 3, ff.nrm.residual.shape[0]//2 + 3)\n",
- " print((ff.nrm.residual/ff.datapeak)[tlo:thi, tlo:thi])\n",
- " print(\" Normalized residuals max and min: {:.2e}, {:.2e}\".format(ff.nrm.residual.max() / ff.datapeak,\n",
- " ff.nrm.residual.min() / ff.datapeak))\n",
- " utils.default_printoptions()\n",
- "\n",
- "\n",
- "def raw_observables(fitsfn=None, fitsimdir=None, oitdir=None, oifdir=None, affine2d=None, \n",
- " psf_offset_find_rotation=(0.0, 0.0),\n",
- " psf_offset_ff=None,\n",
- " rotsearch_d=None,\n",
- " set_pistons=None,\n",
- " oversample=3,\n",
- " mnem='',\n",
- " firstfew=None,\n",
- " usebp=False,\n",
- " verbose=False):\n",
- " \"\"\"\n",
- " Reduce calibrated data to fringe observables\n",
- "\n",
- " returns: affine2d (measured or input),\n",
- " psf_offset_find_rotation (input),\n",
- " psf_offset_ff (input or found),\n",
- " fringe pistons/r (found)\n",
- " \"\"\"\n",
- "\n",
- " if verbose:\n",
- " print(\"raw_observables: input\", fitsimdir / fitsfn)\n",
- " if verbose:\n",
- " print(\"raw_observables: oversample\", oversample)\n",
- "\n",
- " fobj = fits.open(fitsimdir / fitsfn)\n",
- "\n",
- " if verbose:\n",
- " print(fobj[0].header['FILTER'])\n",
- " \n",
- " niriss = InstrumentData.NIRISS(fobj[0].header['FILTER'],\n",
- " usebp=usebp,\n",
- " firstfew=firstfew, # read_data truncation to only read first few slices...\n",
- " )\n",
- "\n",
- " ff = nrm_core.FringeFitter(niriss, \n",
- " oitdir=str(oitdir), # write OI text files here, and diagnostic images if desired\n",
- " oifdir=str(oifdir), # write OI fits files here\n",
- " oversample=oversample,\n",
- " interactive=False,\n",
- " save_txt_only=False)\n",
- "\n",
- " ff.fit_fringes(str(fitsimdir / fitsfn))\n",
- " examine_observables(ff)\n",
- "\n",
- " np.set_printoptions(formatter={'float': lambda x: '{:+.2e}'.format(x)}, linewidth=80)\n",
- " if verbose:\n",
- " print(\"raw_observables: fringepistons/rad\", ff.nrm.fringepistons)\n",
- " utils.default_printoptions()\n",
- " return affine2d, psf_offset_find_rotation, ff.nrm.psf_offset, ff.nrm.fringepistons\n",
- "\n",
- "\n",
- "def main(fitsimdir=None, oitdir=None, oifdir=None, ifn=None, oversample=3, mnem='', firstfew=None, verbose=False, usebp=True):\n",
- " \"\"\"\n",
- " fitsimdir: string: dir containing data file\n",
- "    ifn: str input file name\n",
- "\n",
- " \"\"\"\n",
- "\n",
- " np.set_printoptions(formatter={'float': lambda x: '{:+.2e}'.format(x)}, linewidth=80)\n",
- " if verbose:\n",
- " print(\"main: \", ifn)\n",
- " if verbose:\n",
- " print(\"main: fitsimdir\", fitsimdir)\n",
- " \n",
- " aff, psf_offset_r, psf_offset_ff, fringepistons = raw_observables(fitsfn=ifn, \n",
- " fitsimdir=fitsimdir, \n",
- " oitdir=oitdir,\n",
- " oifdir=oifdir,\n",
- " oversample=oversample,\n",
- " firstfew=firstfew,\n",
- " usebp=usebp,\n",
- " verbose=verbose)\n",
- " print('aff', aff, 'psf_offset_r', psf_offset_r, 'psf_offset_ff', psf_offset_ff, 'fringepistons', fringepistons)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Run ImPlaneIA"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "mirdatafiles = ['jw01093012001_03102_00001_nis_calints.fits',\n",
- " 'jw01093015001_03102_00001_nis_calints.fits']\n",
- "\n",
- "# Choose FIRSTFEW = None to analyze all integrations\n",
- "FIRSTFEW = 5\n",
- "OVERSAMPLE = 7\n",
- "print('FIRSTFEW', FIRSTFEW, 'OVERSAMPLE', OVERSAMPLE)\n",
- "\n",
- "\n",
- "COUNT = 0\n",
- "for fnmir in mirdatafiles:\n",
- " print('\\nAnalyzing\\n ', COUNT, fnmir.replace('.fits', ''), end=' ')\n",
- " hdr = fits.getheader(datasuperdir / fnmir)\n",
- " print(hdr['FILTER'], end=' ')\n",
- " print(hdr['TARGNAME'], end=' ')\n",
- " print(hdr['TARGPROP'])\n",
- " # next line for convenient use in oifits writer which looks up target online\n",
- " catname = hdr['TARGPROP'].replace('-', '') # for target lookup on-line, otherwise UNKNOWN used\n",
- " fits.setval(datasuperdir / fnmir, 'TARGNAME', value=catname)\n",
- " fits.setval(datasuperdir / fnmir, 'TARGPROP', value=catname)\n",
- " \n",
- " usebp = False\n",
- " \n",
- " main(fitsimdir=datasuperdir,\n",
- " oitdir=datasuperdir / 'Saveoit',\n",
- " oifdir=datasuperdir / 'Saveoif',\n",
- " ifn=fnmir, \n",
- " oversample=OVERSAMPLE, \n",
- " mnem='',\n",
- " firstfew=FIRSTFEW,\n",
- " usebp=usebp,\n",
- " verbose=True) # verbose only has driver-function scope\n",
- " COUNT += 1"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Examine the output products"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "An analytical model is created and interferometric observables are calculated for each integration of the data. The output products are stored in a folder named after the root of the input file: jw01093012001_03102_00001_nis_calints for AB Dor and jw01093015001_03102_00001_nis_calints for HD 37093. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# integration 0 (1st integration)\n",
- "results_int0 = glob.glob(str(datasuperdir / 'Saveoit' / \"jw01093012001_03102_00001_nis_calints\" / \"*00*\"))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "results_int0"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Information about observables calculated from the 1st integration\n",
- "\n",
- "\n",
- "```\n",
- "- phases_00.txt: 35 fringe phases\n",
- "- amplitudes_00.txt: 21 fringe amplitudes\n",
- "- CPs_00.txt: 35 closure phases\n",
- "- CAs_00.txt: 35 closure amplitudes\n",
- "- fringepistons_00.txt: 7 pistons (optical path delays between mask holes)\n",
- "- solutions_00.txt: 44 fringe coefficients of terms in the analytical model\n",
- "- modelsolution_00.fits: analytical model\n",
- "- n_modelsolution_00.fits: normalized analytical model\n",
- "- residual_00.fits: data - model\n",
- "- n_residual_00.fits: normalized residual\n",
- "\n",
- "```"
- ]
- },
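The observable counts in the list above follow directly from the 7-hole non-redundant mask: 21 fringe amplitudes correspond to the pairs of holes (baselines), and 35 closure phases/amplitudes correspond to the triples of holes (triangles). A quick sanity check of that combinatorics (only the hole count is taken from the mask design):

```python
from math import comb

n_holes = 7  # NIRISS g7s6 non-redundant mask has 7 sub-apertures
n_baselines = comb(n_holes, 2)  # hole pairs -> fringe amplitudes
n_triangles = comb(n_holes, 3)  # hole triples -> closure phases / closure amplitudes
print(n_baselines, n_triangles)  # 21 35
```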
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "cropped_data = fits.getdata(datasuperdir / 'Saveoit' / \"jw01093012001_03102_00001_nis_calints\" / \"centered_0.fits\")\n",
- "model = fits.getdata(datasuperdir / 'Saveoit' / \"jw01093012001_03102_00001_nis_calints\" / \"modelsolution_00.fits\")\n",
- "residual = fits.getdata(datasuperdir / 'Saveoit' / \"jw01093012001_03102_00001_nis_calints\" / \"residual_00.fits\")\n",
- "n_residual = fits.getdata(datasuperdir / 'Saveoit' / \"jw01093012001_03102_00001_nis_calints\" / \"n_residual_00.fits\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "f = plt.figure(figsize=(12, 3))\n",
- "plt.subplot(1, 3, 1)\n",
- "plt.title(\"AB Dor cropped data\", fontsize=12)\n",
- "plt.imshow(cropped_data, origin='lower')\n",
- "plt.subplot(1, 3, 2)\n",
- "plt.title(\"AB Dor analytical model\", fontsize=12)\n",
- "plt.imshow(model, origin='lower')\n",
- "plt.subplot(1, 3, 3)\n",
- "plt.title(\"AB Dor residual (data - model)\", fontsize=12)\n",
- "plt.imshow(residual, origin='lower')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.title(\"AB Dor normalized residual\")\n",
- "plt.imshow(n_residual, clim=(-0.03, 0.03), origin='lower')\n",
- "plt.colorbar()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "editable": true,
- "slideshow": {
- "slide_type": ""
- },
- "tags": []
- },
- "source": [
- "### OIFITS files for the target and calibrator"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "OIFITS is the standard data exchange format for Optical Interferometry. It is based on the Flexible Image Transport System (FITS). OIFITS files include data tables for storing interferometric observables, including squared visibilities and closure phases. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "oifiles = sorted(glob.glob(str(datasuperdir / 'Saveoif' / \"*oifits\")))\n",
- "oifiles"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Calibrate the closure phases and fringe amplitudes of target with the closure phases and fringe amplitudes of the calibrator."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This step is necessary to remove instrumental contribution to closure phases and fringe amplitudes."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Define the target and calibrator OIFITS files\n",
- "\n",
- "targ_oifits = (datasuperdir / 'Saveoif' / 'jw01093012001_03102_00001_nis.oifits')\n",
- "cal_oifits = (datasuperdir / 'Saveoif' / 'jw01093015001_03102_00001_nis.oifits')\n",
- "\n",
- "# Produce a single calibrated OIFITS file\n",
- "\n",
- "print(\"************ Running calibrate ***************\")\n",
- "print(\"Calibrating AB Dor with HD37093\")\n",
- "calibrate_oifits(targ_oifits, cal_oifits, oifdir=str(datasuperdir / 'Saveoif'))\n",
- "\n",
- "\n",
- "print(\"The output of calibrate is a calibrated OIFITS file that will be used as input to 3_niriss_ami_binary.ipynb.\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "editable": true,
- "slideshow": {
- "slide_type": ""
- },
- "tags": [
- "remove-cell"
- ]
- },
- "source": [
- "*Developer Note:*\n",
- "The observable extraction performed in this notebook used only the first 5 integrations to save time while demonstrating the use of ImPlaneIA to reduce pipeline-calibrated observations. For accurate science use, we use all the integrations contained in the input data files. Therefore the input data for notebook 3 (3_niriss_ami_binary.ipynb) is slightly different from the output of 2_niriss_ami_binary.ipynb."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Additional Resources\n",
- "\n",
- "- [JWST NIRISS AMI](https://jwst-docs.stsci.edu/near-infrared-imager-and-slitless-spectrograph/niriss-observing-modes/niriss-aperture-masking-interferometry)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## About this notebook\n",
- "\n",
- "**Authors:** Deepashri Thatte, Anand Sivaramakrishnan, Rachel Cooper, Jens Kammerer \n",
- "**Updated On:** 2022-09-16 "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "***"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#top)\n",
- " "
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.11.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/notebooks/niriss_ami_binary/3_niriss_ami_binary.ipynb b/notebooks/niriss_ami_binary/3_niriss_ami_binary.ipynb
deleted file mode 100644
index e8efbb0d3..000000000
--- a/notebooks/niriss_ami_binary/3_niriss_ami_binary.ipynb
+++ /dev/null
@@ -1,759 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "editable": true,
- "slideshow": {
- "slide_type": "slide"
- },
- "tags": []
- },
- "source": [
- "# NIRISS AMI: Binary fitting of AB Dor using Fouriever"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## Introduction\n",
- "This notebook takes a calibrated OIFITS file (see _Defining Terms_ below) from NIRISS Aperture Masking Interferometry (AMI) data of binary point source AB Dor and calibrator HD 37093. The data were observed during science instrument commissioning on June 5, 2022, calibrated with the [JWST pipeline](https://jwst-pipeline.readthedocs.io/en/latest/index.html), and had interferometric observables extracted using the Image Plane approach to Interferometric Analysis ([ImPlaneIA](http://ascl.net/1808.004)). This notebook estimates the parameters of a best-fit binary model for this data.\n",
- "\n",
- "We use the [fouriever](https://github.com/kammerje/fouriever) analysis package to extract binary point source parameters. It utilizes model-fitting and chi-squared minimization, taking into account correlations between interferometric observables and bandwidth smearing effects at long baselines, to find the most probable location of a stellar companion. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "### Defining terms\n",
- "\n",
- "**Observables** here refers to interferometric observables: quantities that can be measured from an interferogram. We use squared visibilities and closure phases from the fringe amplitudes and fringe phases of the observed scene.\n",
- "\n",
- "**[OIFITS](https://doi.org/10.1051/0004-6361/201526405)** files are the Optical Interferometry standard FITS files used by the community. A science target's observables are **calibrated** by the observables of a PSF calibrator star's image, to remove telescope/instrument effects (as far as possible).\n",
- "\n",
- "**Fouriever** is a toolkit for analyzing non-redundant masking and kernel phase interferometry data. Details of the analysis methods are described in [Kammerer et al. 2019](https://ui.adsabs.harvard.edu/abs/2019MNRAS.486..639K/abstract), [2020](https://ui.adsabs.harvard.edu/abs/2020A%26A...644A.110K/abstract). \n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "skip"
- }
- },
- "source": [
- "***"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## Imports\n",
- "\n",
- "- *numpy* to handle array functions\n",
- "- *zipfile* for accessing zip file\n",
- "- *urllib.request* to access URL\n",
- "- *os* and *pathlib* for path manipulation\n",
- "\n",
- "- *astropy.io fits* for accessing FITS files\n",
- "- *matplotlib.pyplot* for plotting data\n",
- "- *scipy.ndimage* for image handling\n",
- "- *IPython.display* for image display\n",
- "- *fouriever* high-contrast companion detection"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "slideshow": {
- "slide_type": "fragment"
- }
- },
- "outputs": [],
- "source": [
- "%matplotlib inline\n",
- "\n",
- "import numpy as np\n",
- "import os\n",
- "import zipfile\n",
- "import urllib.request\n",
- "from pathlib import Path\n",
- "\n",
- "from astropy.io import fits\n",
- "import matplotlib.pyplot as plt\n",
- "from scipy import ndimage\n",
- "from IPython.display import Image\n",
- "from fouriever import intercorr, uvfit"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "***"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "\n",
- "\n",
- "The provided data is a NIRISS AMI observation of a binary star: a 4.61 $M_L$ K0V primary (AB Dor A) with a faint companion. On the date these data were taken, we expect to observe the following characteristics of the binary system in filter F480M: \n",
- "- Magnitude difference: dm = 4.2 mag at 4.8 μm (a flux ratio of 48)\n",
- "- Sub-λ/D separation: sep = 370.98 mas\n",
- "- Position angle: theta = 286.98 degrees (from North)\n",
- " "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# These are the binary parameters we expect Fouriever to extract\n",
- "\n",
- "sep = 370.98 # binary separation [mas]\n",
- "theta = 286.98 # position angle (pa) [deg]\n",
- "dm = 4.2 # delta magnitude [mag]"
- ]
- },
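The quoted flux ratio of ~48 follows directly from the magnitude difference via the standard relation $f = 10^{\Delta m / 2.5}$. A minimal check:

```python
dm = 4.2  # companion magnitude difference [mag]
flux_ratio = 10 ** (dm / 2.5)  # primary/companion flux ratio
print(round(flux_ratio))  # 48
```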
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## Loading data\n",
- "Load the input data file. This is the science target's calibrated oifits file, created by running `2_niriss_ami_binary.ipynb`. The zipped file also includes a simulated FITS image with the faint companion artificially brightened so we can examine its position relative to the host star."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Download the products\n",
- "\n",
- "boxlink = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/niriss_ami_binary/niriss_ami_binary3.zip'\n",
- "boxfile = Path('./niriss_ami_binary3.zip')\n",
- "\n",
- "# Download zip file\n",
- "if not boxfile.exists():\n",
- " urllib.request.urlretrieve(boxlink, boxfile)\n",
- " zf = zipfile.ZipFile(boxfile, 'r')\n",
- " zf.extractall()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# The data file is now in the 'niriss_ami_binary3' directory\n",
- "calib_oifits = './niriss_ami_binary3/calib_abdor_f480m_pos1_hd37093.oifits'\n",
- "datadir = os.path.dirname(calib_oifits)\n",
- "print(os.path.abspath(calib_oifits))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## File Information\n",
- "The science target's calibrated oifits file needs to have interferometric observables corresponding to all possible baselines in the non-redundant mask, and their associated uncertainties. The uncertainty on each observable is taken to be the \"straight\" standard deviation (without consideration for linear dependence within a set of variables).\n",
- "OIFITS files are multi-extension FITS binary tables.\n",
- "\n",
- "The file contains seven extensions. The primary data extension is empty, and the remaining six extensions are binary tables that contain information about the observations and the interferometric data.\n",
- "* **OI_WAVELENGTH**: Bandpass info (e.g. weighted central wavelength) \n",
- "* **OI_TARGET**: Properties of the target retrieved from SIMBAD, observation date and duration\n",
- "* **OI_ARRAY**: Telescope info (e.g. sub-aperture locations, primary mirror diameter) \n",
- "* **OI_VIS**: Fringe visibility amplitudes and phases (calibrated visibility amplitudes of the target, corresponding to 21 baselines)\n",
- "* **OI_VIS2**: Squared visibility amplitudes and phases (calibrated squared visibility of the target, corresponding to 21 baselines)\n",
- "* **OI_T3**: Triple product amplitudes and phases (calibrated closure phases of the target corresponding to 35 triangles between sub-apertures)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Display OIFITS file information\n",
- "fits.info(calib_oifits)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The primary header tells us some information about the contents, such as the telescope (JWST), aperture mask design used (g7s6), the name of the target (AB Dor), and the name of the calibrator (HD37093):"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Display primary header contents\n",
- "fits.getheader(calib_oifits)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Examine the data\n",
- "\n",
- "First, we will plot the interferometric observables we will be fitting with Fouriever. These are the 35 closure phases and 21 squared visibilities of the AB Dor observation, calibrated by our reference star HD 37093. We will plot observables against $B_{max}/\\lambda$, where $B_{max}$ is the baseline between the centers of two sub-apertures (for squared visibility) or the longest of the three baselines between three sub-apertures (for closure phase), and $\\lambda$ is the central wavelength of the filter (4.82 $\\mu$m)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "# Your input data is an oifits file\n",
- "with fits.open(calib_oifits) as hdu:\n",
- " cp_ext = hdu['OI_T3'].data\n",
- " sqvis_ext = hdu['OI_VIS2'].data\n",
- " oiarray = hdu['OI_ARRAY'].data\n",
- " wavel = hdu['OI_WAVELENGTH'].data['EFF_WAVE']\n",
- " pscale = hdu['OI_ARRAY'].header['PSCALE']\n",
- " pav3 = hdu[0].header['PA']\n",
- "print('Wavelength: %.2e m' % wavel)\n",
- "print('V3 PA: %.2f degrees' % pav3)\n",
- "cp = cp_ext['T3PHI']\n",
- "cp_err = cp_ext['T3PHIERR']\n",
- "tri_idx = cp_ext['STA_INDEX']\n",
- "\n",
- "sqvis = sqvis_ext['VIS2DATA']\n",
- "sqvis_err = sqvis_ext['VIS2ERR']\n",
- "bl_idx = sqvis_ext['STA_INDEX']\n",
- "\n",
- "hole_ctrs = oiarray['STAXYZ']\n",
- "hole_idx = oiarray['STA_INDEX']"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Calculate the length of the baseline [m] for each pair\n",
- "baselines = []\n",
- "for bl in bl_idx:\n",
- " hole1, hole2 = (bl[0] - 1), (bl[1] - 1)\n",
- " x1, y1 = hole_ctrs[hole1][0], hole_ctrs[hole1][1]\n",
- " x2, y2 = hole_ctrs[hole2][0], hole_ctrs[hole2][1]\n",
- " length = np.abs(np.sqrt((x2 - x1)**2. + (y2 - y1)**2.))\n",
- " baselines.append(length)\n",
- "# Calculate the length of three baselines for each triangle\n",
- "# Select the longest for plotting\n",
- "tri_baselines = []\n",
- "tri_longest = []\n",
- "for tri in tri_idx:\n",
- " hole1, hole2, hole3 = tri[0] - 1, tri[1] - 1, tri[2] - 1\n",
- " x1, y1 = hole_ctrs[hole1][0], hole_ctrs[hole1][1]\n",
- " x2, y2 = hole_ctrs[hole2][0], hole_ctrs[hole2][1]\n",
- " x3, y3 = hole_ctrs[hole3][0], hole_ctrs[hole3][1]\n",
- " length12 = np.abs(np.sqrt((x2 - x1)**2. + (y2 - y1)**2.))\n",
- " length23 = np.abs(np.sqrt((x3 - x2)**2. + (y3 - y2)**2.))\n",
- " length31 = np.abs(np.sqrt((x1 - x3)**2. + (y1 - y3)**2.))\n",
- " tri_lengths = [length12, length23, length31]\n",
- " tri_baselines.append(tri_lengths)\n",
- " tri_longest.append(np.max(tri_lengths))\n",
- "\n",
- "# Calculate B_max/lambda\n",
- "bmaxlambda_sqvis = baselines / wavel\n",
- "bmaxlambda_cp = tri_longest / wavel\n",
- "\n",
- "# Label baselines and triangles\n",
- "bl_strings = []\n",
- "for idx in bl_idx:\n",
- " bl_strings.append(str(idx[0])+'_'+str(idx[1]))\n",
- "\n",
- "tri_strings = []\n",
- "for idx in tri_idx:\n",
- " tri_strings.append(str(idx[0])+'_'+str(idx[1])+'_'+str(idx[2]))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Plot closure phases, square visibilities\n",
- "# Label which point corresponds to which hole pair or triple\n",
- "\n",
- "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 7))\n",
- "ax1.errorbar(bmaxlambda_cp, cp, yerr=cp_err, fmt='go')\n",
- "ax1.set_xlabel(r'$B_{max}/\\lambda$', size=16)\n",
- "ax1.set_ylabel('Closure phase [deg]', size=14)\n",
- "ax1.set_title('Calibrated Closure Phase', size=14)\n",
- "for ii, tri in enumerate(tri_strings):\n",
- " ax1.annotate(tri,\n",
- " (bmaxlambda_cp[ii], cp[ii]),\n",
- " xytext=(bmaxlambda_cp[ii]+10000, cp[ii]))\n",
- "\n",
- "ax2.errorbar(bmaxlambda_sqvis, sqvis, yerr=sqvis_err, fmt='go')\n",
- "ax2.set_title('Calibrated Squared Visibility', size=16)\n",
- "ax2.set_xlabel(r'$B_{max}/\\lambda$', size=14)\n",
- "ax2.set_ylabel('Squared visibility amplitude', size=14)\n",
- "for ii, bl in enumerate(bl_strings):\n",
- " ax2.annotate(bl,\n",
- " (bmaxlambda_sqvis[ii],\n",
- " sqvis[ii]),\n",
- " xytext=(bmaxlambda_sqvis[ii]+10000, sqvis[ii]))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The above plots show the calibrated closure phases (left) and the calibrated squared visibilities (right). Each quantity is plotted against $B_{max}/\\lambda$, the baseline length divided by the wavelength of the observation. In the case of closure phases, where the triangle is formed by three baselines, the longest one is selected. \n",
- "\n",
- "For a monochromatic observation of a point source, we would expect all 35 closure phases to be zero, and all 21 squared visibilities to be unity. Asymmetries in the target caused by, e.g., an unresolved companion cause the closure phases and visibilities corresponding to the baselines between affected sub-apertures to diverge from zero or unity. We can now use the set of calibrated observables to model the most probable location and contrast ratio of the companion. "
- ]
- },
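The reason a point source yields zero closure phase is that the fringe phase measured on the baseline between holes $i$ and $j$ picks up the instrumental piston difference $p_i - p_j$, and summing phases around a closed triangle of baselines cancels the pistons exactly. A minimal numerical sketch of this cancellation (the hole indices and piston values are illustrative, not taken from the data):

```python
import numpy as np

rng = np.random.default_rng(42)
pistons = rng.normal(0.0, 1.0, size=7)  # per-hole instrumental phase errors [rad]

def fringe_phase(i, j):
    # for a point source the target phase is zero; only piston differences remain
    return pistons[i] - pistons[j]

# closure phase around the triangle formed by holes (0, 1, 2)
cp = fringe_phase(0, 1) + fringe_phase(1, 2) + fringe_phase(2, 0)
print(np.isclose(cp, 0.0))  # True: pistons cancel around a closed triangle
```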
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Perform the binary parameter extraction\n",
- "\n",
- "Fouriever performs a search for faint companions over a coarse grid of starting points for the position of the companion and companion/star flux ratio. A multiparameter fit is performed for each starting position, and the companion position and flux ratio are adjusted, leading to a local $\\chi^2$ minimum. Based on the distance each starting point must travel to reach its local minimum, Fouriever creates a finer search grid and repeats the process. This iterative process eventually identifies a global minimum in the $\\chi^2$ map. Fouriever also accounts for correlations between observables and bandwidth smearing effects.\n",
- "\n",
- "Each point on the grid is fitted with a model of a resolved primary star with a point-source companion. The interferometric observables are estimated using a least-squares minimization between the data and model. The significance ($n\\sigma$) of the detection is capped at $8\\sigma$. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Utilize the Fouriever package"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Set up parameters for the binary search grid and output files\n",
- "\n",
- "rmin = 10 # inner search angle [mas]\n",
- "rmax = 500 # outer search angle [mas]\n",
- "step = 25 # grid step size [mas]\n",
- "\n",
- "obase1 = 'ami_binary_smear_cov.png'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Load data for covariance calculation\n",
- "basename = os.path.basename(calib_oifits)\n",
- "data = intercorr.data(idir=datadir+'/',\n",
- " fitsfiles=[basename])\n",
- "# Add observable covariances\n",
- "data.clear_cov()\n",
- "data.add_cpcov(odir=datadir)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Load data for fitting\n",
- "data = uvfit.data(idir=datadir,\n",
- " fitsfiles=[basename])\n",
- "\n",
- "# Compute chi-squared map.\n",
- "fit = data.chi2map(model='bin', # fit unresolved companion\n",
- " cov=True, # this data set has covariance\n",
- " sep_range=(rmin, rmax), # use custom separation range\n",
- " step_size=step, # use custom step size\n",
- " smear=3, # use bandwidth smearing of 3\n",
- " ofile=obase1) # save figures"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Display the chi-squared map\n",
- "\n",
- "Image(obase1[:-4]+'_chi2map.png')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Run MCMC around best fit position.\n",
- "fit = data.mcmc(fit=fit, # best fit from gridsearch\n",
- " temp=None, # use default temperature (reduced chi-squared of best fit)\n",
- " cov=True, # this data set has covariance\n",
- " smear=3, # use bandwidth smearing of 3\n",
- " ofile=obase1, # save figures\n",
- " sampler='emcee') # sampling algorithm"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Display MCMC fit results\n",
- "\n",
- "Image(obase1[:-4]+'_mcmc_corner.png')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The corner plot shows the 1D histograms and 2D contours of the posterior distributions of each pair of binary parameters."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Fouriever produces a plot of the closure phases from the best-fit binary model vs those extracted from the data, and the residual (difference between the data and model, normalized by $\\sigma/\\chi$):"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Display the closure phase model vs. data plot\n",
- "\n",
- "Image(obase1[:-4]+'_cp_bin.png')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Compute chi-squared map after subtracting best fit companion.\n",
- "obase2 = 'ami_binary_smear_cov_sub.png'\n",
- "\n",
- "fit_sub = data.chi2map_sub(fit_sub=fit, # best fit from MCMC\n",
- " model='bin', # fit unresolved companion\n",
- " cov=True, # this data set has covariance\n",
- " sep_range=(rmin, rmax), # use custom separation range\n",
- " step_size=step, # use custom step size\n",
- " smear=3, # use bandwidth smearing of 3\n",
- " ofile=obase2) # save figures"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Display chi-squared map after subtracting the best-fit companion\n",
- "\n",
- "Image(obase2[:-4]+'_chi2map.png')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This \"detection\" after the companion is subtracted has a low $N_{\\sigma}$ and the symmetry in the map suggests Fourier aliasing at that location, so we do not believe there is a second companion detected here."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Convert RA, Dec, flux ratio to separation, PA, magnitude difference"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Fouriever provides the best-fit offset in right ascension and declination \\[mas\\], and the flux ratio of companion to primary. These are stored in the fit dictionary produced by Fouriever. We will convert these to position angle, separation, and contrast in magnitudes to compare with our expected values from above:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def mag_diff(flux_ratio):\n",
- " # for flux ratio f_a/f_b, calculate magnitude difference m_a - m_b\n",
- " return -2.5 * np.log10(flux_ratio)\n",
- "\n",
- "\n",
- "def convert_params(x, y, f, ex, ey, ef, sigma=1):\n",
- " \"\"\" \n",
- " 'convert' binary params from ra, dec, flux ratio to sep,\n",
- " pa, magnitude difference with appropriate errors.\n",
- " Multiply errorbars by some sigma factor.\n",
- " \"\"\"\n",
- " sep = np.sqrt(x**2 + y**2)\n",
- "    pa = np.rad2deg(np.arctan2(x, y)) % 360.  # PA east of north, wrapped to [0, 360)\n",
- " dm = mag_diff(f)\n",
- " sep_unc = np.sqrt((x/sep*ex)**2+(y/sep*ey)**2) * sigma\n",
- " pa_unc = np.rad2deg(np.sqrt((y/sep**2*ex)**2+(-x/sep**2*ey)**2)) * sigma\n",
- " dm_upper = mag_diff(f + ef*sigma)\n",
- " dm_lower = mag_diff(f - ef*sigma)\n",
- " return sep, pa, dm, sep_unc, pa_unc, dm_upper, dm_lower"
- ]
- },
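The trigonometry in `convert_params` can be sanity-checked in isolation. The offsets and flux ratio below are made-up illustrative values, not the AB Dor fit:

```python
import numpy as np

# Hypothetical companion offsets [mas] and flux ratio (illustration only)
dra, ddec, fratio = -300.0, 250.0, 0.04

sep = np.hypot(dra, ddec)                        # separation [mas]
pa = np.rad2deg(np.arctan2(dra, ddec)) % 360.0   # PA east of north [deg]
dm = -2.5 * np.log10(fratio)                     # contrast [mag]
```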
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Get the best-fit parameters and uncertainties\n",
- "x, ex = fit[\"p\"][1], fit[\"dp\"][1]\n",
- "y, ey = fit[\"p\"][2], fit[\"dp\"][2]\n",
- "f, ef = fit[\"p\"][0], fit[\"dp\"][0]\n",
- "\n",
- "sep_fit, pa_fit, dm_fit, sep_unc, pa_unc, dm_upper, dm_lower = convert_params(x, y, f, ex, ey, ef)\n",
- "dm_unc = np.mean([dm_fit - dm_upper, dm_lower - dm_fit])  # symmetrized uncertainty for printing"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "print(' Expected Model')\n",
- "print('Sep [mas]: %.3f %.3f +/- %.2f' % (sep, sep_fit, sep_unc))\n",
- "print('Theta [deg]: %.3f %.3f +/- %.2f' % (theta, pa_fit, pa_unc))\n",
- "print('dm [mag]: %.3f %.3f +/- %.2f' % (dm, dm_fit, dm_unc))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We notice that there is a significant difference between the expected and retrieved binary parameters, most notably the separation. Since the expected parameters were derived from astrometric measurements of AB Dor, the most recent of which was in 2007 (HIP2), there were large uncertainties on these values. We think it is probable that the observed position is the real position of the companion.\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Calculate the contrast limits"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Next, we will use Fouriever to find the detection limit at different angular separations. To do this, Fouriever subtracts the best-fit companion and then injects an additional companion at each grid position with different flux ratios and estimates the number of sigma for a theoretical detection at that point. It interpolates the flux ratio values at 3$\\sigma$ for all points in the grid to produce a 3$\\sigma$ detection map of the contrast (flux) ratio. \n",
- "\n",
- "It also uses the Absil method, which differs slightly from the injection method in that it assumes that the uniform disk is the true model when evaluating the probability of a binary existing at a given grid position, while the injection method assumes the binary is the true model (see [Absil et al. 2011](https://ui.adsabs.harvard.edu/abs/2011A%26A...535A..68A/abstract), [Gallenne et al. 2015](https://ui.adsabs.harvard.edu/abs/2015A%26A...579A..68G/abstract) for more details)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Estimate detection limits using the injection and Absil methods\n",
- "data.detlim(sigma=3., # confidence level of detection limits\n",
- " fit_sub=fit, # best fit from MCMC\n",
- " cov=True, # this data set has covariance\n",
- " sep_range=(rmin, rmax), # use custom separation range\n",
- " step_size=step, # use custom step size\n",
- " smear=3, # use bandwidth smearing of 3\n",
- " ofile=obase2) # save figures"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Display the detection limit plot\n",
- "\n",
- "Image(obase2[:-4]+'_detlim.png')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The top plots show the detection limit, in terms of contrast ($\\Delta$Mag), at each location in the search grid based on the injection/detection of false companions using two slightly different methods. The lower plot shows an estimate of the same detection limit with respect to the angular separation [mas] from the primary target (azimuthally averaged from the top plots)."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Visually compare the position\n",
- "\n",
- "We can now look at a simulated image with the faint companion artificially brightened, and we see that the position of the primary star at the center and its faint companion appear to match the position of the companion detected on the above $\\chi^2$ map output by Fouriever."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "bright_im = fits.getdata('./niriss_ami_binary3/jw01093001001_01101_00001_nis_uncal_F480M_NRM_final_seed_image_fakemag_4.8131.fits')\n",
- "# center and trim the image around the bright pixel before rotating\n",
- "bright_idx = np.where(bright_im == np.max(bright_im))\n",
- "left, right = bright_idx[1][0] - 30, bright_idx[1][0] + 30\n",
- "down, up = bright_idx[0][0] - 30, bright_idx[0][0] + 30\n",
- "trimmed = bright_im[down:up, left:right]\n",
- "\n",
- "# rotate the image by pav3 to match orientation\n",
- "rot_im = ndimage.rotate(trimmed, pav3+90, reshape=False)\n",
- "# convert image coordinates from pixels to milliarcsec, centered on target\n",
- "xsize_px, ysize_px = rot_im.shape[0], rot_im.shape[1]\n",
- "xsize_mas, ysize_mas = xsize_px*pscale, ysize_px*pscale\n",
- "bright_loc = np.where(rot_im == np.max(rot_im))\n",
- "brightx_mas, brighty_mas = bright_loc[1][0]*pscale, bright_loc[0][0]*pscale\n",
- "xmin, xmax = 0. - brightx_mas, xsize_mas - brightx_mas\n",
- "ymin, ymax = 0. - brighty_mas, ysize_mas - brighty_mas\n",
- "\n",
- "# Plot the image on the same scale as Fouriever chi-squared maps above\n",
- "fig = plt.figure(figsize=(5, 5))\n",
- "plt.imshow(rot_im, origin='lower', extent=[-xmin, -xmax, ymin, ymax])\n",
- "plt.xlim([500, -500])\n",
- "plt.ylim([-500, 500])\n",
- "plt.title('AB Dor and Companion')\n",
- "plt.xlabel(r'$\\Delta\\alpha$ [mas]')\n",
- "plt.ylabel(r'$\\Delta\\delta$ [mas]')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The above plot confirms the position of the faint companion relative to the target, shown here as change in right ascension and declination from the center of the target star."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "editable": true,
- "slideshow": {
- "slide_type": ""
- },
- "tags": []
- },
- "source": [
- "## Additional Resources\n",
- "\n",
- "- [JWST NIRISS AMI documentation](https://jwst-docs.stsci.edu/near-infrared-imager-and-slitless-spectrograph/niriss-observing-modes/niriss-aperture-masking-interferometry)\n",
- "- Fouriever development papers:\n",
- " - [Kammerer et al. 2019](https://ui.adsabs.harvard.edu/abs/2019MNRAS.486..639K/abstract)\n",
- " - [Kammerer et al. 2020](https://ui.adsabs.harvard.edu/abs/2020A%26A...644A.110K/abstract)\n",
- " - [Kammerer et al. 2021a](https://ui.adsabs.harvard.edu/abs/2021A%26A...646A..36K/abstract)\n",
- "- [CANDID paper (Gallenne et al. 2015)](https://ui.adsabs.harvard.edu/link_gateway/2015A&A...579A..68G/doi:10.1051/0004-6361/201525917)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## About this notebook\n",
- "**Author:** Rachel Cooper and Anand Sivaramakrishnan, adapted from example analysis scripts written by Jens Kammerer (STScI) and Anthony Soulain (University of Sydney).\n",
- "**Updated On:** 2022-12-20"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "***"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#top)\n",
- " "
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.11.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/notebooks/niriss_ami_binary/environment.sh b/notebooks/niriss_ami_binary/environment.sh
deleted file mode 100644
index 46aa33653..000000000
--- a/notebooks/niriss_ami_binary/environment.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/usr/bin/env bash
-
-export MIRAGE_DATA='/mnt/stsci/mirage_data'
diff --git a/notebooks/niriss_ami_binary/pre-install.sh b/notebooks/niriss_ami_binary/pre-install.sh
deleted file mode 100644
index a91747faa..000000000
--- a/notebooks/niriss_ami_binary/pre-install.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/usr/bin/env bash
-
-source bin/activate
-pip install numpy
-pip install d2to1
-pip install stsci.distutils
-pip install pyfits
diff --git a/notebooks/niriss_ami_binary/requirements.txt b/notebooks/niriss_ami_binary/requirements.txt
deleted file mode 100644
index 187a2da76..000000000
--- a/notebooks/niriss_ami_binary/requirements.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-matplotlib==3.3.2
-healpy==1.12.5
-asdf==2.7.1
-git+https://github.com/spacetelescope/jwst_gtvt.git
-git+https://github.com/spacetelescope/mirage.git@969cc64f883f43011f8bd772482d0df06895201e
-jwst
-pysiaf==0.9.0
-git+https://github.com/anand0xff/ImPlaneIA.git@b45d665cdc6979dabdcde89be2b5a7b0a1995483
-poppy
-webbpsf
-uncertainties
-munch
-astroquery
-termcolor
-astropy
diff --git a/notebooks/niriss_ami_binary/run_bp_fix.py b/notebooks/niriss_ami_binary/run_bp_fix.py
deleted file mode 100644
index f807a754c..000000000
--- a/notebooks/niriss_ami_binary/run_bp_fix.py
+++ /dev/null
@@ -1,454 +0,0 @@
-from __future__ import division
-
-import matplotlib
-
-matplotlib.rcParams.update({'font.size': 14})
-
-# =============================================================================
-# IMPORTS
-# =============================================================================
-
-import astropy.io.fits as pyfits
-import matplotlib.pyplot as plt
-import numpy as np
-
-import argparse
-import glob
-import os
-import time
-
-from copy import deepcopy
-from poppy import matrixDFT
-from scipy.ndimage import median_filter
-
-from jwst.datamodels import dqflags
-
-# =============================================================================
-# CODE FROM ANAND FOLLOWS
-# =============================================================================
-
-micron = 1.0e-6
-filts = ['F277M', 'F380M', 'F430M', 'F480M', 'F356W', 'F444W']
-filtwl_d = { # pivot wavelengths
- 'F277M': 2.776e-6, # less than Nyquist
- 'F380M': 3.828e-6,
- 'F430M': 4.286e-6,
- 'F480M': 4.817e-6,
- 'F356W': 3.595e-6, # semi-forbidden
- 'F444W': 4.435e-6, # semi-forbidden
-}
-filthp_d = { # half power limits
- 'F277M': (2.413e-6, 3.142e-6),
- 'F380M': (3.726e-6, 3.931e-6),
- 'F430M': (4.182e-6, 4.395e-6),
- 'F480M': (4.669e-6, 4.971e-6),
- 'F356W': (3.141e-6, 4.068e-6),
- 'F444W': (3.880e-6, 5.023e-6),
-}
-WL_OVERSIZEFACTOR = 0.1 # increase filter wl support by this amount to 'oversize' in wl space
-
-pix_arcsec = 0.0656 # nominal isotropic pixel scale - refine later
-pix_rad = pix_arcsec * np.pi / (60 * 60 * 180)
-
-DIAM = 6.559348 # / Flat-to-flat distance across pupil in V3 axis
-PUPLDIAM = 6.603464 # / Full pupil file size, incl padding.
-PUPL_CRC = 6.603464 # / Circumscribing diameter for JWST primary
-
-def create_wavelengths(filtername):
- """
- filtername str: filter name
- Extend filter support slightly past half power points.
- Filter transmissions are quasi-rectangular.
- """
- wl_ctr = filtwl_d[filtername]
- wl_hps = filthp_d[filtername]
-    # both positive quantities below - left is the lower wl, right is the higher wl
- dleft = (wl_ctr - wl_hps[0]) * (1 + WL_OVERSIZEFACTOR)
- drite = (-wl_ctr + wl_hps[1]) * (1 + WL_OVERSIZEFACTOR)
-
- return (wl_ctr, wl_ctr - dleft, wl_ctr + drite)
-
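The oversizing logic in `create_wavelengths` can be sketched numerically. The F480M pivot and half-power values below are taken from the `filtwl_d` and `filthp_d` tables above:

```python
# Sketch of the wavelength oversizing in create_wavelengths() for F480M,
# extending the half-power support by WL_OVERSIZEFACTOR on each side.
wl_ctr = 4.817e-6
wl_lo, wl_hi = 4.669e-6, 4.971e-6
OVERSIZE = 0.1  # WL_OVERSIZEFACTOR

dleft = (wl_ctr - wl_lo) * (1 + OVERSIZE)
drite = (wl_hi - wl_ctr) * (1 + OVERSIZE)
wls = (wl_ctr, wl_ctr - dleft, wl_ctr + drite)
```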
-def calcsupport(filtername, sqfov_npix, pupil="NRM"):
- """
- filtername str: filter name
- calculate psf at low center high wavelengths of filter
- coadd psfs
- perform fft-style transform of image w/dft
- send back absolute value of FT(image) in filter - the CV Vsq array
- """
- wls = create_wavelengths(filtername)
- print(f" {filtername}: {wls[0] / micron:.3f} to {wls[2] / micron:.3f} micron")
-
- detimage = np.zeros((sqfov_npix, sqfov_npix), float)
- for wl in wls:
- psf = calcpsf(wl, sqfov_npix, pupil=pupil)
- detimage += psf
-
- return transform_image(detimage)
-
-def transform_image(image):
- ft = matrixDFT.MatrixFourierTransform()
- ftimage = ft.perform(image, image.shape[0], image.shape[0]) # fake the no-loss fft w/ dft
-
- return np.abs(ftimage)
-
-def calcpsf(wl, fovnpix, pupil="NRM"):
- """
- input wl: float meters wavelength
- input fovnpix: feld of view (square) in number of pixels
- returns monochromatic unnormalized psf
- """
- reselt = wl / PUPLDIAM # radian
- nlamD = fovnpix * pix_rad / reselt # Soummer nlamD FOV in reselts
- # instantiate an mft object:
- ft = matrixDFT.MatrixFourierTransform()
-
- if pupil == "NRM":
- pupil_mask = pyfits.getdata("./niriss_ami_binary2_inflight/MASK_NRM.fits")
- elif pupil == "CLEARP":
- pupil_mask = pyfits.getdata("./niriss_ami_binary2_inflight/MASK_CLEARP.fits")
- else:
- raise ValueError("pupil should be 'NRM' or 'CLEARP'")
-
- image_field = ft.perform(pupil_mask, nlamD, fovnpix)
- image_intensity = (image_field * image_field.conj()).real
-
- return image_intensity
-
-# =============================================================================
-# CODE FROM JENS FOLLOWS
-# =============================================================================
-
-def bad_pixels(data,
- median_size,
- median_tres):
- """
- Identify bad pixels by subtracting median-filtered data and searching for
- outliers.
- """
-
- mfil_data = median_filter(data, size=median_size)
- diff_data = np.abs(data - mfil_data)
- pxdq = diff_data > median_tres * np.median(diff_data)
- pxdq = pxdq.astype('int')
-
- print(' Identified %.0f bad pixels (%.2f%%)' % (np.sum(pxdq), np.sum(pxdq) / np.prod(pxdq.shape) * 100.))
-    print('  Max deviation: %.3f x median' % np.max(diff_data/np.median(diff_data)))
-
- return pxdq
-
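A toy illustration of the `bad_pixels` thresholding on synthetic data (not notebook data), with one injected hot pixel:

```python
import numpy as np
from scipy.ndimage import median_filter

# Flag pixels that deviate strongly from a median-filtered version of the
# image, mirroring bad_pixels() above on a synthetic frame.
rng = np.random.default_rng(0)
data = rng.normal(100.0, 1.0, (32, 32))
data[10, 12] = 5000.0  # injected hot pixel

mfil = median_filter(data, size=3)
diff = np.abs(data - mfil)
pxdq = (diff > 50.0 * np.median(diff)).astype(int)
```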
-def fourier_corr(data,
- pxdq,
- fmas):
- """
- Compute and apply the bad pixel corrections based on Section 2.5 of
- Ireland 2013. This function is the core of the bad pixel cleaning code.
- """
-
- # Get the dimensions.
- ww = np.where(pxdq > 0.5)
- ww_ft = np.where(fmas)
-
- # Compute the B_Z matrix from Section 2.5 of Ireland 2013. This matrix
- # maps the bad pixels onto their Fourier power in the domain Z, which is
- # the complement of the pupil support.
- B_Z = np.zeros((len(ww[0]), len(ww_ft[0]) * 2))
- xh = data.shape[0] // 2
- yh = data.shape[1] // 2
- xx, yy = np.meshgrid(2. * np.pi * np.arange(yh + 1) / data.shape[1],
- 2. * np.pi * (((np.arange(data.shape[0]) + xh) % data.shape[0]) - xh) / data.shape[0])
- for i in range(len(ww[0])):
- cdft = np.exp(-1j * (ww[0][i] * yy + ww[1][i] * xx))
- B_Z[i, :] = np.append(cdft[ww_ft].real, cdft[ww_ft].imag)
-
- # Compute the corrections for the bad pixels using the Moore-Penrose pseudo
- # inverse of B_Z (Equation 19 of Ireland 2013).
- B_Z_ct = np.transpose(np.conj(B_Z))
- B_Z_mppinv = np.dot(B_Z_ct, np.linalg.inv(np.dot(B_Z, B_Z_ct)))
-
- # Apply the corrections for the bad pixels.
- data_out = deepcopy(data)
- data_out[ww] = 0.
- data_ft = np.fft.rfft2(data_out)[ww_ft]
- corr = -np.real(np.dot(np.append(data_ft.real, data_ft.imag), B_Z_mppinv))
- data_out[ww] += corr
-
- return data_out
-
-def fix_bad_pixels(indir,
- odir,
- fitsfiles,
- show=None,
- save=False):
-    """
-    Identify and correct bad pixels in the given FITS files using the
-    Fourier-based method of Ireland (2013), writing results to odir.
-    """
-
- print('Fixing bad pixels...')
-
-
- # These values were determined empirically for NIRISS/AMI and need to be
- # tweaked for any other instrument.
- median_size = 3 # pix
-    median_tres = 50.  # empirically tuned to capture all bad pixels
- nrefrow = 5
-
- # Create the output directory if it does not exist already.
- if (not os.path.exists(odir)):
- os.makedirs(odir)
-
- # Go through all FITS files.
- Nfitsfiles = len(fitsfiles)
- for i in range(Nfitsfiles):
- print(' File %.0f of %.0f' % (i + 1, Nfitsfiles))
- print(' %s' % fitsfiles[i])
- # Open the FITS file.
- hdul = pyfits.open(os.path.join(indir, fitsfiles[i]), memmap=False)
- data = hdul['SCI'].data
- pxdq0 = hdul['DQ'].data
- imsz = data.shape
- #print('Im size:', imsz)
- if len(imsz) == 2:
- # add extra dimension as if it's a calints file
- data = np.expand_dims(data, axis=0)
- pxdq0 = np.expand_dims(pxdq0, axis=0)
- imsz = data.shape
- if (not (imsz[-2] == 80 and imsz[-1] == 80)): # last 2 dimensions are the image size
- #raise UserWarning('Expecting 80x80 subarrays')
- # skip file; probably TA exposure
- print('Image dimensions in file #%i not 80x80; skipping' % (i+1))
- continue # proceed to next file
- filt = hdul[0].header['FILTER']
- pupil = hdul[0].header['PUPIL']
- if (filt not in filts):
- raise UserWarning('Filter ' + filt + ' is not supported')
-
- # code from Rachel:
- # only correct pixels marked DO_NOT_USE in the DQ array
- # modified by Jens to also correct JUMP_DET pixels
- totpix = np.prod(imsz)
- REFPIX = dqflags.pixel["REFERENCE_PIXEL"]
- mask_refpix = pxdq0 & REFPIX == REFPIX
- nrefpix = np.count_nonzero(mask_refpix)
-
- nflagged_all = np.count_nonzero(pxdq0) - nrefpix
- DO_NOT_USE = dqflags.pixel["DO_NOT_USE"]
- JUMP_DET = dqflags.pixel["JUMP_DET"]
- dqmask_DO_NOT_USE = pxdq0 & DO_NOT_USE == DO_NOT_USE
- dqmask_JUMP_DET = pxdq0 & JUMP_DET == JUMP_DET
- if (pupil == 'NRM'):
- dqmask = dqmask_DO_NOT_USE | dqmask_JUMP_DET
- else:
- dqmask = dqmask_DO_NOT_USE
- pxdq = np.where(dqmask, pxdq0, 0)
- nflagged_dnu = np.count_nonzero(pxdq) - nrefpix
- print(' The following values do not include the reference pixels:')
- print(' %i pixels flagged in DQ array' % nflagged_all)
- print(' %i pixels flagged DO_NOT_USE or JUMP_DET' % nflagged_dnu)
- print(' %.2f percent of all pixels flagged' % ((nflagged_dnu / totpix) * 100))
- flagged_per_int = [np.count_nonzero(slc) - nrefrow*imsz[1] for slc in dqmask]
- print(' %.2f pixels (average) flagged per integration'% np.mean(flagged_per_int))
-
- # These values are taken from the JDox and the SVO Filter Profile
- # Service.
- diam = PUPLDIAM # m
- gain = 1.61 # e-/ADU
- rdns = 18.32 # e-
- pxsc = pix_arcsec * 1000. # mas/pix
-
- # Find the PSF centers and determine the maximum possible frame size.
- ww_max = []
- for j in range(imsz[0]):
- ww_max += [np.unravel_index(np.argmax(median_filter(data[j], size=3)), data[j].shape)] # JK: added median filter to catch PSF center despite hot pixels
- ww_max = np.array(ww_max)
- xh = min(imsz[1] - np.max(ww_max[:, 0]), np.min(ww_max[:, 0]) - nrefrow) # the bottom 4 rows are reference pixels
- yh = min(imsz[2] - np.max(ww_max[:, 1]), np.min(ww_max[:, 1]) - 0)
- sh = min(xh, yh)
- print(' Cropping all frames to %.0fx%.0f pixels' % (2 * sh, 2 * sh))
-
- # Compute field-of-view and Fourier sampling.
- fov = 2 * sh * pxsc / 1000. # arcsec
- fsam = filtwl_d[filt] / (fov / 3600. / 180. * np.pi) # m/pix
- print(' FOV = %.1f arcsec, Fourier sampling = %.3f m/pix' % (fov, fsam))
-
-        # Compute the Fourier-plane support of the pupil for this filter.
- cvis = calcsupport(filt, 2 * sh, pupil=pupil)
- cvis /= np.max(cvis)
- fmas = cvis < 1e-3 # 1e-3 seems to be a reasonable threshold
- fmas_show = fmas.copy()
- fmas = np.fft.fftshift(fmas)[:, :2 * sh // 2 + 1]
-
- # Compute the pupil mask. This mask defines the region where we are
-        # measuring the noise, beyond ~9-12 lambda/D from the PSF
-        # (depending on the pupil).
- ramp = np.arange(2 * sh) - 2 * sh // 2
- xx, yy = np.meshgrid(ramp, ramp)
- dist = np.sqrt(xx ** 2 + yy ** 2)
- if (pupil == 'NRM'):
- pmas = dist > 9. * filtwl_d[filt] / diam * 180. / np.pi * 1000. * 3600. / pxsc
- else:
- pmas = dist > 12. * filtwl_d[filt] / diam * 180. / np.pi * 1000. * 3600. / pxsc
- if (np.sum(pmas) < np.mean(flagged_per_int)):
- print(' SKIPPING: subframe too small to estimate noise')
- continue
-
- # Go through all frames.
- for j in range(imsz[0]):
- print(' Frame %.0f of %.0f' % (j + 1, imsz[0]))
-
- # Now cut out the subframe.
- data_cut = deepcopy(data[j, ww_max[j, 0] - sh:ww_max[j, 0] + sh, ww_max[j, 1] - sh:ww_max[j, 1] + sh])
- data_orig = deepcopy(data_cut)
- pxdq_cut = deepcopy(pxdq[j, ww_max[j, 0] - sh:ww_max[j, 0] + sh, ww_max[j, 1] - sh:ww_max[j, 1] + sh])
- pxdq_cut = pxdq_cut > 0.5
- pxdq_orig = deepcopy(pxdq_cut)
-
- # Correct the bad pixels. This is an iterative process. After each
- # iteration, we check whether new (residual) bad pixels are
- # identified. If so, we re-compute the corrections. If not, we
- # terminate the iteration.
- for k in range(10):
-
- # plt.figure()
- # plt.imshow(np.log10(data_cut), origin='lower')
- # plt.imshow(pmas, alpha=0.5, origin='lower')
- # plt.colorbar()
- # plt.show()
-
- # plt.figure()
- # plt.imshow(np.log10(np.abs(np.fft.rfft2(np.fft.fftshift(data_cut)))), origin='lower')
- # plt.imshow(fmas, alpha=0.5, origin='lower')
- # plt.colorbar()
- # plt.show()
-
- # Correct the bad pixels.
- data_cut = fourier_corr(data_cut,
- pxdq_cut,
- fmas)
- if (k == 0):
- data_temp = deepcopy(data_cut)
-
- # Identify residual bad pixels by looking at the high spatial
- # frequency part of the image.
- fmas_data = np.real(np.fft.irfft2(np.fft.rfft2(data_cut) * fmas))
-
- # Analytically determine the noise (Poisson noise + read noise)
- # and normalize the high spatial frequency part of the image
- # by it, then identify residual bad pixels.
- mfil_data = median_filter(data_cut, size=median_size)
- nois = np.sqrt(mfil_data / gain + rdns ** 2)
- fmas_data /= nois
- temp = bad_pixels(fmas_data,
- median_size=median_size,
- median_tres=median_tres)
-
- # Check which bad pixels are new. Also, compare the
- # analytically determined noise with the empirically measured
- # noise.
- pxdq_new = np.sum(temp[pxdq_cut < 0.5])
- print(' Iteration %.0f: %.0f new bad pixels, sdev of norm noise = %.3f' % (k + 1, pxdq_new, np.std(fmas_data[pmas])))
-
- # If no new bad pixels were identified, terminate the
- # iteration.
- if (pxdq_new == 0.):
- break
-
- # If new bad pixels were identified, add them to the bad pixel
- # map.
- pxdq_cut = ((pxdq_cut > 0.5) | (temp > 0.5)).astype('int')
-
- # Put the modified subframes back into the data cube.
- data[j, ww_max[j, 0] - sh:ww_max[j, 0] + sh, ww_max[j, 1] - sh:ww_max[j, 1] + sh] = fourier_corr(data_orig,
- pxdq_cut,
- fmas)
- pxdq[j, ww_max[j, 0] - sh:ww_max[j, 0] + sh, ww_max[j, 1] - sh:ww_max[j, 1] + sh] = pxdq_cut
-
- # Show plot if desired.
- if (show is not None and str(j) in show):
- # if True:
- print('Making plot...')
- # Make plot.
- f, ax = plt.subplots(2, 3, figsize=(3 * 6.4, 2 * 4.8))
- p00 = ax[0, 0].imshow(np.log10(np.abs(data[j, ww_max[j, 0] - sh:ww_max[j, 0] + sh, ww_max[j, 1] - sh:ww_max[j, 1] + sh])), origin='lower')
- c00 = plt.colorbar(p00, ax=ax[0, 0])
- c00.set_label('log10(abs(ADU))', labelpad=20., rotation=270.)
- ax[0, 0].imshow(pmas, origin='lower', cmap='binary', alpha=0.25)
- ax[0, 0].set_title('Region to estimate noise')
- p01 = ax[0, 1].imshow(np.log10(np.abs(np.fft.fftshift(np.fft.fft2(np.fft.fftshift(data[j, ww_max[j, 0] - sh:ww_max[j, 0] + sh, ww_max[j, 1] - sh:ww_max[j, 1] + sh]))))),
- origin='lower')
- c01 = plt.colorbar(p01, ax=ax[0, 1])
- c01.set_label('log10(abs(fourier power))', labelpad=20., rotation=270.)
- ax[0, 1].imshow(fmas_show, origin='lower', cmap='binary', alpha=0.25)
- ax[0, 1].set_title('Region Z (complement of pupil support)')
- p02 = ax[0, 2].imshow(
- np.log10(np.abs(data[j, ww_max[j, 0] - sh:ww_max[j, 0] + sh, ww_max[j, 1] - sh:ww_max[j, 1] + sh])),
- origin='lower')
- c02 = plt.colorbar(p02, ax=ax[0, 2])
- c02.set_label('log10(abs(ADU))', labelpad=20., rotation=270.)
- ax[0, 2].set_title('Data after')
- p10 = ax[1, 0].imshow(pxdq_orig, origin='lower')
- c10 = plt.colorbar(p10, ax=ax[1, 0])
- ax[1, 0].set_title('Bad pixels before')
- fmas_data = np.real(np.fft.irfft2(np.fft.rfft2(data_temp) * fmas))
- mfil_data = median_filter(data_temp, size=median_size)
- nois = np.sqrt(mfil_data / gain + rdns ** 2)
- fmas_data /= nois
- mfil_fmas_data = median_filter(fmas_data, size=median_size)
- diff_fmas_data = np.abs(fmas_data - mfil_fmas_data)
- temp = diff_fmas_data / np.median(diff_fmas_data)
- p11 = ax[1, 1].imshow(temp, origin='lower', vmax=median_tres)
- c11 = plt.colorbar(p11, ax=ax[1, 1])
- ax[1, 1].scatter(*np.where(temp > median_tres)[::-1], s=15, c='red', marker='x')
- ax[1, 1].set_title('Bad pixels new')
- p12 = ax[1, 2].imshow(pxdq[j, ww_max[j, 0] - sh:ww_max[j, 0] + sh, ww_max[j, 1] - sh:ww_max[j, 1] + sh],
- origin='lower')
- c12 = plt.colorbar(p12, ax=ax[1, 2])
- ax[1, 2].set_title('Bad pixels after')
- plt.tight_layout()
- if save:
- outname = os.path.join(odir, fitsfiles[i].replace('.fits','_bpfixed.png'))
- plt.savefig(outname)
- print('Saved figure to %s' % outname)
- else:
- plt.show()
- # import pdb; pdb.set_trace()
-
- # Save the corrected data into the original FITS file.
- hdul['SCI'].data = np.squeeze(data)
- hdul['DQ'].data = np.squeeze(pxdq0) # original dq array
- hdul.writeto(os.path.join(odir, fitsfiles[i]), output_verify='fix', overwrite=True)
- hdul.close()
-
- return None
-
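The DQ bit tests inside `fix_bad_pixels` can be sketched standalone. The bit values below are the standard JWST pipeline assignments (`DO_NOT_USE` = 1, `JUMP_DET` = 4), stated here as an assumption rather than imported from `jwst.datamodels.dqflags`:

```python
import numpy as np

# Assumed JWST DQ bit values (normally taken from jwst.datamodels.dqflags)
DO_NOT_USE = 1
JUMP_DET = 4

pxdq0 = np.array([0, 1, 4, 5, 2])  # example DQ words
dnu = (pxdq0 & DO_NOT_USE) == DO_NOT_USE
jump = (pxdq0 & JUMP_DET) == JUMP_DET
mask = dnu | jump  # NRM branch: flag DO_NOT_USE or JUMP_DET pixels
```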
-def correct_fitsfiles(indir, odir, pattern='*calints.fits', show=None, save=False):
- toc = time.time()
- # gather fits files to correct
- fitsfiles = sorted(map(os.path.basename, glob.glob(os.path.join(indir, pattern))))
- # do the correction
- fix_bad_pixels(indir=indir,
- odir=odir,
- fitsfiles=fitsfiles,
- show=show,
- save=save)
- tic = time.time()
- print("RUNTIME: %.2f s" % (tic - toc))
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument("--indir",
- help="Location of files to be fixed")
- parser.add_argument("--odir",
- help="Location to save bad-pixel-corrected files")
- parser.add_argument("-p", "--pattern", default="*calints.fits", type=str,
- help="Pattern of file names that will be processed in indir")
- parser.add_argument("--show", help="Slices to show in plot",
- nargs='*')
- parser.add_argument("--save", help="Save plots of corrected images",
- action="store_true")
- args = parser.parse_args()
-
- correct_fitsfiles(indir=args.indir,
- odir=args.odir,
- pattern=args.pattern,
- show=args.show,
- save=args.save)
diff --git a/notebooks/optimal_extraction_dynamic/Spectral_Extraction.ipynb b/notebooks/optimal_extraction_dynamic/Spectral_Extraction.ipynb
deleted file mode 100644
index 054d50bba..000000000
--- a/notebooks/optimal_extraction_dynamic/Spectral_Extraction.ipynb
+++ /dev/null
@@ -1,1772 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "# NIRSpec MOS Optimal Spectral Extraction"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "**Use case:** optimal spectral extraction; method by [Horne (1986)](https://ui.adsabs.harvard.edu/abs/1986PASP...98..609H/abstract).
\n",
- "**Data:** JWST simulated NIRSpec MOS data; point sources.
\n",
- "**Tools:** jwst, webbpsf, matplotlib, scipy, custom functions.
\n",
- "**Cross-instrument:** any spectrograph.<br>
\n",
- "**Documentation:** This notebook is part of STScI's larger [post-pipeline Data Analysis Tools Ecosystem](https://jwst-docs.stsci.edu/jwst-post-pipeline-data-analysis).<br>
\n",
- "\n",
- "## Introduction\n",
- "\n",
- "The JWST pipeline produces 1-D and 2-D rectified spectra from combined exposures for each spectroscopic mode. Currently, the 1D products are produced using aperture extraction, with plans to implement optimal extraction via PSF-weighting or fitting. However, there are many situations in which the output will not necessarily be \"optimal\", and fine-tuning the parameters will be needed to improve the results. This notebook is intended to provide a walkthrough of the optimal extraction procedure with example JWST data.\n",
- "\n",
- "### Defining terms\n",
- "__Optimal extraction:__ a method of aperture extraction first defined in [Horne (1986)](https://ui.adsabs.harvard.edu/abs/1986PASP...98..609H/).
\n",
- "__S/N:__ Signal-to-noise ratio, a measure of how noisy a spectrum is.
\n",
- "__WCS:__ World Coordinate System, used for converting between different reference frames.
\n",
- "\n",
- "## Imports\n",
- "We will be using the following libraries to perform optimal spectral extraction.\n",
- "- `glob glob` for collecting filenames\n",
- "- `numpy` to handle array functions, as well as other various and sundry activities\n",
- "- `jwst.datamodels ImageModel, MultiSpecModel` for accessing the datamodels for our example data\n",
- "- `astropy.io fits` for low-level FITS file I/O\n",
- "- `astropy.modeling models, fitting` for the many fitting tasks\n",
- "- `astropy.visualization astropy_mpl_style, simple_norm` for displaying nice images\n",
- "- `scipy.interpolate interp1d, RegularGridInterpolator` for all our interpolation needs\n",
- "- `matplotlib.pyplot` for plotting data\n",
- "- `matplotlib.patches Rectangle` for plotting rectangles on our data\n",
- "- `ipywidgets` to create interactive widgets for adjusting fit parameters\n",
- "- `webbpsf NIRSpec` to generate and visualize a PSF from the instrument model (see Appendix B)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "import tarfile\n",
- "import urllib.request\n",
- "\n",
- "# Set environmental variables\n",
- "os.environ[\"WEBBPSF_PATH\"] = \"./webbpsf-data/webbpsf-data\"\n",
- "\n",
- "# WEBBPSF Data\n",
- "boxlink = 'https://stsci.box.com/shared/static/34o0keicz2iujyilg4uz617va46ks6u9.gz' \n",
- "boxfile = './webbpsf-data/webbpsf-data-1.0.0.tar.gz'\n",
- "\n",
- "webbpsf_folder = './webbpsf-data'\n",
- "\n",
- "# Gather webbpsf files\n",
- "psfExist = os.path.exists(webbpsf_folder)\n",
- "if not psfExist:\n",
- " os.makedirs(webbpsf_folder)\n",
- " urllib.request.urlretrieve(boxlink, boxfile)\n",
- " gzf = tarfile.open(boxfile)\n",
- " gzf.extractall(webbpsf_folder)\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "slideshow": {
- "slide_type": "fragment"
- }
- },
- "outputs": [],
- "source": [
- "%matplotlib inline\n",
- "from glob import glob\n",
- "import numpy as np\n",
- "from stdatamodels.jwst.datamodels import ImageModel\n",
- "from stdatamodels.jwst.datamodels import MultiSpecModel\n",
- "from astropy.io import fits\n",
- "from astropy.modeling import models, fitting\n",
- "from astropy.visualization import astropy_mpl_style, simple_norm\n",
- "from specutils import Spectrum1D\n",
- "from scipy.interpolate import interp1d, RegularGridInterpolator\n",
- "import matplotlib.pyplot as plt\n",
- "from matplotlib.patches import Rectangle\n",
- "from ipywidgets import interact\n",
- "import ipywidgets as widgets\n",
- "\n",
- "plt.style.use(astropy_mpl_style) #use the style we imported for matplotlib displays"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## Loading data\n",
- "We will be using simulated level 3 MOS data provided by James Muzerolle. These files come from a simulated visit with many point sources, and we will begin with the products of the `resample` step, which have the file extension `s2d.fits`. We will also compare the results of our optimal extraction with the products of the `extract1d` step, with the `x1d.fits` extension. See [the science data products specification](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/product_types.html#stage-3-data-products) and links therein for details on structure and format of these files."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The optimal extraction procedure laid out below can be repeated for each `'SCI'` extension in each `s2d` file. For the purposes of this notebook, we will assume that the `resample` step has produced optimal output, so those are the only extensions we need to access. (Rectifying and combining the input spectra is a complicated process on its own, and is far beyond the scope of this notebook!)"
- ]
- },
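The note above says the procedure can be repeated for each `'SCI'` extension. If an `s2d` file carries several such extensions, it can help to enumerate the (EXTNAME, EXTVER) pairs up front; a minimal sketch, using a small in-memory stand-in for a multi-extension file (purely illustrative):

```python
import numpy as np
from astropy.io import fits

# Build an in-memory stand-in for an s2d product with three 'SCI' extensions
hdul = fits.HDUList([fits.PrimaryHDU()])
for ver in (1, 2, 3):
    hdu = fits.ImageHDU(np.zeros((10, 100)), name='SCI')
    hdu.ver = ver  # sets EXTVER
    hdul.append(hdu)

# Enumerate every ('SCI', EXTVER) pair, e.g. to drive an extraction loop
sci_extensions = [(hdu.name, hdu.ver) for hdu in hdul if hdu.name == 'SCI']
```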
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
    - "import os\n",
    - "import zipfile\n",
    - "import urllib.request\n",
    - "\n",
    - "boxlink = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/optimal_extraction/optimal_extraction.zip'\n",
    - "boxfile = './optimal_extraction.zip'\n",
    - "\n",
    - "# Skip the download if the example dataset is already present\n",
    - "if not os.path.exists(boxfile):\n",
    - "    urllib.request.urlretrieve(boxlink, boxfile)\n",
    - "    with zipfile.ZipFile(boxfile, 'r') as zf:\n",
    - "        zf.extractall()\n",
- "\n",
- "example_file = 'F170LP-G235M_MOS_observation-6_mod_correctedWCS_noflat_nooutlierdet_combined_s30263_'\n",
- "s2d_file = os.path.join('s2d_files', example_file+'s2d.fits')\n",
- "x1d_file = os.path.join('x1d_files', example_file+'x1d.fits')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "slideshow": {
- "slide_type": "fragment"
- }
- },
- "outputs": [],
- "source": [
- "data_model = ImageModel(s2d_file)\n",
- "resampled_2d_image = data_model.data # if multiple SCI extensions, also specify EXTVER\n",
- "weights_2d_image = data_model.wht # we will use this to estimate the per-pixel variance later\n",
- "\n",
- "image_shape = resampled_2d_image.shape\n",
- "print(image_shape) #note the swap of x and y"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "When we want to view 2D spectra, we'll generally need to stretch the pixels vertically to get a useful image. We can do this by setting the plot aspect ratio explicitly (we'll try to retain a measure of rectangularity)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "norm = simple_norm(resampled_2d_image, stretch='power')\n",
- "aspect_ratio = image_shape[1] / (2 * image_shape[0])\n",
- "fig1 = plt.figure() # we save these in dummy variables to avoid spurious Jupyter Notebook output\n",
- "img1 = plt.imshow(resampled_2d_image, cmap='gray', aspect=aspect_ratio, \n",
- " norm=norm, interpolation='none')\n",
- "clb1 = plt.colorbar()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "***"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Optimal Extraction algorithm"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Here is an outline of the steps we'll be following:\n",
- "1. [Define an extraction region on the 2D image](#Define-an-extraction-region)\n",
- "1. [Identify a high S/N cross-dispersion (binned & coadded) slice to use for the initial kernel fit](#Create-kernel-slice)\n",
- "3. [Define the extraction kernel](#Define-the-extraction-kernel)\n",
- " 1. Single or composite PSF\n",
- " 1. Polynomial fit to background\n",
- "4. [Fit extraction kernel to initial slice](#Fit-extraction-kernel)\n",
- "5. ***Skipped:*** [*Fit geometric distortion*](#Fit-geometric-distortion-(skipped))\n",
- " 1. *Determine cross-dispersion bins for trace fitting*\n",
- " 1. *First-pass fit of kernel to each bin to find trace center*\n",
- " 1. *Polynomial fit of trace centers*\n",
- "6. [Combine composite model (kernel | trace) with 2D image to create output 1D spectrum](#Construct-final-1D-spectrum)\n",
    - "7. Compare the output spectrum with catalog photometry for flux calibration (not implemented in this notebook)\n",
- "\n",
- "Appendices:\n",
- "- [Appendix A: Batch Processing](#Appendix-A:-Batch-Processing)\n",
- "- [Appendix B: WebbPSF](#Appendix-B:-WebbPSF)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "*Developer Note:*\n",
- "\n",
- "This sort of functionality is desired by many, and as of yet, no general-purpose optimal extraction Python packages exist. While this notebook can provide optimal extraction for 2D resampled JWST pipeline products, and could be adapted for use with other data, it is a far cry from a widely-applicable, maintained and updated spectral extraction codebase. It would be very nice if such a thing existed...!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Define an extraction region"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We begin by identifying the region in the 2D resampled image which contains the spectral trace we want to extract. For a simple case with only a single source, we can theoretically use the entire image. However, we may still want to exclude large systematic fluctuations in the background which might complicate the fit, or part of the trace with essentially no signal which will make fitting the trace centers difficult. In addition, when working with background nod-subtracted data, the images will contain negative traces, which we will want to exclude."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We can attempt to do this interactively, using sliders to define the bounding box. \n",
- "\n",
- "(Note that sliders with large ranges will jump more than one value at a time; for finer control, select a slider with the cursor and then use the up and down arrow keys to increment or decrement by one pixel.)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "fig2 = plt.figure(figsize=(9,9)) # we want the largest figure that will fit in the notebook\n",
- "img2 = plt.imshow(resampled_2d_image, cmap='gray', aspect=aspect_ratio, \n",
- " norm=norm, interpolation='none') # reuse norm from earlier\n",
- "\n",
- "# create region box and slider\n",
- "region_x = region_y = 0\n",
- "region_h, region_w = image_shape\n",
- "region_rectangle = Rectangle((region_x, region_y), region_w, region_h, \n",
- " facecolor='none', edgecolor='b', linestyle='--')\n",
- "current_axis = plt.gca()\n",
- "current_axis.add_patch(region_rectangle)\n",
- "\n",
- "# interactive widget controls\n",
- "def region(x1=0, y1=0, x2=region_w-1, y2=region_h-1):\n",
- " region_rectangle.set_bounds(x1, y1, x2-x1, y2-y1)\n",
- " plt.draw()\n",
- " \n",
- "interact1 = interact(region, x1=(0, region_w-2, 1), y1=(0, region_h-2, 1), \n",
- " x2=(1, region_w-1, 1), y2=(1, region_h-1, 1))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We get the region coordinates from the bounding rectangle -- in this case, setting the coordinates to `x1=51, y1=3, x2=1268, y2=9` seems fine -- or, we can set them directly. Finally, we create a new array containing only our extraction region (so that we don't need to continually index our original array)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
    - "# Comment these lines out if interactivity is not desired\n",
- "x, y = region_rectangle.xy\n",
- "w = region_rectangle.get_width() \n",
- "h = region_rectangle.get_height()\n",
- "\n",
    - "# Uncomment and set these to your desired extraction region if interactivity is not desired\n",
- "# x = y = 0\n",
- "# h, w = image_shape\n",
- "\n",
- "print(x, y, x+w, y+h)\n",
- "\n",
- "er_y, er_x = np.mgrid[y:y+h, x:x+w]\n",
- "extraction_region = resampled_2d_image[er_y, er_x]\n",
- "weights_region = weights_2d_image[er_y, er_x]\n",
- "er_ny, er_nx = extraction_region.shape\n",
- "\n",
- "aspect_ratio = er_nx / (3. * er_ny)\n",
- "\n",
- "er_norm = simple_norm(extraction_region, stretch='power')\n",
- "fig3 = plt.figure()\n",
- "img3 = plt.imshow(extraction_region, cmap='gray', aspect=aspect_ratio, \n",
- " norm=er_norm, interpolation='none')\n",
- "clb3 = plt.colorbar()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "(To adjust the region at this point, re-run *both* of the previous cells - the sliders need to be reset.)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Create kernel slice"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We now define a cross-dispersion slice of our extraction region with which to fit our initial extraction kernel. As an initial guess, we'll coadd the 30 columns centered on the middle of the trace."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "slice_width = 30\n",
- "initial_column = er_nx // 2\n",
- "\n",
- "def kernel_slice_coadd(width, column_idx):\n",
- " \"\"\"\n",
- " Coadd a number of columns (= width) of the extraction region,\n",
- " centered on column_idx.\n",
- " \"\"\"\n",
- " \n",
- " half_width = width // 2\n",
- " to_coadd = np.arange(max(0, column_idx - half_width), \n",
- " min(er_nx-1, column_idx + half_width))\n",
    - "    return extraction_region[:, to_coadd].sum(axis=1) / to_coadd.size  # average over the columns actually coadded\n",
- "\n",
- "slice_0 = kernel_slice_coadd(slice_width, initial_column)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Next, we'll plot the resulting slice, and (interactively) adjust the width and center of the coadd region."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "fig4, (iax4, pax4) = plt.subplots(nrows=2, ncols=1, figsize=(8, 12))\n",
- "plt.subplots_adjust(hspace=0.15, top=0.95, bottom=0.05)\n",
- "img4 = iax4.imshow(extraction_region, cmap='gray', aspect=aspect_ratio, \n",
- " norm=er_norm, interpolation='none')\n",
- "\n",
- "#create slice box\n",
- "def make_slice(width, column_idx):\n",
- " sy, sh, sw = 0, er_ny, width\n",
- " sx = column_idx - width // 2\n",
- " return sx, sy, sw, sh\n",
- "\n",
- "*sxy, sw, sh = make_slice(slice_width, initial_column)\n",
- "slice_rectangle = Rectangle(sxy, sw, sh, facecolor='none', \n",
- " edgecolor='b', linestyle='--')\n",
- "iax4.add_patch(slice_rectangle)\n",
- "\n",
- "#plot the coadded slice\n",
- "xd_pixels = np.arange(er_ny)\n",
- "lin4, = pax4.plot(xd_pixels, slice_0, 'k-')\n",
- "pax4.set_xlabel('Cross-dispersion pixel')\n",
- "pax4.axes.set_ylabel('Coadded signal')\n",
- "\n",
- "column_slider = widgets.IntSlider(initial_column, 0, er_nx-1, 1)\n",
- "width_slider = widgets.IntSlider(slice_width, 1, er_nx-1, 1)\n",
- "\n",
- "#interactive controls\n",
- "def slice_update(column_idx, width):\n",
- " #update rectangle\n",
- " new_slice_box = make_slice(width, column_idx)\n",
- " slice_rectangle.set_bounds(*new_slice_box)\n",
- " #update line plot\n",
- " lin4.set_ydata(kernel_slice_coadd(width, column_idx))\n",
- " #update the axis limits\n",
- " pax4.relim()\n",
- " pax4.autoscale_view()\n",
- " plt.draw()\n",
- "\n",
- "interact2 = interact(slice_update, column_idx=column_slider, width=width_slider)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "A column index of 670 and width 50 seem to work reasonably well for this file, so we can now generate the final slice for kernel fitting."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "kernel_slice = kernel_slice_coadd(width_slider.value, column_slider.value)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Define the extraction kernel"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Now we will define an extraction kernel which will be used to fit our trace at each pixel in the dispersion direction. This kernel will be made of 2 parts:\n",
- "- a PSF template (or a composite of multiple PSFs, for deblending purposes)\n",
- "- a polynomial for background fitting"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Select a PSF template"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "There are many options for the PSF template that we could consider for our kernel, but a full comparison is outside the scope of this notebook. We will demonstrate only Gaussian and Moffat profiles.\n",
- "\n",
- "There are two things to note:\n",
- "1. The methods shown here are only applicable to a true point source. Extended sources require a different methodology.\n",
- "2. The `WebbPSF` package can be used to directly construct a composite PSF from the instrument model; however, this process is far more arduous than fitting a 1D profile using the `astropy.modeling` tools, and has thus been banished to Appendix B."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "We start by plotting the two profiles, each centered on the pixel where the kernel slice peaks, against the slice itself. A naive normalization lets us ignore scaling for the time being. (We will perform a true fit later, don't worry!)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "max_pixel = np.argmax(kernel_slice)\n",
    - "fwhm = 1.  # initial width guess, used as each profile's width parameter below\n",
- "\n",
- "moffat_profile = models.Moffat1D(amplitude=1, gamma=fwhm, x_0=max_pixel, alpha=1)\n",
- "gauss_profile = models.Gaussian1D(amplitude=1, mean=max_pixel, stddev=fwhm)\n",
- "\n",
- "fig5 = plt.figure()\n",
- "kern5 = plt.plot(xd_pixels, kernel_slice / kernel_slice[max_pixel], label='Kernel Slice')\n",
- "moff5 = plt.plot(xd_pixels, moffat_profile(xd_pixels), label='Moffat Profile')\n",
- "gaus5 = plt.plot(xd_pixels, gauss_profile(xd_pixels), label='Gaussian Profile')\n",
- "lgd5 = plt.legend()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The Gaussian profile looks like a better approximation, so that's the profile we'll use for this spectrum. In the cell below, we could add more PSF templates using [model operations](https://docs.astropy.org/en/stable/modeling/compound-models.html); this is left as an exercise for the reader.\n",
- "\n",
- "We need to de-normalize our amplitude, so we'll set it to the maximum pixel value of the slice."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "psf_template = gauss_profile\n",
- "psf_template.amplitude = kernel_slice[max_pixel]\n",
- "print(psf_template)\n",
- "# If deblending multiple sources, add more PSF templates here:\n",
- "\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Polynomial background"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We will fit the background with a polynomial. Some experimentation is recommended to find the polynomial degree which best fits the data; for this example, we'll use a 2nd-degree polynomial.\n",
- "\n",
    - "For nod-subtracted data, there may not be enough pixels in the extraction region to accurately fit a residual. In such cases, use a 0th-order polynomial or a `Const1D` model for the background; to avoid fitting the background at all, set its parameter to `fixed = True`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "background_poly = models.Polynomial1D(2)\n",
- "print(background_poly)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The final step is to combine the PSF(s) and the background to create our compound model."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "extraction_kernel = psf_template + background_poly\n",
- "print(extraction_kernel)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Fit extraction kernel"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Now that we have an extraction kernel, we want to fit it to our kernel slice, so as to have the best tool for fitting trace centers in the next step. We also plot the fit components, as well as the fit vs the kernel slice, as visual checks; if they are unacceptable, we can go back to the previous section, tweak parameters, and try again."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "fitter = fitting.LevMarLSQFitter()\n",
- "fit_extraction_kernel = fitter(extraction_kernel, xd_pixels, kernel_slice)\n",
- "print(fit_extraction_kernel)\n",
- "\n",
- "fit_line = fit_extraction_kernel(xd_pixels)\n",
- "\n",
- "fig6, (fax6, fln6) = plt.subplots(nrows=2, ncols=1, figsize=(8, 12))\n",
- "plt.subplots_adjust(hspace=0.15, top=0.95, bottom=0.05)\n",
- "psf6 = fax6.plot(xd_pixels, fit_extraction_kernel[0](xd_pixels), label=\"PSF\")\n",
- "poly6 = fax6.plot(xd_pixels, fit_extraction_kernel[1](xd_pixels), label=\"Background\")\n",
- "sum6 = fax6.plot(xd_pixels, fit_line, label=\"Composite Kernel\")\n",
- "lgd6a = fax6.legend()\n",
- "lin6 = fln6.plot(xd_pixels, kernel_slice, label='Kernel Slice')\n",
- "fit6 = fln6.plot(xd_pixels, fit_line, 'o', label='Extraction Kernel')\n",
- "lgd6b = fln6.legend()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Wavelength-varying FWHM (skipped)\n",
- "\n",
    - "The NIRSpec PSF width changes with wavelength, so for science data it may be beneficial to fit multiple locations along the spectral trace. Below is a demonstration of the process; note, however, that for this example dataset, the (not-yet-optimized) resampling and combining of the dithered input spectra introduces a width variation artifact, so we will not actually be using the results of this step for the extraction."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "If we wish to account for a varying FWHM, we can bin the 2D spectrum in the dispersion direction and fit each bin. The kernel we defined above can act as our initial estimate, which can be helpful in very faint regions of the spectrum, since `astropy.modeling` fitting routines can be sensitive to initial estimates.\n",
- "\n",
- "(Once the binned kernel FWHMs have been calculated and plotted, the next step would be to find an appropriate model and fit the FWHM as a function of bin center. The fit model would then be included in the final 1D extraction below.)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
    - "from astropy.stats import gaussian_sigma_to_fwhm\n",
    - "\n",
    - "n_bin = 100\n",
    - "bin_width = er_nx // n_bin\n",
    - "bin_centers = np.arange(0, er_nx, bin_width+1, dtype=float) + bin_width // 2\n",
    - "binned_spectrum = np.hstack([extraction_region[:, i:i+bin_width+1].sum(axis=1)[:, None] \n",
    - "                             for i in range(0, er_nx, bin_width+1)])\n",
    - "bin_fwhms = np.zeros_like(bin_centers, dtype=float)\n",
    - "\n",
    - "for i in range(bin_centers.size):\n",
    - "    bin_fit = fitter(fit_extraction_kernel, xd_pixels, binned_spectrum[:, i])\n",
    - "    bin_fwhms[i] = gaussian_sigma_to_fwhm * bin_fit.stddev_0.value  # convert Gaussian stddev to FWHM\n",
- " \n",
- "bin_ny, bin_nx = binned_spectrum.shape\n",
- "bin_ar = bin_nx / (3 * bin_ny)\n",
- "\n",
- "fig_fwhm, ax_fwhm = plt.subplots(nrows=2, ncols=1, figsize=(6, 10))\n",
- "plt.subplots_adjust(hspace=0.05)\n",
- "fwhm_img = ax_fwhm[0].imshow(binned_spectrum, aspect=bin_ar, interpolation='none',\n",
- " cmap='gray')\n",
- "fwhm_plot = ax_fwhm[1].plot(bin_centers, bin_fwhms)\n",
- "xlbl_fwhm = ax_fwhm[1].set_xlabel(\"Bin center (px)\")\n",
    - "ylbl_fwhm = ax_fwhm[1].set_ylabel(\"FWHM (pixels)\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Fit geometric distortion *(skipped)*"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The pipeline `resample` step drizzles all input 2d spectra onto a rectified grid, so this particular step of our optimal extraction process is not typically necessary. A brief discussion of the procedure is included here as a guideline for extracting unrectified spectra (with the suffix `_cal.fits`), where the trace can have significant curvature and the trace dispersion is not column-aligned."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Define bins for trace fitting\n",
- "\n",
- "Depending on how noisy the 2D resampled spectrum is, it may be beneficial to define bins in the dispersion direction. These can be evenly- or unevenly-spaced, and once they're defined, coadd the columns in each bin (possibly using the `WHT` extension in the `s2d` file) and create an array of bin center locations.\n",
- "\n",
- "If the 2D spectrum has high S/N, this may not be necessary, and each cross-dispersed column can be fit individually in the next step."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Fit each bin with a modified extraction kernel\n",
- "\n",
- "We want to fit each of the defined bins with our extraction kernel, but we don't want any other artifacts or noise to confuse the trace. So, we copy the extraction kernel, then set each parameter other than the profile center (`mean_0` in the example above) to `fixed = True`. Starting at one end of the trace, iterate over each bin to fit the slice with the extraction kernel, and store the resulting trace centers in an array."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Fit the trace centers with a 1D polynomial\n",
- "\n",
- "This step is straightforward: create a `Polynomial1D` model, then fit it to the trace centers from the previous step.\n",
- "\n",
    - "Since we won't be fitting here, we'll instead create a placeholder trace center model: a 0th-order polynomial."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "trace_center_model = models.Polynomial1D(0) #we use a constant because the spectrum has already been rectified\n",
- "trace_center_model.c0 = fit_extraction_kernel.mean_0.value # use the parameter for center of the PSF profile\n",
- "print(trace_center_model)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Construct final 1D spectrum"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "We calculate the final 1D spectrum as a weighted sum in the cross-dispersion direction of the 2D spectrum, using our composite model (the extraction kernel centered on the trace) for the weights. We also need to incorporate the variance for each pixel, which we'll estimate from the `WHT` extension output by the resample step."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Create a variance image\n",
- "\n",
- "Horne's algorithm requires the variance for each pixel. Errors are not currently propagated through the resample step; however, as per the [DrizzlePac Handbook](https://www.stsci.edu/files/live/sites/www/files/home/scientific-community/software/drizzlepac/_documents/drizzlepac-handbook.pdf), we can estimate the variance from the drizzle weights image: $ Var \\approx 1 / (W \\times s^4) $, where $s$ is the pixel scale. Currently, the NIRSpec drizzle parameters are set to `PIXFRAC = 1.0`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "scale = 1.0 # adjust this if and when the NIRSpec PIXFRAC changes\n",
- "\n",
- "# We want any pixel with 0 weight to be excluded from the calculation\n",
- "# in the next step, so we'll use masked array operations.\n",
- "bad_pixels = weights_region == 0\n",
- "masked_wht = np.ma.array(weights_region, mask=bad_pixels)\n",
    - "variance_image = np.ma.divide(1., masked_wht * scale**4)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "We can display the variance image to see whether any parts of the extraction region will be excluded from the spectrum (indicated in red below). For this particular example spectrum, every pixel has a nonzero weight."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from copy import copy\n",
- "\n",
- "fig_var = plt.figure()\n",
- "palette = copy(plt.cm.gray)\n",
- "palette.set_bad('r', alpha=0.7)\n",
- "var_norm = simple_norm(variance_image, stretch='log', min_cut=0.006, max_cut=0.1)\n",
- "img_var = plt.imshow(variance_image, interpolation='none', aspect=aspect_ratio, norm=var_norm, cmap=palette)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Generate the 1D spectrum\n",
- "\n",
- "Now, we finally calculate our 1D spectrum, summing over cross-dispersed columns:\n",
- "$$S_x = \\frac{1}{G_x}\\sum_{y} \\frac{I_{xy}\\cdot K_y(x)}{V_{xy}}$$\n",
- "where $I$ is the pixel value in the 2D resampled image, $K$ is our extraction kernel set to the column's trace center, $V$ is the pixel value in the variance image, and $G$ is the kernel normalization given by:\n",
- "$$G_x = \\sum_y \\frac{K_y^2(x)}{V_{xy}}$$"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "spectrum = np.zeros(er_nx, dtype=float) #initialize our spectrum with zeros\n",
- "column_pixels = np.arange(er_nx)\n",
- "trace_centers = trace_center_model(column_pixels) # calculate our trace centers array\n",
- "\n",
- "# Loop over columns\n",
- "for x in column_pixels:\n",
- " # create the kernel for this column, using the fit trace centers\n",
- " kernel_column = fit_extraction_kernel.copy()\n",
- " kernel_column.mean_0 = trace_centers[x]\n",
- " # kernel_column.stddev_0 = fwhm_fit(x) # if accounting for a varying FWHM, uncomment this line.\n",
- " kernel_values = kernel_column(xd_pixels)\n",
- " \n",
- " # unit normalize the kernel following Horne1986 eqn 5, P_x = P/sum(P_x). \n",
- " kernel_values = np.ma.masked_outside(kernel_values, 0, np.inf) \n",
- " kernel_values = kernel_values/np.ma.sum(kernel_values)\n",
- " \n",
- " # isolate the relevant column in the spectrum and variance images\n",
- " variance_column = variance_image[:, x] # remember that numpy arrays are row, column\n",
- " image_pixels = extraction_region[:, x]\n",
- " \n",
    - "    # calculate the kernel normalization\n",
- " g_x = np.ma.sum(kernel_values**2 / variance_column)\n",
- " if np.ma.is_masked(g_x): #this column isn't valid, so we'll skip it\n",
- " continue\n",
- " \n",
- " # and now sum the weighted column\n",
- " weighted_column = np.ma.divide(image_pixels * kernel_values, variance_column)\n",
- " spectrum[x] = np.ma.sum(weighted_column) / g_x"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We need a wavelength array to display the spectrum, which we can create from the WCS object stored in the data model's metadata."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
    - "wcs = data_model.meta.wcs\n",
    - "print(repr(wcs))\n",
    - "ra, dec, lam = wcs(er_x, er_y)  # world coordinates: RA, Dec, wavelength\n",
    - "wavelength = lam[0]"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "fig7 = plt.figure()\n",
- "spec7 = plt.plot(wavelength, spectrum)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Write the extracted spectrum out to a file\n",
- "# This is left as an exercise for the reader\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We also want to compare our optimally-extracted spectrum with the `x1d` pipeline product. We'll normalize the spectra so we can plot them on the same axes.\n",
- "\n",
- "(Note that the `x1d` spectrum includes negative traces from the background subtraction step, which usually results in a negative flux calculation. We need to correct for that when comparing with our optimally-extracted version.)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "*Developer Note:* We will not use datamodels here because the latest packages do not support the simulated data previously created for this notebook. The dataset will have to be updated after commissioning."
- ]
- },
- {
- "cell_type": "raw",
- "metadata": {},
- "source": [
- "x1d_model = MultiSpecModel(x1d_file)\n",
    - "# For a file with multiple spectra, the index to .spec is EXTVER-1\n",
- "x1d_wave = x1d_model.spec[0].spec_table.WAVELENGTH\n",
- "x1d_flux = x1d_model.spec[0].spec_table.FLUX \n",
- "if x1d_flux.sum() <= 0:\n",
- " x1d_flux = -x1d_flux\n",
- "fig8 = plt.figure()\n",
- "x1d8 = plt.plot(x1d_wave, x1d_flux / x1d_flux.max(), label=\"Pipeline\")\n",
- "opt8 = plt.plot(wavelength, spectrum / spectrum.max(), label=\"Optimal\", alpha=0.7)\n",
- "lgd8 = plt.legend()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "x1d_model = fits.open(x1d_file)\n",
    - "# For a file with multiple spectra, the HDU index corresponds to EXTVER\n",
- "tmp = x1d_model[1].data\n",
- "x1d_wave = tmp['WAVELENGTH']\n",
- "x1d_flux = tmp['FLUX']\n",
- "\n",
- "if x1d_flux.sum() <= 0:\n",
- " x1d_flux = -x1d_flux\n",
- "fig8 = plt.figure()\n",
- "x1d8 = plt.plot(x1d_wave, x1d_flux / x1d_flux.max(), label=\"Pipeline\")\n",
- "opt8 = plt.plot(wavelength, spectrum / spectrum.max(), label=\"Optimal\", alpha=0.7)\n",
- "lgd8 = plt.legend()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "---"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Appendix A: Batch Processing"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "When optimal extraction is desired for a large number of spectra, going step-by-step through the process laid out above for each spectrum may not be practical. In such cases, we can initially use those interactive methods on one or two spectra to make decisions about some of our extraction parameters (e.g., what PSF template profile to use, or what degree polynomial to fit the background with), then use those parameters to process all of the spectra non-interactively. Afterwards, we can examine the output from each extracted spectrum and revisit any which need more individualized handling."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We can extract a large number of spectra non-interactively by defining functions for each of the steps above, and a single master function to iterate over all the spectra in a single directory."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Define an extraction region\n",
- "\n",
- "There's no way to perform this step non-interactively, so we'll skip it here. However, there are two good ways (and one bad way) to deal with this for a real dataset:\n",
- "1. Define an extraction region for each 2D spectrum before batch processing. You can save the region bounding boxes to a python dictionary (or write them to a file, then read it in during iteration).\n",
- "1. Visually examine the 2D spectra, and only batch process those spectra for which a specific extraction region (i.e., smaller than the full 2D spectrum) doesn't need to be defined. The remainder of the spectra can be extracted individually.\n",
    - "1. Skip this step, and assume that any spectra for which a specific extraction region would need to be defined will need individualized reprocessing anyway. This option is not recommended, but it is the one we will be using here."
- ]
- },
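- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "As a minimal sketch of option 1 (the file names below are hypothetical), the manually chosen bounding boxes can be stored in a plain dictionary keyed by file name and looked up during the batch loop, falling back to the full frame when no box was saved:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Hypothetical example of option 1: map each s2d file name to a manually\n",
- "# chosen extraction-region bounding box of (x, y, width, height) pixels.\n",
- "manual_regions = {\n",
- "    'example_spectrum_01_s2d.fits': (10, 5, 400, 30),\n",
- "    'example_spectrum_02_s2d.fits': (0, 12, 380, 25),\n",
- "}\n",
- "\n",
- "def lookup_region(fitsfile, full_shape):\n",
- "    \"\"\"Return the saved bounding box, or the full frame if none was saved.\"\"\"\n",
- "    ny, nx = full_shape\n",
- "    return manual_regions.get(fitsfile, (0, 0, nx, ny))"
- ]
- },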
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Create Kernel Slice"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def batch_kernel_slice(extraction_region, slice_width=30, column_idx=None):\n",
- " \"\"\"\n",
- " Create a slice in the cross-dispersion direction out of the \n",
- " 2D array `extraction_region`, centered on `column_idx` and \n",
- " `slice_width` pixels wide. If `column_idx` is not given, use\n",
- " the column with the largest total signal.\n",
- " \"\"\"\n",
- " \n",
- " if column_idx is None:\n",
- " column_idx = np.argmax(extraction_region.sum(axis=0))\n",
- " \n",
- " ny, nx = extraction_region.shape\n",
- " half_width = slice_width // 2\n",
- " \n",
- " #make sure we don't go past the edges of the extraction region\n",
- " to_coadd = np.arange(max(0, column_idx - half_width), \n",
- " min(nx-1, column_idx + half_width))\n",
- " \n",
- " return extraction_region[:, to_coadd].sum(axis=1) / slice_width\n",
- " "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Create and fit the extraction kernel"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def batch_fit_extraction_kernel(xd_slice, psf_profile=models.Gaussian1D, \n",
- " height_param_name='amplitude', height_param_value=None,\n",
- " width_param_name='stddev', width_param_value=1.,\n",
- " center_param_name='mean', center_param_value=None,\n",
- " other_psf_args=[], other_psf_kw={},\n",
- " bg_model=models.Polynomial1D,\n",
- " bg_args=[3], bg_kw={}):\n",
- " \"\"\"\n",
- " Initialize a composite extraction kernel, then fit it to \n",
- " the 1D array `xd_slice`, which has been nominally\n",
- " generated via the `kernel_slice` function defined above. \n",
- " \n",
- " To allow for PSF template models with different parameter \n",
- " names, we use the `height_param_*`, `width_param_*`, and\n",
- " `center_param_*` keyword arguments. We collect any other\n",
- " positional or keyword arguments for the PSF model in \n",
- " `other_psf_*`. If the height or center values are `None`, \n",
- " they will be calculated from the data.\n",
- " \n",
- " Similarly, any desired positional or keyword arguments to\n",
- " the background fit model (default `Polynomial1D`) are\n",
- " accepted via `bg_args` and `bg_kw`.\n",
- " \n",
- " Note that this function can not handle cases which involve\n",
- " multiple PSFs for deblending. It is recommended to process\n",
- " such spectra individually, using the interactive procedure\n",
- " above.\n",
- " \"\"\"\n",
- " xd_pixels = np.arange(xd_slice.size)\n",
- " \n",
- " if center_param_value is None:\n",
- " center_param_value = np.argmax(xd_slice)\n",
- " \n",
- " if height_param_value is None:\n",
- " # In case of non-integer values passed via center_param_value,\n",
- " # we need to interpolate.\n",
- " slice_interp = interp1d(xd_pixels, xd_slice)\n",
- " height_param_value = slice_interp(center_param_value)\n",
- " \n",
- " # Create the PSF and the background models\n",
- " psf_kw = dict([(height_param_name, height_param_value), \n",
- " (width_param_name, width_param_value),\n",
- " (center_param_name, center_param_value)])\n",
- " psf_kw.update(other_psf_kw)\n",
- " psf = psf_profile(*other_psf_args, **psf_kw)\n",
- " \n",
- " bg = bg_model(*bg_args, **bg_kw)\n",
- " \n",
- " composite_kernel = psf + bg\n",
- " fitter = fitting.LevMarLSQFitter()\n",
- " return fitter(composite_kernel, xd_pixels, xd_slice)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Account for varying FWHM\n",
- "\n",
- "This is left as an exercise for the user, as per the process shown [here](#Wavelength-varying-FWHM). Note that `batch_extract_spectrum` and `batch_optimal_extraction` below will also need to be modified to incorporate this function, if desired."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def batch_vary_fwhm(extraction_region, kernel):\n",
- " pass # implement a function which fits a wavelength-varying FWHM"
- ]
- },
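- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "As one possible starting point for that exercise, the sketch below uses simple moment-based widths rather than full Gaussian fits (the Gaussian-fit approach from the interactive section could be substituted): it estimates an equivalent-Gaussian FWHM at several columns and fits its trend with column index:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def sketch_fwhm_vs_column(extraction_region, n_samples=10, degree=1):\n",
- "    \"\"\"\n",
- "    Sketch: estimate the cross-dispersion width at several columns\n",
- "    from intensity-weighted second moments, then fit a polynomial\n",
- "    to FWHM versus column index.\n",
- "    \"\"\"\n",
- "    ny, nx = extraction_region.shape\n",
- "    rows = np.arange(ny)\n",
- "    sample_cols = np.linspace(0, nx - 1, n_samples).astype(int)\n",
- "    widths = []\n",
- "    for col in sample_cols:\n",
- "        profile = extraction_region[:, col]\n",
- "        center = (rows * profile).sum() / profile.sum()\n",
- "        var = ((rows - center)**2 * profile).sum() / profile.sum()\n",
- "        widths.append(2.355 * np.sqrt(var))  # FWHM of equivalent Gaussian\n",
- "    return np.poly1d(np.polyfit(sample_cols, widths, degree))"
- ]
- },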
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Fit the trace centers\n",
- "\n",
- "If this is required, replace this with a real function that does the fitting."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def batch_fit_trace_centers(extraction_region, kernel,\n",
- " trace_model=models.Polynomial1D,\n",
- " trace_args=[0], trace_kw={}):\n",
- " \"\"\"\n",
- " Fit the geometric distortion of the trace with\n",
- " a model. Currently this is a placeholder function,\n",
- " since geometric distortion is typically removed\n",
- " during the `resample` step. However, if this\n",
- " functionality is necesary, use this function\n",
- " signature to remain compatible with the rest of\n",
- " this Appendix.\n",
- " \"\"\"\n",
- " \n",
- " trace_centers = trace_model(*trace_args, **trace_kw)\n",
- " trace_centers.c0 = kernel.mean_0\n",
- " return trace_centers"
- ]
- },
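- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "If real trace fitting is required, one simple self-contained sketch is to estimate the trace center in each column with an intensity-weighted centroid and fit a low-order polynomial to those centers (this assumes the trace dominates the signal in every column):"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def sketch_fit_trace_centers(extraction_region, degree=2):\n",
- "    \"\"\"\n",
- "    Sketch of a real trace fit: take the intensity-weighted\n",
- "    centroid of each column, then fit those centers with a\n",
- "    low-order polynomial.\n",
- "    \"\"\"\n",
- "    ny, nx = extraction_region.shape\n",
- "    rows = np.arange(ny)\n",
- "    cols = np.arange(nx)\n",
- "    totals = extraction_region.sum(axis=0)\n",
- "    centers = (rows[:, None] * extraction_region).sum(axis=0) / totals\n",
- "    coeffs = np.polyfit(cols, centers, degree)\n",
- "    return np.poly1d(coeffs)"
- ]
- },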
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Generate the 1D spectrum\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def batch_extract_spectrum(extraction_region, trace, kernel, \n",
- " weights_image, \n",
- " trace_center_param='mean',\n",
- " scale=1.0):\n",
- " \"\"\"\n",
- " Optimally extract the 1D spectrum from the extraction \n",
- " region.\n",
- " \n",
- " A variance image is created from `weights_image` (which \n",
- " should have the same dimensions as `extraction_region`).\n",
- " Then, for each column of the spectrum, we sum the aperture\n",
- " as per the equations defined above, masking pixels with\n",
- " zero weights. \n",
- " \n",
- " Note that unlike the interactive, step-by-step method, \n",
- " here we will vectorize for speed. This requires using\n",
- " a model set for the kernel, but this is allowed since\n",
- " we are not fitting anything.\n",
- " \n",
- " `trace_center_param` is the name of the parameter which \n",
- " will defines the trace centers, *without the model number\n",
- " subscript* (since we will be dealing with the components\n",
- " individually).\n",
- " \n",
- " `scale` is the size ratio of input to output pixels when\n",
- " drizzling, equivalent to PIXFRAC in the drizzle parameters\n",
- " from the `resample` step.\n",
- " \"\"\"\n",
- " \n",
- " bad_pixels = weights_image == 0.\n",
- " masked_wht = np.ma.array(weights_image, mask=bad_pixels)\n",
- " variance_image = np.ma.divide(1., masked_wht * scale**4)\n",
- " \n",
- " ny, nx = extraction_region.shape\n",
- " trace_pixels = np.arange(nx)\n",
- " xd_pixels = np.arange(ny)\n",
- " trace_centers = trace(trace_pixels) # calculate our trace centers array\n",
- " \n",
- " # Create kernel image for vectorizing, which requires some gymnastics...\n",
- " # ******************************************************************\n",
- " # * IMPORTANT: *\n",
- " # * ---------- *\n",
- " # * Note that because of the way model sets are implemented, it is *\n",
- " # * not feasible to alter an existing model instance to use them. *\n",
- " # * Instead we'll create a new kernel instance, using the fitted *\n",
- " # * parameters from the original kernel. *\n",
- " # * *\n",
- " # * Caveat: this assumes that the PSF is the first element, and *\n",
- " # * the background is the second. If you change that when creating *\n",
- " # * your composite kernel, make sure you update this section *\n",
- " # * similarly, or it will not work! *\n",
- " # ******************************************************************\n",
- " psf0, bg0 = kernel\n",
- " psf_params = {}\n",
- " for pname, pvalue in zip(psf0.param_names, psf0.parameters):\n",
- " if pname == trace_center_param:\n",
- " psf_params[pname] = trace_centers\n",
- " else:\n",
- " psf_params[pname] = np.full(nx, pvalue)\n",
- " psf_set = psf0.__class__(n_models=nx, **psf_params)\n",
- " #if not using Polynomial1D for background model, edit this:\n",
- " bg_set = bg0.__class__(len(bg0.param_names)-1, n_models=nx)\n",
- " for pname, pvalue in zip(bg0.param_names, bg0.parameters):\n",
- " setattr(bg_set, pname, np.full(nx, pvalue))\n",
- " kernel_set = psf_set + bg_set\n",
- " # We pass model_set_axis=False so that every model in the set \n",
- " # uses the same input, and we transpose the result to fix the\n",
- " # orientation.\n",
- " kernel_image = kernel_set(xd_pixels, model_set_axis=False).T\n",
- " \n",
- " # Now we perform our weighted sum, using numpy.ma routines\n",
- " # to preserve our masks\n",
- " \n",
- " # unit normalize the kernel following Horne1986 eqn 5, P_x = P/sum(P_x). \n",
- " kernel_image = np.ma.masked_outside(kernel_image, 0, np.inf) \n",
- " kernel_image = kernel_image/np.ma.sum(kernel_image)\n",
- " \n",
- " g = np.ma.sum(kernel_image**2 / variance_image, axis=0)\n",
- " weighted_spectrum = np.ma.divide(kernel_image * extraction_region, variance_image)\n",
- " spectrum1d = np.ma.sum(weighted_spectrum, axis=0) / g\n",
- " \n",
- " # Any masked values we set to 0.\n",
- " return spectrum1d.filled(0.)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Convenience functions"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def batch_wavelength_from_wcs(datamodel, pix_x, pix_y):\n",
- " \"\"\"\n",
- " Convenience function to grab the WCS object from the\n",
- " datamodel's metadata, generate world coordinates from\n",
- " the given pixel coordinates, and return the 1D \n",
- " wavelength.\n",
- " \"\"\"\n",
- " \n",
- " wcs = datamodel.meta.wcs\n",
- " aC, dC, y = wcs(pix_x, pix_y)\n",
- " return y[0]\n",
- "\n",
- "def batch_save_extracted_spectrum(filename, wavelength, spectrum):\n",
- " \"\"\"\n",
- " Quick & dirty fits dump of an extracted spectrum.\n",
- " Replace with your preferred output format & function.\n",
- " \"\"\"\n",
- " \n",
- " wcol = fits.Column(name='wavelength', format='E', \n",
- " array=wavelength)\n",
- " scol = fits.Column(name='spectrum', format='E',\n",
- " array=spectrum)\n",
- " cols = fits.ColDefs([wcol, scol])\n",
- " hdu = fits.BinTableHDU.from_columns(cols)\n",
- " hdu.writeto(filename, overwrite=True)\n",
- "\n",
- "def batch_plot_output(resampled_image, extraction_bbox, \n",
- " kernel_slice, kernel_model,\n",
- " wavelength, spectrum, filename):\n",
- " \"\"\"\n",
- " Convenience function for summary output figures,\n",
- " allowing visual inspection of the results from \n",
- " each file being processed.\n",
- " \"\"\"\n",
- " \n",
- " fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, \n",
- " figsize=(8,12))\n",
- " fig.suptitle(filename)\n",
- " \n",
- " ny, nx = resampled_image.shape\n",
- " aspect = nx / (2 * ny)\n",
- " \n",
- " # Subplot 1: Extraction region\n",
- " power_norm = simple_norm(resampled_image, 'power')\n",
- " er_img = ax1.imshow(resampled_image, interpolation='none',\n",
- " aspect=aspect, norm=power_norm, cmap='gray')\n",
- " rx, ry, rw, rh = extraction_bbox\n",
- " region = Rectangle((rx, ry), rw, rh, facecolor='none', \n",
- " edgecolor='b', linestyle='--')\n",
- " er_ptch = ax1.add_patch(region)\n",
- " \n",
- " # Subplot 2: Kernel fit\n",
- " xd_pixels = np.arange(kernel_slice.size)\n",
- " fit_line = kernel_model(xd_pixels)\n",
- " ks_line = ax2.plot(xd_pixels, kernel_slice, label='Kernel Slice')\n",
- " kf_line = ax2.plot(xd_pixels, fit_line, 'o', label='Extraction Kernel')\n",
- " k_lgd = ax2.legend()\n",
- " \n",
- " # Subplot 3: Extracted spectrum\n",
- " spec_line = ax3.plot(wavelength, spectrum)\n",
- " \n",
- " fig.savefig(filename, bbox_inches='tight')\n",
- " plt.close(fig)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Iterate over the desired files\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def batch_optimal_extraction(file_list):\n",
- " \"\"\"\n",
- " Iterate over a list of fits file paths, optimally extract\n",
- " the SCI extension in each file, generate an output summary\n",
- " image, and then save the resulting spectrum.\n",
- " \n",
- " Note that in the example dataset, there is only one SCI\n",
- " extension in each file. For data with multiple SCI \n",
- " extensions, a second loop over those extensions is\n",
- " required.\n",
- " \"\"\"\n",
- " \n",
- " # For this example data, we'll just use the default values\n",
- " # for all the functions\n",
- " for i, fitsfile in enumerate(file_list):\n",
- " print(\"Processing file {} of {}: {}\".format(i+1, len(file_list), fitsfile))\n",
- " dmodel = ImageModel(fitsfile)\n",
- " spec2d = dmodel.data\n",
- " wht2d = dmodel.wht\n",
- " \n",
- " k_slice = batch_kernel_slice(spec2d)\n",
- " k_model = batch_fit_extraction_kernel(k_slice)\n",
- " trace = batch_fit_trace_centers(spec2d, k_model)\n",
- " spectrum = batch_extract_spectrum(spec2d, trace, k_model, wht2d)\n",
- " \n",
- " ny, nx = spec2d.shape\n",
- " y2d, x2d = np.mgrid[:ny, :nx]\n",
- " wavelength = batch_wavelength_from_wcs(dmodel, x2d, y2d)\n",
- " \n",
- " bbox = [0, 0, nx-1, ny-1]\n",
- " \n",
- " outfile = fitsfile.replace('s2d.fits', 'x1d_optimal')\n",
- " \n",
- " batch_plot_output(spec2d, bbox, k_slice, k_model,\n",
- " wavelength, spectrum, \n",
- " outfile+'.png')\n",
- " batch_save_extracted_spectrum(outfile+'.fits', wavelength, spectrum)\n",
- " "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Run on example dataset\n",
- "\n",
- "Take particular note of any spectrum which produces a warning during fitting - these are likely to be good candidates for interactive reprocessing."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "*Developer Note:*\n",
- "\n",
- "It would be great if there was a way to do this without spawning invisible plots from the creation of matplotlib figures, so that the `ioff` and `ion` calls could be removed."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.ioff() # if we don't turn this off, then matplotlib tries to display an (invisible) plot for each spectrum\n",
- "s2d_files = glob(os.path.join('s2d_files', '*s2d.fits'))\n",
- "batch_optimal_extraction(s2d_files)\n",
- "plt.ion() # now we turn it back on so everything else plots as it should!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "---"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Appendix B: WebbPSF"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Instead of using a PSF template, we can generate a PSF directly from the instrument model with [WebbPSF](https://webbpsf.readthedocs.io/en/stable/index.html). Currently, only the F110W and F140X imaging filters are supported, but we'll walk through the process anyway for whenever more filters become available.\n",
- "\n",
- "The primary function of WebbPSF is to produce imaging PSFs; however, it *can* generate a set of monochromatic PSFs, which we can combine."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "`webbpsf` is only needed here so we import it at the start of this appendix:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from webbpsf import NIRSpec, display_psf"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "WebbPSF has a number of data files which are required to run, so we'll begin by verifying that they can be accessed (and downloading them if necessary).\n",
- "\n",
- "Note that you will see a big red error message if you have not yet downloaded the data files. Don't worry, as long as you see \"Downloading WebbPSF data files.\" everything is still proceeding as expected."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "*Developer Note:*\n",
- "\n",
- "WebbPSF should be updated so that the red error doesn't appear. See https://github.com/spacetelescope/webbpsf/issues/380"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "try:\n",
- " instrument = NIRSpec()\n",
- "except OSError:\n",
- " # assume that WebbPSF data files have not been downloaded\n",
- " import tarfile, sys\n",
- " print(\"Downloading WebbPSF data files.\")\n",
- " webb_url = \"https://stsci.box.com/shared/static/qxpiaxsjwo15ml6m4pkhtk36c9jgj70k.gz\"\n",
- " webb_file = os.path.join('.', \"webbpsf-data-1.2.1.tar.gz\")\n",
- " urllib.request.urlretrieve(webb_url, webb_file)\n",
- " print(\"Extracting into ./webbpsf-data ...\")\n",
- " tar = tarfile.open(webb_file)\n",
- " tar.extractall()\n",
- " tar.close()\n",
- " os.environ[\"WEBBPSF_PATH\"] = os.path.join(\".\",\"webbpsf-data\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Instrument properties\n",
- "See the WebbPSF documentation for a full list of instrument settings."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "instrument = NIRSpec()\n",
- "print(instrument.filter_list)\n",
- "\n",
- "# For reference:\n",
- "allowed_masks = ('S200A1','S200A2','S400A1','S1600A1','S200B1', \n",
- " 'MSA all open', 'Single MSA open shutter', \n",
- " 'Three adjacent MSA open shutters')\n",
- "\n",
- "# Edit these as necessary\n",
- "instrument.filter = 'F110W' \n",
- "instrument.image_mask = 'Three adjacent MSA open shutters'"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Monochromatic PSFs\n",
- "\n",
- "The most rigorous method we could use is to generate a PSF for each wavelength in the 2D spectrum and combine all of them. However, the computation time and memory required for this method is generally very large unless the spectra are quite short (in the dispersion direction). A more reasonable method (which is what we will be doing here) is to create a subset of monochromatic PSFs spaced evenly across the wavelength range, and interpolate between them."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "psf_wavelengths = np.linspace(wavelength[0], wavelength[-1], num=10) * 1.0e-6 # wavelengths must be in meters\n",
- "\n",
- "cube_hdul = instrument.calc_datacube(psf_wavelengths) #the output is a HDUList\n",
- "psf_cube = cube_hdul[1].data\n",
- "psf_cube.shape"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "#Display the contents of the data cube\n",
- "fig9, ax9 = plt.subplots(nrows=5, ncols=2, figsize=(8,12))\n",
- "plt.subplots_adjust(hspace=0.15, wspace=0.01, left=0.06, \n",
- " right=0.94, bottom=0.05, top=0.95)\n",
- "for row in range(5):\n",
- " for col in range(2):\n",
- " ax = ax9[row, col]\n",
- " w = row * 2 + col\n",
- " wl = psf_wavelengths[w]\n",
- " \n",
- " display_psf(cube_hdul, ax=ax, cube_slice=w,\n",
- " title=\"$\\lambda$ = {:.3f} $\\mu$m\".format(wl*1e6),\n",
- " vmax=.2, vmin=1e-4, ext=1, colorbar=False)\n",
- " ax.xaxis.set_visible(False)\n",
- " ax.yaxis.set_visible(False)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Interpolation Methods\n",
- "\n",
- "The method of interpolation we choose depends strongly on how the PSF varies with wavelength. For evaluating the different methods, we'll create another monochromatic PSF for comparison."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "reference_psf_hdul = instrument.calc_psf(monochromatic=3.0e-6)\n",
- "reference_psf = reference_psf_hdul[1].data\n",
- "ref_norm = simple_norm(reference_psf, stretch='log', min_cut=1e-4, max_cut=0.2)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The simplest way is a 3D linear interpolation, so let's see how it does. In the figure below, the top-left image is the reference PSF, the top-right is the linearly-interpolated PSF, the bottom left is a difference image, and the bottom right is a log-log plot of the pixel values in the reference (X) and interpolated (Y) PSFs."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "ref_pix = reference_psf >= 1e-4\n",
- "psf_x = psf_y = np.arange(48)\n",
- "out_x, out_y = np.meshgrid(psf_x, psf_y, indexing='ij')\n",
- "interpolator = RegularGridInterpolator((psf_wavelengths, psf_x, psf_y), psf_cube, method='linear')\n",
- "linear_psf = interpolator((3.0e-6, out_x, out_y))\n",
- "\n",
- "diff_lin_psf = reference_psf - linear_psf\n",
- "\n",
- "print(\"Reference: min {:.3e}, max {:.3e}\".format(reference_psf.min(), reference_psf.max()))\n",
- "print(\"Linear: min {:.3e}, max {:.3e}\".format(linear_psf.min(), linear_psf.max()))\n",
- "print(\"Diff: min {:.3e}, max {:.3e}\".format(diff_lin_psf.min(), diff_lin_psf.max()))\n",
- "print(\"Total error: {:.5e}\".format(np.sqrt((diff_lin_psf**2).sum())))\n",
- "\n",
- "figA, axA = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))\n",
- "plt.subplots_adjust(wspace=0.01, left=0.05, right=0.95)\n",
- "axA[0, 0].imshow(reference_psf, interpolation='none', norm=ref_norm)\n",
- "axA[0, 0].xaxis.set_visible(False)\n",
- "axA[0, 0].yaxis.set_visible(False)\n",
- "axA[0, 1].imshow(linear_psf, interpolation='none', norm=ref_norm)\n",
- "axA[0, 1].xaxis.set_visible(False)\n",
- "axA[0, 1].yaxis.set_visible(False)\n",
- "axA[1, 0].imshow(diff_lin_psf, interpolation='none', vmin=-5e-4, vmax=5e-4)\n",
- "axA[1, 0].xaxis.set_visible(False)\n",
- "axA[1, 0].yaxis.set_visible(False)\n",
- "axA[1, 1].loglog(reference_psf[ref_pix], linear_psf[ref_pix], 'k+')\n",
- "axA[1, 1].set_aspect('equal', 'box')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The next method is more calculation-intensive, but could be more accurate. We go pixel-by-pixel through the PSF cube and interpolate with a 1D cubic spline along the wavelength axis."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "cubic_psf = np.zeros_like(psf_cube[0])\n",
- "for row in np.arange(48):\n",
- " for col in np.arange(48):\n",
- " spline = interp1d(psf_wavelengths, psf_cube[:, row, col], kind='cubic')\n",
- " cubic_psf[row, col] = spline(3.0e-6)\n",
- " \n",
- "diff_cub_psf = reference_psf - cubic_psf\n",
- "\n",
- "print(\"Reference: min {:.3e}, max {:.3e}\".format(reference_psf.min(), reference_psf.max()))\n",
- "print(\"Cubic: min {:.3e}, max {:.3e}\".format(cubic_psf.min(), cubic_psf.max()))\n",
- "print(\"Diff: min {:.3e}, max {:.3e}\".format(diff_cub_psf.min(), diff_cub_psf.max()))\n",
- "print(\"Total error: {:.5e}\".format(np.sqrt((diff_cub_psf**2).sum())))\n",
- "\n",
- "figB, axB = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))\n",
- "plt.subplots_adjust(wspace=0.01, left=0.05, right=0.95)\n",
- "axB[0, 0].imshow(reference_psf, interpolation='none', norm=ref_norm)\n",
- "axB[0, 0].xaxis.set_visible(False)\n",
- "axB[0, 0].yaxis.set_visible(False)\n",
- "axB[0, 1].imshow(cubic_psf, interpolation='none', norm=ref_norm)\n",
- "axB[0, 1].xaxis.set_visible(False)\n",
- "axB[0, 1].yaxis.set_visible(False)\n",
- "axB[1, 0].imshow(diff_cub_psf, interpolation='none', vmin=-5e-4, vmax=5e-4)\n",
- "axB[1, 0].xaxis.set_visible(False)\n",
- "axB[1, 0].yaxis.set_visible(False)\n",
- "axB[1, 1].loglog(reference_psf[ref_pix], cubic_psf[ref_pix], 'k+')\n",
- "axB[1, 1].set_aspect('equal', 'box')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "While the log-log plot looks virtually identical to the linear case, the difference image in the spline case shows slightly larger errors in some of the central pixels. This is consistent with the \"total error\" statistic (the sum of squares of the difference image), which is larger in this second case.\n",
- "\n",
- "We can see in the plot below that the difference between the two methods is very slight, but the linearly-interpolated PSF is more accurate by about a factor of ~3 in total error."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "figC = plt.figure()\n",
- "plt.loglog(linear_psf[ref_pix], cubic_psf[ref_pix], 'k+')\n",
- "plt.xlabel('Linear interpolation')\n",
- "plt.ylabel('Cubic interpolation')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Full trace PSF\n",
- "\n",
- "Now we can generate a full PSF for the spectral trace. Note that the PSF at each wavelength is going to be a linear combination of the overlapping adjacent monochromatic PSFs. If geometric distortion is present, it may be beneficial to create this PSF *after* the trace centers have been fit."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "cube_w, cube_x, cube_y = np.meshgrid(wavelength * 1e-6, psf_x, psf_y, indexing='ij')\n",
- "full_psf_cube = interpolator((cube_w, cube_x, cube_y))\n",
- "nw, ny, nx = full_psf_cube.shape\n",
- "half = ny // 2\n",
- "trace = np.zeros((ny, nw), dtype=float)\n",
- "\n",
- "for wl, psf in enumerate(full_psf_cube):\n",
- " lo = wl - half\n",
- " lo_w = max(lo, 0)\n",
- " lo_x = lo_w - lo\n",
- " hi = wl + half\n",
- " hi_w = min(hi, nw)\n",
- " hi_x = nx - (hi - hi_w)\n",
- " trace[:, lo_w:hi_w] += psf[:, lo_x:hi_x]"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "wpsf_aspect = nw / (2. * ny)\n",
- "figD = plt.figure(figsize=(10, 8))\n",
- "trace_norm = simple_norm(trace, stretch='log', min_cut=1e-4, max_cut=0.2)\n",
- "plt.imshow(trace, interpolation='none', aspect=wpsf_aspect, norm=trace_norm)\n",
- "plt.colorbar()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Resampling the trace\n",
- "\n",
- "Currently, our PSF array is not the same size or position as the trace in the extraction region. While we could shift and trim to the correct size, the spectrum will rarely be centered on a pixel, and is sufficiently under-sampled that fractional pixel shifts in the PSF could cause significant errors in the final extraction. Thus, we will perform a final resampling to the location of the spectrum in the extraction region. To do this, we can use our old friend `RegularGridInterpolator`. We set the center of the WebbPSF trace (originally at row 23) to our fit trace center, and resample appropriately."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "trace_row = np.arange(ny)\n",
- "trace_interpolator = RegularGridInterpolator((trace_row, wavelength), trace)\n",
- "center_0 = 23\n",
- "center_1 = fit_extraction_kernel.mean_0\n",
- "\n",
- "out_lo = center_0 - center_1\n",
- "out_hi = out_lo + er_ny\n",
- "\n",
- "resample_row = np.linspace(out_lo, out_hi, er_ny)\n",
- "resample_y, resample_w = np.meshgrid(resample_row, wavelength, indexing='ij')\n",
- "\n",
- "resampled_trace = trace_interpolator((resample_y, resample_w))\n",
- "\n",
- "figE, axE = plt.subplots(nrows=2, ncols=1, figsize=(10, 8))\n",
- "plt.subplots_adjust(hspace=0.1)\n",
- "trace_renorm = simple_norm(resampled_trace, stretch='log')\n",
- "axE[0].imshow(resampled_trace, interpolation='none', aspect=aspect_ratio, norm=trace_renorm)\n",
- "axE[1].imshow(extraction_region, cmap='gray', aspect=aspect_ratio, \n",
- " norm=er_norm, interpolation='none')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "source": [
- "## About this notebook\n",
- "**Author:** Graham Kanarek, Staff Scientist, Science Support \n",
- "**Updated On:** 2020-07-13\n",
- "\n",
- "Optimal extraction algorithm adapted from [Horne (1986)](https://ui.adsabs.harvard.edu/abs/1986PASP...98..609H/)."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "***"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#top)\n",
- " "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.11.5"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/notebooks/optimal_extraction_dynamic/requirements.txt b/notebooks/optimal_extraction_dynamic/requirements.txt
deleted file mode 100644
index 5326376cd..000000000
--- a/notebooks/optimal_extraction_dynamic/requirements.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-astropy >= 5.3.3
-scipy >= 1.8.1
-ipywidgets >= 8.0.0
-jwst >= 1.11.4
-webbpsf >= 1.2.1
-stdatamodels >= 1.7.2
-specutils >= 1.11.0
\ No newline at end of file
diff --git a/notebooks/preimaging/.gitignore b/notebooks/preimaging/.gitignore
deleted file mode 100644
index 484a4ed24..000000000
--- a/notebooks/preimaging/.gitignore
+++ /dev/null
@@ -1,3 +0,0 @@
-__MACOSX
-preimaging.zip
-preimaging
diff --git a/notebooks/preimaging/environment.sh b/notebooks/preimaging/environment.sh
deleted file mode 100644
index c59491de8..000000000
--- a/notebooks/preimaging/environment.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/usr/bin/env bash
-
-export MIRAGE_DATA=/mnt/stsci/mirage
diff --git a/notebooks/preimaging/pre-requirements.txt b/notebooks/preimaging/pre-requirements.txt
deleted file mode 100644
index b139efe16..000000000
--- a/notebooks/preimaging/pre-requirements.txt
+++ /dev/null
@@ -1 +0,0 @@
-numpy==1.18.5
diff --git a/notebooks/preimaging/preimaging_01_mirage.ipynb b/notebooks/preimaging/preimaging_01_mirage.ipynb
deleted file mode 100644
index 155c0a930..000000000
--- a/notebooks/preimaging/preimaging_01_mirage.ipynb
+++ /dev/null
@@ -1,636 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# NIRCam Preimaging: MIRAGE Simulations"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Use case:** Simulation of NIRCam pre-imaging for NIRSpec.
\n",
- "**Data:** JWST simulated NIRCam data from MIRAGE; LMC.
\n",
- "**Tools:** mirage, jwst, astropy, grismconf, nircam_gsim.
\n",
- "**Cross-intrument:** NIRCam.
\n",
- "**Documentation:** This notebook is part of a STScI's larger [post-pipeline Data Analysis Tools Ecosystem](https://jwst-docs.stsci.edu/jwst-post-pipeline-data-analysis).
\n",
- "\n",
- "## Introduction\n",
- "\n",
- "\n",
- "This notebook shows step-by-step instructions to simulate images of the JWST LMC astrometric calibration field. The NIRCam images are simulated using the software [MIRAGE](https://jwst-docs.stsci.edu/jwst-other-tools/mirage-data-simulator). The observation is designed in APT. The APT output is used as input of MIRAGE.\n",
- "\n",
- "This Notebook must be executed from an environment that has MIRAGE installed. Follow the instructions in the [Installing MIRAGE webpage](https://mirage-data-simulator.readthedocs.io/en/latest/install.html) before executing this Jupyter Notebook. \n",
- "\n",
- "### MIRAGE Tutorials\n",
- "\n",
- "This notebook provides an example of running MIRAGE in a specific science use case. For a broader tutorial on running MIRAGE, it is suggested you review the [Jwebbinar Number 10](https://www.stsci.edu/jwst/science-execution/jwebbinars).\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "from glob import glob\n",
- "import shutil\n",
- "import yaml\n",
- "import zipfile\n",
- "import urllib.request\n",
- "\n",
- "os.environ[\"PYSYN_CDBS\"] = \"./grp/redcat/trds/\"\n",
- "synphot_folder = './grp'\n",
- "\n",
- "synExist = os.path.exists(synphot_folder)\n",
- "if not synExist:\n",
- " os.makedirs(synphot_folder)\n",
- " \n",
- "# mirage imports\n",
- "from mirage.imaging_simulator import ImgSim\n",
- "from mirage.seed_image import catalog_seed_image\n",
- "from mirage.dark import dark_prep\n",
- "from mirage.ramp_generator import obs_generator\n",
- "from mirage.yaml import yaml_generator\n",
- "from mirage.reference_files import downloader\n",
- "\n",
- "from astropy.table import Table\n",
- "from astropy.io import fits\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "%matplotlib inline\n",
- "import matplotlib.pyplot as plt"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Setting things up"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "After activating the environment with MIRAGE and beginning a Jupyter Notebook session, we begin by defining the working directory"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "path='./' # write here your working directory\n",
- "\n",
- "os.chdir(path)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "pwd"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "*Developer Note:*\n",
- "Find out a way to install the mirage data for the testing CI. Right now the data size is too"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Mirage is accompanied by a set of reference files that are used to construct the simulated data. Here we define the location of the MIRAGE data. This is the directory that contains the reference files associated with MIRAGE. \n",
- "For users at STScI, this is the location of MIRAGE data:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "os.environ['MIRAGE_DATA'] = './mirage_data/'\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Download reference files. This will take a long time. You will need around ~100 GB of space.\n",
- "\n",
- "If the user is outside of STScI then the reference files must be downloaded using the \"downloader\" module. Please follow the instructions in https://mirage-data-simulator.readthedocs.io/en/latest/reference_files.html and create an appropriate MIRAGE_DATA location. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "download_path = './'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "downloader.download_reffiles(download_path, instrument='FGS', dark_type='linearized', skip_darks=False, single_dark=True, skip_cosmic_rays=False, skip_psfs=False)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "downloader.download_reffiles(download_path, instrument='NIRCam', dark_type='linearized', skip_darks=False, single_dark=True, skip_cosmic_rays=False, skip_psfs=False)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Download Data"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "boxlink = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/preimaging_notebooks/preimaging.zip'\n",
- "boxfile = './preimaging.zip'\n",
- "\n",
- "# Download zip file\n",
- "if not os.path.exists(boxfile):\n",
- " urllib.request.urlretrieve(boxlink, boxfile)\n",
- " \n",
- " zf = zipfile.ZipFile(boxfile, 'r')\n",
- " zf.extractall()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Generating input yaml files\n",
- "\n",
- "We begin the simulation using the programme's APT file. The xml and pointings files must be exported from APT, and are then used as input to the yaml_generator, which will generate a series of yaml input files.\n",
- "\n",
- "From APT we export two files: the xml and pointing files. These should be in the working directory.\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Specify the xml and pointing files exported from APT\n",
- "xml_file = os.path.join('preimaging', 'NRC21_pid1069_2018_rev2.xml')\n",
- "pointing_file = os.path.join('preimaging', 'NRC21_pid1069_2018_rev2.pointing')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Additional optional data to be included."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Optionally set the telescope roll angle (PAV3) for the observations\n",
- "pav3=0.0\n",
- "\n",
- "# Define the output directory\n",
- "output_dir = path"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "In this example we create NIRCam images based on a catalogue (all_filters_lmc.cat) of point sources. This catalogue contains the AB magnitude of each source in the following six filters: F070W, F150W, F200W, F277W, F356W, and F444W. \n",
- "\n",
- "The dictionary of catalogs must use the APT target names as keys, for example `LMC-ASTROMETRIC-FIELD`. Full details on yaml_generator input options are given here: https://mirage-data-simulator.readthedocs.io/en/latest/yaml_generator.html\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This is what the input catalogue looks like. Space separated values with an uncommented header line. \n",
- "\n",
- "``` \n",
- "# position_RA_Dec\n",
- "# abmag\n",
- "# \n",
- "# \n",
- "index x_or_RA y_or_Dec nircam_f070w_magnitude nircam_f150w_magnitude nircam_f200w_magnitude nircam_f277w_magnitude nircam_f356w_magnitude nircam_f444w_magnitude\n",
- "1 80.386396453731 -69.468909240644 21.63889 21.59946 21.93288 22.51786 22.99632 23.4255\n",
- "2 80.385587687224 -69.469200540277 20.42033 20.05396 20.32926 20.92191 21.37946 21.83321\n",
- "3 80.38036547567 -69.470930464875 21.8158 21.86888 22.2175 22.8008 23.28381 23.7064\n",
- "4 80.388130492656 -69.468453170293 21.11582 20.8028 21.08802 21.67932 22.14077 22.59048\n",
- "5 80.388935773363 -69.468195831029 21.76617 21.80178 22.14757 22.73117 23.21336 23.63717\n",
- "```"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "For more information look at the following link \n",
- "\n",
- "https://github.com/spacetelescope/mirage/blob/master/examples/Catalog_Generation_Tools.ipynb"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Source catalogs to be used\n",
- "cat_dict = { 'LMC-ASTROMETRIC-FIELD': {'nircam': {'point_source': 'preimaging/all_filters_lmc.cat'} ,\n",
- " 'fgs': {'point_source': 'dummy.cat'} } ,\n",
- " '2 LMC-ASTROMETRIC-FIELD': {'nircam': {'point_source': 'preimaging/all_filters_lmc.cat'} ,\n",
- " 'fgs': {'point_source': 'dummy.cat'} } }"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Running the yaml_generator\n",
- "This will create a collection of yaml files that will be used as input when creating the simulated data. There will be one yaml file for each detector and exposure, so there can be quite a few files created if your programme has lots of exposures or dithers. This LMC programme will generate 528 files using six NIRCam filters and the JWST FGS. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Run the yaml generator\n",
- "\n",
- "yam = yaml_generator.SimInput(xml_file, pointing_file, \n",
- " catalogs=cat_dict, \n",
- " verbose=True,\n",
- " simdata_output_dir=output_dir,\n",
- " output_dir=output_dir,\n",
- " roll_angle=pav3, \n",
- " # to do : explain linear vs raw\n",
- " datatype='linear,raw') \n",
- "\n",
- "yam.use_linearized_darks = True\n",
- "yam.create_inputs()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Organizing files according to filter"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "These notebooks will generate a large amount of data and it is useful to keep it organized in sub directories.\n",
- "\n",
- "yaml: all the yaml files organized according to filter\n",
- "mirage_output: linear and uncal files\n",
- "pipeline_level1: rate files\n",
- "pipeline_level2: cal files "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "path = os.getcwd()\n",
- "files = glob('jw*yaml')\n",
- "allfiles = glob('jw*')\n",
- "\n",
- "if not os.path.exists(os.path.join(path,'mirage_output')):\n",
- " os.mkdir(os.path.join(path,'mirage_output'))\n",
- " \n",
- "if not os.path.exists(os.path.join(path,'pipeline_level1')):\n",
- " os.mkdir(os.path.join(path,'pipeline_level1'))\n",
- " \n",
- "if not os.path.exists(os.path.join(path,'pipeline_level2')):\n",
- " os.mkdir(os.path.join(path,'pipeline_level2'))\n",
- " \n",
- "if not os.path.exists(os.path.join(path,'yaml')):\n",
- " os.mkdir(os.path.join(path,'yaml'))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Here we store the yaml files in the yaml directory organized according to filter. The cell below will fail if the files have already been relocated before. If you want to intentionally re-do this step, please manually remove the previous files from the output directory."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# we organize files according to filter\n",
- "for yamlfile in files:\n",
- "\n",
- " with open(yamlfile, 'r') as stream: #open the yaml file in read mode\n",
- " doc = yaml.load(stream, Loader=yaml.FullLoader)\n",
- " \n",
- " filtname = doc['Readout']['filter'] #read the filter keyword\n",
- " if not os.path.exists(os.path.join(path,'yaml',filtname.lower())):\n",
- " os.mkdir(os.path.join(path,'yaml',filtname.lower()))\n",
- " \n",
- " filetomove = yamlfile \n",
- " input_file = filetomove\n",
- " output_file = os.path.join(path,'yaml',filtname.lower()) \n",
- " \n",
- " print('input = ',input_file)\n",
- " print('output = ',output_file)\n",
- " \n",
- " shutil.move(input_file, output_file) #move the file to the corresponding sub directory\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Execute MIRAGE and create simulated data\n",
- "\n",
- "Now that the yaml files have been generated, we can execute MIRAGE using them as input parameters and generate the NIRCam images."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "As an example, let us choose filter F150W. We are going to simulate all of the images that were observed using filter F150W. The variable \"listname\" contains the names of the yaml files that we want to process through MIRAGE. There are 128 F150W yaml files. \n",
- "\n",
- "### This step will take a long time to run. To decrease the run-time, we will only process Exposure 0004. You can change the filter to process all files if desired."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# input parameters\n",
- "\n",
- "filtname = 'f150w'\n",
- "\n",
- "cwd = os.getcwd()\n",
- "filter_pattern = os.path.join(cwd,'yaml',filtname.lower(),'jw01069001001*0004*yaml') \n",
- "files = glob(filter_pattern)[:]\n",
- "listname = files"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# copy the F150W yaml files back in the working directory\n",
- "for yamlfile in files:\n",
- " input_file = yamlfile \n",
- " output_file = cwd \n",
- " print('input = ',input_file)\n",
- " print('output = ',output_file)\n",
- " shutil.copy(input_file, output_file) #this copies over filter files"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# read the list of yaml files to process\n",
- "t = Table.read(listname, format='ascii.fast_no_header')\n",
- "input_yaml = t['col1']\n",
- "\n",
- "yaml_list = []\n",
- "for k in range(len(input_yaml)):\n",
- " yaml_list.append(input_yaml[k])\n",
- "\n",
- "print(yaml_list)\n",
- "\n",
- "files = yaml_list\n",
- "paramlist = yaml_list\n",
- "print(files)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "From each yaml file, Mirage will produce a noiseless seed image, a \"raw\" [(level 1b) file](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html?highlight=uncal#uncalibrated-raw-data-uncal), and a linearized ramp (equivalent to the output of the linearity correction step of the [calwebb_detector1 pipeline](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_detector1.html))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "for yamlfile in files:\n",
- " print('---------------------PROCESSING: ',yamlfile,' -------------------------------')\n",
- " \n",
- " # run Mirage\n",
- " sim = ImgSim()\n",
- " sim.paramfile = yamlfile\n",
- " sim.create()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "\n",
- "## Examine the output\n",
- "Here we display the output files generated by MIRAGE. The UNCAL file is the raw uncalibrated file. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Seed image\n",
- "The seed image contains only the signal from the astronomical sources and background. There are no detector effects, nor cosmic rays added to this count rate image.\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def show(array,title,min=0,max=1000):\n",
- " plt.figure(figsize=(12,12))\n",
- " plt.imshow(array,clim=(min,max))\n",
- " plt.title(title)\n",
- " plt.colorbar().set_label('DN$^{-}$/s')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "seed_file = 'jw01069001001_01101_00004_nrcb4_uncal_F150W_CLEAR_final_seed_image.fits'\n",
- "\n",
- "with fits.open(seed_file) as hdulist:\n",
- " seed_data = hdulist[1].data\n",
- "print(seed_data.shape)\n",
- "show(seed_data,'Seed Image',max=5)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Linear file example\n",
- "MIRAGE generates the linear and uncalibrated files. Here we display an example linear file. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "linear_file = 'jw01069001001_01101_00004_nrcb4_linear.fits'\n",
- "with fits.open(linear_file) as hdulist:\n",
- " linear_data = hdulist['SCI'].data\n",
- "print(linear_data.shape)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# this image has five groups\n",
- "# we display the last group\n",
- "show(linear_data[0, 4, :, :], \"Final Group linear file\", max=250)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Raw uncalibrated file example\n",
- "First let us display a single group, which is dominated by noise and detector artifacts."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "raw_file = 'jw01069001001_01101_00004_nrcb4_uncal.fits'\n",
- "with fits.open(raw_file) as hdulist:\n",
- " raw_data = hdulist['SCI'].data\n",
- "print(raw_data.shape)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# the image has five groups. Here we display the last group\n",
- "show(raw_data[0, 4, :, :], \"Final Group uncal file\", max=15000)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Many of the instrumental artifacts can be removed by looking at the difference between two groups. Raw data values are integers, so first make the data floats before doing the subtraction."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "show(1. * raw_data[0, 4, :, :] - 1. * raw_data[0, 0, :, :], \"Last Minus First Group uncal file\", max=200)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.8.10"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/notebooks/preimaging/preimaging_02_calwebb.ipynb b/notebooks/preimaging/preimaging_02_calwebb.ipynb
deleted file mode 100644
index 09dcc5e3c..000000000
--- a/notebooks/preimaging/preimaging_02_calwebb.ipynb
+++ /dev/null
@@ -1,1010 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# NIRCam Preimaging: Pipeline Stages 1, 2\n",
- "\n",
- "**Use case:** Running JWST Pipeline on NIRCam Preimaging Simulations.
\n",
- "**Data:** JWST simulated NIRCam data from MIRAGE; LMC.
\n",
- "**Tools:** jwst.
\n",
- "**Cross-intrument:** NIRCam.
\n",
- "**Documentation:** This notebook is part of a STScI's larger [post-pipeline Data Analysis Tools Ecosystem](https://jwst-docs.stsci.edu/jwst-post-pipeline-data-analysis).
\n",
- "\n",
- "## Introduction\n",
- "\n",
- "This Notebook must be executed from an environment that has the JWST pipeline installed. In this tutorial we show step by step instructions on how to run the JWST pipeline on the simulated images created using the previous notebook.\n",
- "\n",
- "This notebook executes the first two steps of the NIRCam pipeline on the images that were created with the simulator MIRAGE in `preimaging_01_mirage`. The example shows how to execute the pipeline on the F150W images only. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Setting things up"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "\n",
- "#os.environ['CRDS_PATH'] = os.environ['HOME']+'/crds_cache'\n",
- "os.environ['CRDS_PATH'] = '/ifs/jwst/wit/miri/ofox/preimaging/cache'\n",
- "\n",
- "os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds-pub.stsci.edu'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {},
- "outputs": [],
- "source": [
- "from sys import exit\n",
- "import json\n",
- "from glob import glob\n",
- "\n",
- "import numpy as np\n",
- "\n",
- "from jwst.pipeline import Detector1Pipeline\n",
- "from jwst.pipeline import Image2Pipeline\n",
- "from jwst.pipeline import calwebb_image2\n",
- "\n",
- "from astropy.table import Table\n",
- "from astropy.io import fits\n",
- "\n"
- ]
- },
- {
- "cell_type": "raw",
- "metadata": {},
- "source": [
- "We begin by defining the working directory"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging\n"
- ]
- }
- ],
- "source": [
- "path='./.' # this is the working directory\n",
- "os.chdir(path)\n",
- "print(os.getcwd())"
- ]
- },
- {
- "cell_type": "raw",
- "metadata": {},
- "source": [
- "Additional data to be included"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "metadata": {},
- "outputs": [],
- "source": [
- "filtname = 'f150w'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "metadata": {},
- "outputs": [],
- "source": [
- "# gather the F150W yaml files back in the working directory\n",
- "cwd = os.getcwd()\n",
- "filter_pattern = os.path.join(cwd, 'yaml', filtname.lower(), 'jw01069001001*0004*yaml') \n",
- "files = glob(filter_pattern)[:]\n",
- "inputyaml = files\n",
- "namelist = []\n",
- "outlist = []"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We define the calwebb_detector1 configuration file to use in the pipeline call."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "metadata": {},
- "outputs": [],
- "source": [
- "calwebb_detector1_file = os.path.join('preimaging', 'calwebb_detector1.cfg')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Executing calwebb_detector1"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We select the *_uncal.fits files and execute calwebb_detector1 to generate the *_rate.fits files."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "for input_yaml_path in inputyaml:\n",
- " print(input_yaml_path) \n",
- " \n",
- " base = os.path.basename(input_yaml_path)\n",
- " name = os.path.splitext(base)[0]\n",
- "\n",
- " exi = os.path.exists(name + '_rate.fits')\n",
- " if exi:\n",
- " continue\n",
- " \n",
- " namelist.append(name + '_uncal.fits')\n",
- " outlist.append(name + '.fits')\n",
- " \n",
- "print(namelist)\n",
- "print(outlist)\n",
- "\n",
- "files = namelist \n",
- "outfile = outlist \n",
- "\n",
- "\n",
- "for k, filename in enumerate(files):\n",
- " Detector1Pipeline.call(filename, output_file=outfile[k]);\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Executing calwebb_image2"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "metadata": {},
- "outputs": [],
- "source": [
- "# gather the F150W yaml files back in the working directory\n",
- "cwd = os.getcwd()\n",
- "filter_pattern = os.path.join(cwd, 'yaml', filtname.lower(), 'jw01069001001*0004*') \n",
- "files = glob(filter_pattern)[:] # Use [0:2] to test the first two files.\n",
- "inputyaml = files\n",
- "namelist = []\n",
- "outlist = []"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We select the *_rate.fits files and execute calwebb_image2 to generate the *_cal.fits files."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/yaml/f150w/jw01069001001_01101_00004_nrca3.yaml\n",
- "/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/yaml/f150w/jw01069001001_01101_00004_nrca2.yaml\n",
- "/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/yaml/f150w/jw01069001001_01101_00004_nrcb1.yaml\n",
- "/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/yaml/f150w/jw01069001001_01101_00004_nrcb4.yaml\n",
- "/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/yaml/f150w/jw01069001001_01101_00004_nrcb2.yaml\n",
- "/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/yaml/f150w/jw01069001001_01101_00004_nrca4.yaml\n",
- "/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/yaml/f150w/jw01069001001_01101_00004_nrca1.yaml\n",
- "/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/yaml/f150w/jw01069001001_01101_00004_nrcb3.yaml\n",
- "['jw01069001001_01101_00004_nrca2_rate.fits', 'jw01069001001_01101_00004_nrcb1_rate.fits', 'jw01069001001_01101_00004_nrcb4_rate.fits', 'jw01069001001_01101_00004_nrcb2_rate.fits', 'jw01069001001_01101_00004_nrca4_rate.fits', 'jw01069001001_01101_00004_nrca1_rate.fits', 'jw01069001001_01101_00004_nrcb3_rate.fits']\n"
- ]
- }
- ],
- "source": [
- "for input_yaml_path in inputyaml:\n",
- " print(input_yaml_path)\n",
- " base = os.path.basename(input_yaml_path)\n",
- " name = os.path.splitext(base)[0]\n",
- " \n",
- " exi = os.path.exists(name + '_cal.fits')\n",
- " if exi:\n",
- " continue\n",
- "\n",
- " namelist.append(name + '_rate.fits')\n",
- " outlist.append(name + '.fits')\n",
- " \n",
- "print(namelist)\n",
- " \n",
- "files = namelist \n",
- "outfile = outlist "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Generate the _cal.fits files."
- ]
- },
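The cell that follows (whose log output is shown below) loops over the rate files and runs the stage-2 pipeline on each. A minimal sketch of that loop logic, with a stand-in stub in place of jwst's `Image2Pipeline.call` so it can be illustrated without the pipeline installed:

```python
# Sketch of the stage-2 loop: run a pipeline call on each rate file.
# `call` stands in for Image2Pipeline.call; with the real pipeline,
# save_results=True writes the *_cal.fits products to disk.
def run_image2(files, outfile, call):
    results = []
    for k, filename in enumerate(files):
        results.append(call(filename, output_file=outfile[k], save_results=True))
    return results

# usage with a stub in place of Image2Pipeline.call:
made = run_image2(['a_rate.fits'], ['a.fits'],
                  lambda f, **kw: (f, kw['output_file']))
print(made)  # [('a_rate.fits', 'a.fits')]
```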
- {
- "cell_type": "code",
- "execution_count": 9,
- "metadata": {},
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:21:40,349 - stpipe.Image2Pipeline - INFO - Image2Pipeline instance created.\n",
- "2022-06-24 08:21:40,350 - stpipe.Image2Pipeline.bkg_subtract - INFO - BackgroundStep instance created.\n",
- "2022-06-24 08:21:40,352 - stpipe.Image2Pipeline.assign_wcs - INFO - AssignWcsStep instance created.\n",
- "2022-06-24 08:21:40,354 - stpipe.Image2Pipeline.flat_field - INFO - FlatFieldStep instance created.\n",
- "2022-06-24 08:21:40,356 - stpipe.Image2Pipeline.photom - INFO - PhotomStep instance created.\n",
- "2022-06-24 08:21:40,358 - stpipe.Image2Pipeline.resample - INFO - ResampleStep instance created.\n",
- "2022-06-24 08:21:40,430 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline running with args ('jw01069001001_01101_00004_nrca2_rate.fits',).\n",
- "2022-06-24 08:21:40,438 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': '/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/jw01069001001_01101_00004_nrca2.fits', 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_bsub': False, 'steps': {'bkg_subtract': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_combined_background': False, 'sigma': 3.0, 'maxiters': None, 'wfss_mmag_extract': None}, 'assign_wcs': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}, 'flat_field': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}, 'photom': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}, 'resample': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 
'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}}}\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "------------------------------------------------------------\n",
- "jw01069001001_01101_00004_nrca2_rate.fits\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:21:42,448 - stpipe.Image2Pipeline - INFO - Prefetching reference files for dataset: 'jw01069001001_01101_00004_nrca2_rate.fits' reftypes = ['area', 'camera', 'collimator', 'dflat', 'disperser', 'distortion', 'drizpars', 'fflat', 'filteroffset', 'flat', 'fore', 'fpa', 'ifufore', 'ifupost', 'ifuslicer', 'msa', 'ote', 'photom', 'regions', 'sflat', 'specwcs', 'wavelengthrange', 'wfssbkg']\n",
- "2022-06-24 08:21:43,894 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0010.fits 16.8 M bytes (1 / 4 files) (0 / 67.2 M bytes)\n",
- "2022-06-24 08:21:47,682 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0140.asdf 10.0 K bytes (2 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:21:47,769 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0357.fits 50.4 M bytes (3 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:21:57,805 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0088.fits 11.5 K bytes (4 / 4 files) (67.2 M / 67.2 M bytes)\n",
- "2022-06-24 08:21:57,901 - stpipe.Image2Pipeline - INFO - Prefetch for AREA reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0010.fits'.\n",
- "2022-06-24 08:21:57,907 - stpipe.Image2Pipeline - INFO - Prefetch for CAMERA reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,908 - stpipe.Image2Pipeline - INFO - Prefetch for COLLIMATOR reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,908 - stpipe.Image2Pipeline - INFO - Prefetch for DFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,909 - stpipe.Image2Pipeline - INFO - Prefetch for DISPERSER reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,909 - stpipe.Image2Pipeline - INFO - Prefetch for DISTORTION reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0140.asdf'.\n",
- "2022-06-24 08:21:57,916 - stpipe.Image2Pipeline - INFO - Prefetch for DRIZPARS reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits'.\n",
- "2022-06-24 08:21:57,922 - stpipe.Image2Pipeline - INFO - Prefetch for FFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,923 - stpipe.Image2Pipeline - INFO - Prefetch for FILTEROFFSET reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_filteroffset_0004.asdf'.\n",
- "2022-06-24 08:21:57,930 - stpipe.Image2Pipeline - INFO - Prefetch for FLAT reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0357.fits'.\n",
- "2022-06-24 08:21:57,933 - stpipe.Image2Pipeline - INFO - Prefetch for FORE reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,934 - stpipe.Image2Pipeline - INFO - Prefetch for FPA reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,935 - stpipe.Image2Pipeline - INFO - Prefetch for IFUFORE reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,935 - stpipe.Image2Pipeline - INFO - Prefetch for IFUPOST reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,936 - stpipe.Image2Pipeline - INFO - Prefetch for IFUSLICER reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,936 - stpipe.Image2Pipeline - INFO - Prefetch for MSA reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,937 - stpipe.Image2Pipeline - INFO - Prefetch for OTE reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,937 - stpipe.Image2Pipeline - INFO - Prefetch for PHOTOM reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0088.fits'.\n",
- "2022-06-24 08:21:57,941 - stpipe.Image2Pipeline - INFO - Prefetch for REGIONS reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,942 - stpipe.Image2Pipeline - INFO - Prefetch for SFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,943 - stpipe.Image2Pipeline - INFO - Prefetch for SPECWCS reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,944 - stpipe.Image2Pipeline - INFO - Prefetch for WAVELENGTHRANGE reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,944 - stpipe.Image2Pipeline - INFO - Prefetch for WFSSBKG reference file is 'N/A'.\n",
- "2022-06-24 08:21:57,945 - stpipe.Image2Pipeline - INFO - Starting calwebb_image2 ...\n",
- "2022-06-24 08:21:57,946 - stpipe.Image2Pipeline - INFO - Processing product jw01069001001_01101_00004_nrca2\n",
- "2022-06-24 08:21:57,947 - stpipe.Image2Pipeline - INFO - Working on input jw01069001001_01101_00004_nrca2_rate.fits ...\n",
- "2022-06-24 08:21:58,068 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs running with args (,).\n",
- "2022-06-24 08:21:58,070 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}\n",
- "2022-06-24 08:22:07,120 - stpipe.Image2Pipeline.assign_wcs - INFO - Update S_REGION to POLYGON ICRS 80.651326436 -69.488635063 80.650774539 -69.471103596 80.601149166 -69.471194130 80.601120488 -69.488769518\n",
- "2022-06-24 08:22:07,120 - stpipe.Image2Pipeline.assign_wcs - INFO - assign_wcs updated S_REGION to POLYGON ICRS 80.651326436 -69.488635063 80.650774539 -69.471103596 80.601149166 -69.471194130 80.601120488 -69.488769518\n",
- "2022-06-24 08:22:07,121 - stpipe.Image2Pipeline.assign_wcs - INFO - COMPLETED assign_wcs\n",
- "2022-06-24 08:22:07,208 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs done\n",
- "2022-06-24 08:22:07,290 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field running with args (,).\n",
- "2022-06-24 08:22:07,291 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}\n",
- "2022-06-24 08:22:07,749 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field done\n",
- "2022-06-24 08:22:07,828 - stpipe.Image2Pipeline.photom - INFO - Step photom running with args (,).\n",
- "2022-06-24 08:22:07,830 - stpipe.Image2Pipeline.photom - INFO - Step photom parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}\n",
- "2022-06-24 08:22:07,893 - stpipe.Image2Pipeline.photom - INFO - Using photom reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0088.fits\n",
- "2022-06-24 08:22:07,893 - stpipe.Image2Pipeline.photom - INFO - Using area reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0010.fits\n",
- "2022-06-24 08:22:08,011 - stpipe.Image2Pipeline.photom - INFO - Using instrument: NIRCAM\n",
- "2022-06-24 08:22:08,012 - stpipe.Image2Pipeline.photom - INFO - detector: NRCA2\n",
- "2022-06-24 08:22:08,013 - stpipe.Image2Pipeline.photom - INFO - exp_type: NRC_IMAGE\n",
- "2022-06-24 08:22:08,013 - stpipe.Image2Pipeline.photom - INFO - filter: F150W\n",
- "2022-06-24 08:22:08,014 - stpipe.Image2Pipeline.photom - INFO - pupil: CLEAR\n",
- "2022-06-24 08:22:08,132 - stpipe.Image2Pipeline.photom - INFO - Pixel area map copied to output.\n",
- "2022-06-24 08:22:08,136 - stpipe.Image2Pipeline.photom - INFO - PHOTMJSR value: 2.45532\n",
- "2022-06-24 08:22:08,168 - stpipe.Image2Pipeline.photom - INFO - Step photom done\n",
- "2022-06-24 08:22:08,274 - stpipe.Image2Pipeline.resample - INFO - Step resample running with args (,).\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:22:08,276 - stpipe.Image2Pipeline.resample - INFO - Step resample parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': True, 'skip': False, 'suffix': 'i2d', 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}\n",
- "2022-06-24 08:22:08,324 - stpipe.Image2Pipeline.resample - INFO - Drizpars reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits\n",
- "2022-06-24 08:22:08,549 - stpipe.Image2Pipeline.resample - INFO - Resampling science data\n",
- "2022-06-24 08:22:10,716 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2055, 2057)\n",
- "2022-06-24 08:22:12,123 - stpipe.Image2Pipeline.resample - INFO - Resampling var_rnoise\n",
- "2022-06-24 08:22:14,215 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2055, 2057)\n",
- "2022-06-24 08:22:15,758 - stpipe.Image2Pipeline.resample - INFO - Resampling var_poisson\n",
- "2022-06-24 08:22:17,853 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2055, 2057)\n",
- "2022-06-24 08:22:19,340 - stpipe.Image2Pipeline.resample - INFO - Resampling var_flat\n",
- "2022-06-24 08:22:21,355 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2055, 2057)\n",
- "2022-06-24 08:22:22,815 - stpipe.Image2Pipeline.resample - INFO - Update S_REGION to POLYGON ICRS 80.651339540 -69.488680329 80.651052643 -69.471106918 80.600891219 -69.471200331 80.601136994 -69.488773818\n",
- "2022-06-24 08:22:30,871 - stpipe.Image2Pipeline.resample - INFO - Saved model in jw01069001001_01101_00004_nrca2_i2d.fits\n",
- "2022-06-24 08:22:30,871 - stpipe.Image2Pipeline.resample - INFO - Step resample done\n",
- "2022-06-24 08:22:30,872 - stpipe.Image2Pipeline - INFO - Finished processing product jw01069001001_01101_00004_nrca2\n",
- "2022-06-24 08:22:30,874 - stpipe.Image2Pipeline - INFO - ... ending calwebb_image2\n",
- "2022-06-24 08:22:30,875 - stpipe.Image2Pipeline - INFO - Results used CRDS context: jwst_0878.pmap\n",
- "2022-06-24 08:22:38,939 - stpipe.Image2Pipeline - INFO - Saved model in jw01069001001_01101_00004_nrca2_cal.fits\n",
- "2022-06-24 08:22:38,940 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline done\n",
- "2022-06-24 08:22:38,952 - stpipe.Image2Pipeline - INFO - Image2Pipeline instance created.\n",
- "2022-06-24 08:22:38,954 - stpipe.Image2Pipeline.bkg_subtract - INFO - BackgroundStep instance created.\n",
- "2022-06-24 08:22:38,956 - stpipe.Image2Pipeline.assign_wcs - INFO - AssignWcsStep instance created.\n",
- "2022-06-24 08:22:38,957 - stpipe.Image2Pipeline.flat_field - INFO - FlatFieldStep instance created.\n",
- "2022-06-24 08:22:38,959 - stpipe.Image2Pipeline.photom - INFO - PhotomStep instance created.\n",
- "2022-06-24 08:22:38,961 - stpipe.Image2Pipeline.resample - INFO - ResampleStep instance created.\n",
- "2022-06-24 08:22:39,052 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline running with args ('jw01069001001_01101_00004_nrcb1_rate.fits',).\n",
- "2022-06-24 08:22:39,060 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': '/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/jw01069001001_01101_00004_nrcb1.fits', 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_bsub': False, 'steps': {'bkg_subtract': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_combined_background': False, 'sigma': 3.0, 'maxiters': None, 'wfss_mmag_extract': None}, 'assign_wcs': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}, 'flat_field': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}, 'photom': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}, 'resample': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}}}\n",
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "------------------------------------------------------------\n",
- "jw01069001001_01101_00004_nrcb1_rate.fits\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:22:39,623 - stpipe.Image2Pipeline - INFO - Prefetching reference files for dataset: 'jw01069001001_01101_00004_nrcb1_rate.fits' reftypes = ['area', 'camera', 'collimator', 'dflat', 'disperser', 'distortion', 'drizpars', 'fflat', 'filteroffset', 'flat', 'fore', 'fpa', 'ifufore', 'ifupost', 'ifuslicer', 'msa', 'ote', 'photom', 'regions', 'sflat', 'specwcs', 'wavelengthrange', 'wfssbkg']\n",
- "2022-06-24 08:22:39,646 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0021.fits 16.8 M bytes (1 / 5 files) (0 / 67.2 M bytes)\n",
- "2022-06-24 08:22:43,712 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0136.asdf 10.0 K bytes (2 / 5 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:22:43,787 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_filteroffset_0003.asdf 11.4 K bytes (3 / 5 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:22:43,901 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0368.fits 50.4 M bytes (4 / 5 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:22:54,688 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0090.fits 11.5 K bytes (5 / 5 files) (67.2 M / 67.2 M bytes)\n",
- "2022-06-24 08:22:54,785 - stpipe.Image2Pipeline - INFO - Prefetch for AREA reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0021.fits'.\n",
- "2022-06-24 08:22:54,792 - stpipe.Image2Pipeline - INFO - Prefetch for CAMERA reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,793 - stpipe.Image2Pipeline - INFO - Prefetch for COLLIMATOR reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,794 - stpipe.Image2Pipeline - INFO - Prefetch for DFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,795 - stpipe.Image2Pipeline - INFO - Prefetch for DISPERSER reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,796 - stpipe.Image2Pipeline - INFO - Prefetch for DISTORTION reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0136.asdf'.\n",
- "2022-06-24 08:22:54,806 - stpipe.Image2Pipeline - INFO - Prefetch for DRIZPARS reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits'.\n",
- "2022-06-24 08:22:54,809 - stpipe.Image2Pipeline - INFO - Prefetch for FFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,810 - stpipe.Image2Pipeline - INFO - Prefetch for FILTEROFFSET reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_filteroffset_0003.asdf'.\n",
- "2022-06-24 08:22:54,816 - stpipe.Image2Pipeline - INFO - Prefetch for FLAT reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0368.fits'.\n",
- "2022-06-24 08:22:54,820 - stpipe.Image2Pipeline - INFO - Prefetch for FORE reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,821 - stpipe.Image2Pipeline - INFO - Prefetch for FPA reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,821 - stpipe.Image2Pipeline - INFO - Prefetch for IFUFORE reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,822 - stpipe.Image2Pipeline - INFO - Prefetch for IFUPOST reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,822 - stpipe.Image2Pipeline - INFO - Prefetch for IFUSLICER reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,823 - stpipe.Image2Pipeline - INFO - Prefetch for MSA reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,823 - stpipe.Image2Pipeline - INFO - Prefetch for OTE reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,824 - stpipe.Image2Pipeline - INFO - Prefetch for PHOTOM reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0090.fits'.\n",
- "2022-06-24 08:22:54,827 - stpipe.Image2Pipeline - INFO - Prefetch for REGIONS reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,828 - stpipe.Image2Pipeline - INFO - Prefetch for SFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,828 - stpipe.Image2Pipeline - INFO - Prefetch for SPECWCS reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,829 - stpipe.Image2Pipeline - INFO - Prefetch for WAVELENGTHRANGE reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,829 - stpipe.Image2Pipeline - INFO - Prefetch for WFSSBKG reference file is 'N/A'.\n",
- "2022-06-24 08:22:54,830 - stpipe.Image2Pipeline - INFO - Starting calwebb_image2 ...\n",
- "2022-06-24 08:22:54,831 - stpipe.Image2Pipeline - INFO - Processing product jw01069001001_01101_00004_nrcb1\n",
- "2022-06-24 08:22:54,831 - stpipe.Image2Pipeline - INFO - Working on input jw01069001001_01101_00004_nrcb1_rate.fits ...\n",
- "2022-06-24 08:22:54,970 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs running with args (,).\n",
- "2022-06-24 08:22:54,972 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}\n",
- "2022-06-24 08:23:02,048 - stpipe.Image2Pipeline.assign_wcs - INFO - Update S_REGION to POLYGON ICRS 80.458950844 -69.488158035 80.458986971 -69.470585968 80.409373481 -69.470473194 80.408761836 -69.488000370\n",
- "2022-06-24 08:23:02,049 - stpipe.Image2Pipeline.assign_wcs - INFO - assign_wcs updated S_REGION to POLYGON ICRS 80.458950844 -69.488158035 80.458986971 -69.470585968 80.409373481 -69.470473194 80.408761836 -69.488000370\n",
- "2022-06-24 08:23:02,050 - stpipe.Image2Pipeline.assign_wcs - INFO - COMPLETED assign_wcs\n",
- "2022-06-24 08:23:02,129 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs done\n",
- "2022-06-24 08:23:02,208 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field running with args (,).\n",
- "2022-06-24 08:23:02,209 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}\n",
- "2022-06-24 08:23:02,637 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field done\n",
- "2022-06-24 08:23:02,738 - stpipe.Image2Pipeline.photom - INFO - Step photom running with args (,).\n",
- "2022-06-24 08:23:02,740 - stpipe.Image2Pipeline.photom - INFO - Step photom parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}\n",
- "2022-06-24 08:23:02,795 - stpipe.Image2Pipeline.photom - INFO - Using photom reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0090.fits\n",
- "2022-06-24 08:23:02,795 - stpipe.Image2Pipeline.photom - INFO - Using area reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0021.fits\n",
- "2022-06-24 08:23:02,896 - stpipe.Image2Pipeline.photom - INFO - Using instrument: NIRCAM\n",
- "2022-06-24 08:23:02,897 - stpipe.Image2Pipeline.photom - INFO - detector: NRCB1\n",
- "2022-06-24 08:23:02,897 - stpipe.Image2Pipeline.photom - INFO - exp_type: NRC_IMAGE\n",
- "2022-06-24 08:23:02,897 - stpipe.Image2Pipeline.photom - INFO - filter: F150W\n",
- "2022-06-24 08:23:02,898 - stpipe.Image2Pipeline.photom - INFO - pupil: CLEAR\n",
- "2022-06-24 08:23:02,989 - stpipe.Image2Pipeline.photom - INFO - Pixel area map copied to output.\n",
- "2022-06-24 08:23:02,993 - stpipe.Image2Pipeline.photom - INFO - PHOTMJSR value: 2.4642\n",
- "2022-06-24 08:23:03,022 - stpipe.Image2Pipeline.photom - INFO - Step photom done\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:23:03,123 - stpipe.Image2Pipeline.resample - INFO - Step resample running with args (,).\n",
- "2022-06-24 08:23:03,125 - stpipe.Image2Pipeline.resample - INFO - Step resample parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': True, 'skip': False, 'suffix': 'i2d', 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}\n",
- "2022-06-24 08:23:03,177 - stpipe.Image2Pipeline.resample - INFO - Drizpars reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits\n",
- "2022-06-24 08:23:03,359 - stpipe.Image2Pipeline.resample - INFO - Resampling science data\n",
- "2022-06-24 08:23:05,435 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2055, 2057)\n",
- "2022-06-24 08:23:06,969 - stpipe.Image2Pipeline.resample - INFO - Resampling var_rnoise\n",
- "2022-06-24 08:23:08,911 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2055, 2057)\n",
- "2022-06-24 08:23:10,392 - stpipe.Image2Pipeline.resample - INFO - Resampling var_poisson\n",
- "2022-06-24 08:23:12,454 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2055, 2057)\n",
- "2022-06-24 08:23:13,887 - stpipe.Image2Pipeline.resample - INFO - Resampling var_flat\n",
- "2022-06-24 08:23:15,874 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2055, 2057)\n",
- "2022-06-24 08:23:17,400 - stpipe.Image2Pipeline.resample - INFO - Update S_REGION to POLYGON ICRS 80.458963080 -69.488162365 80.459271379 -69.470592999 80.409123189 -69.470477658 80.408773790 -69.488046929\n",
- "2022-06-24 08:23:25,022 - stpipe.Image2Pipeline.resample - INFO - Saved model in jw01069001001_01101_00004_nrcb1_i2d.fits\n",
- "2022-06-24 08:23:25,023 - stpipe.Image2Pipeline.resample - INFO - Step resample done\n",
- "2022-06-24 08:23:25,023 - stpipe.Image2Pipeline - INFO - Finished processing product jw01069001001_01101_00004_nrcb1\n",
- "2022-06-24 08:23:25,025 - stpipe.Image2Pipeline - INFO - ... ending calwebb_image2\n",
- "2022-06-24 08:23:25,025 - stpipe.Image2Pipeline - INFO - Results used CRDS context: jwst_0878.pmap\n",
- "2022-06-24 08:23:32,561 - stpipe.Image2Pipeline - INFO - Saved model in jw01069001001_01101_00004_nrcb1_cal.fits\n",
- "2022-06-24 08:23:32,562 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline done\n",
- "2022-06-24 08:23:32,575 - stpipe.Image2Pipeline - INFO - Image2Pipeline instance created.\n",
- "2022-06-24 08:23:32,577 - stpipe.Image2Pipeline.bkg_subtract - INFO - BackgroundStep instance created.\n",
- "2022-06-24 08:23:32,579 - stpipe.Image2Pipeline.assign_wcs - INFO - AssignWcsStep instance created.\n",
- "2022-06-24 08:23:32,581 - stpipe.Image2Pipeline.flat_field - INFO - FlatFieldStep instance created.\n",
- "2022-06-24 08:23:32,583 - stpipe.Image2Pipeline.photom - INFO - PhotomStep instance created.\n",
- "2022-06-24 08:23:32,585 - stpipe.Image2Pipeline.resample - INFO - ResampleStep instance created.\n",
- "2022-06-24 08:23:32,681 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline running with args ('jw01069001001_01101_00004_nrcb4_rate.fits',).\n",
- "2022-06-24 08:23:32,689 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': '/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/jw01069001001_01101_00004_nrcb4.fits', 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_bsub': False, 'steps': {'bkg_subtract': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_combined_background': False, 'sigma': 3.0, 'maxiters': None, 'wfss_mmag_extract': None}, 'assign_wcs': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}, 'flat_field': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}, 'photom': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}, 'resample': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}}}\n",
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "------------------------------------------------------------\n",
- "jw01069001001_01101_00004_nrcb4_rate.fits\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:23:33,403 - stpipe.Image2Pipeline - INFO - Prefetching reference files for dataset: 'jw01069001001_01101_00004_nrcb4_rate.fits' reftypes = ['area', 'camera', 'collimator', 'dflat', 'disperser', 'distortion', 'drizpars', 'fflat', 'filteroffset', 'flat', 'fore', 'fpa', 'ifufore', 'ifupost', 'ifuslicer', 'msa', 'ote', 'photom', 'regions', 'sflat', 'specwcs', 'wavelengthrange', 'wfssbkg']\n",
- "2022-06-24 08:23:33,426 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0020.fits 16.8 M bytes (1 / 4 files) (0 / 67.2 M bytes)\n",
- "2022-06-24 08:23:37,319 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0139.asdf 10.0 K bytes (2 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:23:37,416 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0375.fits 50.4 M bytes (3 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:23:48,547 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0096.fits 11.5 K bytes (4 / 4 files) (67.2 M / 67.2 M bytes)\n",
- "2022-06-24 08:23:48,631 - stpipe.Image2Pipeline - INFO - Prefetch for AREA reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0020.fits'.\n",
- "2022-06-24 08:23:48,638 - stpipe.Image2Pipeline - INFO - Prefetch for CAMERA reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,638 - stpipe.Image2Pipeline - INFO - Prefetch for COLLIMATOR reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,639 - stpipe.Image2Pipeline - INFO - Prefetch for DFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,639 - stpipe.Image2Pipeline - INFO - Prefetch for DISPERSER reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,640 - stpipe.Image2Pipeline - INFO - Prefetch for DISTORTION reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0139.asdf'.\n",
- "2022-06-24 08:23:48,646 - stpipe.Image2Pipeline - INFO - Prefetch for DRIZPARS reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits'.\n",
- "2022-06-24 08:23:48,652 - stpipe.Image2Pipeline - INFO - Prefetch for FFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,652 - stpipe.Image2Pipeline - INFO - Prefetch for FILTEROFFSET reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_filteroffset_0003.asdf'.\n",
- "2022-06-24 08:23:48,661 - stpipe.Image2Pipeline - INFO - Prefetch for FLAT reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0375.fits'.\n",
- "2022-06-24 08:23:48,665 - stpipe.Image2Pipeline - INFO - Prefetch for FORE reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,665 - stpipe.Image2Pipeline - INFO - Prefetch for FPA reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,666 - stpipe.Image2Pipeline - INFO - Prefetch for IFUFORE reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,666 - stpipe.Image2Pipeline - INFO - Prefetch for IFUPOST reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,667 - stpipe.Image2Pipeline - INFO - Prefetch for IFUSLICER reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,667 - stpipe.Image2Pipeline - INFO - Prefetch for MSA reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,668 - stpipe.Image2Pipeline - INFO - Prefetch for OTE reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,668 - stpipe.Image2Pipeline - INFO - Prefetch for PHOTOM reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0096.fits'.\n",
- "2022-06-24 08:23:48,672 - stpipe.Image2Pipeline - INFO - Prefetch for REGIONS reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,672 - stpipe.Image2Pipeline - INFO - Prefetch for SFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,673 - stpipe.Image2Pipeline - INFO - Prefetch for SPECWCS reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,673 - stpipe.Image2Pipeline - INFO - Prefetch for WAVELENGTHRANGE reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,674 - stpipe.Image2Pipeline - INFO - Prefetch for WFSSBKG reference file is 'N/A'.\n",
- "2022-06-24 08:23:48,674 - stpipe.Image2Pipeline - INFO - Starting calwebb_image2 ...\n",
- "2022-06-24 08:23:48,675 - stpipe.Image2Pipeline - INFO - Processing product jw01069001001_01101_00004_nrcb4\n",
- "2022-06-24 08:23:48,675 - stpipe.Image2Pipeline - INFO - Working on input jw01069001001_01101_00004_nrcb4_rate.fits ...\n",
- "2022-06-24 08:23:48,814 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs running with args (,).\n",
- "2022-06-24 08:23:48,816 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}\n",
- "2022-06-24 08:23:55,858 - stpipe.Image2Pipeline.assign_wcs - INFO - Update S_REGION to POLYGON ICRS 80.513728220 -69.507173120 80.513058046 -69.489227347 80.462589813 -69.489300363 80.462518350 -69.507151583\n",
- "2022-06-24 08:23:55,859 - stpipe.Image2Pipeline.assign_wcs - INFO - assign_wcs updated S_REGION to POLYGON ICRS 80.513728220 -69.507173120 80.513058046 -69.489227347 80.462589813 -69.489300363 80.462518350 -69.507151583\n",
- "2022-06-24 08:23:55,859 - stpipe.Image2Pipeline.assign_wcs - INFO - COMPLETED assign_wcs\n",
- "2022-06-24 08:23:55,934 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs done\n",
- "2022-06-24 08:23:56,014 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field running with args (,).\n",
- "2022-06-24 08:23:56,015 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}\n",
- "2022-06-24 08:23:56,428 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field done\n",
- "2022-06-24 08:23:56,516 - stpipe.Image2Pipeline.photom - INFO - Step photom running with args (,).\n",
- "2022-06-24 08:23:56,518 - stpipe.Image2Pipeline.photom - INFO - Step photom parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}\n",
- "2022-06-24 08:23:56,571 - stpipe.Image2Pipeline.photom - INFO - Using photom reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0096.fits\n",
- "2022-06-24 08:23:56,572 - stpipe.Image2Pipeline.photom - INFO - Using area reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0020.fits\n",
- "2022-06-24 08:23:56,666 - stpipe.Image2Pipeline.photom - INFO - Using instrument: NIRCAM\n",
- "2022-06-24 08:23:56,667 - stpipe.Image2Pipeline.photom - INFO - detector: NRCB4\n",
- "2022-06-24 08:23:56,667 - stpipe.Image2Pipeline.photom - INFO - exp_type: NRC_IMAGE\n",
- "2022-06-24 08:23:56,667 - stpipe.Image2Pipeline.photom - INFO - filter: F150W\n",
- "2022-06-24 08:23:56,668 - stpipe.Image2Pipeline.photom - INFO - pupil: CLEAR\n",
- "2022-06-24 08:23:56,754 - stpipe.Image2Pipeline.photom - INFO - Pixel area map copied to output.\n",
- "2022-06-24 08:23:56,757 - stpipe.Image2Pipeline.photom - INFO - PHOTMJSR value: 2.37397\n",
- "2022-06-24 08:23:56,782 - stpipe.Image2Pipeline.photom - INFO - Step photom done\n",
- "2022-06-24 08:23:56,875 - stpipe.Image2Pipeline.resample - INFO - Step resample running with args (,).\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:23:56,878 - stpipe.Image2Pipeline.resample - INFO - Step resample parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': True, 'skip': False, 'suffix': 'i2d', 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}\n",
- "2022-06-24 08:23:56,918 - stpipe.Image2Pipeline.resample - INFO - Drizpars reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits\n",
- "2022-06-24 08:23:57,083 - stpipe.Image2Pipeline.resample - INFO - Resampling science data\n",
- "2022-06-24 08:23:59,153 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2064, 2058)\n",
- "2022-06-24 08:24:00,605 - stpipe.Image2Pipeline.resample - INFO - Resampling var_rnoise\n",
- "2022-06-24 08:24:02,727 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2064, 2058)\n",
- "2022-06-24 08:24:04,190 - stpipe.Image2Pipeline.resample - INFO - Resampling var_poisson\n",
- "2022-06-24 08:24:06,170 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2064, 2058)\n",
- "2022-06-24 08:24:07,596 - stpipe.Image2Pipeline.resample - INFO - Resampling var_flat\n",
- "2022-06-24 08:24:09,600 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2064, 2058)\n",
- "2022-06-24 08:24:11,081 - stpipe.Image2Pipeline.resample - INFO - Update S_REGION to POLYGON ICRS 80.513740955 -69.507177535 80.513425235 -69.489198952 80.462263053 -69.489301664 80.462535822 -69.507280333\n",
- "2022-06-24 08:24:18,308 - stpipe.Image2Pipeline.resample - INFO - Saved model in jw01069001001_01101_00004_nrcb4_i2d.fits\n",
- "2022-06-24 08:24:18,309 - stpipe.Image2Pipeline.resample - INFO - Step resample done\n",
- "2022-06-24 08:24:18,310 - stpipe.Image2Pipeline - INFO - Finished processing product jw01069001001_01101_00004_nrcb4\n",
- "2022-06-24 08:24:18,312 - stpipe.Image2Pipeline - INFO - ... ending calwebb_image2\n",
- "2022-06-24 08:24:18,313 - stpipe.Image2Pipeline - INFO - Results used CRDS context: jwst_0878.pmap\n",
- "2022-06-24 08:24:25,665 - stpipe.Image2Pipeline - INFO - Saved model in jw01069001001_01101_00004_nrcb4_cal.fits\n",
- "2022-06-24 08:24:25,666 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline done\n",
- "2022-06-24 08:24:25,678 - stpipe.Image2Pipeline - INFO - Image2Pipeline instance created.\n",
- "2022-06-24 08:24:25,680 - stpipe.Image2Pipeline.bkg_subtract - INFO - BackgroundStep instance created.\n",
- "2022-06-24 08:24:25,682 - stpipe.Image2Pipeline.assign_wcs - INFO - AssignWcsStep instance created.\n",
- "2022-06-24 08:24:25,683 - stpipe.Image2Pipeline.flat_field - INFO - FlatFieldStep instance created.\n",
- "2022-06-24 08:24:25,684 - stpipe.Image2Pipeline.photom - INFO - PhotomStep instance created.\n",
- "2022-06-24 08:24:25,686 - stpipe.Image2Pipeline.resample - INFO - ResampleStep instance created.\n",
- "2022-06-24 08:24:25,787 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline running with args ('jw01069001001_01101_00004_nrcb2_rate.fits',).\n",
- "2022-06-24 08:24:25,795 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': '/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/jw01069001001_01101_00004_nrcb2.fits', 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_bsub': False, 'steps': {'bkg_subtract': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_combined_background': False, 'sigma': 3.0, 'maxiters': None, 'wfss_mmag_extract': None}, 'assign_wcs': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}, 'flat_field': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}, 'photom': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}, 'resample': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 
'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}}}\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "------------------------------------------------------------\n",
- "jw01069001001_01101_00004_nrcb2_rate.fits\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:24:26,485 - stpipe.Image2Pipeline - INFO - Prefetching reference files for dataset: 'jw01069001001_01101_00004_nrcb2_rate.fits' reftypes = ['area', 'camera', 'collimator', 'dflat', 'disperser', 'distortion', 'drizpars', 'fflat', 'filteroffset', 'flat', 'fore', 'fpa', 'ifufore', 'ifupost', 'ifuslicer', 'msa', 'ote', 'photom', 'regions', 'sflat', 'specwcs', 'wavelengthrange', 'wfssbkg']\n",
- "2022-06-24 08:24:26,507 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0029.fits 16.8 M bytes (1 / 4 files) (0 / 67.2 M bytes)\n",
- "2022-06-24 08:24:30,433 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0132.asdf 10.0 K bytes (2 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:24:30,529 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0316.fits 50.4 M bytes (3 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:24:41,664 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0098.fits 11.5 K bytes (4 / 4 files) (67.2 M / 67.2 M bytes)\n",
- "2022-06-24 08:24:41,743 - stpipe.Image2Pipeline - INFO - Prefetch for AREA reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0029.fits'.\n",
- "2022-06-24 08:24:41,750 - stpipe.Image2Pipeline - INFO - Prefetch for CAMERA reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,751 - stpipe.Image2Pipeline - INFO - Prefetch for COLLIMATOR reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,751 - stpipe.Image2Pipeline - INFO - Prefetch for DFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,752 - stpipe.Image2Pipeline - INFO - Prefetch for DISPERSER reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,752 - stpipe.Image2Pipeline - INFO - Prefetch for DISTORTION reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0132.asdf'.\n",
- "2022-06-24 08:24:41,758 - stpipe.Image2Pipeline - INFO - Prefetch for DRIZPARS reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits'.\n",
- "2022-06-24 08:24:41,764 - stpipe.Image2Pipeline - INFO - Prefetch for FFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,765 - stpipe.Image2Pipeline - INFO - Prefetch for FILTEROFFSET reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_filteroffset_0003.asdf'.\n",
- "2022-06-24 08:24:41,772 - stpipe.Image2Pipeline - INFO - Prefetch for FLAT reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0316.fits'.\n",
- "2022-06-24 08:24:41,776 - stpipe.Image2Pipeline - INFO - Prefetch for FORE reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,777 - stpipe.Image2Pipeline - INFO - Prefetch for FPA reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,777 - stpipe.Image2Pipeline - INFO - Prefetch for IFUFORE reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,778 - stpipe.Image2Pipeline - INFO - Prefetch for IFUPOST reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,778 - stpipe.Image2Pipeline - INFO - Prefetch for IFUSLICER reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,779 - stpipe.Image2Pipeline - INFO - Prefetch for MSA reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,779 - stpipe.Image2Pipeline - INFO - Prefetch for OTE reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,780 - stpipe.Image2Pipeline - INFO - Prefetch for PHOTOM reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0098.fits'.\n",
- "2022-06-24 08:24:41,784 - stpipe.Image2Pipeline - INFO - Prefetch for REGIONS reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,784 - stpipe.Image2Pipeline - INFO - Prefetch for SFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,785 - stpipe.Image2Pipeline - INFO - Prefetch for SPECWCS reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,785 - stpipe.Image2Pipeline - INFO - Prefetch for WAVELENGTHRANGE reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,786 - stpipe.Image2Pipeline - INFO - Prefetch for WFSSBKG reference file is 'N/A'.\n",
- "2022-06-24 08:24:41,787 - stpipe.Image2Pipeline - INFO - Starting calwebb_image2 ...\n",
- "2022-06-24 08:24:41,787 - stpipe.Image2Pipeline - INFO - Processing product jw01069001001_01101_00004_nrcb2\n",
- "2022-06-24 08:24:41,788 - stpipe.Image2Pipeline - INFO - Working on input jw01069001001_01101_00004_nrcb2_rate.fits ...\n",
- "2022-06-24 08:24:41,926 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs running with args (,).\n",
- "2022-06-24 08:24:41,928 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}\n",
- "2022-06-24 08:24:49,626 - stpipe.Image2Pipeline.assign_wcs - INFO - Update S_REGION to POLYGON ICRS 80.458815788 -69.507195950 80.459265459 -69.489349575 80.409036988 -69.489105050 80.407858838 -69.506900294\n",
- "2022-06-24 08:24:49,627 - stpipe.Image2Pipeline.assign_wcs - INFO - assign_wcs updated S_REGION to POLYGON ICRS 80.458815788 -69.507195950 80.459265459 -69.489349575 80.409036988 -69.489105050 80.407858838 -69.506900294\n",
- "2022-06-24 08:24:49,627 - stpipe.Image2Pipeline.assign_wcs - INFO - COMPLETED assign_wcs\n",
- "2022-06-24 08:24:49,698 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs done\n",
- "2022-06-24 08:24:49,775 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field running with args (,).\n",
- "2022-06-24 08:24:49,776 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}\n",
- "2022-06-24 08:24:50,226 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field done\n",
- "2022-06-24 08:24:50,326 - stpipe.Image2Pipeline.photom - INFO - Step photom running with args (,).\n",
- "2022-06-24 08:24:50,328 - stpipe.Image2Pipeline.photom - INFO - Step photom parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}\n",
- "2022-06-24 08:24:50,379 - stpipe.Image2Pipeline.photom - INFO - Using photom reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0098.fits\n",
- "2022-06-24 08:24:50,380 - stpipe.Image2Pipeline.photom - INFO - Using area reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0029.fits\n",
- "2022-06-24 08:24:50,482 - stpipe.Image2Pipeline.photom - INFO - Using instrument: NIRCAM\n",
- "2022-06-24 08:24:50,483 - stpipe.Image2Pipeline.photom - INFO - detector: NRCB2\n",
- "2022-06-24 08:24:50,483 - stpipe.Image2Pipeline.photom - INFO - exp_type: NRC_IMAGE\n",
- "2022-06-24 08:24:50,484 - stpipe.Image2Pipeline.photom - INFO - filter: F150W\n",
- "2022-06-24 08:24:50,484 - stpipe.Image2Pipeline.photom - INFO - pupil: CLEAR\n",
- "2022-06-24 08:24:50,575 - stpipe.Image2Pipeline.photom - INFO - Pixel area map copied to output.\n",
- "2022-06-24 08:24:50,578 - stpipe.Image2Pipeline.photom - INFO - PHOTMJSR value: 2.3953\n",
- "2022-06-24 08:24:50,605 - stpipe.Image2Pipeline.photom - INFO - Step photom done\n",
- "2022-06-24 08:24:50,713 - stpipe.Image2Pipeline.resample - INFO - Step resample running with args (,).\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:24:50,716 - stpipe.Image2Pipeline.resample - INFO - Step resample parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': True, 'skip': False, 'suffix': 'i2d', 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}\n",
- "2022-06-24 08:24:50,760 - stpipe.Image2Pipeline.resample - INFO - Drizpars reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits\n",
- "2022-06-24 08:24:50,926 - stpipe.Image2Pipeline.resample - INFO - Resampling science data\n",
- "2022-06-24 08:24:52,943 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2057)\n",
- "2022-06-24 08:24:54,439 - stpipe.Image2Pipeline.resample - INFO - Resampling var_rnoise\n",
- "2022-06-24 08:24:56,576 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2057)\n",
- "2022-06-24 08:24:58,034 - stpipe.Image2Pipeline.resample - INFO - Resampling var_poisson\n",
- "2022-06-24 08:24:59,969 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2057)\n",
- "2022-06-24 08:25:01,363 - stpipe.Image2Pipeline.resample - INFO - Resampling var_flat\n",
- "2022-06-24 08:25:03,358 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2057)\n",
- "2022-06-24 08:25:04,814 - stpipe.Image2Pipeline.resample - INFO - Update S_REGION to POLYGON ICRS 80.458828108 -69.507200405 80.459617440 -69.489355957 80.408714207 -69.489072333 80.407882470 -69.506916545\n",
- "2022-06-24 08:25:12,743 - stpipe.Image2Pipeline.resample - INFO - Saved model in jw01069001001_01101_00004_nrcb2_i2d.fits\n",
- "2022-06-24 08:25:12,745 - stpipe.Image2Pipeline.resample - INFO - Step resample done\n",
- "2022-06-24 08:25:12,746 - stpipe.Image2Pipeline - INFO - Finished processing product jw01069001001_01101_00004_nrcb2\n",
- "2022-06-24 08:25:12,749 - stpipe.Image2Pipeline - INFO - ... ending calwebb_image2\n",
- "2022-06-24 08:25:12,750 - stpipe.Image2Pipeline - INFO - Results used CRDS context: jwst_0878.pmap\n",
- "2022-06-24 08:25:20,448 - stpipe.Image2Pipeline - INFO - Saved model in jw01069001001_01101_00004_nrcb2_cal.fits\n",
- "2022-06-24 08:25:20,448 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline done\n",
- "2022-06-24 08:25:20,469 - stpipe.Image2Pipeline - INFO - Image2Pipeline instance created.\n",
- "2022-06-24 08:25:20,470 - stpipe.Image2Pipeline.bkg_subtract - INFO - BackgroundStep instance created.\n",
- "2022-06-24 08:25:20,472 - stpipe.Image2Pipeline.assign_wcs - INFO - AssignWcsStep instance created.\n",
- "2022-06-24 08:25:20,474 - stpipe.Image2Pipeline.flat_field - INFO - FlatFieldStep instance created.\n",
- "2022-06-24 08:25:20,476 - stpipe.Image2Pipeline.photom - INFO - PhotomStep instance created.\n",
- "2022-06-24 08:25:20,478 - stpipe.Image2Pipeline.resample - INFO - ResampleStep instance created.\n",
- "2022-06-24 08:25:20,581 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline running with args ('jw01069001001_01101_00004_nrca4_rate.fits',).\n",
- "2022-06-24 08:25:20,588 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': '/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/jw01069001001_01101_00004_nrca4.fits', 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_bsub': False, 'steps': {'bkg_subtract': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_combined_background': False, 'sigma': 3.0, 'maxiters': None, 'wfss_mmag_extract': None}, 'assign_wcs': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}, 'flat_field': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}, 'photom': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}, 'resample': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 
'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}}}\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "------------------------------------------------------------\n",
- "jw01069001001_01101_00004_nrca4_rate.fits\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:25:21,409 - stpipe.Image2Pipeline - INFO - Prefetching reference files for dataset: 'jw01069001001_01101_00004_nrca4_rate.fits' reftypes = ['area', 'camera', 'collimator', 'dflat', 'disperser', 'distortion', 'drizpars', 'fflat', 'filteroffset', 'flat', 'fore', 'fpa', 'ifufore', 'ifupost', 'ifuslicer', 'msa', 'ote', 'photom', 'regions', 'sflat', 'specwcs', 'wavelengthrange', 'wfssbkg']\n",
- "2022-06-24 08:25:21,434 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0022.fits 16.8 M bytes (1 / 4 files) (0 / 67.2 M bytes)\n",
- "2022-06-24 08:25:25,337 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0124.asdf 10.0 K bytes (2 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:25:25,445 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0351.fits 50.4 M bytes (3 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:25:36,267 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0091.fits 11.5 K bytes (4 / 4 files) (67.2 M / 67.2 M bytes)\n",
- "2022-06-24 08:25:36,354 - stpipe.Image2Pipeline - INFO - Prefetch for AREA reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0022.fits'.\n",
- "2022-06-24 08:25:36,361 - stpipe.Image2Pipeline - INFO - Prefetch for CAMERA reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,361 - stpipe.Image2Pipeline - INFO - Prefetch for COLLIMATOR reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,362 - stpipe.Image2Pipeline - INFO - Prefetch for DFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,362 - stpipe.Image2Pipeline - INFO - Prefetch for DISPERSER reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,363 - stpipe.Image2Pipeline - INFO - Prefetch for DISTORTION reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0124.asdf'.\n",
- "2022-06-24 08:25:36,370 - stpipe.Image2Pipeline - INFO - Prefetch for DRIZPARS reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits'.\n",
- "2022-06-24 08:25:36,374 - stpipe.Image2Pipeline - INFO - Prefetch for FFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,374 - stpipe.Image2Pipeline - INFO - Prefetch for FILTEROFFSET reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_filteroffset_0004.asdf'.\n",
- "2022-06-24 08:25:36,381 - stpipe.Image2Pipeline - INFO - Prefetch for FLAT reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0351.fits'.\n",
- "2022-06-24 08:25:36,384 - stpipe.Image2Pipeline - INFO - Prefetch for FORE reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,385 - stpipe.Image2Pipeline - INFO - Prefetch for FPA reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,385 - stpipe.Image2Pipeline - INFO - Prefetch for IFUFORE reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,386 - stpipe.Image2Pipeline - INFO - Prefetch for IFUPOST reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,387 - stpipe.Image2Pipeline - INFO - Prefetch for IFUSLICER reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,387 - stpipe.Image2Pipeline - INFO - Prefetch for MSA reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,387 - stpipe.Image2Pipeline - INFO - Prefetch for OTE reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,388 - stpipe.Image2Pipeline - INFO - Prefetch for PHOTOM reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0091.fits'.\n",
- "2022-06-24 08:25:36,391 - stpipe.Image2Pipeline - INFO - Prefetch for REGIONS reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,392 - stpipe.Image2Pipeline - INFO - Prefetch for SFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,392 - stpipe.Image2Pipeline - INFO - Prefetch for SPECWCS reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,392 - stpipe.Image2Pipeline - INFO - Prefetch for WAVELENGTHRANGE reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,393 - stpipe.Image2Pipeline - INFO - Prefetch for WFSSBKG reference file is 'N/A'.\n",
- "2022-06-24 08:25:36,394 - stpipe.Image2Pipeline - INFO - Starting calwebb_image2 ...\n",
- "2022-06-24 08:25:36,394 - stpipe.Image2Pipeline - INFO - Processing product jw01069001001_01101_00004_nrca4\n",
- "2022-06-24 08:25:36,395 - stpipe.Image2Pipeline - INFO - Working on input jw01069001001_01101_00004_nrca4_rate.fits ...\n",
- "2022-06-24 08:25:36,540 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs running with args (,).\n",
- "2022-06-24 08:25:36,543 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}\n",
- "2022-06-24 08:25:44,077 - stpipe.Image2Pipeline.assign_wcs - INFO - Update S_REGION to POLYGON ICRS 80.597838818 -69.488771886 80.597576722 -69.471192287 80.547726372 -69.471147414 80.547402619 -69.488819764\n",
- "2022-06-24 08:25:44,078 - stpipe.Image2Pipeline.assign_wcs - INFO - assign_wcs updated S_REGION to POLYGON ICRS 80.597838818 -69.488771886 80.597576722 -69.471192287 80.547726372 -69.471147414 80.547402619 -69.488819764\n",
- "2022-06-24 08:25:44,078 - stpipe.Image2Pipeline.assign_wcs - INFO - COMPLETED assign_wcs\n",
- "2022-06-24 08:25:44,151 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs done\n",
- "2022-06-24 08:25:44,233 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field running with args (,).\n",
- "2022-06-24 08:25:44,235 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}\n",
- "2022-06-24 08:25:44,639 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field done\n",
- "2022-06-24 08:25:44,739 - stpipe.Image2Pipeline.photom - INFO - Step photom running with args (,).\n",
- "2022-06-24 08:25:44,741 - stpipe.Image2Pipeline.photom - INFO - Step photom parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}\n",
- "2022-06-24 08:25:44,798 - stpipe.Image2Pipeline.photom - INFO - Using photom reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0091.fits\n",
- "2022-06-24 08:25:44,798 - stpipe.Image2Pipeline.photom - INFO - Using area reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0022.fits\n",
- "2022-06-24 08:25:44,897 - stpipe.Image2Pipeline.photom - INFO - Using instrument: NIRCAM\n",
- "2022-06-24 08:25:44,897 - stpipe.Image2Pipeline.photom - INFO - detector: NRCA4\n",
- "2022-06-24 08:25:44,898 - stpipe.Image2Pipeline.photom - INFO - exp_type: NRC_IMAGE\n",
- "2022-06-24 08:25:44,898 - stpipe.Image2Pipeline.photom - INFO - filter: F150W\n",
- "2022-06-24 08:25:44,899 - stpipe.Image2Pipeline.photom - INFO - pupil: CLEAR\n",
- "2022-06-24 08:25:44,982 - stpipe.Image2Pipeline.photom - INFO - Pixel area map copied to output.\n",
- "2022-06-24 08:25:44,985 - stpipe.Image2Pipeline.photom - INFO - PHOTMJSR value: 2.43432\n",
- "2022-06-24 08:25:45,010 - stpipe.Image2Pipeline.photom - INFO - Step photom done\n",
- "2022-06-24 08:25:45,114 - stpipe.Image2Pipeline.resample - INFO - Step resample running with args (,).\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:25:45,117 - stpipe.Image2Pipeline.resample - INFO - Step resample parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': True, 'skip': False, 'suffix': 'i2d', 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}\n",
- "2022-06-24 08:25:45,171 - stpipe.Image2Pipeline.resample - INFO - Drizpars reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits\n",
- "2022-06-24 08:25:45,325 - stpipe.Image2Pipeline.resample - INFO - Resampling science data\n",
- "2022-06-24 08:25:47,308 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:25:48,767 - stpipe.Image2Pipeline.resample - INFO - Resampling var_rnoise\n",
- "2022-06-24 08:25:50,695 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:25:52,060 - stpipe.Image2Pipeline.resample - INFO - Resampling var_poisson\n",
- "2022-06-24 08:25:53,951 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:25:55,311 - stpipe.Image2Pipeline.resample - INFO - Resampling var_flat\n",
- "2022-06-24 08:25:57,189 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:25:58,628 - stpipe.Image2Pipeline.resample - INFO - Update S_REGION to POLYGON ICRS 80.597851129 -69.488835934 80.597864060 -69.471160823 80.547461516 -69.471149011 80.547407026 -69.488824113\n",
- "2022-06-24 08:26:05,938 - stpipe.Image2Pipeline.resample - INFO - Saved model in jw01069001001_01101_00004_nrca4_i2d.fits\n",
- "2022-06-24 08:26:05,938 - stpipe.Image2Pipeline.resample - INFO - Step resample done\n",
- "2022-06-24 08:26:05,939 - stpipe.Image2Pipeline - INFO - Finished processing product jw01069001001_01101_00004_nrca4\n",
- "2022-06-24 08:26:05,941 - stpipe.Image2Pipeline - INFO - ... ending calwebb_image2\n",
- "2022-06-24 08:26:05,942 - stpipe.Image2Pipeline - INFO - Results used CRDS context: jwst_0878.pmap\n",
- "2022-06-24 08:26:13,517 - stpipe.Image2Pipeline - INFO - Saved model in jw01069001001_01101_00004_nrca4_cal.fits\n",
- "2022-06-24 08:26:13,518 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline done\n",
- "2022-06-24 08:26:13,530 - stpipe.Image2Pipeline - INFO - Image2Pipeline instance created.\n",
- "2022-06-24 08:26:13,532 - stpipe.Image2Pipeline.bkg_subtract - INFO - BackgroundStep instance created.\n",
- "2022-06-24 08:26:13,534 - stpipe.Image2Pipeline.assign_wcs - INFO - AssignWcsStep instance created.\n",
- "2022-06-24 08:26:13,535 - stpipe.Image2Pipeline.flat_field - INFO - FlatFieldStep instance created.\n",
- "2022-06-24 08:26:13,537 - stpipe.Image2Pipeline.photom - INFO - PhotomStep instance created.\n",
- "2022-06-24 08:26:13,539 - stpipe.Image2Pipeline.resample - INFO - ResampleStep instance created.\n",
- "2022-06-24 08:26:13,640 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline running with args ('jw01069001001_01101_00004_nrca1_rate.fits',).\n",
- "2022-06-24 08:26:13,648 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': '/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/jw01069001001_01101_00004_nrca1.fits', 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_bsub': False, 'steps': {'bkg_subtract': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_combined_background': False, 'sigma': 3.0, 'maxiters': None, 'wfss_mmag_extract': None}, 'assign_wcs': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}, 'flat_field': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}, 'photom': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}, 'resample': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 
'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}}}\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "------------------------------------------------------------\n",
- "jw01069001001_01101_00004_nrca1_rate.fits\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:26:14,349 - stpipe.Image2Pipeline - INFO - Prefetching reference files for dataset: 'jw01069001001_01101_00004_nrca1_rate.fits' reftypes = ['area', 'camera', 'collimator', 'dflat', 'disperser', 'distortion', 'drizpars', 'fflat', 'filteroffset', 'flat', 'fore', 'fpa', 'ifufore', 'ifupost', 'ifuslicer', 'msa', 'ote', 'photom', 'regions', 'sflat', 'specwcs', 'wavelengthrange', 'wfssbkg']\n",
- "2022-06-24 08:26:14,386 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0017.fits 16.8 M bytes (1 / 4 files) (0 / 67.2 M bytes)\n",
- "2022-06-24 08:26:18,721 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0127.asdf 10.0 K bytes (2 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:26:18,814 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0347.fits 50.4 M bytes (3 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:26:30,557 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0089.fits 11.5 K bytes (4 / 4 files) (67.2 M / 67.2 M bytes)\n",
- "2022-06-24 08:26:30,657 - stpipe.Image2Pipeline - INFO - Prefetch for AREA reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0017.fits'.\n",
- "2022-06-24 08:26:30,664 - stpipe.Image2Pipeline - INFO - Prefetch for CAMERA reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,665 - stpipe.Image2Pipeline - INFO - Prefetch for COLLIMATOR reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,666 - stpipe.Image2Pipeline - INFO - Prefetch for DFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,666 - stpipe.Image2Pipeline - INFO - Prefetch for DISPERSER reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,667 - stpipe.Image2Pipeline - INFO - Prefetch for DISTORTION reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0127.asdf'.\n",
- "2022-06-24 08:26:30,675 - stpipe.Image2Pipeline - INFO - Prefetch for DRIZPARS reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits'.\n",
- "2022-06-24 08:26:30,681 - stpipe.Image2Pipeline - INFO - Prefetch for FFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,682 - stpipe.Image2Pipeline - INFO - Prefetch for FILTEROFFSET reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_filteroffset_0004.asdf'.\n",
- "2022-06-24 08:26:30,685 - stpipe.Image2Pipeline - INFO - Prefetch for FLAT reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0347.fits'.\n",
- "2022-06-24 08:26:30,689 - stpipe.Image2Pipeline - INFO - Prefetch for FORE reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,689 - stpipe.Image2Pipeline - INFO - Prefetch for FPA reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,690 - stpipe.Image2Pipeline - INFO - Prefetch for IFUFORE reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,690 - stpipe.Image2Pipeline - INFO - Prefetch for IFUPOST reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,691 - stpipe.Image2Pipeline - INFO - Prefetch for IFUSLICER reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,691 - stpipe.Image2Pipeline - INFO - Prefetch for MSA reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,692 - stpipe.Image2Pipeline - INFO - Prefetch for OTE reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,692 - stpipe.Image2Pipeline - INFO - Prefetch for PHOTOM reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0089.fits'.\n",
- "2022-06-24 08:26:30,695 - stpipe.Image2Pipeline - INFO - Prefetch for REGIONS reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,696 - stpipe.Image2Pipeline - INFO - Prefetch for SFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,696 - stpipe.Image2Pipeline - INFO - Prefetch for SPECWCS reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,697 - stpipe.Image2Pipeline - INFO - Prefetch for WAVELENGTHRANGE reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,697 - stpipe.Image2Pipeline - INFO - Prefetch for WFSSBKG reference file is 'N/A'.\n",
- "2022-06-24 08:26:30,698 - stpipe.Image2Pipeline - INFO - Starting calwebb_image2 ...\n",
- "2022-06-24 08:26:30,698 - stpipe.Image2Pipeline - INFO - Processing product jw01069001001_01101_00004_nrca1\n",
- "2022-06-24 08:26:30,699 - stpipe.Image2Pipeline - INFO - Working on input jw01069001001_01101_00004_nrca1_rate.fits ...\n",
- "2022-06-24 08:26:30,841 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs running with args (,).\n",
- "2022-06-24 08:26:30,843 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}\n",
- "2022-06-24 08:26:37,959 - stpipe.Image2Pipeline.assign_wcs - INFO - Update S_REGION to POLYGON ICRS 80.652645551 -69.507566286 80.651678369 -69.489759755 80.601431248 -69.489927193 80.601662505 -69.507780864\n",
- "2022-06-24 08:26:37,960 - stpipe.Image2Pipeline.assign_wcs - INFO - assign_wcs updated S_REGION to POLYGON ICRS 80.652645551 -69.507566286 80.651678369 -69.489759755 80.601431248 -69.489927193 80.601662505 -69.507780864\n",
- "2022-06-24 08:26:37,960 - stpipe.Image2Pipeline.assign_wcs - INFO - COMPLETED assign_wcs\n",
- "2022-06-24 08:26:38,038 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs done\n",
- "2022-06-24 08:26:38,125 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field running with args (,).\n",
- "2022-06-24 08:26:38,127 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}\n",
- "2022-06-24 08:26:38,545 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field done\n",
- "2022-06-24 08:26:38,647 - stpipe.Image2Pipeline.photom - INFO - Step photom running with args (,).\n",
- "2022-06-24 08:26:38,649 - stpipe.Image2Pipeline.photom - INFO - Step photom parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}\n",
- "2022-06-24 08:26:38,696 - stpipe.Image2Pipeline.photom - INFO - Using photom reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0089.fits\n",
- "2022-06-24 08:26:38,697 - stpipe.Image2Pipeline.photom - INFO - Using area reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0017.fits\n",
- "2022-06-24 08:26:38,797 - stpipe.Image2Pipeline.photom - INFO - Using instrument: NIRCAM\n",
- "2022-06-24 08:26:38,797 - stpipe.Image2Pipeline.photom - INFO - detector: NRCA1\n",
- "2022-06-24 08:26:38,798 - stpipe.Image2Pipeline.photom - INFO - exp_type: NRC_IMAGE\n",
- "2022-06-24 08:26:38,798 - stpipe.Image2Pipeline.photom - INFO - filter: F150W\n",
- "2022-06-24 08:26:38,799 - stpipe.Image2Pipeline.photom - INFO - pupil: CLEAR\n",
- "2022-06-24 08:26:39,125 - stpipe.Image2Pipeline.photom - INFO - Pixel area map copied to output.\n",
- "2022-06-24 08:26:39,128 - stpipe.Image2Pipeline.photom - INFO - PHOTMJSR value: 2.38591\n",
- "2022-06-24 08:26:39,153 - stpipe.Image2Pipeline.photom - INFO - Step photom done\n",
- "2022-06-24 08:26:39,257 - stpipe.Image2Pipeline.resample - INFO - Step resample running with args (,).\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:26:39,259 - stpipe.Image2Pipeline.resample - INFO - Step resample parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': True, 'skip': False, 'suffix': 'i2d', 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}\n",
- "2022-06-24 08:26:39,298 - stpipe.Image2Pipeline.resample - INFO - Drizpars reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits\n",
- "2022-06-24 08:26:39,466 - stpipe.Image2Pipeline.resample - INFO - Resampling science data\n",
- "2022-06-24 08:26:41,403 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:26:42,808 - stpipe.Image2Pipeline.resample - INFO - Resampling var_rnoise\n",
- "2022-06-24 08:26:44,865 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:26:46,313 - stpipe.Image2Pipeline.resample - INFO - Resampling var_poisson\n",
- "2022-06-24 08:26:48,331 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:26:49,767 - stpipe.Image2Pipeline.resample - INFO - Resampling var_flat\n",
- "2022-06-24 08:26:51,744 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:26:53,194 - stpipe.Image2Pipeline.resample - INFO - Update S_REGION to POLYGON ICRS 80.652658501 -69.507576813 80.652042292 -69.489724653 80.601090258 -69.489932942 80.601663997 -69.507785276\n",
- "2022-06-24 08:27:00,869 - stpipe.Image2Pipeline.resample - INFO - Saved model in jw01069001001_01101_00004_nrca1_i2d.fits\n",
- "2022-06-24 08:27:00,870 - stpipe.Image2Pipeline.resample - INFO - Step resample done\n",
- "2022-06-24 08:27:00,871 - stpipe.Image2Pipeline - INFO - Finished processing product jw01069001001_01101_00004_nrca1\n",
- "2022-06-24 08:27:00,872 - stpipe.Image2Pipeline - INFO - ... ending calwebb_image2\n",
- "2022-06-24 08:27:00,873 - stpipe.Image2Pipeline - INFO - Results used CRDS context: jwst_0878.pmap\n",
- "2022-06-24 08:27:08,817 - stpipe.Image2Pipeline - INFO - Saved model in jw01069001001_01101_00004_nrca1_cal.fits\n",
- "2022-06-24 08:27:08,818 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline done\n",
- "2022-06-24 08:27:08,831 - stpipe.Image2Pipeline - INFO - Image2Pipeline instance created.\n",
- "2022-06-24 08:27:08,833 - stpipe.Image2Pipeline.bkg_subtract - INFO - BackgroundStep instance created.\n",
- "2022-06-24 08:27:08,835 - stpipe.Image2Pipeline.assign_wcs - INFO - AssignWcsStep instance created.\n",
- "2022-06-24 08:27:08,836 - stpipe.Image2Pipeline.flat_field - INFO - FlatFieldStep instance created.\n",
- "2022-06-24 08:27:08,838 - stpipe.Image2Pipeline.photom - INFO - PhotomStep instance created.\n",
- "2022-06-24 08:27:08,840 - stpipe.Image2Pipeline.resample - INFO - ResampleStep instance created.\n",
- "2022-06-24 08:27:08,936 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline running with args ('jw01069001001_01101_00004_nrcb3_rate.fits',).\n",
- "2022-06-24 08:27:08,943 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': '/System/Volumes/Data/ifs/jwst/wit/miri/ofox/preimaging/jw01069001001_01101_00004_nrcb3.fits', 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_bsub': False, 'steps': {'bkg_subtract': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_combined_background': False, 'sigma': 3.0, 'maxiters': None, 'wfss_mmag_extract': None}, 'assign_wcs': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}, 'flat_field': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}, 'photom': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}, 'resample': {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 
'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}}}\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "------------------------------------------------------------\n",
- "jw01069001001_01101_00004_nrcb3_rate.fits\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:27:09,662 - stpipe.Image2Pipeline - INFO - Prefetching reference files for dataset: 'jw01069001001_01101_00004_nrcb3_rate.fits' reftypes = ['area', 'camera', 'collimator', 'dflat', 'disperser', 'distortion', 'drizpars', 'fflat', 'filteroffset', 'flat', 'fore', 'fpa', 'ifufore', 'ifupost', 'ifuslicer', 'msa', 'ote', 'photom', 'regions', 'sflat', 'specwcs', 'wavelengthrange', 'wfssbkg']\n",
- "2022-06-24 08:27:09,684 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0023.fits 16.8 M bytes (1 / 4 files) (0 / 67.2 M bytes)\n",
- "2022-06-24 08:27:13,434 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0128.asdf 10.0 K bytes (2 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:27:13,565 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0371.fits 50.4 M bytes (3 / 4 files) (16.8 M / 67.2 M bytes)\n",
- "2022-06-24 08:27:24,756 - CRDS - INFO - Fetching /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0087.fits 11.5 K bytes (4 / 4 files) (67.2 M / 67.2 M bytes)\n",
- "2022-06-24 08:27:24,857 - stpipe.Image2Pipeline - INFO - Prefetch for AREA reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0023.fits'.\n",
- "2022-06-24 08:27:24,864 - stpipe.Image2Pipeline - INFO - Prefetch for CAMERA reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,865 - stpipe.Image2Pipeline - INFO - Prefetch for COLLIMATOR reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,866 - stpipe.Image2Pipeline - INFO - Prefetch for DFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,867 - stpipe.Image2Pipeline - INFO - Prefetch for DISPERSER reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,868 - stpipe.Image2Pipeline - INFO - Prefetch for DISTORTION reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_distortion_0128.asdf'.\n",
- "2022-06-24 08:27:24,875 - stpipe.Image2Pipeline - INFO - Prefetch for DRIZPARS reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits'.\n",
- "2022-06-24 08:27:24,882 - stpipe.Image2Pipeline - INFO - Prefetch for FFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,883 - stpipe.Image2Pipeline - INFO - Prefetch for FILTEROFFSET reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_filteroffset_0003.asdf'.\n",
- "2022-06-24 08:27:24,891 - stpipe.Image2Pipeline - INFO - Prefetch for FLAT reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_flat_0371.fits'.\n",
- "2022-06-24 08:27:24,902 - stpipe.Image2Pipeline - INFO - Prefetch for FORE reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,903 - stpipe.Image2Pipeline - INFO - Prefetch for FPA reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,904 - stpipe.Image2Pipeline - INFO - Prefetch for IFUFORE reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,905 - stpipe.Image2Pipeline - INFO - Prefetch for IFUPOST reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,905 - stpipe.Image2Pipeline - INFO - Prefetch for IFUSLICER reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,906 - stpipe.Image2Pipeline - INFO - Prefetch for MSA reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,907 - stpipe.Image2Pipeline - INFO - Prefetch for OTE reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,907 - stpipe.Image2Pipeline - INFO - Prefetch for PHOTOM reference file is '/ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0087.fits'.\n",
- "2022-06-24 08:27:24,911 - stpipe.Image2Pipeline - INFO - Prefetch for REGIONS reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,911 - stpipe.Image2Pipeline - INFO - Prefetch for SFLAT reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,912 - stpipe.Image2Pipeline - INFO - Prefetch for SPECWCS reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,912 - stpipe.Image2Pipeline - INFO - Prefetch for WAVELENGTHRANGE reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,913 - stpipe.Image2Pipeline - INFO - Prefetch for WFSSBKG reference file is 'N/A'.\n",
- "2022-06-24 08:27:24,914 - stpipe.Image2Pipeline - INFO - Starting calwebb_image2 ...\n",
- "2022-06-24 08:27:24,914 - stpipe.Image2Pipeline - INFO - Processing product jw01069001001_01101_00004_nrcb3\n",
- "2022-06-24 08:27:24,915 - stpipe.Image2Pipeline - INFO - Working on input jw01069001001_01101_00004_nrcb3_rate.fits ...\n",
- "2022-06-24 08:27:25,068 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs running with args (,).\n",
- "2022-06-24 08:27:25,070 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'sip_approx': True, 'sip_max_pix_error': 0.25, 'sip_degree': None, 'sip_max_inv_pix_error': 0.25, 'sip_inv_degree': None, 'sip_npoints': 32, 'slit_y_low': -0.55, 'slit_y_high': 0.55}\n",
- "2022-06-24 08:27:33,006 - stpipe.Image2Pipeline.assign_wcs - INFO - Update S_REGION to POLYGON ICRS 80.513225501 -69.488091433 80.512544173 -69.470422822 80.462700910 -69.470595929 80.462798326 -69.488173190\n",
- "2022-06-24 08:27:33,006 - stpipe.Image2Pipeline.assign_wcs - INFO - assign_wcs updated S_REGION to POLYGON ICRS 80.513225501 -69.488091433 80.512544173 -69.470422822 80.462700910 -69.470595929 80.462798326 -69.488173190\n",
- "2022-06-24 08:27:33,007 - stpipe.Image2Pipeline.assign_wcs - INFO - COMPLETED assign_wcs\n",
- "2022-06-24 08:27:33,078 - stpipe.Image2Pipeline.assign_wcs - INFO - Step assign_wcs done\n",
- "2022-06-24 08:27:33,162 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field running with args (,).\n",
- "2022-06-24 08:27:33,164 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'save_interpolated_flat': False, 'user_supplied_flat': None, 'inverse': False}\n",
- "2022-06-24 08:27:33,573 - stpipe.Image2Pipeline.flat_field - INFO - Step flat_field done\n",
- "2022-06-24 08:27:33,678 - stpipe.Image2Pipeline.photom - INFO - Step photom running with args (,).\n",
- "2022-06-24 08:27:33,680 - stpipe.Image2Pipeline.photom - INFO - Step photom parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': False, 'skip': False, 'suffix': None, 'search_output_file': True, 'input_dir': '', 'inverse': False, 'source_type': None}\n",
- "2022-06-24 08:27:33,742 - stpipe.Image2Pipeline.photom - INFO - Using photom reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_photom_0087.fits\n",
- "2022-06-24 08:27:33,742 - stpipe.Image2Pipeline.photom - INFO - Using area reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_area_0023.fits\n",
- "2022-06-24 08:27:33,840 - stpipe.Image2Pipeline.photom - INFO - Using instrument: NIRCAM\n",
- "2022-06-24 08:27:33,841 - stpipe.Image2Pipeline.photom - INFO - detector: NRCB3\n",
- "2022-06-24 08:27:33,841 - stpipe.Image2Pipeline.photom - INFO - exp_type: NRC_IMAGE\n",
- "2022-06-24 08:27:33,842 - stpipe.Image2Pipeline.photom - INFO - filter: F150W\n",
- "2022-06-24 08:27:33,842 - stpipe.Image2Pipeline.photom - INFO - pupil: CLEAR\n",
- "2022-06-24 08:27:33,938 - stpipe.Image2Pipeline.photom - INFO - Pixel area map copied to output.\n",
- "2022-06-24 08:27:33,941 - stpipe.Image2Pipeline.photom - INFO - PHOTMJSR value: 2.44264\n",
- "2022-06-24 08:27:33,966 - stpipe.Image2Pipeline.photom - INFO - Step photom done\n",
- "2022-06-24 08:27:34,079 - stpipe.Image2Pipeline.resample - INFO - Step resample running with args (,).\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2022-06-24 08:27:34,082 - stpipe.Image2Pipeline.resample - INFO - Step resample parameters are: {'pre_hooks': [], 'post_hooks': [], 'output_file': None, 'output_dir': None, 'output_ext': '.fits', 'output_use_model': False, 'output_use_index': True, 'save_results': True, 'skip': False, 'suffix': 'i2d', 'search_output_file': True, 'input_dir': '', 'pixfrac': 1.0, 'kernel': 'square', 'fillval': 'INDEF', 'weight_type': 'ivm', 'output_shape': None, 'crpix': None, 'crval': None, 'rotation': None, 'pixel_scale_ratio': 1.0, 'pixel_scale': None, 'single': False, 'blendheaders': True, 'allowed_memory': None}\n",
- "2022-06-24 08:27:34,128 - stpipe.Image2Pipeline.resample - INFO - Drizpars reference file: /ifs/jwst/wit/miri/ofox/preimaging/cache/references/jwst/nircam/jwst_nircam_drizpars_0001.fits\n",
- "2022-06-24 08:27:34,297 - stpipe.Image2Pipeline.resample - INFO - Resampling science data\n",
- "2022-06-24 08:27:36,298 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:27:37,677 - stpipe.Image2Pipeline.resample - INFO - Resampling var_rnoise\n",
- "2022-06-24 08:27:39,661 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:27:41,079 - stpipe.Image2Pipeline.resample - INFO - Resampling var_poisson\n",
- "2022-06-24 08:27:43,049 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:27:44,407 - stpipe.Image2Pipeline.resample - INFO - Resampling var_flat\n",
- "2022-06-24 08:27:46,316 - stpipe.Image2Pipeline.resample - INFO - Drizzling (2048, 2048) --> (2058, 2058)\n",
- "2022-06-24 08:27:47,748 - stpipe.Image2Pipeline.resample - INFO - Update S_REGION to POLYGON ICRS 80.513238037 -69.488095728 80.512823431 -69.470423519 80.462430543 -69.470561520 80.462803608 -69.488233842\n",
- "2022-06-24 08:27:55,858 - stpipe.Image2Pipeline.resample - INFO - Saved model in jw01069001001_01101_00004_nrcb3_i2d.fits\n",
- "2022-06-24 08:27:55,859 - stpipe.Image2Pipeline.resample - INFO - Step resample done\n",
- "2022-06-24 08:27:55,860 - stpipe.Image2Pipeline - INFO - Finished processing product jw01069001001_01101_00004_nrcb3\n",
- "2022-06-24 08:27:55,862 - stpipe.Image2Pipeline - INFO - ... ending calwebb_image2\n",
- "2022-06-24 08:27:55,863 - stpipe.Image2Pipeline - INFO - Results used CRDS context: jwst_0878.pmap\n",
- "2022-06-24 08:28:04,750 - stpipe.Image2Pipeline - INFO - Saved model in jw01069001001_01101_00004_nrcb3_cal.fits\n",
- "2022-06-24 08:28:04,751 - stpipe.Image2Pipeline - INFO - Step Image2Pipeline done\n"
- ]
- }
- ],
- "source": [
- "for k, filename in enumerate(files):\n",
- " print('-'*60)\n",
- " print(filename)\n",
- "\n",
- " m = calwebb_image2.Image2Pipeline(output_file=outfile[k])\n",
- " m.run(filename)\n",
- " "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.8.10"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/notebooks/preimaging/preimaging_03_association.ipynb b/notebooks/preimaging/preimaging_03_association.ipynb
deleted file mode 100644
index 8bad3cadf..000000000
--- a/notebooks/preimaging/preimaging_03_association.ipynb
+++ /dev/null
@@ -1,540 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# NIRCam Preimaging: Pipeline Stage 3\n",
- "\n",
- "\n",
- "**Use case:** Running JWST Pipeline on NIRCam Preimaging Simulations.<br>\n",
- "**Data:** JWST simulated NIRCam data from MIRAGE; LMC.<br>\n",
- "**Tools:** jwst.<br>\n",
- "**Cross-instrument:** NIRCam.<br>\n",
- "**Documentation:** This notebook is part of STScI's larger [post-pipeline Data Analysis Tools Ecosystem](https://jwst-docs.stsci.edu/jwst-post-pipeline-data-analysis).<br>\n",
- "\n",
- "## Introduction\n",
- "\n",
- "In this notebook we show how to create the association file needed by calwebb_image3 and how to run that pipeline stage."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Setting things up"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "from glob import glob\n",
- "from astropy.io import fits\n",
- "from astropy import wcs\n",
- "from astropy.io import ascii\n",
- "from astropy.table import Table\n",
- "\n",
- "\n",
- "import numpy as np\n",
- "import json\n",
- "import yaml\n",
- "from sys import exit\n",
- "from shapely.geometry import Polygon \n",
- "import sys\n",
- "import shapely.ops as so\n",
- "\n",
- "from jwst.pipeline import Image3Pipeline"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "%matplotlib inline\n",
- "import matplotlib\n",
- "import matplotlib.pyplot as plt\n",
- "from matplotlib.patches import Polygon as leopolygon\n",
- "from matplotlib.collections import PatchCollection"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "path = './' # This is the working directory.\n",
- "\n",
- "os.chdir(path)\n",
- "print(os.getcwd())"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Change parameters here\n",
- "filtname = 'f150w'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Gather the F150W *cal.fits files back in the working directory\n",
- "cwd = os.getcwd()\n",
- "filter_pattern = os.path.join(cwd, '*_cal.fits') \n",
- "files = glob(filter_pattern)[:] \n",
- "namelist = []\n",
- "outlist = []"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### This next cell only needs to be executed for this version of the notebook, which runs the pipeline for a single pointing due to file size limitations. Comment it out when working with more than one pointing or set of exposures."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "files = files[:]+files[:]\n",
- "files"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# We convert corner pixel coordinates to world coordinates \n",
- "coord = []\n",
- "\n",
- "for file in files:\n",
- " hdulist = fits.open(file)\n",
- " \n",
- " # Parse the WCS keywords in the primary HDU\n",
- " w = wcs.WCS(hdulist[1].header, naxis=2)\n",
- " \n",
- " # Print out the \"name\" of the WCS, as defined in the FITS header\n",
- " # print(w.wcs.name)\n",
- "\n",
- "    # The four corner pixel coordinates of the detector.\n",
- "    # Note we've silently assumed a NAXIS=2 image here\n",
- "    pixcrd = np.array([[0, 0], [2048, 0], [2048, 2048], [0, 2048]], float)\n",
- " \n",
- " # Convert pixel coordinates to world coordinates\n",
- " world = w.wcs_pix2world(pixcrd, 0)\n",
- " #world = w.wcs_pix2world([[0, 0], [2048, 0], [2048, 2048], [0, 2048]], 0)\n",
- " coord.append(world)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# we read the input source catalogue\n",
- "pointsource_catalog = Table.read('./preimaging/pointsource_LMCcalField_F150W.cat', format='ascii')\n",
- "pointsource_ra = pointsource_catalog['x_or_RA']\n",
- "pointsource_dec = pointsource_catalog['y_or_Dec']\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# We plot the field of view of the simulated images (blue) on top of the source catalogue (red)\n",
- "\n",
- "fig = plt.figure(num=None, figsize=(12, 12), dpi=80, facecolor='w', edgecolor='k')\n",
- "ax = fig.add_subplot(111)\n",
- "\n",
- "ax.scatter(pointsource_ra, pointsource_dec, c='red', label='Catalog point sources', s=0.1)\n",
- "\n",
- "for m in range(len(files)):\n",
- " rect1 = matplotlib.patches.Polygon(coord[m], edgecolor='red', \n",
- " facecolor='blue', alpha=0.2)\n",
- " ax.add_patch(rect1)\n",
- "rect1.set_label('Detector')\n",
- "\n",
- "# These limits correspond to the LMC \n",
- "plt.xlim([80.2, 80.8]) # alpha\n",
- "plt.ylim([-69.4, -69.6]) # delta\n",
- "plt.xlabel('RA (degrees)')\n",
- "plt.ylabel('Dec (degrees)')\n",
- "plt.title('F150W Mosaic')\n",
- "#plt.axis('scaled')\n",
- "\n",
- "plt.legend()\n",
- "plt.show()\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "In order to create an association for stage 3 of the pipeline, we need to check that each image contains sources from the catalog and that the images overlap in such a way that they make a continuous mosaic.\n"
- ]
- },
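The overlap criterion used in the next cells can be sketched in isolation with shapely (already a dependency of this notebook). The two polygons below are toy unit squares, not real detector footprints:

```python
from shapely.geometry import Polygon

# Two unit squares offset by half a side: they share half their area.
det_a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
det_b = Polygon([(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)])

# Overlap expressed as a percentage of det_b's area, the same
# quantity computed per detector in the mosaic-building loop.
coverage = det_b.intersection(det_a).area
percent = round(100 * coverage / det_b.area)
print(percent)  # 50
```

The notebook applies the same percentage test to quadrilaterals built from each detector's four sky corners.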
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "current_detector_idx = None\n",
- "current_detector = None\n",
- "current_detector_coords = None\n",
- "\n",
- "number_detectors = len(files)\n",
- "detectors = files[:]\n",
- "detector_coords = coord[:]\n",
- "referencia = None\n",
- "\n",
- "finallist = []\n",
- "finalcoords = []\n",
- "\n",
- "for iteration in range(number_detectors):\n",
- " \n",
- "    # if this is the first iteration, select the first image directly\n",
- "    if iteration == 0:\n",
- "        current_detector_idx = 0  # select first image\n",
- "        covfile = [100]\n",
- "    else:\n",
- "        covfile = []\n",
- "\n",
- "        # calculate the overlap of each remaining image with the mosaic built so far\n",
- "        maxcoverage = 0.0\n",
- "        best_overlap_index = None\n",
- "        for m in range(len(detectors)):\n",
- "            detector = Polygon(detector_coords[m])\n",
- "            coverage = detector.intersection(referencia).area\n",
- "\n",
- "            covfile.append(round(100 * coverage / detector.area))\n",
- "\n",
- "            if covfile[m] >= maxcoverage:\n",
- "                maxcoverage = covfile[m]\n",
- "                best_overlap_index = m\n",
- "\n",
- "        if maxcoverage == 0.0:\n",
- "            print('Some images are excluded')\n",
- "            print('Number of excluded images: ', len(detectors))\n",
- "            break\n",
- "\n",
- "        # print overlap values\n",
- "        # print(covfile)\n",
- "        # print(best_overlap_index, covfile[best_overlap_index])\n",
- "\n",
- "        # select the file with the maximum overlap\n",
- "        current_detector_idx = best_overlap_index\n",
- "\n",
- " # remove the image from list and its coordinates\n",
- " current_detector = detectors.pop(current_detector_idx)\n",
- " current_detector_coords = detector_coords.pop(current_detector_idx)\n",
- " \n",
- " referencia = Polygon(current_detector_coords) if iteration == 0 else referencia.union(Polygon(current_detector_coords))\n",
- " \n",
- " print('removing current detector ' + current_detector)\n",
- " finallist.append(current_detector)\n",
- " finalcoords.append(current_detector_coords)\n",
- "\n",
- "\n",
- " # Plot \n",
- " # ----\n",
- " fig = plt.figure(num=None, figsize=(12, 12), dpi=80, facecolor='w', edgecolor='k')\n",
- " ax = fig.add_subplot(111)\n",
- " \n",
- " rect1 = leopolygon([\n",
- " (80.3125, -69.4950), \n",
- " (80.4958, -69.4392), \n",
- " (80.6625, -69.5022), \n",
- " (80.4792, -69.5564)], color='yellow')\n",
- " \n",
- " ax.add_patch(rect1)\n",
- " \n",
- " rect2 = leopolygon(current_detector_coords, edgecolor='navy', \n",
- " facecolor='navy', alpha=0.8)\n",
- " \n",
- "    new_shape = so.unary_union(referencia)  # unary_union supersedes the deprecated cascaded_union\n",
- " xs, ys = new_shape.exterior.xy\n",
- " \n",
- " ax.fill(xs, ys, alpha=0.8, fc='red', ec='red')\n",
- "\n",
- " ax.add_patch(rect2)\n",
- " \n",
- " plt.xlim([80.2, 80.8]) # RA\n",
- " plt.ylim([-69.4, -69.6]) # Dec\n",
- " \n",
- " plt.title('coverage = {}% \\n {}'.format(str(covfile[current_detector_idx]), \n",
- " current_detector), fontsize=15)\n",
- " plt.xlabel('RA (degrees)')\n",
- " plt.ylabel('Dec (degrees)')\n",
- " \n",
- " pdfname = 'figure_' + str(len(finallist)-1)\n",
- " # plt.savefig(pdfname) \n",
- " \n",
- " plt.show()\n",
- " \n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "finallist"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Here we go through the final list and calculate the overlap \n",
- "with the stellar catalogue. We keep the files whose overlap is at or above a threshold (50% in this version of the notebook; raise it to 99% to require near-complete coverage).\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "bestlist = []\n",
- "\n",
- "# these coordinates represent the extent of the LMC catalogue\n",
- "referencia = Polygon([(80.3125, -69.4950), \n",
- " (80.4958, -69.4392), \n",
- " (80.6625, -69.5022), \n",
- " (80.4792, -69.5564)])\n",
- "\n",
- "rejected = 0\n",
- "for current_detector, current_detector_coords in zip(finallist, finalcoords):\n",
- " detector = Polygon(current_detector_coords)\n",
- " \n",
- " coverage = detector.intersection(referencia).area\n",
- " coverage = round(100 * coverage / detector.area)\n",
- " \n",
- "    if coverage >= 50:  # use 99 to require near-complete overlap\n",
- " bestlist.append(current_detector)\n",
- " else:\n",
- " rejected += 1\n",
- " \n",
- "print('number of overlapping images = ', len(bestlist))\n",
- "print('number of rejected images = ', rejected)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Save Detector List to an Association (JSON file)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "*Developer Note:*\n",
- "\n",
- "There is a new way to implement the association introduced in version 0.16.0 of the JWST package. It might be worthwhile to explore."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The following link provides a complete description of the association keywords.\n",
- "\n",
- "https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level3_asn_technical.html#association-meta-keywords"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# here we print a json file with the science files in the correct order\n",
- "\n",
- "association = {}\n",
- " \n",
- "association[\"asn_id\"] = \"a3001\"\n",
- "association[\"asn_pool\"] = \"none\" \n",
- "association[\"asn_rule\"] = \"Asn_Image\" \n",
- "association[\"program\"] = \"1069\" \n",
- "association[\"asn_type\"] = \"image3\" \n",
- "association[\"constraints\"] = \"No constraints\" \n",
- "association[\"target\"] = \"none\" \n",
- "association[\"version_id\"] = None  # serialized as null in the JSON file\n",
- "association[\"degraded_status\"] = \"No known degraded exposures in association.\"\n",
- " \n",
- "products_dict = {}\n",
- "\n",
- "products_dict[\"name\"] = \"lmc-\" + filtname \n",
- "products_dict[\"members\"] = [] \n",
- "for current_detector in bestlist:\n",
- " obs_dict = {\n",
- " \"expname\": current_detector,\n",
- " \"exptype\": \"science\"\n",
- " }\n",
- " products_dict[\"members\"].append(obs_dict) \n",
- "\n",
- "association[\"products\"] = [products_dict] "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# save association file\n",
- "jsonfile = \"association-\" + filtname + \".json\"\n",
- "with open(jsonfile, 'w') as json_file:\n",
- " json.dump(association, json_file, indent=4)\n"
- ]
- },
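The saved association can be sanity-checked with the standard library alone. This is a minimal sketch with made-up member names, not the notebook's actual `*_cal.fits` files:

```python
import json

# A pared-down association dictionary with hypothetical exposure names
association = {
    "asn_type": "image3",
    "products": [{
        "name": "lmc-f150w",
        "members": [
            {"expname": "jw01069_a_cal.fits", "exptype": "science"},
            {"expname": "jw01069_b_cal.fits", "exptype": "science"},
        ],
    }],
}

# Round-trip through disk, as the cell above does for the real association
with open("association-demo.json", "w") as f:
    json.dump(association, f, indent=4)

with open("association-demo.json") as f:
    loaded = json.load(f)

members = [m["expname"] for m in loaded["products"][0]["members"]]
print(len(members))  # 2
```

Listing the member `expname` values after reloading is a quick way to confirm the science files made it into the product in the intended order.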
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Execute the third stage of the NIRCam pipeline"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Here we execute the third stage of the NIRCam pipeline. In this stage the pipeline creates a mosaic of the input images and extracts a source catalogue."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Execute pipeline\n",
- "# ----------------\n",
- "\n",
- "# run Image3 pipeline to get source catalog\n",
- "im3 = Image3Pipeline()\n",
- "\n",
- "# Options \n",
- "# -------\n",
- "\n",
- "#im3.source_catalog.kernel_fwhm = # kernel_fwhm\n",
- "#im3.source_catalog.kernel_xsize = # kernel_xsize\n",
- "#im3.source_catalog.kernel_ysize = # kernel_ysize\n",
- "#im3.source_catalog.npixels = # npixels\n",
- "#im3.tweakreg_catalog.snr_threshold = 2000.0\n",
- "#im3.resample.blendheaders = False\n",
- "im3.tweakreg.skip = False\n",
- "im3.tweakreg.enforce_user_order = True\n",
- "im3.tweakreg.fitgeometry = 'shift' # valid values are 'shift', 'rscale', 'general'\n",
- "im3.tweakreg.expand_refcat = True\n",
- "im3.tweakreg.minobj = 5\n",
- "im3.tweakreg.use2dhist = True\n",
- "#im3.tweakreg.snr_threshold = 2000.0\n",
- "im3.tweakreg.save_catalogs = True\n",
- "#im3.tweakreg.xoffset = 7.5\n",
- "#im3.tweakreg.yoffset = -7.2\n",
- "im3.tweakreg.searchrad = 0.1 \n",
- "im3.skymatch.skymethod = 'global' # valid values are: 'local', 'global', 'match', or 'global+match'\n",
- "im3.source_catalog.skip = False \n",
- "im3.output_file = 'lmc-f150w-02.fits'\n",
- "\n",
- "# Execute \n",
- "# -------\n",
- "\n",
- "im3.run(jsonfile)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Load data into [Imviz](https://jdaviz.readthedocs.io/en/latest/imviz/index.html)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from jdaviz import Imviz\n",
- "imviz = Imviz()\n",
- "imviz.show_in_sidecar()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "imviz.load_data('lmc-f150w-02_i2d.fits')\n",
- "viewer = imviz.default_viewer\n",
- "viewer.cuts = '95%'\n",
- "viewer.colormap_options\n",
- "viewer.set_colormap('viridis')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.8.10"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
diff --git a/notebooks/preimaging/requirements.txt b/notebooks/preimaging/requirements.txt
deleted file mode 100644
index 870852976..000000000
--- a/notebooks/preimaging/requirements.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-jdaviz >= 2.6.1
-astropy >= 5.1
-matplotlib >= 3.5.2
-pyyaml>=5.3.1
-healpy>=1.12.5
-shapely>=1.7.0
-mirage==2.2.1
-jwst==1.5.4
diff --git a/notebooks/preimaging/sync_data.py b/notebooks/preimaging/sync_data.py
deleted file mode 100644
index a6e4c629e..000000000
--- a/notebooks/preimaging/sync_data.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import os
-
-from mirage.reference_files import downloader
-
-download_path = os.environ['MIRAGE_DATA']
-downloader.download_reffiles(download_path, instrument='all', dark_type='linearized', skip_darks=False, skip_cosmic_rays=False, skip_psfs=False, skip_grism=False)
diff --git a/notebooks/redshift_crosscorr/redshift_with_crosscorr.ipynb b/notebooks/redshift_crosscorr/redshift_with_crosscorr.ipynb
deleted file mode 100644
index 8eee4057c..000000000
--- a/notebooks/redshift_crosscorr/redshift_with_crosscorr.ipynb
+++ /dev/null
@@ -1,723 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Measuring Galaxy Redshifts with Cross-correlation\n",
- "\n",
- "This notebook attempts to follow the workflow that uses IRAF tasks, described here http://tdc-www.harvard.edu/iraf/rvsao/xcsao/xcsao.proc.html\n",
- "\n",
- "Observed spectrum from LEGA-C: LEGA-C is a galaxy survey of about 3000 galaxies at z~0.8 and M* > 10^10 M_sun in the COSMOS field. The spectra sample the rest-frame optical between ~3000A and 5000A at high resolution and very high signal-to-noise ratio. More information about the survey can be found here: http://www.mpia.de/home/legac/\n",
- "\n",
- "Template from Pacifici et al. 2012.\n",
- "\n",
- "**Developer Notes:**\n",
- " - This workflow will be rendered in a few simple clicks in specviz\n",
- " - Preparing the template outside the correlation function allows for applications to different science cases\n",
- "\n",
- "Author: Ivo Busko"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "import numpy as np\n",
- "\n",
- "from scipy.signal.windows import tukey\n",
- "import astropy\n",
- "import astropy.units as u\n",
- "from astropy.table import QTable\n",
- "from astropy.nddata import StdDevUncertainty\n",
- "from astropy.modeling.polynomial import Chebyshev1D\n",
- "from astropy import constants as const\n",
- "from astropy.io import fits, ascii\n",
- "from astropy.wcs import WCS\n",
- "\n",
- "import specutils\n",
- "from specutils.fitting import continuum, find_lines_threshold, find_lines_derivative\n",
- "from specutils import Spectrum1D\n",
- "from specutils.manipulation import FluxConservingResampler, SplineInterpolatedResampler, LinearInterpolatedResampler\n",
- "from specutils.analysis import correlation\n",
- "from specutils import SpectralRegion\n",
- "from specutils.manipulation import extract_region\n",
- "from specutils.manipulation import linear_exciser\n",
- "from specutils.manipulation import noise_region_uncertainty\n",
- "from specutils.manipulation import gaussian_smooth, convolution_smooth\n",
- "\n",
- "# Check versions\n",
- "print(\"Numpy: \",np.__version__)\n",
- "print(\"Astropy: \",astropy.__version__)\n",
- "print(\"Specutils: \",specutils.__version__)\n",
- "print(\"\")\n",
- "print(\"They should be:\")\n",
- "print(\"Numpy: 1.18.1\")\n",
- "print(\"Astropy: 4.0\")\n",
- "print(\"Specutils: 1.0\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Matplotlib setup for plotting\n",
- "There are two versions\n",
- " - `notebook` -- gives interactive plots, but makes the overall notebook a bit harder to scroll\n",
- " - `inline` -- gives non-interactive plots for better overall scrolling"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import matplotlib.pyplot as plt\n",
- "\n",
- "# Use this version for non-interactive plots (easier scrolling of the notebook)\n",
- "%matplotlib inline\n",
- "\n",
- "# Use this version if you want interactive plots\n",
- "# %matplotlib notebook\n",
- "\n",
- "# These gymnastics are needed to make the sizes of the figures\n",
- "# be the same in both the inline and notebook versions\n",
- "%config InlineBackend.print_figure_kwargs = {'bbox_inches': None}\n",
- "\n",
- "plt.rcParams['savefig.dpi'] = 80\n",
- "plt.rcParams['figure.dpi'] = 80"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Define data files:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Files are on box\n",
- "\n",
- "# Observation and weight.\n",
- "file1d = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/redshift_crosscorr/legac_M1_v3.7_spec1d_130902.fits'\n",
- "file1dwht = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/redshift_crosscorr/legac_M1_v3.7_wht1d_130902.fits'\n",
- "# Template.\n",
- "template_file = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/redshift_crosscorr/00006.dat'\n",
- "\n",
- "# Plot limits\n",
- "sp_xlim = [3000., 9000.]"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Read observation and template:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Observation\n",
- "hdu1d = fits.open(file1d)\n",
- "hdu1dwht = fits.open(file1dwht)\n",
- "\n",
- "flux = hdu1d[0].data\n",
- "wht = hdu1dwht[0].data\n",
- "unc = 1./ np.sqrt(wht)\n",
- "wave = WCS(hdu1d[0]).pixel_to_world(np.arange(len(hdu1d[0].data)), 0)[0]\n",
- "\n",
- "spec_unit = u.Unit('10^-19 erg s^-1 cm^-2 angstrom^-1')\n",
- "dataspec = QTable([wave*u.angstrom, flux*spec_unit, wht, unc*spec_unit], \n",
- " names=('wavelength','flux','weight','uncertainty'))\n",
- "dataspec_sub = dataspec[dataspec['weight']>0.]\n",
- "dataspec_sub"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Now make it into a Spectrum1D instance.\n",
- "obs = Spectrum1D(spectral_axis=dataspec_sub['wavelength'], \n",
- " flux=dataspec_sub['flux'], \n",
- " uncertainty=StdDevUncertainty(dataspec_sub['uncertainty']))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Template\n",
- "template = ascii.read(template_file)\n",
- "factor = 2.E-5 * obs.flux.unit # normalize template to a sensible range\n",
- "template = Spectrum1D(spectral_axis=template['col1']*u.AA, \n",
- " flux=template['col2']*factor)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Note that in this and in subsequent plots, we are showing just the wavelength range of\n",
- "# interest. The template covers a significantly wider range.\n",
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.xlim(sp_xlim)\n",
- "plt.plot(obs.wavelength, obs.flux, linewidth=0.5, label='obs')\n",
- "plt.plot(template.wavelength, template.flux, linewidth=0.5, color='r', label='template')\n",
- "plt.legend()\n",
- "plt.title('Input data')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Preprocess Spectra"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Subtract continuum"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "continuum_model = continuum.fit_generic_continuum(obs) \n",
- "p_obs = obs - continuum_model(obs.wavelength)\n",
- "continuum_model = continuum.fit_generic_continuum(template, model=Chebyshev1D(5)) \n",
- "p_template = template - continuum_model(template.wavelength)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.xlim(sp_xlim)\n",
- "plt.plot(p_obs.wavelength, p_obs.flux, linewidth=0.5, label='obs')\n",
- "plt.plot(p_template.wavelength, p_template.flux, linewidth=0.5, color='r', label='template')\n",
- "plt.legend()\n",
- "plt.title('After continuum subtraction')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Smooth observed spectrum"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The IRAF task XCORR works in Fourier space. In there, it applies a cosine bell filter (raised-cosine filter) to the observed spectrum, before multiplying together the two Fourier transforms. Here, we are working in data space, thus we emulate the filter operation by convolving the observed spectrum with a windowed sinc smoothing function."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Developer Notes:**\n",
- "\n",
- "* We should implement this window function in specutils so the user doesn't have to: https://github.com/astropy/specutils/issues/636\n",
- "* We should implement a fourier-space version of the xcorr as well as the current freq/wave-space version"
- ]
- },
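The key property of the windowed-sinc kernel built below is unity gain: a constant signal passes through it unchanged. This can be checked in a standalone numpy sketch, independent of the notebook data:

```python
import numpy as np

fc = 0.05  # cutoff frequency, as a fraction of the sampling rate
b = 0.49   # transition bandwidth, as a fraction of the sampling rate

N = int(np.ceil(4 / b))
if N % 2 == 0:  # the kernel length must be odd to have a central tap
    N += 1
n = np.arange(N)

# Sinc filter tapered by a Blackman window, normalized to unity gain
filt = np.sinc(2 * fc * (n - (N - 1) / 2))
filt *= 0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1)) \
        + 0.08 * np.cos(4 * np.pi * n / (N - 1))
filt /= filt.sum()

# Unity gain check: convolving a constant signal leaves it unchanged
sig = np.full(50, 3.0)
smoothed = np.convolve(sig, filt, mode='valid')
print(np.allclose(smoothed, 3.0))  # True
```

Because the kernel sums to one, smoothing damps high-frequency structure without rescaling the continuum-subtracted flux.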
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "# Smooth data with sinc kernel\n",
- "fc = 0.25 # Cutoff frequency as a fraction of the sampling rate (in (0, 0.5)).\n",
- "b = 0.49 # Transition band, as a fraction of the sampling rate (in (0, 0.5)).\n",
- "\n",
- "# The IRAF task uses the above values. Here, we try\n",
- "# a much lower cutoff frequency to really dampen the\n",
- "# high frequency structure in the observed spectrum.\n",
- "fc = 0.05 \n",
- "\n",
- "N = int(np.ceil((4 / b)))\n",
- "if not N % 2: # N must be odd\n",
- " N += 1\n",
- "n = np.arange(N)\n",
- " \n",
- "# Compute sinc filter and Blackman window. Multiply filter \n",
- "# by window and normalize to get unity gain.\n",
- "filt = np.sinc(2 * fc * (n - (N - 1) / 2))\n",
- "w = 0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1)) + \\\n",
- " 0.08 * np.cos(4 * np.pi * n / (N - 1))\n",
- "filt *= w\n",
- "filt /= np.sum(filt)\n",
- "\n",
- "# Smooth\n",
- "p_obs_smoothed = convolution_smooth(p_obs, filt)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.xlim(sp_xlim)\n",
- "plt.plot(p_obs_smoothed.wavelength, p_obs_smoothed.flux, linewidth=0.5, label='obs')\n",
- "plt.plot(p_template.wavelength, p_template.flux, linewidth=0.5, color='r', label='template')\n",
- "plt.legend()\n",
- "plt.title('After smoothing')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Cross correlate"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": false
- },
- "outputs": [],
- "source": [
- "# Correlation. \n",
- "#\n",
- "# With no additional specifications, the entire template and the entire spectrum \n",
- "# are included in the correlation computation. In general this incurs a \n",
- "# significant increase in execution time. It is advisable to cut the template\n",
- "# down to the useful spectral region.\n",
- "\n",
- "corr, lag = correlation.template_correlate(p_obs_smoothed, p_template)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.plot(lag, corr, linewidth=0.5)\n",
- "plt.xlim(0,300000)\n",
- "plt.xlabel(lag.unit)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Redshift based on maximum\n",
- "index_peak = np.where(corr == np.amax(corr))[0][0]\n",
- "v = lag[index_peak]\n",
- "z = v / const.c.to('km/s')\n",
- "print(\"Peak maximum at: \", v)\n",
- "print(\"Redshift from peak maximum: \", z)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Fit correlation peak"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Redshift based on parabolic fit to the maximum\n",
- "\n",
- "n = 8 # points to the left or right of correlation maximum\n",
- "\n",
- "peak_lags = lag[index_peak-n:index_peak+n+1].value\n",
- "peak_vals = corr[index_peak-n:index_peak+n+1].value\n",
- "p = np.polyfit(peak_lags, peak_vals, deg=2)\n",
- "roots = np.roots(p)\n",
- "\n",
- "v_fit = np.mean(roots) * u.km/u.s # maximum lies at mid point between roots\n",
- "z = v_fit / const.c.to('km/s')\n",
- "\n",
- "print(\"Parabolic fit with maximum at: \", v_fit)\n",
- "print(\"Redshift from parabolic fit: \", z)"
- ]
- },
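The "mid point between roots" step above works because the vertex of a parabola y = p0 x^2 + p1 x + p2 lies at x = -p1 / (2 p0), which is exactly the mean of its two roots. A quick numpy check with made-up coefficients (not the notebook's fitted values):

```python
import numpy as np

# y = -2x^2 + 8x - 6 has roots at x = 1 and x = 3
p = np.array([-2.0, 8.0, -6.0])

vertex = -p[1] / (2 * p[0])        # analytic location of the maximum
roots_mean = np.mean(np.roots(p))  # mid point between the two roots

print(vertex)                        # 2.0
print(round(float(roots_mean), 6))  # 2.0
```

Note that this assumes the fitted peak is well-behaved, i.e. the quadratic opens downward and has two real roots; a noisy or asymmetric correlation peak can violate that.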
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Visual check"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": false
- },
- "outputs": [],
- "source": [
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "# plt.xlim(sp_xlim)\n",
- "plt.scatter(peak_lags, peak_vals, label='data')\n",
- "plt.plot(peak_lags, np.polyval(p, peak_lags), linewidth=0.5, label='fit')\n",
- "plt.xlabel(lag.unit)\n",
- "plt.legend()\n",
- "plt.title('Fit to correlation peak')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "z_ref = 0.758 # \"true\" redshift, corresponding to 227242.6 km/s\n",
- "\n",
- "template_z = Spectrum1D(spectral_axis=template.wavelength * (1.+z), flux=template.flux)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": false
- },
- "outputs": [],
- "source": [
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.xlim(sp_xlim)\n",
- "plt.plot(obs.wavelength, obs.flux, linewidth=0.5, label='obs')\n",
- "plt.plot(template_z.wavelength, template_z.flux, linewidth=0.5, color='r', label='template')\n",
- "plt.legend()\n",
- "plt.title('Redshifted original template and original observed spectrum')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "z_err = (z - z_ref) / z_ref * 100.\n",
- "print(\"Error in the derived redshift: \", z_err, \"%\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Case with lower resolution observed spectrum"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Read observation and template:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Observation\n",
- "hdu1d = fits.open(file1d)\n",
- "hdu1dwht = fits.open(file1dwht)\n",
- "\n",
- "flux = hdu1d[0].data\n",
- "wht = hdu1dwht[0].data\n",
- "unc = 1./ np.sqrt(wht)\n",
- "wave = np.arange(flux.shape[0])*hdu1d[0].header['CD1_1'] + hdu1d[0].header['CRVAL1']\n",
- "\n",
- "spec_unit = u.Unit('10^-19 erg s^-1 cm^-2 angstrom^-1')\n",
- "dataspec = QTable([wave*u.angstrom, flux*spec_unit, wht, unc*spec_unit], \n",
- " names=('wavelength','flux','weight','uncertainty'))\n",
- "dataspec_sub = dataspec[dataspec['weight']>0.]\n",
- "dataspec_sub"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Now make it into a Spectrum1D instance.\n",
- "obs_orig = Spectrum1D(spectral_axis=dataspec_sub['wavelength'], \n",
- " flux=dataspec_sub['flux'], \n",
- " uncertainty=StdDevUncertainty(dataspec_sub['uncertainty']))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Change resolution of spectrum\n",
- "obs = gaussian_smooth(obs_orig, stddev=15)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Template\n",
- "template = ascii.read(template_file)\n",
- "factor = 2.E-5 * obs.flux.unit # normalize template to a sensible range\n",
- "template = Spectrum1D(spectral_axis=template['col1']*u.AA, \n",
- " flux=template['col2']*factor)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Note that in this and in subsequent plots, we are showing just the wavelength range of\n",
- "# interest. The template covers a significantly wider range.\n",
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.xlim(sp_xlim)\n",
- "plt.plot(obs.wavelength, obs.flux, linewidth=0.5, label='obs')\n",
- "plt.plot(template.wavelength, template.flux, linewidth=0.5, color='r', label='template')\n",
- "plt.legend()\n",
- "plt.title('Input data')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Preprocess Spectra\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Subtract continuum"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "continuum_model = continuum.fit_generic_continuum(obs) \n",
- "p_obs = obs - continuum_model(obs.wavelength)\n",
- "continuum_model = continuum.fit_generic_continuum(template, model=Chebyshev1D(5)) \n",
- "p_template = template - continuum_model(template.wavelength)\n",
- "\n",
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.xlim(sp_xlim)\n",
- "plt.plot(p_obs.wavelength, p_obs.flux, linewidth=0.5, label='obs')\n",
- "plt.plot(p_template.wavelength, p_template.flux, linewidth=0.5, color='r', label='template')\n",
- "plt.legend()\n",
- "plt.title('After continuum subtraction')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Smooth observed spectrum\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Smooth data with sinc kernel\n",
- "fc = 0.25 # Cutoff frequency as a fraction of the sampling rate (in (0, 0.5)).\n",
- "b = 0.49 # Transition band, as a fraction of the sampling rate (in (0, 0.5)).\n",
- "\n",
- "# The IRAF task uses the above values. Here, we try\n",
- "# a much lower cutoff frequency to really dampen the\n",
- "# high frequency structure in the observed spectrum.\n",
- "fc = 0.05 \n",
- "\n",
- "N = int(np.ceil((4 / b)))\n",
- "if not N % 2: # N must be odd\n",
- " N += 1\n",
- "n = np.arange(N)\n",
- " \n",
- "# Compute sinc filter and Blackman window. Multiply filter \n",
- "# by window and normalize to get unity gain.\n",
- "filt = np.sinc(2 * fc * (n - (N - 1) / 2))\n",
- "w = 0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1)) + \\\n",
- " 0.08 * np.cos(4 * np.pi * n / (N - 1))\n",
- "filt *= w\n",
- "filt /= np.sum(filt)\n",
- "\n",
- "# Smooth\n",
- "p_obs_smoothed = convolution_smooth(p_obs, filt)\n",
- "\n",
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.xlim(sp_xlim)\n",
- "plt.plot(p_obs_smoothed.wavelength, p_obs_smoothed.flux, linewidth=0.5, label='obs')\n",
- "plt.plot(p_template.wavelength, p_template.flux, linewidth=0.5, color='r', label='template')\n",
- "plt.legend()\n",
- "plt.title('After smoothing')"
- ]
- },
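As a quick sanity check on the windowed-sinc design above, we can rebuild the same kernel (fc = 0.05, b = 0.49, as in the cell) in a standalone sketch and verify that it has unity gain at DC and strong attenuation near the Nyquist frequency. This is illustrative only and not part of the original analysis:

```python
import numpy as np

# Windowed-sinc low-pass kernel, rebuilt with the same parameters as above
fc, b = 0.05, 0.49
N = int(np.ceil(4 / b))
if not N % 2:          # kernel length must be odd
    N += 1
n = np.arange(N)

filt = np.sinc(2 * fc * (n - (N - 1) / 2))
w = 0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1)) + \
    0.08 * np.cos(4 * np.pi * n / (N - 1))
filt *= w
filt /= np.sum(filt)   # normalize to unity gain at zero frequency

# Frequency response: ~1 at DC, heavily attenuated near Nyquist
H = np.abs(np.fft.rfft(filt, 1024))
```

Because the kernel sums to one, the smoothed spectrum keeps its overall flux scale while high-frequency structure is suppressed.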
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Cross correlate"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "corr, lag = correlation.template_correlate(p_obs_smoothed, p_template)\n",
- "\n",
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.plot(lag, corr, linewidth=0.5)\n",
- "plt.xlim(0,300000)\n",
- "plt.xlabel(lag.unit)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Redshift based on maximum\n",
- "index_peak = np.where(corr == np.amax(corr))[0][0]\n",
- "v = lag[index_peak]\n",
- "z = v / const.c.to('km/s')\n",
- "print(\"Peak maximum at: \", v)\n",
- "print(\"Redshift from peak maximum: \", z)\n",
- "\n",
    - "# Redshift based on parabolic fit to maximum\n",
- "\n",
- "n = 8 # points to the left or right of correlation maximum\n",
- "\n",
- "peak_lags = lag[index_peak-n:index_peak+n+1].value\n",
- "peak_vals = corr[index_peak-n:index_peak+n+1].value\n",
- "p = np.polyfit(peak_lags, peak_vals, deg=2)\n",
- "roots = np.roots(p)\n",
- "\n",
- "v_fit = np.mean(roots) * u.km/u.s # maximum lies at mid point between roots\n",
- "z = v_fit / const.c.to('km/s')\n",
- "\n",
- "print(\"\")\n",
- "print(\"Parabolic fit with maximum at: \", v_fit)\n",
- "print(\"Redshift from parabolic fit: \", z)"
- ]
- },
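The mean-of-the-roots trick above works because the maximum of a downward-opening parabola lies midway between its two real roots, which is equivalent to evaluating the vertex $-b/(2a)$ directly. A self-contained check on synthetic correlation values (the numbers here are illustrative, not from the data):

```python
import numpy as np

# Synthetic correlation peak: a downward parabola with its vertex at lag = 42.0
lags = np.linspace(30.0, 54.0, 17)
vals = 5.0 - (lags - 42.0) ** 2

p = np.polyfit(lags, vals, deg=2)

# Vertex of a*x^2 + b*x + c is at -b/(2a); for a downward parabola with two
# real roots, this coincides with the midpoint of those roots.
v_vertex = -p[1] / (2.0 * p[0])
v_roots = np.mean(np.roots(p).real)
```

Both estimates agree, so either form can be used to refine the peak lag beyond the sampling of the correlation function.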
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "z_ref = 0.758 # \"true\" redshift, corresponding to 227242.6 km/s\n",
- "\n",
- "template_z = Spectrum1D(spectral_axis=template.wavelength * (1.+z), flux=template.flux)\n",
- "\n",
- "plt.figure()\n",
- "plt.gcf().set_size_inches((8.,4.))\n",
- "plt.xlim(sp_xlim)\n",
- "plt.plot(obs.wavelength, obs.flux, linewidth=0.5, label='obs')\n",
- "plt.plot(template_z.wavelength, template_z.flux, linewidth=0.5, color='r', label='template')\n",
- "plt.legend()\n",
- "plt.title('Redshifted original template and original observed spectrum')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "z_err = (z - z_ref) / z_ref * 100.\n",
- "print(\"Error in the derived redshift: \", z_err, \"%\")"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.3"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/notebooks/redshift_crosscorr/requirements.txt b/notebooks/redshift_crosscorr/requirements.txt
deleted file mode 100644
index 1e8c2af27..000000000
--- a/notebooks/redshift_crosscorr/requirements.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-numpy==1.18.2
-astropy>=4.1
-matplotlib==3.2.1
-scipy==1.4.1
-specutils==1.0
diff --git a/notebooks/soss-transit-spectroscopy/.gitignore b/notebooks/soss-transit-spectroscopy/.gitignore
deleted file mode 100644
index 9b6d896d5..000000000
--- a/notebooks/soss-transit-spectroscopy/.gitignore
+++ /dev/null
@@ -1,3 +0,0 @@
-calibrated_data_hatp1_transit.fits
-hp1_model.txt
-
diff --git a/notebooks/soss-transit-spectroscopy/HAT-P-1b-PART3-data-analysis.ipynb b/notebooks/soss-transit-spectroscopy/HAT-P-1b-PART3-data-analysis.ipynb
deleted file mode 100644
index 296916119..000000000
--- a/notebooks/soss-transit-spectroscopy/HAT-P-1b-PART3-data-analysis.ipynb
+++ /dev/null
@@ -1,1683 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# SOSS Transit Spectroscopy"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Use case:** Primary transit of an exoplanet with SOSS.
\n",
- "**Data:** JWST simulated data with awesimsoss.
\n",
- "**Tools:** awesimsoss, jwst, astropy.
\n",
    - "**Cross-instrument:** <br>\n",
\n",
- "**Documentation:** This notebook is part of a STScI's larger [post-pipeline Data Analysis Tools Ecosystem](https://jwst-docs.stsci.edu/jwst-post-pipeline-data-analysis).
"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Table of contents\n",
- "1. [Introduction](#intro)\n",
- "2. [Spectral extraction](#extraction)\n",
- " 1. [Tracing the orders](#tracing)\n",
- " 2. [Extracting the spectra](#extracting)\n",
- " 3. [Time-stamps and wavelength solution](#timenwavelength)\n",
- "3. [Fitting & analyzing white-light lightcurves](#white-light)\n",
- " 1. [Studying the residuals](#wl-residuals)\n",
- "4. [Fitting & analyzing the wavelength-dependent lightcurves](#wavelength)\n",
- "5. [Studying the transit spectrum of HAT-P-1b](#transit-spectra)\n",
- "\n",
- "1.-Introduction \n",
- "------------------\n",
- "\n",
- "This notebook is part of a series of notebooks that are being prepared by STScI in order to showcase how to simulate, process and analyze JWST observations for a wide range of science cases. Here, we touch on the transiting exoplanet observations science case and, in particular, on spectrophotometric observations of the *primary transit of an exoplanet*. During primary transit, the observed flux decrease due to the planet blocking light from the stellar host is proportional to $\\delta = (R_p/R_*)^2$ --- a quantity known as the *transit depth*, where $R_p$ is the planetary radius and $R_*$ is the stellar radius. Interestingly, the transit depth is wavelength dependent; i.e., $\\delta \\equiv \\delta (\\lambda)$. This is because opacity sources on the planetary atmosphere absorb different amounts of light at different wavelengths and, therefore, the observed planetary radius $R_p$ --- and thus occulted area during transit, $\\delta$ --- is wavelength-dependent (see, e.g., [Kreidberg 2018](https://ui.adsabs.harvard.edu/abs/2018haex.bookE.100K/abstract) for a review). This technique, referred to as *transmission spectroscopy* in what follows, aims at obtaining those transit depths as a function of wavelength in JWST observations through the study of ultra-precise transit lightcurves at different wavelengths.\n",
- "\n",
- "Simulated data for NIRISS/SOSS observations targeting a transit of HAT-P-1b have been generated with the help of the [`awesimsoss` simulator](https://github.com/spacetelescope/awesimsoss/) by the NIRISS team. These simulations, in turn, have been calibrated using the [JWST pipeline](https://jwst-pipeline.readthedocs.io/). HAT-P-1b is a Guaranteed Time Observations (GTO) target of the [NIRISS Exploration of the Atmospheric diversity of Transiting exoplanets (NEAT) program](https://jwst-docs.stsci.edu/jwst-opportunities-and-policies/jwst-cycle-1-guaranteed-time-observations-call-for-proposals/jwst-gto-observation-specifications/jwst-gto-niriss-observations-table), which will be observed using the [NIRISS/SOSS](https://jwst-docs.stsci.edu/near-infrared-imager-and-slitless-spectrograph/niriss-observing-modes/niriss-single-object-slitless-spectroscopy) instrument onboard JWST. This target is also used as an [example science program used in the JWST documentation](https://jwst-docs.stsci.edu/near-infrared-imager-and-slitless-spectrograph/niriss-example-science-programs/niriss-soss-time-series-observations-of-hat-p-1). If you are not familiar with transiting exoplanet observations and/or with how JWST will observe planetary transits, we encourage you to read through that example science program in order to familiarize yourself with the terminology.\n",
- "\n",
- "In this notebook, we analyze the 2D spectral products of the JWST pipeline in order to analyze the transmission spectrum of this exoplanet. In particular, the data we take a look at here has been processed by the JWST pipeline up to the `assign_wcs` step of the pipeline's [Stage 2 processing](https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_spec2.html#calwebb-spec2). The simulations didn't include flat-fielding, so we don't apply the flat-field step of the pipeline to them. Although the JWST Pipeline will eventually be able to produce white-light lightcurves, as well as extracted 1D spectra, as of the day of writing of this notebook, the JWST calibration pipeline does not have a spectral extraction algorithm that can deal with the complicated structure of NIRISS/SOSS data. In particular, the `SUBSTRIP256` subarray has data from at least two NIRISS/SOSS orders, which overlap at the reddest wavelengths. An algorithm to properly extract this data is in-the-making, but in this notebook we will be performing our own tracing and extraction of the spectra in order to showcase how to analyze the JWST pipeline products. After extracting the spectrum, we will generate the white-light lightcurves of each order, fit them and compare the extracted parameters to the ones in the literature. Then, we will repeat the procedures for the wavelength-dependent lightcurves, with which we will get the transmission spectrum of the exoplanet. \n",
- "\n",
- "Before we begin, let's import some libraries:"
- ]
- },
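As a concrete illustration of the transit-depth formula $\delta = (R_p/R_*)^2$, the sketch below plugs in round, hypothetical radius values --- these are not HAT-P-1b's measured parameters --- just to show the order of magnitude of the flux drop:

```python
# Hypothetical radii, for illustration only (not HAT-P-1b's measured values)
R_JUP_PER_R_SUN = 9.95           # approximate solar-to-Jovian radius ratio

R_p = 1.3                        # planetary radius [Jupiter radii]
R_star = 1.2 * R_JUP_PER_R_SUN   # stellar radius [Jupiter radii]

# Fractional flux drop during transit
depth = (R_p / R_star) ** 2
print(f"Transit depth: {depth * 100:.2f} %")
```

A hot Jupiter around a roughly Sun-sized star thus produces a ~1% dip, which is why resolving wavelength-dependent changes in $\delta(\lambda)$ demands lightcurve precision at the level of hundreds of parts per million.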
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# General internal libraries:\n",
- "import os\n",
- "import numpy as np\n",
- "import pickle\n",
- "import matplotlib.pyplot as plt\n",
- "from numpy.polynomial import chebyshev\n",
- "from scipy.ndimage import gaussian_filter1d\n",
- "from mpl_toolkits.axes_grid1.inset_locator import mark_inset\n",
- "from scipy.interpolate import interp1d\n",
- "\n",
- "# Libraries for plotting, reading data:\n",
- "import seaborn as sns \n",
- "sns.set_style(\"ticks\") # Set seaborn \"ticks\" style for better styled plots\n",
- "from astropy.io import fits\n",
- "from astropy.utils.data import download_file\n",
- "# Library for some power-spectral density analysis:\n",
- "from astropy.timeseries import LombScargle\n",
- "\n",
- "# Useful library to obtain wavelength map solution from the JWST pipeline:\n",
- "from jwst import datamodels\n",
- "\n",
- "# Corner (for posterior distribution plotting):\n",
- "import corner\n",
- "# Juliet (for transit fitting & model evaluation:)\n",
- "import juliet"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "2.-Spectral extraction \n",
- "--------------------------------------------------------------------\n",
- "### A.-Tracing the orders\n",
- "Before we can go ahead and perform spectral extraction of the orders, we need to trace them in order to guide the spectral extraction algorithm on where the spectrum is. Let's first do some data exploration to understand how we might do this with the complicated spectral profile of NIRISS/SOSS. \n",
- "\n",
    - "The products we will be working with are the so-called \"rates\". These combine the information of all the groups in a given integration into one number: the slope of the up-the-ramp samples in that integration (so they have units of counts per second). These calibrated data have already been corrected for various inhomogeneities present in the raw JWST data, such as bias and dark current. Let's download them (or load them if they are already in the same folder as this notebook), along with some supplementary data we will be using in this notebook --- if downloading, be patient, as this might take a while (the calibrated data alone are 12 GB!): "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "base_data_url = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/soss-transit-spectroscopy'\n",
- "transit_model_path = 'hp1_model.txt'\n",
- "transit_fits_results_path = 'transit_spectra_results.pkl'\n",
- "\n",
- "if not os.path.exists('calibrated_data_hatp1_transit.fits'):\n",
- " file_path = download_file(base_data_url + '/data/calibrated_data_hatp1_transit.fits')\n",
- " os.rename(file_path, 'calibrated_data_hatp1_transit.fits')\n",
- " \n",
- "if not os.path.exists(transit_model_path):\n",
- " file_path = download_file(base_data_url + '/data/hp1_tspec.dat')\n",
- " os.rename(file_path, transit_model_path)\n",
- " \n",
- "if not os.path.exists(transit_fits_results_path):\n",
- " file_path = download_file(base_data_url + '/data/transit_spectra_results.pkl')\n",
- " os.rename(file_path, transit_fits_results_path)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let's open the calibrated data, and explore its format and content:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "hdul = fits.open('calibrated_data_hatp1_transit.fits')\n",
- "hdul.info()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "The data we are most interested in live in the `SCI` and `ERR` extensions --- the former holds the science data (the \"rates\") and the latter their errors. There is a lot of extra information in these pipeline products (e.g., the wavelength solution and the time-stamps of the observations), but we will get to that in time. As can be seen, the `SCI` and `ERR` extensions have three dimensions: the 2048 and 256 dimensions are the spatial ones (i.e., the 2D spectral dimensions), while 1198 is the number of integrations of this particular simulated observation. This latter dimension is specific to time-series observations, as here we are interested in using the rates of each integration separately.\n",
- "\n",
- "Let's extract the science data (`SCI`) and errors (`ERR`) then:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "data = hdul['SCI'].data\n",
- "errors = hdul['ERR'].data"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "There are several ways to trace the spectrum. In theory, once JWST is on the sky, precise trace shapes and positions will be available, and we could use those. Here, however, we will trace each integration individually, in order to check for possible movements or shape changes of the trace during the observations. Before doing this, let's generate a \"median\" image by collapsing all the products in time, so as to get a view of the \"average\" shape of the traces over the entire exposure:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "median_image = np.median(data, axis=0)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let's plot it:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(20, 2))\n",
- "im = plt.imshow(median_image)\n",
- "im.set_clim(-5, 20)\n",
- "cb = plt.colorbar()\n",
- "cb.ax.set_ylabel('Rates (Counts/s)')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "This plot tells us a lot about how to move forward with the data analysis of this dataset. For example, note that although the pipeline has done a good job at removing structure in the data, there is still some structure that appears as vertical stripes. This is a well-known pattern due to the readout of IR detectors, typically referred to as \"1/f noise\". By eye, it seems some simple column-by-column background subtraction should take care of that. On top of this, the image tells us a bit about the care we have to take with tracing. For example, the spectra of orders 1 and 2 start to overlap around pixel ~750. In addition, we can see that the right sides of both traces carry the most signal, which decreases towards the left-hand side (redder wavelengths) --- so if we do any kind of iterative tracing, we should perhaps start on those sides of the trace. We also have to be careful with the edges of the image: both sides have 4 reference pixels each (i.e., pixels 0 to 3 and 2044 to 2047) which are not sensitive to light, so we have to be careful to leave those out of our analyses. Finally, it is important to note that the order at the bottom of this image (order 2) ends rather abruptly at around column 1750; this is actually an artifact of the simulations, and not a real feature of the NIRISS/SOSS traces.\n",
- "\n",
    - "Let's use all of the above to our advantage to trace the spectra of each integration. First, let's write a small script to get the centroids of each column for each order in each integration. Here we'll use a very simple algorithm in which we convolve each column with a Gaussian filter (to smooth the rather complex profile), and then compute the flux-weighted centroid of that filtered column. We repeat this procedure on each column of the image to obtain the centroids of the traces, and then fit them with a polynomial:"
- ]
- },
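Before writing the tracing function itself, here is a minimal, self-contained sketch of the per-column centroiding idea: smooth a median-subtracted column with a Gaussian filter, then take the flux-weighted centroid of the rows within a search radius of an initial guess. The toy profile and all the numbers below are illustrative, not taken from the data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Toy detector column: a broad spectral profile centred at row 120
y = np.arange(256)
column = np.exp(-0.5 * ((y - 120.0) / 12.0) ** 2)

# Smooth the median-subtracted column to damp pixel-scale structure
filtered = gaussian_filter1d(column - np.median(column), 10)

# Flux-weighted centroid within profile_radius rows of the initial guess
ystart, profile_radius = 120, 30
idx = np.where(np.abs(y - ystart) < profile_radius)[0]
centroid = np.sum(y[idx] * filtered[idx]) / np.sum(filtered[idx])
```

Iterating column by column, with each centroid seeding the guess for the next column, is what lets the trace be followed even as it curves across the detector.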
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def trace_spectrum(image, xstart, ystart, profile_radius=30, gauss_filter_width=10, xend=None):\n",
- " \"\"\"\n",
- " Function that non-parametrically traces NIRISS/SOSS spectra. First, to get the centroid at xstart and \n",
    - "    ystart, it convolves the spatial profile with a Gaussian filter, finding its peak through the usual flux-weighted \n",
    - "    centroiding. Next, this centroid is used to find the one to its left through the same algorithm. \n",
- " \n",
- " Parameters\n",
- " ----------\n",
- " image: ndarray\n",
    - "        The image to be traced.\n",
- " xstart: float\n",
- " The x-position (column) on which the tracing algorithm will be started\n",
- " ystart: float\n",
- " The estimated y-position (row) of the center of the trace. An estimate within 10-20 pixels is enough.\n",
- " profile_radius: float\n",
- " Expected radius of the profile measured from its center. Only this region will be used to estimate \n",
- " the centroids of the spectrum.\n",
- " gauss_filter_width: float\n",
    - "        Width of the Gaussian filter used to perform the centroiding of the first column.\n",
- " xend: int\n",
    - "        x-position at which tracing ends. If None, trace all the columns to the left of xstart.\n",
- " \"\"\"\n",
- " \n",
- " # Define x-axis:\n",
- " if xend is not None:\n",
- " x = np.arange(xend, xstart)\n",
- " else:\n",
- " x = np.arange(0, xstart)\n",
- " \n",
- " # Define y-axis:\n",
- " y = np.arange(image.shape[0])\n",
- " \n",
- " # Define array that will save centroids at each x:\n",
- " ycentroids = np.zeros(len(x))\n",
- " \n",
- " for i in range(len(x))[::-1]:\n",
- " xcurrent = x[i]\n",
- " \n",
- " # Convolve column with a gaussian filter; remove median before convolving:\n",
- " filtered_column = gaussian_filter1d(image[:,xcurrent] - np.median(image[:,xcurrent]), gauss_filter_width)\n",
- " \n",
- " # Find centroid within profile_radius pixels of the initial guess:\n",
    - "        idx = np.where(np.abs(y - ystart) < profile_radius)[0]\n",
    - "        ycentroids[i] = np.sum(y[idx] * filtered_column[idx]) / np.sum(filtered_column[idx])\n",
    - "        ystart = ycentroids[i]\n",
    - "    return x, ycentroids"
    - ]
    - },
    - {
    - "cell_type": "markdown",
    - "metadata": {},
    - "source": [
    - "### B.-Extracting the spectra\n",
- "\n",
    - "With our traces at hand, in theory we can now perform simple extraction of the spectra on each integration. Before doing that, however, let's correct our images for the 1/f-noise patterns on a column-by-column basis, using the fact that we now know where the spectra are located, so we can mask the traces out of this procedure.\n",
- "\n",
- "To this end, for each image we will mask all the pixels in a 30-pixel radius around the traces (more or less the radius of the actual portion of the trace that contains flux from the target), and use the remaining pixels to track the column-to-column variations. Let's apply these corrections to the data:"
- ]
- },
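The column-by-column correction just described can be sketched on a toy frame: inject a per-column offset (mimicking 1/f stripes) plus a bright horizontal trace, mask the rows around the trace, and subtract each column's median computed from the unmasked background rows. All shapes and values here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
nrows, ncols = 256, 2048

# Per-column "1/f" stripe levels plus a bright horizontal trace near row 120
offsets = rng.normal(0.0, 2.0, size=ncols)
frame = np.tile(offsets, (nrows, 1))
frame[105:135, :] += 100.0

# Mask a 30-pixel radius around the trace centre; estimate each column's
# background from the unmasked rows only, then subtract it
trace_center, radius = 120, 30
mask = np.ones(nrows, dtype=bool)
mask[trace_center - radius : trace_center + radius] = False

corrected = frame - np.median(frame[mask, :], axis=0)
```

Because the trace rows are excluded from the median, the stripe level is removed without biasing the flux in the spectrum itself.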
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
    - "# We will save the corrected data in a new array, so as to keep track of the original dataset:\n",
- "corrected_data = np.copy(data)\n",
- "radius = 30\n",
- "for i in range(nintegrations):\n",
- " for j in range(data.shape[2]):\n",
- " \n",
- " # Create mask that will turn to zero values not to be used for background estimation:\n",
- " mask = np.ones(data.shape[1])\n",
- " \n",
- " if j in X1[i,:]:\n",
- " y1 = int(chebyshev.chebval(j, coeffs1[i,:]))\n",
- " mask[y1 - radius : y1 + radius] = 0.\n",
- " \n",
- " if j in X2[i,:]:\n",
- " y2 = int(chebyshev.chebval(j, coeffs2[i,:]))\n",
- " mask[y2 - radius : y2 + radius] = 0.\n",
- " \n",
- " # Use only pixels that are not zero to calculate background through median:\n",
- " idx = np.where(mask != 0)[0]\n",
- " corrected_data[i,:,j] = corrected_data[i,:,j] - np.median(corrected_data[i,idx,j])"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "All right, let's now see how we did --- to this end, let's check the corrections made on the first integration:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(17, 12))\n",
- "plt.title('Non-corrected image')\n",
- "im = plt.imshow(data[0,:,:])\n",
- "im.set_clim(-5, 10)\n",
- "plt.figure(figsize=(17, 12))\n",
- "plt.title('Corrected image')\n",
- "im = plt.imshow(corrected_data[0,:,:])\n",
- "im.set_clim(-5, 10)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "Much better! The stripes observed in the original image have been removed quite successfully by our procedure.\n",
- "\n",
    - "Let's now move forward and write a small script that performs simple aperture extraction given a set of x and y coordinates that follow the trace, and loop it through all of our integrations. In theory, before doing this we would take care of bad pixels and cosmic rays, but we don't worry about this in this notebook because (a) cosmic rays were not included in the simulations used to create this dataset and (b) bad pixels are the same in each image, so they don't impact any time-varying signal.\n",
- "\n",
- "First, the aperture extraction function:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def aperture_extraction(image, x, y, aperture_radius, background_radius=50, error_image=None, correct_bkg=True):\n",
- " \"\"\"\n",
- " This function takes as inputs two arrays (x,y) that follow the trace, \n",
- " and returns the added flux over the defined aperture radius (and its error, if an error image \n",
    - "    is given as well), subtracting along the way any background between the aperture radius and the \n",
- " background radius. The background is calculated by taking the median of the points between the \n",
- " aperture_radius and the background_radius.\n",
- " \n",
- " Parameters\n",
- " ----------\n",
- " image: ndarray\n",
    - "        Image from which the spectrum is to be extracted.\n",
- " x: ndarray\n",
- " Array with the x-axis of the trace (i.e., the columns, wavelength direction)\n",
- " y: ndarray\n",
- " Array with the y-axis of the trace (i.e., rows, spatial direction)\n",
- " aperture_radius: float\n",
- " Distance from the center of the trace at which you want to add fluxes.\n",
- " background_radius: float\n",
- " Distance from the center of the trace from which you want to calculate the background. The \n",
- " background region will be between this radius and the aperture_radius.\n",
- " error_image: ndarray\n",
- " Image with the errors of each pixel value on the image ndarray above\n",
- " correct_bkg: boolean\n",
    - "        If True, apply background correction; if False, omit it.\n",
- " \"\"\"\n",
- " \n",
- " # Create array that will save our fluxes:\n",
- " flux = np.zeros(len(x))\n",
- " \n",
- " if error_image is not None:\n",
- " flux_error = np.zeros(len(x))\n",
- " \n",
- " max_column = image.shape[0]\n",
- " for i in range(len(x)):\n",
- " \n",
- " # Cut the column with which we'll be working with:\n",
- " column = image[:,int(x[i])]\n",
- " if error_image is not None:\n",
- " variance_column = error_image[:,int(x[i])]**2\n",
- " \n",
- " # Define limits given by the aperture_radius and background_radius variables:\n",
- " if correct_bkg:\n",
- " left_side_bkg = np.max([y[i] - background_radius, 0])\n",
- " right_side_bkg = np.min([max_column, y[i] + background_radius])\n",
- " left_side_ap = np.max([y[i] - aperture_radius, 0])\n",
- " right_side_ap = np.min([max_column, y[i] + aperture_radius])\n",
- " \n",
- " # Extract background, being careful with edges:\n",
- " if correct_bkg:\n",
- " bkg_left = column[np.max([0, int(left_side_bkg)]) : np.max([0, int(left_side_ap)])]\n",
- " bkg_right = column[np.min([int(right_side_ap), max_column]) : np.max([int(right_side_bkg), max_column])]\n",
- " bkg = np.median(np.append(bkg_left, bkg_right))\n",
- " else:\n",
- " bkg = 0.\n",
- " \n",
    - "        # Subtract it from the column:\n",
- " column -= bkg\n",
- " \n",
    - "        # Perform aperture extraction of the background-subtracted column, being careful with pixelization \n",
- " # at the edges. First, deal with left side:\n",
- " l_decimal, l_integer = np.modf(left_side_ap)\n",
- " l_integer = int(l_integer)\n",
- " if l_decimal < 0.5:\n",
- " l_fraction = (0.5 - l_decimal) * column[l_integer]\n",
- " l_limit = l_integer + 1\n",
- " if error_image is not None:\n",
- " l_fraction_variance = ((0.5 - l_decimal)**2) * variance_column[l_integer]\n",
- " else:\n",
- " l_fraction = (1. - (l_decimal - 0.5)) * column[l_integer + 1]\n",
- " l_limit = l_integer + 2\n",
- " if error_image is not None:\n",
- " l_fraction_variance = ((1. - (l_decimal - 0.5))**2) * variance_column[l_integer + 1]\n",
- " \n",
- " # Now right side:\n",
- " r_decimal, r_integer = np.modf(right_side_ap)\n",
- " r_integer = int(r_integer)\n",
- " if r_decimal < 0.5:\n",
- " r_fraction = (1. - (0.5 - r_decimal)) * column[r_integer]\n",
- " r_limit = r_integer\n",
- " if error_image is not None:\n",
- " r_fraction_variance = ((1. - (0.5 - r_decimal))**2) * variance_column[r_integer]\n",
- " else:\n",
- " r_fraction = (r_decimal - 0.5) * column[r_integer + 1]\n",
- " r_limit = r_integer + 1\n",
- " if error_image is not None:\n",
- " r_fraction_variance = ((r_decimal - 0.5)**2) * variance_column[r_integer + 1]\n",
- " \n",
- " # Save total flux in current column:\n",
- " flux[i] = l_fraction + r_fraction + np.sum(column[l_limit:r_limit])\n",
- " \n",
- " if error_image is not None:\n",
    - "            # For the flux error, omit edge values (contribution to total variance is small nonetheless):\n",
- " flux_error[i] = np.sqrt(np.sum(variance_column[l_limit:r_limit]) + l_fraction_variance + \\\n",
- " r_fraction_variance)\n",
- " \n",
- " if error_image is not None:\n",
- " return flux, flux_error\n",
- " else:\n",
- " return flux"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "Let us now extract the spectra. To this end, we have to define an aperture radius for the extraction --- we use a 30-pixel aperture radius and a 50-pixel background radius. As shown in the following plot, these regions cover both the spectra and a large portion of the background region(s):"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(17, 12))\n",
- "im = plt.imshow(median_image)\n",
- "im.set_clim(-5, 20)\n",
    - "plt.plot(X1[0,:], chebyshev.chebval(X1[0,:], coeffs1[0,:]), lw=3, label='Order 1 trace', color='cornflowerblue')\n",
    - "plt.plot(X2[0,:], chebyshev.chebval(X2[0,:], coeffs2[0,:]), lw=3, label='Order 2 trace', color='orangered')\n",
- "\n",
- "plt.fill_between(X1[0,:], chebyshev.chebval(X1[0,:], coeffs1[0,:]) + 30, chebyshev.chebval(X1[0,:], coeffs1[0,:]) - 30,\n",
- " color='cornflowerblue', alpha=0.9)\n",
- "plt.fill_between(X2[0,:], chebyshev.chebval(X2[0,:], coeffs2[0,:]) + 30, chebyshev.chebval(X2[0,:], coeffs2[0,:]) - 30,\n",
- " color='orangered', alpha=0.9)\n",
- "plt.ylim(data.shape[1], 0)\n",
- "plt.legend()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Now let's loop over all the integrations, extract the spectra of both orders, and save that to some dictionaries. We note that this will take a while (around 10 minutes). We will save both the spectra and the columns, so we can later relate the latter to wavelength-space:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Extraction parameters:\n",
- "extraction_aperture = 30\n",
- "background_aperture = 50\n",
- "\n",
- "# Create dictionary:\n",
- "spectra = {}\n",
- "# Generate sub-dictionaries for each order:\n",
- "spectra['order1'], spectra['order2'] = {}, {}\n",
- "# Save the X positions for both orders. X positions are the same for all integrations, so \n",
- "# we save the ones corresponding to the first integration:\n",
- "spectra['order1']['x'], spectra['order2']['x'] = X1[0,:], X2[0,:]\n",
- "# Create sub-dictionaries that will save the fluxes and the errors on those fluxes:\n",
- "spectra['order1']['flux'], spectra['order2']['flux'] = np.zeros([data.shape[0], len(X1[0,:])]),\\\n",
- " np.zeros([data.shape[0], len(X2[0,:])])\n",
- "\n",
- "spectra['order1']['flux_errors'], spectra['order2']['flux_errors'] = np.zeros([data.shape[0], len(X1[0,:])]),\\\n",
- " np.zeros([data.shape[0], len(X2[0,:])])\n",
- "\n",
- "# Now iterate through all integrations:\n",
- "for i in range(nintegrations):\n",
- " # Trace order 1:\n",
- " y1 = chebyshev.chebval(X1[0,:], coeffs1[i,:])\n",
- " # Extract order 1:\n",
- " spectra['order1']['flux'][i,:], spectra['order1']['flux_errors'][i,:] = \\\n",
- " aperture_extraction(corrected_data[i,:,:], X1[0,:], y1, \n",
- " extraction_aperture, \n",
- " error_image=errors[i,:,:], \n",
- " correct_bkg=False)\n",
- " # Same for Order 2:\n",
- " y2 = chebyshev.chebval(X2[0,:],coeffs2[i,:])\n",
- " spectra['order2']['flux'][i,:], spectra['order2']['flux_errors'][i,:] = \\\n",
- " aperture_extraction(corrected_data[i,:,:], X2[0,:], y2, \n",
- " extraction_aperture, \n",
- " error_image=errors[i,:,:], \n",
- " correct_bkg=False)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Finally, let's plot the spectra of the first integration along with the errorbars:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "i = 0\n",
- "fig, ax = plt.subplots(figsize=(15, 5))\n",
- "ax.set_title('Order 1 spectrum for the first integration')\n",
- "ax.errorbar(spectra['order1']['x'], spectra['order1']['flux'][i,:], \\\n",
- " yerr=spectra['order1']['flux_errors'][i,:], color='cornflowerblue')\n",
- "\n",
- "ax.set_xlim(np.min(spectra['order1']['x']), np.max(spectra['order1']['x']))\n",
- "\n",
- "ax.set_xlabel('Pixel column')\n",
- "ax.set_ylabel('Counts/s')\n",
- "\n",
- "plt.figure(figsize=(15, 5))\n",
- "plt.title('Order 2 spectrum for the first integration')\n",
- "plt.errorbar(spectra['order2']['x'], spectra['order2']['flux'][i,:], \\\n",
- " yerr=spectra['order2']['flux_errors'][i,:], color='orangered')\n",
- "plt.xlabel('Pixel column')\n",
- "plt.ylabel('Counts/s')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "This is pretty good! It is interesting to see that we can actually identify where the contamination from the second order kicks in for the Order 1 spectrum (blue), around pixel ~750. This is also evident from the images above, and something to keep in mind when performing analyses with these data.\n",
- "\n",
- "Note: the flattening of the spectra for pixels above ~1700 in order 2 is a bug from the simulator."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### C. Time-stamps and wavelength solution\n",
- "\n",
- "Before continuing to the next step, we need to extract two extra data products that will become useful in our analyses in this notebook: (a) the time-stamps of each integration and (b) the wavelength solution corresponding to each pixel in the frame for Order 1 and Order 2.\n",
- "\n",
    - "The first is the easiest to extract --- these are typically stored in the very same pipeline data products we are using here. In particular, recall that each integration is composed of several groups that sample up-the-ramp, which have in turn been combined into the rates we see here. The most useful timing measure for these ramps is therefore the mid-point of each integration, which can be extracted from the products as follows:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "spectra['times'] = hdul['INT_TIMES'].data['int_mid_BJD_TDB']"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "As for the wavelength solution, there is a particular step in the JWST calibration pipeline that \"attaches\" this information to the products --- it has been attached to our dataset as well. To extract it, one has to re-open the pipeline products with a so-called `datamodel` from the JWST pipeline, which gives access to this information:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "exposure = datamodels.SpecModel('calibrated_data_hatp1_transit.fits')\n",
- "\n",
- "# Get number of rows and columns on the first integration:\n",
- "rows,columns = data[0,:,:].shape\n",
- "\n",
- "# Prepare map that will save the wavelength corresponding to each pixel in the frame:\n",
- "wmap = np.zeros([2, rows, columns])\n",
- "\n",
- "# Save it:\n",
- "for order in [1,2]:\n",
- " for row in range(rows):\n",
- " for column in range(columns):\n",
- " wmap[order - 1, row, column] = exposure.meta.wcs(column, row, order)[-1]"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let's visualize this 2D wavelength solution for Order 1 and 2:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(20, 2))\n",
- "plt.title('Order 1 wavelength map')\n",
- "im1 = plt.imshow(wmap[0,:,:])\n",
- "im1.set_clim(0.6, 3.0)\n",
- "im1.set_cmap('Reds')\n",
- "cb1 = plt.colorbar()\n",
- "\n",
- "plt.figure(figsize=(20, 2))\n",
- "plt.title('Order 2 wavelength map')\n",
- "im2 = plt.imshow(wmap[1,:,:])\n",
- "im2.set_clim(0.6, 1.5)\n",
- "im2.set_cmap('Blues')\n",
- "cb2 = plt.colorbar()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "One important caveat to have in mind is that the wavelength maps are not strictly vertical, i.e., wavelengths do not align perfectly with the columns of the image for NIRISS/SOSS observations. An easy way to see this is to have an image showing \"iso-wavelength\" bands --- identify pixels that have the same wavelengths and \"paint\" them in a plot. Let's do this for Order 1:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(20, 25))\n",
- "Z = np.zeros(wmap[0,:,:].shape)\n",
- "for w in np.linspace(0.5, 3., 100):\n",
- " wmin,wmax = w,w+0.005\n",
- " idx = (wmap[0,:,:] > wmin) & (wmap[0,:,:] < wmax)\n",
- " Z[idx] = 1.\n",
    - "plt.imshow(Z, interpolation='none')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "As can be seen from these maps, this is not extremely critical for NIRISS/SOSS (iso-wavelength bands span ~3 pixels), but for precise, higher-resolution work it might be important to take this tilt into account in the extraction. For our application here, however, we simply take the average wavelength value per column to associate wavelengths with pixels for each order, and save that to our dictionary:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "avg_waves = np.mean(wmap, axis=1)\n",
- "spectra['order1']['w'], spectra['order2']['w'] = avg_waves[0,spectra['order1']['x'].astype('int')],\\\n",
- " avg_waves[1,spectra['order2']['x'].astype('int')]"
- ]
- },
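    - {
    - "cell_type": "markdown",
    - "metadata": {},
    - "source": [
    - "As an optional sanity check (not required for the rest of the notebook), we can quantify the trace tilt discussed above: for each column of the Order 1 wavelength map, compute the spread of wavelengths across the detector rows and compare it to the column-to-column wavelength step. If the spread is small compared to the step, the average-wavelength-per-column approximation is reasonable:"
    - ]
    - },
    - {
    - "cell_type": "code",
    - "execution_count": null,
    - "metadata": {},
    - "outputs": [],
    - "source": [
    - "# Optional check: per-column wavelength spread across rows for Order 1,\n",
    - "# compared to the median column-to-column wavelength step:\n",
    - "spread = np.max(wmap[0,:,:], axis=0) - np.min(wmap[0,:,:], axis=0)\n",
    - "step = np.abs(np.diff(avg_waves[0,:]))\n",
    - "print('Median column-wise spread: {0:.4f} um'.format(np.median(spread)))\n",
    - "print('Median column-to-column step: {0:.4f} um'.format(np.median(step)))"
    - ]
    - },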
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let's have a final look at the extracted spectra of the first 100 integrations, but now with these wavelengths as x-axis instead of the pixel columns:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(15, 5))\n",
- "for i in range(100):\n",
- " plt.plot(spectra['order1']['w'], spectra['order1']['flux'][i,:], color='cornflowerblue', alpha=0.5)\n",
- " plt.plot(spectra['order2']['w'], spectra['order2']['flux'][i,:], color='orangered', alpha=0.5)\n",
- "plt.xlabel('Wavelength (um)')\n",
- "plt.xlim([0.65, 2.83])\n",
- "plt.ylabel('Counts/s')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "That looks pretty good! Let's now dive into analyzing the lightcurves that can be extracted from these spectra in the next sections."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "## 3. Fitting & analyzing white-light lightcurves"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "Having extracted our spectra, we are ready to jump into the \"fun\" part of this notebook: the analysis of actual simulated NIRISS/SOSS transit lightcurves. Let's first create the white-light lightcurves of both Order 1 and Order 2 by summing the spectra extracted in the previous section, but only over regions where we know there will be no contamination from overlapping orders. This step is important because the white-light lightcurves give us the most precise transit parameters to use for the wavelength-dependent lightcurves, and thus we want estimates that are as unbiased as possible.\n",
    - "\n",
    - "From the figures above, it seems that for Order 1, pixels below ~300 and above ~1300 have virtually no contamination from Order 2. For Order 2, fluxes from pixels above ~1300 likewise have virtually no contamination from Order 1. Let's generate the corresponding lightcurves taking this into account:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Extract order 1 length, create array that will save white-light lightcurve (and errors): \n",
- "NT1 = spectra['order1']['flux'].shape[0]\n",
- "lc_order1 = np.zeros(NT1)\n",
- "lc_errors_order1 = np.zeros(NT1)\n",
- "\n",
- "# Same for order 2:\n",
- "NT2 = spectra['order2']['flux'].shape[0]\n",
- "lc_order2 = np.zeros(NT2)\n",
- "lc_errors_order2 = np.zeros(NT2)\n",
- "\n",
- "# Indexes of uncontaminated spectra for Order 1 and 2:\n",
- "idx_uncontaminated1 = np.where((spectra['order1']['x'] < 300)|(spectra['order1']['x'] > 1300))[0]\n",
- "idx_uncontaminated2 = np.where(spectra['order2']['x'] > 1300)[0]\n",
- "\n",
- "# Sum the fluxes and errors for each order. First, order 1:\n",
- "for i in range(NT1):\n",
- " lc_order1[i] = np.sum(spectra['order1']['flux'][i, idx_uncontaminated1])\n",
- " lc_errors_order1[i] = np.sqrt(np.sum(spectra['order1']['flux_errors'][i, idx_uncontaminated1]**2))\n",
- " \n",
- "# Now order 2:\n",
- "for i in range(NT2):\n",
- " lc_order2[i] = np.sum(spectra['order2']['flux'][i, idx_uncontaminated2])\n",
- " lc_errors_order2[i] = np.sqrt(np.sum(spectra['order2']['flux_errors'][i, idx_uncontaminated2]**2))\n",
- "\n",
- "# Save median-normalized lightcurves and errors:\n",
- "median_lc1, median_lc2 = np.median(lc_order1), np.median(lc_order2)\n",
- "spectra['order1']['white-light'] = lc_order1 / median_lc1\n",
- "spectra['order1']['white-light_errors'] = lc_errors_order1 / median_lc1\n",
- "spectra['order2']['white-light'] = lc_order2 / median_lc2\n",
- "spectra['order2']['white-light_errors'] = lc_errors_order2 / median_lc2\n",
- "# Write median errors in ppm:\n",
- "med_err_order1 = np.median(spectra['order1']['white-light_errors'])*1e6\n",
- "med_err_order2 = np.median(spectra['order2']['white-light_errors'])*1e6\n",
- "\n",
    - "# Save variable with times in units of hours since beginning of observation:\n",
- "thours = (spectra['times'] - spectra['times'][0]) * 24\n",
- "\n",
- "# Plot lightcurve\n",
- "fig, ax = plt.subplots(figsize=(15, 5))\n",
- "ax.set_title('White-light lightcurve of HAT-P-1b NIRISS/SOSS observations')\n",
- "\n",
- "ax.errorbar(thours, spectra['order1']['white-light'], \n",
- " yerr=spectra['order1']['white-light_errors'], \n",
- " color='orangered', \n",
- " fmt='.', \n",
- " label=f'Order 1 ($\\sigma={round(med_err_order1, 1)}$ ppm)')\n",
- "\n",
- "ax.errorbar(thours, spectra['order2']['white-light'], \n",
- " yerr=spectra['order2']['white-light_errors'], \n",
- " color='cornflowerblue', \n",
- " fmt='.', \n",
- " label=f'Order 2 ($\\sigma={round(med_err_order2, 1)}$ ppm)')\n",
- "\n",
- "# Define legend, limits, labels:\n",
- "ax.legend()\n",
- "ax.set_xlim(np.min(thours), np.max(thours))\n",
- "ax.set_xlabel('Time since start of observations (hours)')\n",
- "ax.set_ylabel('Relative flux')\n",
- "\n",
- "# Plot inset to see errorbars:\n",
- "axins = ax.inset_axes([0.04, 0.5, 0.25, 0.3])\n",
- "x1, x2, y1, y2 = 1.5, 2.0, 0.9998, 1.0004\n",
- "axins.set_xlim(x1, x2)\n",
- "axins.set_ylim(y1, y2)\n",
- "axins.set_xticklabels('')\n",
- "axins.set_yticklabels('')\n",
- "\n",
- "axins.errorbar(thours,spectra['order1']['white-light'], \n",
- " yerr=spectra['order1']['white-light_errors'],\n",
- " color='orangered',\n",
- " fmt='.')\n",
- "\n",
- "axins.errorbar(thours,spectra['order2']['white-light'],\n",
- " yerr=spectra['order2']['white-light_errors'],\n",
- " color='cornflowerblue',\n",
- " fmt='.')\n",
- "\n",
- "mark_inset(ax, axins, loc1=1, loc2=2, linewidth=0.7, fc=\"None\", ec='k', alpha=0.4, clip_on=True, zorder=3)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "Wow! Those look pretty good. There is a notable difference in the overall lightcurve shape between the two orders. Part of it can be attributed to limb-darkening (since Order 1 covers the reddest wavelengths, limb-darkening produces a more \"box-shaped\" lightcurve than for Order 2) --- but a portion of it could also be explained by the transit depths themselves.\n",
    - "\n",
    - "Let's now fit those transit lightcurves. We will fit each order separately to see what we obtain from each, and then compare and discuss the results. To perform the transit fitting, users can of course use any tool they want. In this notebook we will use `juliet` (http://juliet.readthedocs.io/), a tool that performs lightcurve fitting through Nested Sampling algorithms, so we can thoroughly explore the parameter space.\n",
    - "\n",
    - "We first import `juliet` and write the priors for the parameters of our fit. We will keep the orbital period $P$ (`P_p1`) fixed. The parameters we will fit for are: the time-of-transit center $t_0$ (`t0_p1`); two parameters, $r_1$ (`r1_p1`) and $r_2$ (`r2_p1`), that parametrize the planet-to-star radius ratio $R_p/R_*$ and the impact parameter of the orbit $b = (a/R_*)\\cos i$ in a unitary uniform plane (see Espinoza (2018) for details); two parameters, $q_1$ (`q1_SOSS`) and $q_2$ (`q2_SOSS`), that parametrize the limb-darkening through a square-root limb-darkening law (see Kipping (2013)); the stellar density $\\rho_*$ (`rho`); a scaling normalization factor for the out-of-transit flux (`mflux_SOSS`); and an unknown standard deviation added in quadrature to the errorbars of our data, `sigma_w_SOSS`. We also fix some parameters in our fit: we don't fit for the eccentricity $e$ (`ecc_p1`) or the argument of periastron passage $\\omega$ (`omega_p1`), as a single transit does not hold much information on these parameters. Similarly, we fix any possible dilution of the transit lightcurve by background sources (`mdilution_SOSS`) to 1, i.e., we assume no dilution. We take very wide priors for all free parameters except for the period, which we assume is well-known:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Name of the parameters to be fit:\n",
- "params = ['P_p1', 't0_p1', 'r1_p1', 'r2_p1', 'q1_SOSS', 'q2_SOSS', 'ecc_p1', 'omega_p1',\n",
- " 'rho', 'mdilution_SOSS', 'mflux_SOSS', 'sigma_w_SOSS']\n",
- "\n",
- "# Distributions:\n",
- "dists = ['fixed', 'normal', 'uniform', 'uniform', 'uniform', 'uniform', 'fixed', 'fixed',\n",
- " 'loguniform', 'fixed', 'normal', 'loguniform']\n",
- "\n",
- "# Hyperparameters\n",
- "hyperps = [4.4652998, [2459775.89,0.1], [0.,1], [0.,1.], [0., 1.], [0., 1.], 0.0, 90., \n",
- " [100., 10000.], 1.0, [0.,0.1], [0.1, 1000.]]\n",
- "\n",
- "priors = juliet.generate_priors(params, dists, hyperps)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "Having defined the parameters, priors and hyperparameters, we now fit both datasets. For this, we pass the data in a juliet-friendly format and fit each order's white-light lightcurve individually:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "for order in ['order1', 'order2']:\n",
- " # Define times, fluxes and errors in a juliet-friendly format:\n",
- " times, fluxes, fluxes_error, norm_times = {}, {}, {}, {}\n",
- " times['SOSS'], fluxes['SOSS'], fluxes_error['SOSS'] = [spectra['times'],\n",
- " spectra[order]['white-light'],\n",
- " spectra[order]['white-light_errors']]\n",
- " \n",
- " # Load and fit dataset with juliet (save them to order*_juliet_results):\n",
- " spectra[order]['dataset'] = juliet.load(priors=priors, t_lc=times, y_lc=fluxes,\n",
- " yerr_lc=fluxes_error, ld_laws='squareroot',\n",
- " out_folder=order+'_juliet_results')\n",
- "\n",
- " spectra[order]['results'] = spectra[order]['dataset'].fit(use_dynesty=True, dynamic=True, dynesty_nthreads=4)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let's see what the fits look like:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(15, 5)) \n",
- "\n",
- "for i in [1,2]:\n",
    - "    plt.subplot(2, 2, i)\n",
- " plt.title('Order ' + str(i) + ' (white) lightcurve')\n",
- " order = 'order' + str(i)\n",
- " \n",
    - "    # First, extract the estimated additional errorbars from our fits:\n",
- " sigma_w = np.median(spectra[order]['results'].posteriors['posterior_samples']['sigma_w_SOSS'])\n",
- " spectra[order]['sigma_w'] = sigma_w\n",
- " \n",
- " # Extract estimated time-of-transit center:\n",
- " t0 = np.median(spectra[order]['results'].posteriors['posterior_samples']['t0_p1'])\n",
- " \n",
- " # Normalize times to plot by this:\n",
- " tnorm = (spectra['times'] - t0) * 24\n",
- " plt.errorbar(tnorm, spectra[order]['white-light'], \n",
- " yerr=np.sqrt(spectra[order]['white-light_errors']**2 + (sigma_w*1e-6)**2),\n",
- " label='Data')\n",
- " \n",
- " # Plot best-fit model on top:\n",
- " spectra[order]['model'] = spectra[order]['results'].lc.evaluate('SOSS')\n",
- " plt.plot(tnorm, spectra[order]['model'], color='black', label='Model')\n",
- " plt.xlim(np.min(tnorm), np.max(tnorm))\n",
- " plt.ylabel('Relative flux')\n",
- " \n",
- " # Residuals:\n",
    - "    plt.subplot(2, 2, i + 2)\n",
- " spectra[order]['residuals'] = spectra[order]['white-light'] - spectra[order]['model']\n",
- " \n",
- " plt.errorbar(tnorm, spectra[order]['residuals']*1e6,\n",
- " yerr=np.sqrt(spectra[order]['white-light_errors']**2 + (sigma_w*1e-6)**2)*1e6,\n",
- " fmt='o', alpha=0.3)\n",
- " \n",
- " plt.xlim(np.min(tnorm), np.max(tnorm))\n",
- " plt.ylim(-350, 350)\n",
- " average_total_errorbar = np.median(np.sqrt(spectra[order]['white-light_errors']**2 + (sigma_w * 1e-6)**2) * 1e6)\n",
    - "    plt.text(1, 220, r'$\\sigma = {0:.1f}$ ppm'.format(average_total_errorbar), fontsize=13)\n",
- " plt.ylabel('Residuals (ppm)')\n",
- " plt.xlabel('Time from mid-transit (hours)')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Those look pretty good! The precisions more or less match what is expected from NIRISS/SOSS white-light lightcurves. What about the posterior distribution of the parameters?"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Names of the parameters to show in the corner plot:\n",
    - "names = [r'$t_0 - Med[t_0]$ (s)','$R_p/R_s$', '$b = (a/R_*) \\cos i$', r'$\\rho_*$', r'$u_1$', r'$u_2$']\n",
- "\n",
- "# Retrieve posterior distributions of parameters for Order 1:\n",
- "Theta1 = np.zeros([len(spectra['order1']['results'].posteriors['posterior_samples']['t0_p1']), 6])\n",
- "Theta1[:,0] = spectra['order1']['results'].posteriors['posterior_samples']['t0_p1']\n",
- "\n",
- "# Convert r1 and r2 sampling scheme to rp/rs and b:\n",
- "b_1, rprs_1 = juliet.utils.reverse_bp(spectra['order1']['results'].posteriors['posterior_samples']['r1_p1'],\n",
- " spectra['order1']['results'].posteriors['posterior_samples']['r2_p1'], 0., 1)\n",
- "\n",
- "Theta1[:,1], Theta1[:,2] = rprs_1, b_1\n",
- "\n",
- "# Extract stellar density:\n",
- "Theta1[:,3] = spectra['order1']['results'].posteriors['posterior_samples']['rho']\n",
- "\n",
- "# Convert q1 and q2 sampling to u1 and u2:\n",
- "u1_1, u2_1 = juliet.utils.reverse_ld_coeffs('squareroot',\n",
- " spectra['order1']['results'].posteriors['posterior_samples']['q1_SOSS'],\n",
- " spectra['order1']['results'].posteriors['posterior_samples']['q2_SOSS'])\n",
- "\n",
- "Theta1[:,4], Theta1[:,5] = u1_1, u2_1\n",
- "\n",
- "# Do the same for Order 2:\n",
- "Theta2 = np.zeros([len(spectra['order2']['results'].posteriors['posterior_samples']['t0_p1']), 6])\n",
- "Theta2[:,0] = spectra['order2']['results'].posteriors['posterior_samples']['t0_p1']\n",
- "\n",
- "# Convert r1 and r2 sampling scheme to rp/rs and b:\n",
- "b_2, rprs_2 = juliet.utils.reverse_bp(spectra['order2']['results'].posteriors['posterior_samples']['r1_p1'],\n",
- " spectra['order2']['results'].posteriors['posterior_samples']['r2_p1'], 0., 1)\n",
- "\n",
- "Theta2[:,1], Theta2[:,2] = rprs_2, b_2\n",
- "\n",
- "# Extract stellar density:\n",
- "Theta2[:,3] = spectra['order2']['results'].posteriors['posterior_samples']['rho']\n",
- "\n",
- "# Convert q1 and q2 sampling to u1 and u2:\n",
- "u1_2, u2_2 = juliet.utils.reverse_ld_coeffs('squareroot',\n",
- " spectra['order2']['results'].posteriors['posterior_samples']['q1_SOSS'],\n",
- " spectra['order2']['results'].posteriors['posterior_samples']['q2_SOSS'])\n",
- "\n",
- "Theta2[:,4], Theta2[:,5] = u1_2, u2_2\n",
- "\n",
- "# Plot t0 minus combined median t0 for better plotting:\n",
- "median_t0 = np.median(np.append(Theta1[:,0], Theta2[:,0]))\n",
- "Theta1[:,0] = (Theta1[:,0] - median_t0) * 24 * 3600\n",
- "Theta2[:,0] = (Theta2[:,0] - median_t0) * 24 * 3600\n",
- "\n",
- "# Corner plot for redder order (Order 1)\n",
- "figure = corner.corner(Theta1, labels=names, color='orangered')\n",
- "\n",
- "# Same for order 2:\n",
- "corner.corner(Theta2, fig=figure, color='cornflowerblue')\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "All right! It seems all the posteriors are more or less consistent with each other. There is, however, a clear separation in the limb-darkening coefficient plane (even though the average values of the coefficients are consistent) --- this is expected, as the limb-darkening profiles differ between the two orders given the different wavelength ranges they encompass.\n",
- "\n",
- "### A. Studying the residuals\n",
- "\n",
    - "One might wonder if there is any structure in the residuals. Such structure can give rise to signals that, if not accounted for, might lead us to believe we have a precision that is much better than what the dataset actually offers. A classic quick check on the residuals is to see whether, as you bin more datapoints, their rms decreases with the square-root of the number of datapoints. If the data are distributed as Gaussian random noise, one should see a $1/\\sqrt{N}$ decline in this plot, where $N$ is the number of datapoints in a given bin. This is an easy check to make in this case:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(15, 5))\n",
- "\n",
- "bin_sizes = np.arange(10, 1000, 10)\n",
- "for i in [1, 2]:\n",
- " order = 'order' + str(i)\n",
    - "    plt.subplot(1, 2, i)\n",
- " plt.title('Order ' + str(i) + ' residuals as a function of bin-size')\n",
- " \n",
- " rms = np.zeros(len(bin_sizes))\n",
- " for j in range(len(bin_sizes)):\n",
- " bin_size = bin_sizes[j]\n",
- " binned_times, binned_residuals, binned_errors = juliet.utils.bin_data(spectra['times'],\n",
- " spectra[order]['residuals'],\n",
- " bin_size)\n",
- " rms[j] = np.sqrt(np.var(binned_residuals)) * 1e6\n",
- " \n",
- " plt.plot(bin_sizes, rms, label='White-light lightcurve')\n",
- " plt.plot(bin_sizes, (np.sqrt(bin_sizes[0]) * rms[0]) / np.sqrt(bin_sizes), label='Expected ($1/\\sqrt{N}$)')\n",
- " plt.xscale('log')\n",
- " plt.ylabel('RMS (ppm)')\n",
- " plt.xlabel('Bin size')\n",
- " \n",
- "plt.legend()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "The plots look very promising, but it is hard to tell whether there is any particular frequency we should be paying attention to. What about the power spectrum of the residuals? This transforms the time-series to Fourier space, where we can actually see if there is any evidence for residual signals at different time-scales/frequencies:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(15, 5)) \n",
- "\n",
- "# Define frequencies in hours. From more or less the duration of the observation (~1 over 10 hours) to the \n",
- "# time-sampling (~1 over 1 minute, so over 1/60 hours)\n",
- "frequency = np.linspace(1. / 10., 1. / (1. / 60.), 1000)\n",
- "for i in [1, 2]:\n",
- " order = 'order' + str(i)\n",
    - "    plt.subplot(1, 2, i)\n",
- " plt.title('Order ' + str(i) + ' Residuals Power Spectral Density (PSD)')\n",
- " \n",
- " ls = LombScargle((spectra['times'] - spectra['times'][0]) * 24, spectra[order]['residuals'])\n",
- " psd = ls.power(frequency)\n",
- " max_power = np.max(psd)\n",
- " max_freq = frequency[np.where(max_power == psd)[0]][0]\n",
- " fap = ls.false_alarm_probability(max_power) \n",
- " print('Maximum power: {0:.4f}, frequency: {1:.2f}, FAP: {2:.2f}%'.format(max_power, max_freq, fap * 100))\n",
- " \n",
- " plt.plot(frequency,psd)\n",
- " plt.ylabel('Power Spectral Density')\n",
- " plt.xlabel('Frequency (1/hr)')\n",
- " plt.xlim([1. / 10.,60.])"
- ]
- },
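    - {
    - "cell_type": "markdown",
    - "metadata": {},
    - "source": [
    - "To guide the eye on what would count as a significant peak, one can also compute the power level corresponding to a given false-alarm probability with `LombScargle.false_alarm_level`. As a quick sketch (using the `ls` object from the last iteration of the loop above, and an illustrative 1% threshold):"
    - ]
    - },
    - {
    - "cell_type": "code",
    - "execution_count": null,
    - "metadata": {},
    - "outputs": [],
    - "source": [
    - "# Power level corresponding to a 1% false-alarm probability: peaks below\n",
    - "# this level are consistent with white noise at that significance.\n",
    - "fap01 = ls.false_alarm_level(0.01)\n",
    - "print('1% FAP power level: {0:.4f}'.format(fap01))"
    - ]
    - },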
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "This looks fairly good! There is no clear peak in these power spectral densities, with power distributed evenly across all frequencies --- i.e., the noise appears to be fairly \"white\". We thus move ahead with that hypothesis, and assume there is no residual signal and/or correlated noise left to model in our fits.\n",
- "\n",
- "## 4. Fitting & analyzing the wavelength-dependent lightcurves\n",
- "\n",
    - "One of the main products to analyze when observing transits with JWST are the wavelength-dependent lightcurves. Fitting those lets us obtain the final product of our observations: the planet's transmission spectrum. To retrieve this, let us fit the wavelength-dependent lightcurves for each order. Unlike with HST and ground-based low-resolution lightcurves, JWST's precision is so impressive in cases like this that we won't need to bin the lightcurves --- we can work at the (extracted) \"pixel\" level!\n",
- "\n",
    - "First, let's extract those lightcurves and plot them in a slightly different way than above: as a matrix whose rows are the wavelengths and whose columns are the time-stamps, with the values of the matrix being the relative fluxes:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Let's iterate then to get all the lightcurves:\n",
- "matrices = {}\n",
- "ntimes = len(spectra['times'])\n",
- "for order in ['order1', 'order2']:\n",
- " \n",
- " # Create the matrix for each order\n",
- " nwavs = len(spectra[order]['w'])\n",
- " matrices[order] = np.zeros([nwavs, ntimes])\n",
- " for i in range(nwavs):\n",
- " spectra[order]['wbin'+str(i)] = {}\n",
- " \n",
- " # Get central wavelength and limits:\n",
- " spectra[order]['wbin' + str(i)]['w'] = spectra[order]['w'][i]\n",
- " if i == 0:\n",
- " dwav_min = np.abs((spectra[order]['w'][i+1] - spectra[order]['w'][i]) / 2.)\n",
- " dwav_max = dwav_min\n",
- " elif i == nwavs-1:\n",
- " dwav_min = np.abs((spectra[order]['w'][i] - spectra[order]['w'][i-1]) / 2.)\n",
- " dwav_max = dwav_min\n",
- " else:\n",
- " dwav_min = np.abs(spectra[order]['w'][i] - spectra[order]['w'][i-1]) / 2.\n",
- " dwav_max = np.abs(spectra[order]['w'][i+1] - spectra[order]['w'][i]) / 2.\n",
- " \n",
- " # Extract lightcurve and errors:\n",
- " spectra[order]['wbin' + str(i)]['lc'] = spectra[order]['flux'][:,i]\n",
- " spectra[order]['wbin' + str(i)]['lc_errors'] = spectra[order]['flux_errors'][:,i]\n",
- " \n",
- " # Median normalize the extracted fluxes:\n",
- " median_flux = np.median(spectra[order]['wbin' + str(i)]['lc'][:100])\n",
- " spectra[order]['wbin' + str(i)]['lc'] = spectra[order]['wbin' + str(i)]['lc'] / median_flux\n",
- " spectra[order]['wbin' + str(i)]['lc_errors'] = spectra[order]['wbin' + str(i)]['lc_errors'] / median_flux\n",
- " matrices[order][i,:] = spectra[order]['wbin' + str(i)]['lc']\n",
- "\n",
- "times_hours = (spectra['times'] - spectra['times'][0]) * 24\n",
- "\n",
- "# Let's plot them:\n",
- "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 10))\n",
- "\n",
- "# First, order 1:\n",
    - "ax1.set_title('Order 1 lightcurves')\n",
- "im1 = ax1.imshow(matrices['order1'], interpolation='none')\n",
- "im1.set_clim(0.985, 1.01)\n",
- "\n",
- "# Y axis:\n",
- "ticks = np.arange(0, len(spectra['order1']['w']), 250) \n",
- "ticklabels = [\"{:0.2f}\".format(spectra['order1']['w'][i]) for i in ticks]\n",
- "ax1.set_yticks(ticks)\n",
- "ax1.set_yticklabels(ticklabels)\n",
- "ax1.set_ylabel(r'Wavelength ($\\mu$m)')\n",
- "\n",
- "# X axis:\n",
- "ticks = np.arange(0, ntimes, 100) \n",
- "ticklabels = [\"{:0.2f}\".format(times_hours[i]) for i in ticks]\n",
- "ax1.set_xticks(ticks)\n",
- "ax1.set_xticklabels(ticklabels)\n",
- "ax1.set_xlabel('Hours since start of observations')\n",
- "\n",
- "# Repeat for order 2:\n",
    - "ax2.set_title('Order 2 lightcurves')\n",
- "im2 = ax2.imshow(matrices['order2'],interpolation='none')\n",
- "im2.set_clim(0.985, 1.01)\n",
- "\n",
- "# Y axis:\n",
- "ticks = np.arange(0, len(spectra['order2']['w']), 250) \n",
- "ticklabels = [\"{:0.2f}\".format(spectra['order2']['w'][i]) for i in ticks]\n",
- "ax2.set_yticks(ticks)\n",
- "ax2.set_yticklabels(ticklabels)\n",
- "ax2.set_ylabel(r'Wavelength ($\\mu$m)')\n",
- "\n",
- "# X axis:\n",
- "ticks = np.arange(0, ntimes, 100) \n",
- "ticklabels = [\"{:0.2f}\".format(times_hours[i]) for i in ticks]\n",
- "ax2.set_xticks(ticks)\n",
- "ax2.set_xticklabels(ticklabels)\n",
- "ax2.set_xlabel('Hours since start of observations')\n",
- "fig.colorbar(im1, shrink=0.4, label='Relative flux')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "Nice! As can be seen, this plot comes in very handy. First of all, note how the transit shows up as a dark band at all wavelengths where it is expected (i.e., where we saw it in the white-light lightcurves --- time of transit around 3.5 hours after the start of observations). One can also see how the lightcurves get noisier at longer wavelengths, which is consistent with the image plots (the traces fade to the left of all of our image/trace plots). On top of this, some lightcurves for Order 1 at the shorter wavelengths seem a bit noisier/corrupted than usual. We won't worry too much about those here, but this is a very important application of this kind of plot, which allows us to visually spot outlier lightcurves in one simple image.\n",
- "\n",
    - "Let's now fit those lightcurves. To this end, we once again use `juliet` --- however, we fix the ephemerides and orbital parameters ($P$, $t_0$, $a/R_*$ and $b$) to those found in the white-light analysis, as they should be wavelength-independent. For this, we combine the posterior parameters from Orders 1 and 2. Thus, in these fits we only leave the transit depth, the out-of-transit flux and the \"jitter\" term as free parameters. \n",
- "\n",
- "Let's first combine the orbital parameters using the posterior distributions of our white-light fits:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "t0_median = np.median(np.append(spectra['order1']['results'].posteriors['posterior_samples']['t0_p1'],\n",
- " spectra['order2']['results'].posteriors['posterior_samples']['t0_p1']))\n",
- "\n",
- "b_median = np.median(np.append(b_1, b_2))\n",
- "rho_median = np.median(np.append(spectra['order1']['results'].posteriors['posterior_samples']['rho'],\n",
- " spectra['order2']['results'].posteriors['posterior_samples']['rho']))\n",
- "\n",
- "print(t0_median, b_median, rho_median)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "And define the priors of our fit using those:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Name of the parameters to be fit:\n",
- "params = ['P_p1', 't0_p1', 'rho', 'b_p1', 'q1_SOSS', 'q2_SOSS', 'ecc_p1', 'omega_p1',\n",
- " 'p_p1', 'mdilution_SOSS', 'mflux_SOSS', 'sigma_w_SOSS']\n",
- "\n",
- "# Distributions:\n",
- "dists = ['fixed', 'fixed', 'fixed', 'fixed', 'uniform', 'uniform', 'fixed', 'fixed',\n",
- " 'uniform', 'fixed', 'normal', 'loguniform']\n",
- "\n",
- "# Hyperparameters:\n",
- "hyperps = [4.4652997979, t0_median, rho_median, b_median, [0, 1], [0, 1], 0.0, 90.,\n",
- " [0., 0.2], 1.0, [0., 0.1], [0.1, 1000.]]\n",
- "\n",
- "priors = juliet.generate_priors(params, dists, hyperps)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "And now we ingest those values into our priors and fit the wavelength-dependent lightcurves. Note that **performing the fitting below at NIRISS/SOSS's native resolution can take _days_** (we need to run almost 3,000 `juliet` fits), so it is important to consider parallelizing these runs in a real application. \n",
    - "\n",
    - "For simplicity (and speed) in rendering this notebook, we have pickled and uploaded the results of these fits --- they were downloaded in the second cell of this notebook. By default, this notebook therefore reads those downloaded results directly. If you wish to run the fits on your own, set the `use_downloaded_results` variable below to `False`:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "use_downloaded_results = True"
- ]
- },
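    - {
    - "cell_type": "markdown",
    - "metadata": {},
    - "source": [
    - "Since each wavelength bin is fit independently, these runs are embarrassingly parallel. Below is a minimal sketch of how one could distribute them over processes with Python's built-in `multiprocessing` module --- here `fit_wavelength_bin` is a hypothetical wrapper that would contain the per-bin `juliet.load(...)`/`.fit(...)` calls of the serial loop further below:"
    - ]
    - },
    - {
    - "cell_type": "code",
    - "execution_count": null,
    - "metadata": {},
    - "outputs": [],
    - "source": [
    - "# Sketch only: parallelize the per-bin fits over processes.\n",
    - "# fit_wavelength_bin is a placeholder standing in for the per-bin\n",
    - "# juliet.load(...)/.fit(...) calls of the serial loop below.\n",
    - "from multiprocessing import Pool\n",
    - "\n",
    - "def fit_wavelength_bin(i):\n",
    - "    # ...run the juliet fit for wavelength bin i here...\n",
    - "    return i\n",
    - "\n",
    - "if not use_downloaded_results:\n",
    - "    with Pool(processes=4) as pool:\n",
    - "        results = pool.map(fit_wavelength_bin, range(10))"
    - ]
    - },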
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
    - "If the fits are run, we create a matrix similar to the one above, but now for the residuals of the wavelength-dependent light curves. If the results were downloaded, these have already been generated, so we simply extract them from those products:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# If use_downloaded_results is True, read results from downloaded file. If False, run fits and save results.\n",
- "residual_matrices = {}\n",
- "if use_downloaded_results:\n",
- "    # Use a context manager so the file handle is closed after reading:\n",
- "    with open(transit_fits_results_path, 'rb') as f:\n",
- "        transpec_results = pickle.load(f)\n",
- " for order in ['order1', 'order2']:\n",
- " residual_matrices[order] = transpec_results[order]['transmission_spectrum']['residual_matrix']\n",
- " spectra[order]['transmission_spectrum'] = {}\n",
- " spectra[order]['transmission_spectrum']['wavelengths'] = transpec_results[order]['transmission_spectrum']['wavelengths']\n",
- " spectra[order]['transmission_spectrum']['depths'] = transpec_results[order]['transmission_spectrum']['depths']\n",
- " spectra[order]['transmission_spectrum']['errors'] = transpec_results[order]['transmission_spectrum']['errors']\n",
- " \n",
- "else:\n",
- " for order in ['order1', 'order2']: \n",
- " spectra[order]['transmission_spectrum'] = {}\n",
- " spectra[order]['transmission_spectrum']['wavelengths'] = np.array([])\n",
- " spectra[order]['transmission_spectrum']['depths'] = np.array([])\n",
- " spectra[order]['transmission_spectrum']['errors']= np.array([])\n",
- " \n",
- " nwavs = len(spectra[order]['w'])\n",
- " residual_matrices[order] = np.zeros([nwavs, ntimes])\n",
- " for i in range(nwavs):\n",
- " \n",
- " # Save dataset in juliet format:\n",
- " times, fluxes, fluxes_error, norm_times = {},{},{}, {}\n",
- " times['SOSS'], fluxes['SOSS'], fluxes_error['SOSS'] = spectra['times'],\\\n",
- " spectra[order]['wbin' + str(i)]['lc'],\\\n",
- " spectra[order]['wbin' + str(i)]['lc_errors']\n",
- " \n",
- " # Load and fit dataset with juliet (save them to order*_juliet_results):\n",
- " spectra[order]['dataset'] = juliet.load(priors=priors, t_lc=times, y_lc=fluxes,\n",
- " yerr_lc=fluxes_error, ld_laws='squareroot',\n",
- " out_folder=order + '_wbin_' + str(i) + '_juliet_results')\n",
- "\n",
- " spectra[order]['results'] = spectra[order]['dataset'].fit()\n",
- "\n",
- " # Save transmission spectrum for plotting later:\n",
- " spectra[order]['transmission_spectrum']['wavelengths'] = np.append(spectra[order]['transmission_spectrum']['wavelengths'],\n",
- " spectra[order]['wbin'+str(i)]['w'])\n",
- " \n",
- " depths = ((spectra[order]['results'].posteriors['posterior_samples']['p_p1'])**2) * 1e6\n",
- " \n",
- " spectra[order]['transmission_spectrum']['depths'] = np.append(spectra[order]['transmission_spectrum']['depths'],\n",
- " np.median(depths))\n",
- " spectra[order]['transmission_spectrum']['errors'] = np.append(spectra[order]['transmission_spectrum']['errors'],\n",
- " np.sqrt(np.var(depths)))\n",
- " \n",
- " # Save residuals:\n",
- " residual_matrices[order][i,:] = (spectra[order]['wbin'+str(i)]['lc'] - \\\n",
- " spectra[order]['results'].lc.evaluate('SOSS')) * 1e6\n",
- " \n",
- "# Plot fits:\n",
- "fig,(ax1,ax2) = plt.subplots(1, 2, figsize=(15, 10))\n",
- "\n",
- "# First, order 1:\n",
- "ax1.set_title('Order 1 lightcurve residuals')\n",
- "im1 = ax1.imshow(residual_matrices['order1'], interpolation='none')\n",
- "im1.set_clim(-1000, 1000)\n",
- "\n",
- "# Y axis:\n",
- "ticks = np.arange(0, len(spectra['order1']['w']), 250) \n",
- "ticklabels = [\"{:0.2f}\".format(spectra['order1']['w'][i]) for i in ticks]\n",
- "ax1.set_yticks(ticks)\n",
- "ax1.set_yticklabels(ticklabels)\n",
- "ax1.set_ylabel(r'Wavelength ($\\mu$m)')\n",
- "\n",
- "# X axis:\n",
- "ticks = np.arange(0, ntimes, 100) \n",
- "ticklabels = [\"{:0.2f}\".format(times_hours[i]) for i in ticks]\n",
- "ax1.set_xticks(ticks)\n",
- "ax1.set_xticklabels(ticklabels)\n",
- "ax1.set_xlabel('Hours since start of observations')\n",
- "\n",
- "# Repeat for order 2:\n",
- "ax2.set_title('Order 2 lightcurve residuals')\n",
- "im2 = ax2.imshow(residual_matrices['order2'], interpolation='none')\n",
- "im2.set_clim(-1000, 1000)\n",
- "\n",
- "# Y axis:\n",
- "ticks = np.arange(0, len(spectra['order2']['w']), 250) \n",
- "ticklabels = [\"{:0.2f}\".format(spectra['order2']['w'][i]) for i in ticks]\n",
- "ax2.set_yticks(ticks)\n",
- "ax2.set_yticklabels(ticklabels)\n",
- "ax2.set_ylabel(r'Wavelength ($\\mu$m)')\n",
- "\n",
- "# X axis:\n",
- "ticks = np.arange(0, ntimes, 100) \n",
- "ticklabels = [\"{:0.2f}\".format(times_hours[i]) for i in ticks]\n",
- "ax2.set_xticks(ticks)\n",
- "ax2.set_xticklabels(ticklabels)\n",
- "ax2.set_xlabel('Hours since start of observations')\n",
- "fig.colorbar(im1, shrink=0.4, label='Residuals (ppm)')"
- ]
- },
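- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "As an optional sanity check (not in the original notebook), we can summarize the scatter of the residuals with the per-bin RMS; for well-behaved white noise this should be comparable to the photometric error bars:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Per-wavelength-bin RMS of the lightcurve residuals, in ppm:\n",
- "for order in ['order1', 'order2']:\n",
- "    rms = np.sqrt(np.mean(residual_matrices[order]**2, axis=1))\n",
- "    print(order, 'median per-bin residual RMS (ppm):', np.median(rms))"
- ]
- },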
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "All right! That looks _great_. The residual images appear consistent with pure random noise!\n",
- "\n",
- "Let's now see how our transmission spectrum looks. The model transmission spectrum we are using is taken from the ExoCTK website (https://exoctk.stsci.edu/generic), with properties that match those of HAT-P-1b. We'll plot the original, non-binned spectrum and overplot the binned spectrum on top:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "plt.figure(figsize=(15, 5)) \n",
- "\n",
- "nbin = 21\n",
- "\n",
- "# Plot order 1 transit spectra:\n",
- "plt.errorbar(spectra['order1']['transmission_spectrum']['wavelengths'],\n",
- " spectra['order1']['transmission_spectrum']['depths'],\n",
- " spectra['order1']['transmission_spectrum']['errors'],\n",
- " fmt='.', mfc='orangered', mec='orangered', ecolor='orangered',\n",
- " ms=8, elinewidth=1, alpha=0.05, label=None)\n",
- "# Bin:\n",
- "wb1, db1, db1_err = juliet.utils.bin_data(spectra['order1']['transmission_spectrum']['wavelengths'],\n",
- " spectra['order1']['transmission_spectrum']['depths'],\n",
- " nbin)\n",
- "plt.errorbar(wb1, db1, db1_err,\n",
- " fmt='o', mfc='white', \n",
- " mec='orangered', ecolor='orangered',\n",
- " ms=8, elinewidth=1, label='NIRISS/SOSS Order 1')\n",
- "\n",
- "# Same, order 2:\n",
- "plt.errorbar(spectra['order2']['transmission_spectrum']['wavelengths'],\n",
- " spectra['order2']['transmission_spectrum']['depths'],\n",
- " spectra['order2']['transmission_spectrum']['errors'],\n",
- " fmt='.', mfc='cornflowerblue', mec='cornflowerblue',\n",
- " ecolor='cornflowerblue', ms=8, elinewidth=1, alpha=0.05, label = None)\n",
- "\n",
- "wb2, db2, db2_err = juliet.utils.bin_data(spectra['order2']['transmission_spectrum']['wavelengths'],\n",
- " spectra['order2']['transmission_spectrum']['depths'],\n",
- " nbin)\n",
- "plt.errorbar(wb2, db2, db2_err,\n",
- " fmt='o', mfc='white', \n",
- " mec='cornflowerblue', ecolor='cornflowerblue',\n",
- " ms=8, elinewidth=1, label='NIRISS/SOSS Order 2')\n",
- "\n",
- "# Plot the transit spectrum model:\n",
- "wavelength_model, transit_depth_model = np.loadtxt(transit_model_path, unpack=True)\n",
- "plt.plot(wavelength_model, transit_depth_model*1e6, color='black', lw=1, label='Model spectrum')\n",
- "\n",
- "# Mark some features in the plot:\n",
- "plt.text(0.6, 14800, 'Na', fontsize=13)\n",
- "plt.text(0.78, 14600, 'K', fontsize=13)\n",
- "plt.text(1.15, 14300, 'H$_2$O', fontsize=13)\n",
- "plt.text(1.4, 14500, 'H$_2$O', fontsize=13)\n",
- "plt.text(1.85, 14550, 'H$_2$O', fontsize=13)\n",
- "plt.text(2.5, 14700, 'H$_2$O', fontsize=13)\n",
- "\n",
- "# Define plot limits:\n",
- "plt.xlim(0.54, 2.85)\n",
- "plt.ylim(14000, 15000)\n",
- "plt.ylabel('Transit depth (ppm)')\n",
- "plt.xlabel(r'Wavelength ($\\mu$m)')\n",
- "plt.legend()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Interesting! There are clear hints of the Na and K features in the blue-optical bandpass of Order 2. The Order 1 transmission spectrum, on the other hand, shows a very clear detection of several water bands across the entire NIRISS/SOSS range. There seem to be some problems at around ~1.75-2 microns, where the contamination overlap between orders 1 and 2 is largest; this is expected given the simple extraction routine used above.\n",
- "\n",
- "Overall, these results showcase the power of NIRISS/SOSS observations for targeting both optical and near-infrared features in the transit spectrum of HAT-P-1b."
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.8.10"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/notebooks/soss-transit-spectroscopy/pre-requirements.txt b/notebooks/soss-transit-spectroscopy/pre-requirements.txt
deleted file mode 100644
index 83471286d..000000000
--- a/notebooks/soss-transit-spectroscopy/pre-requirements.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-numpy==1.18.5
-pybind11==2.5.0
-radvel==1.4.0
diff --git a/notebooks/soss-transit-spectroscopy/requirements.txt b/notebooks/soss-transit-spectroscopy/requirements.txt
deleted file mode 100644
index ed85c0779..000000000
--- a/notebooks/soss-transit-spectroscopy/requirements.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-scipy >= 1.3.1
-matplotlib >= 3.1.3
-seaborn >= 0.9.0
-astropy >= 4.0.1
-jwst >= 0.16.1
-corner >= 2.0.1
-juliet >= 2.0.25