diff --git a/pipeline_products_session/jwebbinar-intro-to-resources.pptx b/pipeline_products_session/jwebbinar-intro-to-resources.pptx
deleted file mode 100644
index b46d81f..0000000
Binary files a/pipeline_products_session/jwebbinar-intro-to-resources.pptx and /dev/null differ
diff --git a/pipeline_products_session/jwst-data-products-part1-live.ipynb b/pipeline_products_session/jwst-data-products-part1-live.ipynb
index 07f3127..c2ccc51 100644
--- a/pipeline_products_session/jwst-data-products-part1-live.ipynb
+++ b/pipeline_products_session/jwst-data-products-part1-live.ipynb
@@ -7,26 +7,43 @@
"\n",
"# JWST Data Products: Uncalibrated Data \n",
"--------------------------------------------------------------\n",
- "**Author**: Alicia Canipe (acanipe@stsci.edu) | **Latest update**: March 24, 2021.\n",
+ "**Author**: Alicia Canipe (acanipe@stsci.edu) | **Latest update**: April 26, 2021.\n",
"\n",
+ "
\n",
+ "
Notebook Goals
\n",
+ "
Using an uncalibrated (raw) JWST exposure, we will:
\n",
+ "
\n",
+ " - Begin exploring JWST data formats and meta data using Astropy tools
\n",
+ " - Introduce JWST data models and use them to explore our data
\n",
+ " - Bonus information: Other uses for the data models
\n",
+ "
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
"## Table of contents\n",
"1. [Introduction](#intro)\n",
" 1. [Resources](#resources) \n",
- "2. [Data in MAST](#mast)\n",
- "3. [Example data for this exercise](#example)\n",
- "4. [Examining an exposure with astropy](#astro)\n",
+ " 2. [Data in MAST](#mast)\n",
+ "2. [Example data for this exercise](#example)\n",
+ "3. [Examining an exposure with astropy](#astro)\n",
" 1. [Format](#astro-format)\n",
" 2. [Metadata](#astro-meta)\n",
" 3. [Vizualizing data](#astro-viz)\n",
- "5. [A different perspective: JWST data models](#model) \n",
+ " 4. [Exercise 1](#exercise-1)\n",
+ "4. [A different perspective: JWST data models](#model) \n",
" 1. [Current models](#list)\n",
- " 1. [Format](#model-format)\n",
- " 2. [Metadata](#model-meta)\n",
- "6. [Other ways to use the models](#use)\n",
+ " 2. [Format](#model-format)\n",
+ " 3. [Metadata](#model-meta)\n",
+ " 4. [Exercise 2](#exercise-2) \n",
+ "5. [Bonus: Other ways to use the models](#use)\n",
" 1. [Create data from scratch](#scratch)\n",
" 2. [Create data from a file](#file)\n",
- "7. [Simulations](#simulations)\n",
- "8. [Exercise](#exercise)"
+ " 3. [Simulations](#simulations)\n",
+ "6. [Exercise solutions](#solutions) "
]
},
{
@@ -36,7 +53,7 @@
"1.-Introduction \n",
"------------------\n",
"\n",
- "Welcome to the first module about JWST data products! JWST is a complex observatory with four instruments and many modes, so there is a lot to learn about about the different types of data and their formats, and the tools available to help observers examine and analyze their data. In this session, we will examine JWST data products and how they change as they go through the pipeline. We will start with uncalibrated data and proceed through the processing stages of the JWST data calibration pipeline (hereafter, the pipeline) in separate modules, highlighting important notes along the way. Detailed information about how to run the pipeline will be saved for the next couple of JWebbinars.\n",
+ "Welcome to the first module about JWST data products! JWST is a complex observatory with four instruments and many modes, so there is a lot to learn about about the different types of data and their formats, and the tools available to help observers examine and analyze their data. In this JWebbinar, we will examine JWST data products and how they change as they go through the pipeline. We will start with uncalibrated data and proceed through the processing stages of the JWST data calibration pipeline (hereafter, the pipeline) in separate modules, highlighting important notes along the way. Detailed information about how to run the pipeline will be saved for the next couple of JWebbinars.\n",
"\n",
"Most JWST science data products are in FITS format, which should be familiar to observers. However, there are ancillary input and output files for the pipeline that are not; there are JSON files (used to associate different observations), ASDF files (typically pipeline configuration files), and ECSV files (for ASCII table data, such as catalogs). \n",
"\n",
@@ -45,11 +62,26 @@
"### A.-Resources\n",
"\n",
"\n",
- "Visit the [webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars) to find resources for:\n",
- "* The Mikulski Archive for Space Telescopes (MAST) \n",
- "* JWST Documentation (JDox) for JWST data products\n",
- "* The most up-to-date information about JWST data products in the pipeline readthedocs\n",
- "* Pipeline roadmaps for when to recalibrate your data\n",
+ "* [STScI Webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars)\n",
+ "* [The Mikulski Archive for Space Telescopes (MAST)](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)\n",
+ "* [JWST Documentation (JDox) for JWST data products](https://jwst-docs.stsci.edu/obtaining-data)\n",
+ "* [The most up-to-date information about JWST data products in the pipeline readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/index.html)\n",
+ "\n",
+ "### B.-Data in MAST\n",
+ "\n",
+ "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
+ "\n",
+ "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
+ "\n",
+ "Standard science data files include:\n",
+ "\n",
+ "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
+ "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
+ "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal``` or ```calints```\n",
+ "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
+ "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
+ "\n",
+ "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. \n",
"\n",
"Before we begin, import the libraries used in this notebook:"
]
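The FITS layout the notebook describes (a header-only PRIMARY HDU plus IMAGE/BINTABLE extensions) can be mocked up with plain `astropy.io.fits` calls. This is a minimal sketch with invented keyword values and a tiny array, not the notebook's real 2048×2048 Box data:

```python
# Sketch of an uncal-style FITS layout: header-only PRIMARY HDU plus a 4-D
# SCI extension with axes (integrations, groups, rows, columns).
# All dimensions and keyword values here are illustrative only.
import numpy as np
from astropy.io import fits

nints, ngroups, ny, nx = 1, 5, 32, 32  # tiny stand-in for a full detector

primary = fits.PrimaryHDU()            # NAXIS=0: metadata only, no data array
primary.header["INSTRUME"] = "NIRCAM"  # illustrative keyword values
primary.header["NINTS"] = nints
primary.header["NGROUPS"] = ngroups

sci = fits.ImageHDU(
    data=np.zeros((nints, ngroups, ny, nx), dtype=np.uint16), name="SCI"
)

hdul = fits.HDUList([primary, sci])
hdul.info()                            # same kind of summary .info() prints for a real uncal file
print([hdu.name for hdu in hdul])      # ['PRIMARY', 'SCI']
```

A real uncal file simply has more extensions (GROUP, ASDF, sometimes ZEROFRAME and INT_TIMES) and full-frame dimensions.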
@@ -64,8 +96,8 @@
"import os\n",
"import inspect\n",
"\n",
- "# To get data from Box\n",
- "import requests\n",
+ "# Image loader\n",
+ "from IPython.display import Image\n",
"\n",
"# Numpy library:\n",
"import numpy as np\n",
@@ -100,7 +132,6 @@
"# Use this version for non-interactive plots (easier scrolling of the notebook)\n",
"%matplotlib inline\n",
"\n",
- "\n",
"# These gymnastics are needed to make the sizes of the figures\n",
"# be the same in both the inline and notebook versions\n",
"%config InlineBackend.print_figure_kwargs = {'bbox_inches': None}\n",
@@ -122,48 +153,14 @@
"metadata": {},
"outputs": [],
"source": [
- "def download_file(url):\n",
- " \"\"\"Download into the current working directory the\n",
- " file from Box given the direct URL\n",
- " \n",
- " Parameters\n",
- " ----------\n",
- " url : str\n",
- " URL to the file to be downloaded\n",
- " \n",
- " Returns\n",
- " -------\n",
- " download_filename : str\n",
- " Name of the downloaded file\n",
- " \"\"\"\n",
- " response = requests.get(url, stream=True)\n",
- " if response.status_code != 200:\n",
- " raise RuntimeError(\"Wrong URL - {}\".format(url))\n",
- " download_filename = response.headers['Content-Disposition'].split('\"')[1]\n",
- " with open(download_filename, 'wb') as f:\n",
- " for chunk in response.iter_content(chunk_size=1024):\n",
- " if chunk:\n",
- " f.write(chunk)\n",
- " return download_filename"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def create_image(data_2d, vmin, vmax, xpixel=None, ypixel=None, title=None):\n",
+ "def create_image(data_2d, title=None):\n",
" ''' Function to generate a 2D image of the data, \n",
" with an option to highlight a specific pixel.\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
- " \n",
- " if xpixel and ypixel:\n",
- " plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')\n",
+ " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=4000, vmax=12000)\n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
@@ -182,18 +179,13 @@
"metadata": {},
"outputs": [],
"source": [
- "def plot_ramp(groups, signal, xpixel=None, ypixel=None, title=None):\n",
+ "def plot_ramp(groups, signal, title=None):\n",
" ''' Function to generate the ramp for pixel.\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " if xpixel and ypixel:\n",
- " plt.plot(groups, signal, marker='o', label='Pixel ('+str(xpixel)+','+str(ypixel)+')') \n",
- " plt.legend(loc=2)\n",
- "\n",
- " else:\n",
- " plt.plot(groups, signal, marker='o')\n",
+ " plt.plot(groups, signal, marker='o')\n",
" \n",
" plt.xlabel('Groups')\n",
" plt.ylabel('Signal (DN)')\n",
@@ -215,36 +207,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "2.-Data in MAST \n",
- "------------------\n",
- "\n",
- "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
- "\n",
- "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
- "\n",
- "Standard science data files include:\n",
- "\n",
- "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
- "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
- "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal```\n",
- "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
- "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
- "\n",
- "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#title_ID)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "3.-Example data for this exercise \n",
+ "2.-Example data for this exercise \n",
"------------------\n",
"\n",
"For this module, we will use an uncalibrated NIRCam simulated imaging exposure that is stored in Box. For the exercise, we won't tell you what it is. You have to figure it out yourself! Let's grab the exposures:"
@@ -260,11 +223,20 @@
"source": [
"# Data for the notebook\n",
"uncal_obs_link = \"https://stsci.box.com/shared/static/mpbrc3lszdjif6kpcw1acol00e0mm2zh.fits\"\n",
- "uncal_obs = download_file(uncal_obs_link)\n",
+ "uncal_obs = \"example_nircam_imaging_uncal.fits\"\n",
+ "demo_file = download_file(uncal_obs_link+uncal_obs)\n",
"\n",
- "# Data for the exercise \n",
+ "# Data for the exercise \n",
"exercise_obs_link = \"https://stsci.box.com/shared/static/l1aih8rmwbtzyupv8hsl0adfa36why30.fits\"\n",
- "exercise_obs = download_file(exercise_obs_link) "
+ "exercise_obs = \"example_exercise_uncal.fits\"\n",
+ "demo_ex_file = download_file(exercise_obs_link+exercise_obs)\n",
+ "\n",
+ "# Save the files so that we can use them later\n",
+ "with fits.open(demo_file, ignore_missing_end=True) as f:\n",
+ " f.writeto(uncal_obs, overwrite=True)\n",
+ " \n",
+ "with fits.open(demo_ex_file, ignore_missing_end=True) as f:\n",
+ " f.writeto(exercise_obs, overwrite=True) "
]
},
{
@@ -278,23 +250,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "4.-Examining an exposure with astropy\n",
+ "3.-Examining an exposure with astropy\n",
"------------------\n",
"\n",
"Many of you may be familiar with using [astropy](https://docs.astropy.org/en/stable/) to examine data. Here, we will take a look at the format and headers using standard ```astropy``` tools. \n",
"\n",
"### A.-Format\n",
"\n",
- "Below, we see the typical extensions in a raw JWST data file. All data related to the product are contained in one or more FITS IMAGE or BINTABLE extensions, and the header of each extension may contain keywords that are uniquely related to that extension.\n",
- "\n",
- "* PRIMARY: The primary Header Data Unit (HDU) only contains header information, in the form of keyword records, with an empty data array (indicated by the occurence of NAXIS=0 in the primary header. Meta data that pertains to the entire product is stored in keywords in the primary header. Meta data related to specific extensions (see below) is stored in keywords in the headers of each extension.\n",
- "* SCI: 4-D data array containing the raw pixel values. The first two dimensions are equal to the size of the detector readout, with the data from multiple groups (NGROUPS) within each integration stored along the 3rd axis, and the multiple integrations (NINTS) stored along the 4th axis.\n",
- "* ZEROFRAME: 3-D data array containing the pixel values of the zero-frame for each integration in the exposure, where each plane of the cube corresponds to a given integration. Only appears if the zero-frame data were requested to be downlinked separately.\n",
- "* GROUP: A table of meta data for some (or all) of the data groups.\n",
- "* INT_TIMES: A table of begining, middle, and end time stamps for each integration in the exposure.\n",
- "* ADSF: The data model meta data.\n",
- "\n",
- "Additional extensions can be included for certain instruments and readout types. The [JWST software readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/science_products.html) contains the most up-to-date information about JWST formats. "
+ "Below, we see the typical extensions in a raw JWST data file. All data related to the product are contained in one or more FITS IMAGE or BINTABLE extensions, and the header of each extension may contain keywords that are uniquely related to that extension."
]
},
{
@@ -303,7 +266,23 @@
"metadata": {},
"outputs": [],
"source": [
- "# Let's take a high level look at our uncalibrated file \n"
+ "# Let's take a high level look at our uncalibrated file with .info()\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "So what you see above is:\n",
+ "\n",
+ "* PRIMARY: The primary Header Data Unit (HDU) only contains header information, in the form of keyword records, with an empty data array (indicated by the occurence of NAXIS=0 in the primary header. Meta data that pertains to the entire product is stored in keywords in the primary header. Meta data related to specific extensions (see below) is stored in keywords in the headers of each extension.\n",
+ "* SCI: 4-D data array containing the raw pixel values. The first two dimensions are equal to the size of the detector readout, with the data from multiple groups (NGROUPS) within each integration stored along the 3rd axis, and the multiple integrations (NINTS) stored along the 4th axis.\n",
+ "* ZEROFRAME: 3-D data array containing the pixel values of the zero-frame for each integration in the exposure, where each plane of the cube corresponds to a given integration. Only appears if the zero-frame data were requested to be downlinked separately.\n",
+ "* GROUP: A table of meta data for some (or all) of the data groups.\n",
+ "* ADSF: The data model meta data. This extension can be read using The Advanced Scientific Data Format (ASDF), which is a next-generation format for scientific data. ASDF is a tool for reading and writing ASDF files. More information about the ASDF file standard is in the [ASDF software readthedocs](https://asdf.readthedocs.io/en/stable/).\n",
+ "* (INT_TIMES): You may also see a table of begining, middle, and end time stamps for each integration in the exposure.\n",
+ "\n",
+ "Additional extensions can be included for certain instruments and readout types. The [JWST software readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/science_products.html) contains the most up-to-date information about JWST formats. "
]
},
{
@@ -335,7 +314,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The science data shape here shows the number of integrations, groups, rows (pixels), and columns (pixels), which reflects the up-the-ramp readout (also referred to as MULTIACCUM) standardized readout sampling for all JWST detectors (read more in the [JWST User Documentation](https://jwst-docs.stsci.edu/understanding-exposure-times)). We'll talk about this more in the following sections. For now, let's look at the associated headers and other metadata. "
+ "The science data shape here shows the number of integrations, groups, rows (pixels), and columns (pixels), which reflects the up-the-ramp readout (also referred to as MULTIACCUM) standardized readout sampling for all JWST detectors (read more in the [JWST User Documentation](https://jwst-docs.stsci.edu/understanding-exposure-times)). Let's look at the associated headers and other metadata. "
]
},
{
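As a concrete illustration of that (NINTS, NGROUPS, rows, columns) axis ordering, here is a small numpy sketch; the dimensions are made up rather than read from the notebook's file:

```python
# Hypothetical dimensions showing how the 4-D uncal array indexes map to
# the up-the-ramp quantities: axis 0 -> integration, axis 1 -> group,
# axes 2-3 -> detector pixels.
import numpy as np

nints, ngroups, nrows, ncols = 2, 10, 64, 64
science_data = np.zeros((nints, ngroups, nrows, ncols), dtype=np.uint16)
print(science_data.shape)                      # (2, 10, 64, 64)

last_group = science_data[0, -1]               # last group of the first integration: a 2-D frame
print(last_group.shape)                        # (64, 64)

one_pixel_ramp = science_data[0, :, 32, 32]    # all groups for one pixel: a ramp
print(one_pixel_ramp.shape)                    # (10,)
```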
@@ -364,34 +343,19 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# What's the observation ID, instrument, exposure type, detector? \n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
+ "metadata": {
+ "scrolled": true
+ },
"outputs": [],
"source": [
- "# What about the data dimensions? Integrations, groups, xsize, ysize?\n"
+ "# Print all the primary headers \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Additional metadata is stored in the ASDF extension. This extension can be read using The Advanced Scientific Data Format (ASDF), which is a next-generation format for scientific data. ASDF is a tool for reading and writing ASDF files. More information about the ASDF file standard is in the [ASDF software readthedocs](https://asdf.readthedocs.io/en/stable/). The format has the following features:\n",
- "\n",
- "* A hierarchical, human-readable metadata format (implemented using YAML)\n",
- "* Numerical arrays are stored as binary data blocks which can be memory mapped. Data blocks can optionally be compressed.\n",
- "* The structure of the data can be automatically validated using schemas (implemented using JSON Schema)\n",
- "* Native Python data types (numerical types, strings, dicts, lists) are serialized automatically\n",
- "* ASDF can be extended to serialize custom data types\n",
- "\n",
- "Right now, you don't need to worry about ASDF too much. We'll talk about it more when we discuss configuration files and accessing the WCS information in the following modules. Below, we provide a simple example of how to access the ASDF extension:"
+ "Search for FITS headers with the wildcard asterisk:"
]
},
{
@@ -400,8 +364,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Grab the ASDF extension data and header (use: asdf_data, asdf_metadata)\n",
- " "
+ "# Try finding all headers with \"OBS\" in the name\n"
]
},
{
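The wildcard lookup the new cell hints at works through astropy's `Header` indexing, where `*` matches any trailing characters in a keyword name. A small sketch using a synthetic header with invented keyword values, rather than the downloaded exposure:

```python
# Wildcard keyword search on an astropy FITS header.
from astropy.io import fits

hdr = fits.Header()
hdr["OBS_ID"] = "V87600001001P0000000002101"  # illustrative values
hdr["OBSLABEL"] = "Example label"
hdr["INSTRUME"] = "NIRCAM"

matches = hdr["OBS*"]     # returns a filtered Header of matching cards
print(repr(matches))
print(len(matches))       # 2 -> OBS_ID and OBSLABEL, but not INSTRUME
```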
@@ -409,14 +372,18 @@
"execution_count": null,
"metadata": {},
"outputs": [],
- "source": []
+ "source": [
+ "# What's the observation ID, instrument, exposure type?\n"
+ ]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
- "source": []
+ "source": [
+ "# What about the data dimensions? Integrations, groups, xsize, ysize?\n"
+ ]
},
{
"cell_type": "markdown",
@@ -429,9 +396,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "In the previous section, we mentioned [up-the-ramp sampling](https://jwst-docs.stsci.edu/understanding-exposure-times) for IR detectors. During an integration, the detectors accumulate charge while being read out multiple times following predefined readout patterns for the different instruments. The readout process is non-destructive, leaving charge unaffected and in place (charge is not transferred between pixels as in CCDs). After each integration, the pixels are read out a final time and then reset, releasing their charge. \n",
- "\n",
- "Multiple non-destructive *frames* are averaged into a *group*, depending on the readout pattern selected. Breaking exposures into multiple *integrations* is most useful for bright sources that would saturate in longer integrations. \n",
+ "If you remember from the introductory slides, we mentioned [up-the-ramp sampling](https://jwst-docs.stsci.edu/understanding-exposure-times) for IR detectors. Multiple non-destructive *frames* are averaged into a *group*, depending on the readout pattern selected. Exposures are broken up into multiple *integrations*, which is useful for sources that would saturate in longer integrations. \n",
"\n",
"As such, the components of each up-the-ramp exposure are: \n",
"* NINTS: number of integrations per exposure.\n",
@@ -443,7 +408,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Let's select one integration for a particular pixel and examine the ramp. **Note**: this is uncalibrated data, so the detector effects are still present and the signal in each group will vary due to bias drift, reference pixel corrections, etc. not being performed yet. "
+ "Let's select one **integration** for a particular pixel and examine the ramp, and then one **group** to look at the detector image. *Note*: this is uncalibrated data, so the detector effects are still present and the signal in each group will vary due to bias drift, reference pixel corrections, etc. not being performed yet. "
]
},
{
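The ramp idea can be sketched numerically: signal accumulates roughly linearly with group number, and the count rate is what a line fit to the ramp recovers. The bias and rate values below are invented for illustration; a real raw ramp also carries noise and the detector effects noted above, which Stage 1 of the pipeline removes:

```python
# Illustrative up-the-ramp pixel: linearly accumulating signal on top of a
# detector pedestal, with the count rate recovered by a straight-line fit.
import numpy as np

ngroups = 10
groups = np.arange(1, ngroups + 1)
bias = 5000.0   # DN, pedestal still present in raw data (invented value)
rate = 120.0    # DN per group (invented value)
signal = bias + rate * groups

slope, intercept = np.polyfit(groups, signal, 1)
print(round(slope), round(intercept))   # 120 5000
```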
@@ -477,7 +442,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We can also visualize the full NIRCam array for the last group in our integration, below. Again, this is a raw exposure, so none of the detector effects have been removed. The four amplifiers of the detector are visible, along with other features (e.g., an epoxy void region). "
+ "Next, we can visualize the full NIRCam array for the group we selected above. Again, this is a raw exposure, so none of the detector effects have been removed. The four amplifiers of the detector are visible, along with other features (e.g., an epoxy void region). "
]
},
{
@@ -489,6 +454,50 @@
"# Create an image of one integration and one group\n"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### D.-Exercise 1\n",
+ "Now, you try it!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the exercise file (\"exercise_obs\") using FITS to get the headers (hint: getheader)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the exercise file (\"exercise_obs\") using FITS to get the data (hint: getdata)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# How many extensions are there in this file? (hint: fits.info())\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# What instrument and mode is this data for? (hint: INSTRUME, EXP_TYPE)\n"
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -500,12 +509,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "5.-A different perspective: JWST data models\n",
+ "4.-A different perspective: JWST data models\n",
"------------------\n",
"\n",
- "Now that we've tried using [astropy](https://docs.astropy.org/en/stable/) to examine the data, we can explore an alternative method that removes some of the complexity and peculiarities of JWST data. Here, we will take a look at the format and headers using [JWST data models](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/). \n",
+ "Now that we've tried using [astropy](https://docs.astropy.org/en/stable/) to examine the data, we can explore an alternative method that removes some of the complexity and peculiarities of JWST data. Here, we will take a look at the format and headers using [JWST data models](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/). We mentioned the data models already in the introductory slides. \n",
+ "\n",
+ "JWST data models are important if you are working with JWST data and associated software, since much of the JWST software assumes the use of data models. They help insulate steps, pipelines, and users from the complexities of JWST file formats, and allow us to maintain a common framework for the data across the JWST-specific software. You can think of them as a container for your data that allows for consistency in formatting, data types, and expected headers for all the different types of data. \n",
"\n",
- "There are different data model classes for different kinds of data. Each model generally has several arrays that are associated with it. For example, the ImageModel class has the following arrays associated with it:\n",
+ "There are different data models for different kinds of data. Each model generally has several arrays that are associated with it. For example, the ImageModel has the following arrays associated with it:\n",
"\n",
"* data: The science data\n",
"* dq: The data quality array\n",
@@ -519,21 +530,16 @@
"metadata": {},
"source": [
"### A.-Current models \n",
- "--------------------------------------------------------------------\n",
- "The data model package includes specific and general models to use for both science data and calibration reference files. For example, to generate a FITS file that is compatible with the [Stage 1 calibration pipeline](https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_detector1.html), you would need to use a model for [up-the-ramp sampled](https://jwst-docs.stsci.edu/understanding-exposure-times#UnderstandingExposureTimes-uptherampHowup-the-rampreadoutswork) IR data: the [RampModel](https://jwst-pipeline.readthedocs.io/en/latest/api/jwst.datamodels.RampModel.html#jwst.datamodels.RampModel). If instead you would like to analyze a 2-D JWST image, you could use the [ImageModel](https://jwst-pipeline.readthedocs.io/en/latest/api/jwst.datamodels.ImageModel.html#jwst.datamodels.ImageModel). Or, if you are unsure, you could let the data model package [guess for you](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/models.html#opening-a-file).\n",
"\n",
- "The full list of current models is maintained in the [JWST pipeline software](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/attributes.html#list-of-current-models). "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "# print a list of the current data models\n"
+ "The data model package includes specific and general models to use for both science data and calibration files. For example, to generate a FITS file that is compatible with the [Stage 1 calibration pipeline](https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_detector1.html), you would need to use a model for [up-the-ramp sampled](https://jwst-docs.stsci.edu/understanding-exposure-times#UnderstandingExposureTimes-uptherampHowup-the-rampreadoutswork) IR data: the [RampModel](https://jwst-pipeline.readthedocs.io/en/latest/api/jwst.datamodels.RampModel.html#jwst.datamodels.RampModel). If instead you would like to analyze a 2-D JWST image, you could use the [ImageModel](https://jwst-pipeline.readthedocs.io/en/latest/api/jwst.datamodels.ImageModel.html#jwst.datamodels.ImageModel). Or, if you are unsure, you could let the data model package [guess for you](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/models.html#opening-a-file).\n",
+ "\n",
+ "The full list of current models is maintained in the [JWST pipeline software](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/attributes.html#list-of-current-models). \n",
+ "\n",
+ "You can also get the list programatically:\n",
+ "```python\n",
+ "# Here is a command to print a list of the current JWST data models \n",
+ "inspect.getmembers(datamodels, inspect.isclass)\n",
+ "```"
]
},
{
@@ -578,7 +584,16 @@
"metadata": {},
"outputs": [],
"source": [
- "# Open the uncal_obs and let the datamodel package decide which model is best\n"
+ "# Open the uncal_obs file, letting the datamodel package decide which model is best, and use .info()\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Or use a specific model (e.g., RampModel):\n"
]
},
{
@@ -600,7 +615,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Get the data and the shape of the data like with did before \n"
+ "# Get the data and the shape of the data like with did before (use: science_data)\n"
]
},
{
@@ -641,7 +656,7 @@
},
"outputs": [],
"source": [
- "# Check out the schema\n"
+ "# Check out the schema or framework\n"
]
},
{
@@ -660,6 +675,15 @@
"# Search the schema\n"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Or, use \"search\" to get more detailed information (e.g., data type)\n"
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -670,7 +694,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "scrolled": true
+ },
"outputs": [],
"source": [
"# Look at all the metadata \n"
@@ -733,6 +759,32 @@
"# See, it's easy!\n"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### D.-Exercise 2\n",
+ "Now, you try it!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Try loading the exercise data (\"exercise_obs\") using a model (hint: datamodels.open())\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# How can I figure out the readout pattern for this data? (hint: search_schema(), search(key=\"read\"))\n"
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -744,7 +796,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "6.-Other ways to use the models \n",
+ "5.-Bonus: Other ways to use the models \n",
"--------------------------------------------------------------------\n",
"The data models can be used to [create data from scratch](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/models.html#creating-a-data-model-from-scratch) or to [read in an existing FITS file or data array](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/models.html#creating-a-data-model-from-a-file). This is useful if you are trying to run an exposure through the JWST pipeline or read in an exposure to a JWST software tool or data analysis notebook, because certain checks on the data and metadata are performed when added to an existing model. Simulated data created using ```Mirage``` or ```Mirisim``` is directly compatible with the JWST pipeline, because both software tools use the data models during the creation of the simulations. "
]
@@ -771,7 +823,7 @@
},
"outputs": [],
"source": [
- "# Create an ImageModel from scratch with size (1024, 1024), and search the schema for \"instrument\"\n"
+ "# Create an ImageModel from scratch with size (1024, 1024), and search the schema for \"instrument\" keywords\n"
]
},
{
@@ -906,15 +958,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "[Top of Page](#title_ID)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "7.-Simulations\n",
- "--------------------------------------------------------------------\n",
+ "### C.-Simulations\n",
+ "\n",
"The benefit to using existing simulation software such as [Mirage](https://jwst-docs.stsci.edu/jwst-other-tools/mirage-data-simulator) (for NIRCam, NIRISS, and FGS simulations) or [Mirisim](https://www.stsci.edu/jwst/science-planning/proposal-planning-toolbox/mirisim) (for MIRI simulations) is that the outputs are directly compatible with JWST software, such as the [calibration pipeline](https://jwst-docs.stsci.edu/jwst-data-reduction-pipeline). "
]
},
@@ -970,18 +1015,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "8.-Exercise\n",
+ "6.-Exercise solutions \n",
"--------------------------------------------------------------------\n",
- "Now, you try it!"
+ "Below are the solutions for [Exercise 1](#exercise-1) and [Exercise 2](#exercise-2). "
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "# Load the exercise data using FITS\n"
+ "### Exercise 1"
]
},
{
@@ -990,7 +1033,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Now try loading the exercise data using a model\n"
+ "# Load the exercise file (\"exercise_obs\") using FITS to get the headers (hint: getheader)\n",
+ "mystery_header = fits.getheader(exercise_obs, 'PRIMARY')"
]
},
{
@@ -999,7 +1043,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# What instrument and mode is this data for?\n"
+ "# Load the exercise file (\"exercise_obs\") using FITS to get the data (hint: getdata)\n",
+ "mystery_data = fits.getdata(exercise_obs, 'SCI')"
]
},
{
@@ -1008,7 +1053,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# How many integrations and groups are there?\n"
+ "# How many extensions are there in this file? (hint: fits.info())\n",
+ "fits.info(exercise_obs)"
]
},
{
@@ -1017,7 +1063,15 @@
"metadata": {},
"outputs": [],
"source": [
- "# Does that match the data shape? \n"
+ "# What instrument and mode is this data for? (hint: INSTRUME, EXP_TYPE)\n",
+ "mystery_header['INSTRUME'], mystery_header['EXP_TYPE']"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 2"
]
},
{
@@ -1026,7 +1080,9 @@
"metadata": {},
"outputs": [],
"source": [
- "# What is the model metadata path to find the readout pattern? \n"
+ "# Try loading the exercise data (\"exercise_obs\") using a model (hint: datamodels.open())\n",
+ "mystery_data = datamodels.open(exercise_obs)\n",
+ "mystery_data.info()"
]
},
{
@@ -1035,7 +1091,9 @@
"metadata": {},
"outputs": [],
"source": [
- "# Create an image of the 3rd group in the 1st integration \n"
+ "# How can I figure out the readout pattern for this data? (hint: search_schema(), search(key=\"read\"))\n",
+ "mystery_data.search_schema('read')\n",
+ "mystery_data.search(key='read')"
]
},
{
diff --git a/pipeline_products_session/jwst-data-products-part1-static.ipynb b/pipeline_products_session/jwst-data-products-part1-solutions.ipynb
similarity index 72%
rename from pipeline_products_session/jwst-data-products-part1-static.ipynb
rename to pipeline_products_session/jwst-data-products-part1-solutions.ipynb
index f144c2b..9005c0a 100644
--- a/pipeline_products_session/jwst-data-products-part1-static.ipynb
+++ b/pipeline_products_session/jwst-data-products-part1-solutions.ipynb
@@ -7,26 +7,43 @@
"\n",
"# JWST Data Products: Uncalibrated Data \n",
"--------------------------------------------------------------\n",
- "**Author**: Alicia Canipe (acanipe@stsci.edu) | **Latest update**: March 24, 2021.\n",
+ "**Author**: Alicia Canipe (acanipe@stsci.edu) | **Latest update**: April 26, 2021.\n",
"\n",
+ "<div>\n",
+ "<h3>Notebook Goals</h3>\n",
+ "<p>Using an uncalibrated (raw) JWST exposure, we will:</p>\n",
+ "<ul>\n",
+ "<li>Begin exploring JWST data formats and meta data using Astropy tools</li>\n",
+ "<li>Introduce JWST data models and use them to explore our data</li>\n",
+ "<li>Bonus information: Other uses for the data models</li>\n",
+ "</ul>\n",
+ "</div>"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
"## Table of contents\n",
"1. [Introduction](#intro)\n",
" 1. [Resources](#resources) \n",
- "2. [Data in MAST](#mast)\n",
- "3. [Example data for this exercise](#example)\n",
- "4. [Examining an exposure with astropy](#astro)\n",
+ " 2. [Data in MAST](#mast)\n",
+ "2. [Example data for this exercise](#example)\n",
+ "3. [Examining an exposure with astropy](#astro)\n",
" 1. [Format](#astro-format)\n",
" 2. [Metadata](#astro-meta)\n",
" 3. [Vizualizing data](#astro-viz)\n",
- "5. [A different perspective: JWST data models](#model) \n",
+ " 4. [Exercise 1](#exercise-1)\n",
+ "4. [A different perspective: JWST data models](#model) \n",
" 1. [Current models](#list)\n",
- " 1. [Format](#model-format)\n",
- " 2. [Metadata](#model-meta)\n",
- "6. [Other ways to use the models](#use)\n",
+ " 2. [Format](#model-format)\n",
+ " 3. [Metadata](#model-meta)\n",
+ " 4. [Exercise 2](#exercise-2) \n",
+ "5. [Bonus: Other ways to use the models](#use)\n",
" 1. [Create data from scratch](#scratch)\n",
" 2. [Create data from a file](#file)\n",
- "7. [Simulations](#simulations)\n",
- "8. [Exercise](#exercise)"
+ " 3. [Simulations](#simulations)\n",
+ "6. [Exercise solutions](#solutions) "
]
},
{
@@ -36,7 +53,7 @@
"1.-Introduction \n",
"------------------\n",
"\n",
- "Welcome to the first module about JWST data products! JWST is a complex observatory with four instruments and many modes, so there is a lot to learn about about the different types of data and their formats, and the tools available to help observers examine and analyze their data. In this session, we will examine JWST data products and how they change as they go through the pipeline. We will start with uncalibrated data and proceed through the processing stages of the JWST data calibration pipeline (hereafter, the pipeline) in separate modules, highlighting important notes along the way. Detailed information about how to run the pipeline will be saved for the next couple of JWebbinars.\n",
+ "Welcome to the first module about JWST data products! JWST is a complex observatory with four instruments and many modes, so there is a lot to learn about the different types of data and their formats, and the tools available to help observers examine and analyze their data. In this JWebbinar, we will examine JWST data products and how they change as they go through the pipeline. We will start with uncalibrated data and proceed through the processing stages of the JWST data calibration pipeline (hereafter, the pipeline) in separate modules, highlighting important notes along the way. Detailed information about how to run the pipeline will be saved for the next couple of JWebbinars.\n",
"\n",
"Most JWST science data products are in FITS format, which should be familiar to observers. However, there are ancillary input and output files for the pipeline that are not; there are JSON files (used to associate different observations), ASDF files (typically pipeline configuration files), and ECSV files (for ASCII table data, such as catalogs). \n",
"\n",
@@ -45,11 +62,26 @@
"### A.-Resources\n",
"\n",
"\n",
- "Visit the [webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars) to find resources for:\n",
- "* The Mikulski Archive for Space Telescopes (MAST) \n",
- "* JWST Documentation (JDox) for JWST data products\n",
- "* The most up-to-date information about JWST data products in the pipeline readthedocs\n",
- "* Pipeline roadmaps for when to recalibrate your data\n",
+ "* [STScI Webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars)\n",
+ "* [The Mikulski Archive for Space Telescopes (MAST)](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)\n",
+ "* [JWST Documentation (JDox) for JWST data products](https://jwst-docs.stsci.edu/obtaining-data)\n",
+ "* [The most up-to-date information about JWST data products in the pipeline readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/index.html)\n",
+ "\n",
+ "### B.-Data in MAST\n",
+ "\n",
+ "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
+ "\n",
+ "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
+ "\n",
+ "Standard science data files include:\n",
+ "\n",
+ "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
+ "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
+ "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal``` or ```calints```\n",
+ "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
+ "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
+ "\n",
+ "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. \n",
"\n",
"Before we begin, import the libraries used in this notebook:"
]
@@ -64,8 +96,8 @@
"import os\n",
"import inspect\n",
"\n",
- "# To get data from Box\n",
- "import requests\n",
+ "# Image loader\n",
+ "from IPython.display import Image\n",
"\n",
"# Numpy library:\n",
"import numpy as np\n",
@@ -121,48 +153,14 @@
"metadata": {},
"outputs": [],
"source": [
- "def download_file(url):\n",
- " \"\"\"Download into the current working directory the\n",
- " file from Box given the direct URL\n",
- " \n",
- " Parameters\n",
- " ----------\n",
- " url : str\n",
- " URL to the file to be downloaded\n",
- " \n",
- " Returns\n",
- " -------\n",
- " download_filename : str\n",
- " Name of the downloaded file\n",
- " \"\"\"\n",
- " response = requests.get(url, stream=True)\n",
- " if response.status_code != 200:\n",
- " raise RuntimeError(\"Wrong URL - {}\".format(url))\n",
- " download_filename = response.headers['Content-Disposition'].split('\"')[1]\n",
- " with open(download_filename, 'wb') as f:\n",
- " for chunk in response.iter_content(chunk_size=1024):\n",
- " if chunk:\n",
- " f.write(chunk)\n",
- " return download_filename"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def create_image(data_2d, vmin, vmax, xpixel=None, ypixel=None, title=None):\n",
+ "def create_image(data_2d, title=None):\n",
" ''' Function to generate a 2D image of the data, \n",
" with an option to highlight a specific pixel.\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
- " \n",
- " if xpixel and ypixel:\n",
- " plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')\n",
+ " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=4000, vmax=12000)\n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
@@ -181,18 +179,13 @@
"metadata": {},
"outputs": [],
"source": [
- "def plot_ramp(groups, signal, xpixel=None, ypixel=None, title=None):\n",
+ "def plot_ramp(groups, signal, title=None):\n",
" ''' Function to generate the ramp for pixel.\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " if xpixel and ypixel:\n",
- " plt.plot(groups, signal, marker='o', label='Pixel ('+str(xpixel)+','+str(ypixel)+')') \n",
- " plt.legend(loc=2)\n",
- "\n",
- " else:\n",
- " plt.plot(groups, signal, marker='o')\n",
+ " plt.plot(groups, signal, marker='o')\n",
" \n",
" plt.xlabel('Groups')\n",
" plt.ylabel('Signal (DN)')\n",
@@ -214,36 +207,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "2.-Data in MAST \n",
- "------------------\n",
- "\n",
- "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
- "\n",
- "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
- "\n",
- "Standard science data files include:\n",
- "\n",
- "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
- "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
- "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal```\n",
- "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
- "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
- "\n",
- "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#title_ID)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "3.-Example data for this exercise \n",
+ "2.-Example data for this exercise \n",
"------------------\n",
"\n",
"For this module, we will use an uncalibrated NIRCam simulated imaging exposure that is stored in Box. For the exercise, we won't tell you what it is. You have to figure it out yourself! Let's grab the exposures:"
@@ -259,11 +223,20 @@
"source": [
"# Data for the notebook\n",
"uncal_obs_link = \"https://stsci.box.com/shared/static/mpbrc3lszdjif6kpcw1acol00e0mm2zh.fits\"\n",
- "uncal_obs = download_file(uncal_obs_link)\n",
+ "uncal_obs = \"example_nircam_imaging_uncal.fits\"\n",
+ "demo_file = download_file(uncal_obs_link+uncal_obs)\n",
"\n",
- "# Data for the exercise \n",
+ "# Data for the exercise \n",
"exercise_obs_link = \"https://stsci.box.com/shared/static/l1aih8rmwbtzyupv8hsl0adfa36why30.fits\"\n",
- "exercise_obs = download_file(exercise_obs_link) "
+ "exercise_obs = \"example_exercise_uncal.fits\"\n",
+ "demo_ex_file = download_file(exercise_obs_link+exercise_obs)\n",
+ "\n",
+ "# Save the files so that we can use them later\n",
+ "with fits.open(demo_file, ignore_missing_end=True) as f:\n",
+ " f.writeto(uncal_obs, overwrite=True)\n",
+ " \n",
+ "with fits.open(demo_ex_file, ignore_missing_end=True) as f:\n",
+ " f.writeto(exercise_obs, overwrite=True) "
]
},
{
@@ -277,23 +250,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "4.-Examining an exposure with astropy\n",
+ "3.-Examining an exposure with astropy\n",
"------------------\n",
"\n",
"Many of you may be familiar with using [astropy](https://docs.astropy.org/en/stable/) to examine data. Here, we will take a look at the format and headers using standard ```astropy``` tools. \n",
"\n",
"### A.-Format\n",
"\n",
- "Below, we see the typical extensions in a raw JWST data file. All data related to the product are contained in one or more FITS IMAGE or BINTABLE extensions, and the header of each extension may contain keywords that are uniquely related to that extension.\n",
- "\n",
- "* PRIMARY: The primary Header Data Unit (HDU) only contains header information, in the form of keyword records, with an empty data array (indicated by the occurence of NAXIS=0 in the primary header. Meta data that pertains to the entire product is stored in keywords in the primary header. Meta data related to specific extensions (see below) is stored in keywords in the headers of each extension.\n",
- "* SCI: 4-D data array containing the raw pixel values. The first two dimensions are equal to the size of the detector readout, with the data from multiple groups (NGROUPS) within each integration stored along the 3rd axis, and the multiple integrations (NINTS) stored along the 4th axis.\n",
- "* ZEROFRAME: 3-D data array containing the pixel values of the zero-frame for each integration in the exposure, where each plane of the cube corresponds to a given integration. Only appears if the zero-frame data were requested to be downlinked separately.\n",
- "* GROUP: A table of meta data for some (or all) of the data groups.\n",
- "* INT_TIMES: A table of begining, middle, and end time stamps for each integration in the exposure.\n",
- "* ADSF: The data model meta data.\n",
- "\n",
- "Additional extensions can be included for certain instruments and readout types. The [JWST software readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/science_products.html) contains the most up-to-date information about JWST formats. "
+ "Below, we see the typical extensions in a raw JWST data file. All data related to the product are contained in one or more FITS IMAGE or BINTABLE extensions, and the header of each extension may contain keywords that are uniquely related to that extension."
]
},
{
@@ -302,10 +266,26 @@
"metadata": {},
"outputs": [],
"source": [
- "# Let's take a high level look at our uncalibrated file \n",
+ "# Let's take a high level look at our uncalibrated file with .info()\n",
"fits.info(uncal_obs)"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "So what you see above is:\n",
+ "\n",
+ "* PRIMARY: The primary Header Data Unit (HDU) only contains header information, in the form of keyword records, with an empty data array (indicated by the occurrence of NAXIS=0 in the primary header). Metadata that pertains to the entire product is stored in keywords in the primary header. Metadata related to specific extensions (see below) is stored in keywords in the headers of each extension.\n",
+ "* SCI: 4-D data array containing the raw pixel values. The first two dimensions are equal to the size of the detector readout, with the data from multiple groups (NGROUPS) within each integration stored along the 3rd axis, and the multiple integrations (NINTS) stored along the 4th axis.\n",
+ "* ZEROFRAME: 3-D data array containing the pixel values of the zero-frame for each integration in the exposure, where each plane of the cube corresponds to a given integration. Only appears if the zero-frame data were requested to be downlinked separately.\n",
+ "* GROUP: A table of metadata for some (or all) of the data groups.\n",
+ "* ASDF: The data model metadata. This extension can be read using the Advanced Scientific Data Format (ASDF), a next-generation format for scientific data; the asdf Python package provides tools for reading and writing these files. More information about the ASDF file standard is in the [ASDF software readthedocs](https://asdf.readthedocs.io/en/stable/).\n",
+ "* (INT_TIMES): You may also see a table of beginning, middle, and end time stamps for each integration in the exposure.\n",
+ "\n",
+ "Additional extensions can be included for certain instruments and readout types. The [JWST software readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/science_products.html) contains the most up-to-date information about JWST formats. "
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -320,12 +300,7 @@
"outputs": [],
"source": [
"# Use \"science_data\" as your data array name for the \"SCI\" extension \n",
- "science_data = fits.getdata(uncal_obs, 'SCI')\n",
- "\n",
- "# or \n",
- "\n",
- "with fits.open(uncal_obs) as hdu:\n",
- " science_data = hdu['SCI'].data"
+ "science_data = fits.getdata(uncal_obs, 'SCI')"
]
},
{
@@ -342,7 +317,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The science data shape here shows the number of integrations, groups, rows (pixels), and columns (pixels), which reflects the up-the-ramp readout (also referred to as MULTIACCUM) standardized readout sampling for all JWST detectors (read more in the [JWST User Documentation](https://jwst-docs.stsci.edu/understanding-exposure-times)). We'll talk about this more in the following sections. For now, let's look at the associated headers and other metadata. "
+ "The science data shape here shows the number of integrations, groups, rows (pixels), and columns (pixels), which reflects the up-the-ramp readout (also referred to as MULTIACCUM) standardized readout sampling for all JWST detectors (read more in the [JWST User Documentation](https://jwst-docs.stsci.edu/understanding-exposure-times)). Let's look at the associated headers and other metadata. "
]
},
{
@@ -367,55 +342,26 @@
"source": [
"# Let's get the primary and science headers (use: primary_headers, science_headers)\n",
"primary_headers = fits.getheader(uncal_obs,0)\n",
- "science_headers = fits.getheader(uncal_obs,1)\n",
- "\n",
- "# or \n",
- "\n",
- "with fits.open(uncal_obs) as hdu:\n",
- " primary_headers = hdu['PRIMARY'].header\n",
- " science_headers = hdu['SCI'].header"
+ "science_headers = fits.getheader(uncal_obs,1)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# What's the observation ID, instrument, exposure type, detector? \n",
- "print('Observation ID: ', primary_headers['OBS_ID'])\n",
- "print('Instrument: ', primary_headers['INSTRUME'])\n",
- "print('Exposure type: ', primary_headers['EXP_TYPE'])\n",
- "print('Detector: ', primary_headers['DETECTOR'])"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
+ "metadata": {
+ "scrolled": true
+ },
"outputs": [],
"source": [
- "# What about the data dimensions? Integrations, groups, xsize, ysize?\n",
- "print('\\nNumber of data dimensions: ', len(science_data.shape))\n",
- "print('Number of integrations: ', primary_headers['NINTS'])\n",
- "print('Number of groups: ', primary_headers['NGROUPS'])\n",
- "print('Number of rows: ', primary_headers['SUBSIZE1'])\n",
- "print('Number of columns: ', primary_headers['SUBSIZE2'])"
+ "# Print all the primary headers \n",
+ "primary_headers"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Additional metadata is stored in the ASDF extension. This extension can be read using The Advanced Scientific Data Format (ASDF), which is a next-generation format for scientific data. ASDF is a tool for reading and writing ASDF files. More information about the ASDF file standard is in the [ASDF software readthedocs](https://asdf.readthedocs.io/en/stable/). The format has the following features:\n",
- "\n",
- "* A hierarchical, human-readable metadata format (implemented using YAML)\n",
- "* Numerical arrays are stored as binary data blocks which can be memory mapped. Data blocks can optionally be compressed.\n",
- "* The structure of the data can be automatically validated using schemas (implemented using JSON Schema)\n",
- "* Native Python data types (numerical types, strings, dicts, lists) are serialized automatically\n",
- "* ASDF can be extended to serialize custom data types\n",
- "\n",
- "Right now, you don't need to worry about ASDF too much. We'll talk about it more when we discuss configuration files and accessing the WCS information in the following modules. Below, we provide a simple example of how to access the ASDF extension:"
+ "Search for FITS headers with the wildcard asterisk:"
]
},
{
@@ -424,10 +370,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Grab the ASDF extension data and header (use: asdf_data, asdf_metadata)\n",
- "with fits.open(uncal_obs) as hdu:\n",
- " asdf_metadata = hdu['ASDF'].header\n",
- " asdf_data = hdu['ASDF'].data "
+ "# Try finding all headers with \"OBS\" in the name\n",
+ "primary_headers['OBS*']"
]
},
{
@@ -436,7 +380,8 @@
"metadata": {},
"outputs": [],
"source": [
- "asdf_metadata"
+ "# What's the observation ID, instrument, exposure type?\n",
+ "primary_headers['OBS_ID'], primary_headers['INSTRUME'], primary_headers['EXP_TYPE']"
]
},
{
@@ -445,7 +390,8 @@
"metadata": {},
"outputs": [],
"source": [
- "asdf_data"
+ "# What about the data dimensions? Integrations, groups, xsize, ysize?\n",
+ "primary_headers['NINTS'], primary_headers['NGROUPS'], primary_headers['SUBSIZE1'], primary_headers['SUBSIZE2']"
]
},
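The header lookups above (including the `OBS*` wildcard a few cells earlier) can be tried without downloading the Box file by building a small `astropy.io.fits.Header` in memory; the keyword values below are fabricated stand-ins:

```python
from astropy.io import fits

# Fabricated header mimicking a few JWST primary-header keywords
hdr = fits.Header()
hdr['INSTRUME'] = 'NIRCAM'
hdr['EXP_TYPE'] = 'NRC_IMAGE'
hdr['NINTS'] = 1
hdr['NGROUPS'] = 5

# Direct keyword access, as in the cells above
print(hdr['INSTRUME'], hdr['EXP_TYPE'])

# Wildcard access returns a Header holding every matching card
print(repr(hdr['N*']))  # the NINTS and NGROUPS cards
```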
{
@@ -459,9 +405,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "In the previous section, we mentioned [up-the-ramp sampling](https://jwst-docs.stsci.edu/understanding-exposure-times) for IR detectors. During an integration, the detectors accumulate charge while being read out multiple times following predefined readout patterns for the different instruments. The readout process is non-destructive, leaving charge unaffected and in place (charge is not transferred between pixels as in CCDs). After each integration, the pixels are read out a final time and then reset, releasing their charge. \n",
- "\n",
- "Multiple non-destructive *frames* are averaged into a *group*, depending on the readout pattern selected. Breaking exposures into multiple *integrations* is most useful for bright sources that would saturate in longer integrations. \n",
+ "If you remember from the introductory slides, we mentioned [up-the-ramp sampling](https://jwst-docs.stsci.edu/understanding-exposure-times) for IR detectors. Multiple non-destructive *frames* are averaged into a *group*, depending on the readout pattern selected. Exposures are broken up into multiple *integrations*, which is useful for sources that would saturate in longer integrations. \n",
"\n",
"As such, the components of each up-the-ramp exposure are: \n",
"* NINTS: number of integrations per exposure.\n",
@@ -473,7 +417,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Let's select one integration for a particular pixel and examine the ramp. **Note**: this is uncalibrated data, so the detector effects are still present and the signal in each group will vary due to bias drift, reference pixel corrections, etc. not being performed yet. "
+ "Let's select one **integration** for a particular pixel and examine the ramp, and then one **group** to look at the detector image. *Note*: this is uncalibrated data, so the detector effects are still present and the signal in each group will vary due to bias drift, reference pixel corrections, etc. not being performed yet. "
]
},
{
@@ -486,7 +430,7 @@
"integration = 0\n",
"pixel_y = 741\n",
"pixel_x = 1798\n",
- "group = -1"
+ "group = -1 "
]
},
{
@@ -507,14 +451,14 @@
"outputs": [],
"source": [
"# Plot the ramp\n",
- "plot_ramp(groups, signal_adu, xpixel=pixel_x, ypixel=pixel_y, title='Example ramp')"
+ "plot_ramp(groups, signal_adu)"
]
},
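Outside the notebook, the ramp extraction fed to `plot_ramp` above can be sketched end to end with fabricated data (the pixel location, group count, and DN values are all invented, and a real raw ramp is far noisier):

```python
import numpy as np

# Fabricated uncal cube: 1 integration, 10 groups, 32x32 pixels,
# with an idealized ~150 DN accumulating per group on a 1000 DN pedestal
ngroups = 10
science_data = np.zeros((1, ngroups, 32, 32))
for g in range(ngroups):
    science_data[0, g, :, :] = 1000 + 150 * g

integration, pixel_y, pixel_x = 0, 7, 18
groups = np.arange(1, ngroups + 1)

# All groups of one integration for a single pixel: the up-the-ramp signal
signal_adu = science_data[integration, :, pixel_y, pixel_x]
print(signal_adu[0], signal_adu[-1])  # 1000.0 2350.0
```

Plotting `signal_adu` against `groups` reproduces the ramp shown in the notebook.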
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "We can also visualize the full NIRCam array for the last group in our integration, below. Again, this is a raw exposure, so none of the detector effects have been removed. The four amplifiers of the detector are visible, along with other features (e.g., an epoxy void region). "
+ "Next, we can visualize the full NIRCam array for the group we selected above. Again, this is a raw exposure, so none of the detector effects have been removed. The four amplifiers of the detector are visible, along with other features (e.g., an epoxy void region). "
]
},
{
@@ -524,7 +468,51 @@
"outputs": [],
"source": [
"# Create an image of one integration and one group\n",
- "create_image(science_data[integration, group, :, :], 4000, 12000, xpixel=pixel_x, ypixel=pixel_y, title=\"Last group image\")"
+ "create_image(science_data[integration, group, :, :])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### D.-Exercise 1\n",
+ "Now, you try it!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the exercise file (\"exercise_obs\") using FITS to get the headers (hint: getheader)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the exercise file (\"exercise_obs\") using FITS to get the data (hint: getdata)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# How many extensions are there in this file? (hint: fits.info())\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# What instrument and mode is this data for? (hint: INSTRUME, EXP_TYPE)\n"
]
},
{
@@ -538,12 +526,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "5.-A different perspective: JWST data models\n",
+ "4.-A different perspective: JWST data models\n",
"------------------\n",
"\n",
- "Now that we've tried using [astropy](https://docs.astropy.org/en/stable/) to examine the data, we can explore an alternative method that removes some of the complexity and peculiarities of JWST data. Here, we will take a look at the format and headers using [JWST data models](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/). \n",
+ "Now that we've tried using [astropy](https://docs.astropy.org/en/stable/) to examine the data, we can explore an alternative method that removes some of the complexity and peculiarities of JWST data. Here, we will take a look at the format and headers using [JWST data models](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/). We mentioned the data models already in the introductory slides. \n",
+ "\n",
+ "JWST data models are important if you are working with JWST data and associated software, since much of the JWST software assumes the use of data models. They help insulate steps, pipelines, and users from the complexities of JWST file formats, and allow us to maintain a common framework for the data across the JWST-specific software. You can think of them as a container for your data that allows for consistency in formatting, data types, and expected headers for all the different types of data. \n",
"\n",
- "There are different data model classes for different kinds of data. Each model generally has several arrays that are associated with it. For example, the ImageModel class has the following arrays associated with it:\n",
+ "There are different data models for different kinds of data. Each model generally has several arrays that are associated with it. For example, the ImageModel has the following arrays associated with it:\n",
"\n",
"* data: The science data\n",
"* dq: The data quality array\n",
@@ -557,22 +547,16 @@
"metadata": {},
"source": [
"### A.-Current models \n",
- "--------------------------------------------------------------------\n",
- "The data model package includes specific and general models to use for both science data and calibration reference files. For example, to generate a FITS file that is compatible with the [Stage 1 calibration pipeline](https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_detector1.html), you would need to use a model for [up-the-ramp sampled](https://jwst-docs.stsci.edu/understanding-exposure-times#UnderstandingExposureTimes-uptherampHowup-the-rampreadoutswork) IR data: the [RampModel](https://jwst-pipeline.readthedocs.io/en/latest/api/jwst.datamodels.RampModel.html#jwst.datamodels.RampModel). If instead you would like to analyze a 2-D JWST image, you could use the [ImageModel](https://jwst-pipeline.readthedocs.io/en/latest/api/jwst.datamodels.ImageModel.html#jwst.datamodels.ImageModel). Or, if you are unsure, you could let the data model package [guess for you](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/models.html#opening-a-file).\n",
"\n",
- "The full list of current models is maintained in the [JWST pipeline software](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/attributes.html#list-of-current-models). "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "# print a list of the current data models\n",
- "inspect.getmembers(datamodels, inspect.isclass)"
+ "The data model package includes specific and general models to use for both science data and calibration files. For example, to generate a FITS file that is compatible with the [Stage 1 calibration pipeline](https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_detector1.html), you would need to use a model for [up-the-ramp sampled](https://jwst-docs.stsci.edu/understanding-exposure-times#UnderstandingExposureTimes-uptherampHowup-the-rampreadoutswork) IR data: the [RampModel](https://jwst-pipeline.readthedocs.io/en/latest/api/jwst.datamodels.RampModel.html#jwst.datamodels.RampModel). If instead you would like to analyze a 2-D JWST image, you could use the [ImageModel](https://jwst-pipeline.readthedocs.io/en/latest/api/jwst.datamodels.ImageModel.html#jwst.datamodels.ImageModel). Or, if you are unsure, you could let the data model package [guess for you](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/models.html#opening-a-file).\n",
+ "\n",
+ "The full list of current models is maintained in the [JWST pipeline software](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/attributes.html#list-of-current-models). \n",
+ "\n",
+ "You can also get the list programatically:\n",
+ "```python\n",
+ "# Here is a command to print a list of the current JWST data models \n",
+ "inspect.getmembers(datamodels, inspect.isclass)\n",
+ "```"
]
},
{
@@ -619,11 +603,18 @@
"metadata": {},
"outputs": [],
"source": [
- "# Open the uncal_obs and let the datamodel package decide which model is best\n",
- "with datamodels.open(uncal_obs) as model:\n",
- " model.info()\n",
- " \n",
- "## or use a specific model:\n",
+ "# Open the uncal_obs file, letting the datamodel package decide which model is best, and use .info()\n",
+ "model = datamodels.open(uncal_obs)\n",
+ "model.info()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Or use a specific model (e.g., RampModel):\n",
"model = datamodels.RampModel(uncal_obs)\n",
"model.info()"
]
@@ -647,7 +638,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Get the data and the shape of the data like with did before \n",
+ "# Get the data and the shape of the data like with did before (use: science_data)\n",
"science_data = model.data\n",
"science_data.shape"
]
@@ -666,7 +657,7 @@
"outputs": [],
"source": [
"# Create an image of one integration and one group, as before\n",
- "create_image(science_data[integration, group, :, :], 4000, 12000, xpixel=pixel_x, ypixel=pixel_y)"
+ "create_image(science_data[integration, group, :, :])"
]
},
{
@@ -691,7 +682,7 @@
},
"outputs": [],
"source": [
- "# Check out the schema\n",
+ "# Check out the schema or framework\n",
"model.schema"
]
},
@@ -712,6 +703,16 @@
"model.search_schema('target')"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Or, use \"search\" to get more detailed information (e.g., data type)\n",
+ "model.search(key='dec')"
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -722,7 +723,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "scrolled": true
+ },
"outputs": [],
"source": [
"# Look at all the metadata \n",
@@ -790,6 +793,32 @@
"model.meta.observation.date"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### D.-Exercise 2\n",
+ "Now, you try it!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Try loading the exercise data (\"exercise_obs\") using a model (hint: datamodels.open())\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# How can I figure out the readout pattern for this data? (hint: search_schema(), search(key=\"read\"))\n"
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -801,7 +830,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "6.-Other ways to use the models \n",
+ "5.-Bonus: Other ways to use the models \n",
"--------------------------------------------------------------------\n",
"The data models can be used to [create data from scratch](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/models.html#creating-a-data-model-from-scratch) or to [read in an existing FITS file or data array](https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/models.html#creating-a-data-model-from-a-file). This is useful if you are trying to run an exposure through the JWST pipeline or read in an exposure to a JWST software tool or data analysis notebook, because certain checks on the data and metadata are performed when added to an existing model. Simulated data created using ```Mirage``` or ```Mirisim``` is directly compatible with the JWST pipeline, because both software tools use the data models during the creation of the simulations. "
]
@@ -828,9 +857,9 @@
},
"outputs": [],
"source": [
- "# Create an ImageModel from scratch with size (1024, 1024), and search the schema for \"instrument\"\n",
- "with datamodels.ImageModel((1024, 1024)) as im:\n",
- " print(im.search_schema('instrument'))"
+ "# Create an ImageModel from scratch with size (1024, 1024), and search the schema for \"instrument\" keywords\n",
+ "im = datamodels.ImageModel((1024, 1024))\n",
+ "im.search_schema('instrument')"
]
},
{
@@ -849,8 +878,7 @@
"# Create empty DQ and data arrays using numpy, then load them into the ImageModel\n",
"data = np.empty((50, 50))\n",
"dq = np.empty((50, 50))\n",
- "with datamodels.ImageModel(data=data, dq=dq) as im:\n",
- " print(im.search_schema('exposure'))"
+ "im = datamodels.ImageModel(data=data, dq=dq)"
]
},
{
@@ -974,15 +1002,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "[Top of Page](#title_ID)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "7.-Simulations\n",
- "--------------------------------------------------------------------\n",
+ "### C.-Simulations\n",
+ "\n",
"The benefit to using existing simulation software such as [Mirage](https://jwst-docs.stsci.edu/jwst-other-tools/mirage-data-simulator) (for NIRCam, NIRISS, and FGS simulations) or [Mirisim](https://www.stsci.edu/jwst/science-planning/proposal-planning-toolbox/mirisim) (for MIRI simulations) is that the outputs are directly compatible with JWST software, such as the [calibration pipeline](https://jwst-docs.stsci.edu/jwst-data-reduction-pipeline). "
]
},
@@ -1038,21 +1059,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "8.-Exercise\n",
+ "6.-Exercise solutions \n",
"--------------------------------------------------------------------\n",
- "Now, you try it!"
+ "Below are the solutions for [Exercise 1](#exercise-1) and [Exercise 2](#exercise-2). "
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "# Load the exercise data using FITS\n",
- "with fits.open(exercise_obs) as h:\n",
- " mystery_data = h['SCI'].data\n",
- " mystery_header = h['PRIMARY'].header"
+ "### Exercise 1"
]
},
{
@@ -1061,9 +1077,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Now try loading the exercise data using a model\n",
- "with datamodels.open(exercise_obs) as mystery_data:\n",
- " mystery_data.info()"
+ "# Load the exercise file (\"exercise_obs\") using FITS to get the headers (hint: getheader)\n",
+ "mystery_header = fits.getheader(exercise_obs, 'PRIMARY')"
]
},
{
@@ -1072,8 +1087,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# What instrument and mode is this data for?\n",
- "mystery_data.meta.instrument.name, mystery_data.meta.exposure.type"
+ "# Load the exercise file (\"exercise_obs\") using FITS to get the data (hint: getdata)\n",
+ "mystery_data = fits.getdata(exercise_obs,'SCI')"
]
},
{
@@ -1082,8 +1097,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# How many integrations and groups are there?\n",
- "mystery_data.meta.exposure.nints, mystery_data.meta.exposure.ngroups"
+ "# How many extensions are there in this file? (hint: fits.info())\n",
+ "fits.info(exercise_obs)"
]
},
{
@@ -1092,8 +1107,15 @@
"metadata": {},
"outputs": [],
"source": [
- "# Does that match the data shape? \n",
- "mystery_data.shape"
+ "# What instrument and mode is this data for? (hint: INSTRUME, EXP_TYPE)\n",
+ "mystery_header['INSTRUME'], mystery_header['EXP_TYPE']"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 2"
]
},
{
@@ -1102,8 +1124,9 @@
"metadata": {},
"outputs": [],
"source": [
- "# What is the model metadata path to find the readout pattern? \n",
- "mystery_data.search_schema('read')"
+ "# Try loading the exercise data (\"exercise_obs\") using a model (hint: datamodels.open())\n",
+ "mystery_data = datamodels.open(exercise_obs)\n",
+ "mystery_data.info()"
]
},
{
@@ -1112,8 +1135,9 @@
"metadata": {},
"outputs": [],
"source": [
- "# Create an image of the 3rd group in the 1st integration \n",
- "create_image(mystery_data.data[0, 2, :, :], 6000, 20000)"
+ "# How can I figure out the readout pattern for this data? (hint: search_schema(), search(key=\"read\"))\n",
+ "mystery_data.search_schema('read')\n",
+ "mystery_data.search(key='read')"
]
},
{
diff --git a/pipeline_products_session/jwst-data-products-part2-live.ipynb b/pipeline_products_session/jwst-data-products-part2-live.ipynb
index 01fdc3c..c1faecd 100644
--- a/pipeline_products_session/jwst-data-products-part2-live.ipynb
+++ b/pipeline_products_session/jwst-data-products-part2-live.ipynb
@@ -7,27 +7,45 @@
"\n",
"# JWST Data Products: Calibrated Individual Exposures and WCS\n",
"--------------------------------------------------------------\n",
- "**Author**: Alicia Canipe (acanipe@stsci.edu) with exerpts from Espinoza, Sosey | **Latest update**: March 30, 2021.\n",
+ "**Author**: Alicia Canipe (acanipe@stsci.edu) with exerpts from Espinoza, Sosey | **Latest update**: April 26, 2021.\n",
"\n",
+ "\n",
+ "
Notebook Goals
\n",
+ "
Using JWST data models, we will:
\n",
+ "
\n",
+ " - Explore Stage 1 data products (detector corrections)
\n",
+ " - Examine Stage 2 imaging and spectroscopic data products (calibrated individual exposures)
\n",
+ " - Take a closer look at WCS information for JWST data
\n",
+ " - Bonus information: JWST associations
\n",
+ "
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
"## Table of contents\n",
"1. [Introduction](#intro)\n",
" 1. [Resources](#resources) \n",
- "2. [Data in MAST](#mast)\n",
- "3. [Example data for this exercise](#example)\n",
- "4. [Data products: stage 1 (detector corrections)](#stage1)\n",
+ " 2. [Data in MAST](#mast)\n",
+ "2. [Example data for this exercise](#example)\n",
+ "3. [Data products: stage 1 (detector corrections)](#stage1)\n",
" 1. [Input](#s1-input)\n",
" 2. [Output](#s1-output)\n",
- " 3. [Examining the products](#s1-examine)\n",
- "5. [Associations](#associations)\n",
- "6. [Data products: stage 2 (calibrated exposures)](#stage2)\n",
+ " 3. [Examining the pipeline products](#s1-examine)\n",
+ " 4. [Exercise 1](#exercise-1)\n",
+ "4. [Data products: stage 2 (calibrated exposures)](#stage2)\n",
" 1. [Imaging](#s2-imaging)\n",
" 1. [Input](#s2-imaging-input)\n",
" 2. [Output](#s2-imaging-output)\n",
" 2. [Spectroscopy](#s2-spectroscopy)\n",
" 1. [Input](#s2-spectroscopy-input)\n",
" 2. [Output](#s2-spectroscopy-output)\n",
- "7. [WCS deep dive](#wcs)\n",
- "8. [Exercise](#exercise)"
+ "5. [WCS deep dive](#wcs)\n",
+ " 1. [Exercise 2](#exercise-2) \n",
+ "6. [Bonus: Associations](#associations)\n",
+ "7. [Exercise Solutions](#solutions)"
]
},
{
@@ -41,11 +59,27 @@
"\n",
"### A.-Resources\n",
"\n",
- "Visit the [webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars) to find resources for:\n",
- "* The Mikulski Archive for Space Telescopes (MAST) \n",
- "* JWST Documentation (JDox) for JWST data products\n",
- "* The most up-to-date information about JWST data products in the pipeline readthedocs\n",
- "* Pipeline roadmaps for when to recalibrate your data\n",
+ "\n",
+ "* [STScI Webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars)\n",
+ "* [The Mikulski Archive for Space Telescopes (MAST)](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)\n",
+ "* [JWST Documentation (JDox) for JWST data products](https://jwst-docs.stsci.edu/obtaining-data)\n",
+ "* [The most up-to-date information about JWST data products in the pipeline readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/index.html)\n",
+ "\n",
+ "### B.-Data in MAST\n",
+ "\n",
+ "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
+ "\n",
+ "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
+ "\n",
+ "Standard science data files include:\n",
+ "\n",
+ "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
+ "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
+ "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal``` or ```calints```\n",
+ "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
+ "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
+ "\n",
+ "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. \n",
"\n",
"Before we begin, import the libraries used in this notebook:"
]
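The suffix conventions above lend themselves to a small lookup helper. This is our own sketch (not part of the `jwst` package), built only from the suffixes listed above, with an invented example filename:

```python
# Hypothetical helper (not part of the jwst package) mapping the filename
# suffix of a standard science product to a short description.
PRODUCT_SUFFIXES = {
    "uncal": "uncalibrated raw data",
    "rate": "countrate data, averaged over integrations",
    "rateints": "countrate data, one frame per integration",
    "cal": "calibrated single exposure",
    "calints": "calibrated data, one frame per integration",
    "i2d": "resampled 2-D image",
    "s2d": "resampled 2-D spectral data",
    "x1d": "extracted 1-D spectrum",
}

def product_type(filename):
    """Return a short description of the product based on its suffix."""
    stem = filename.rsplit(".", 1)[0]  # drop the .fits extension
    suffix = stem.rsplit("_", 1)[-1]   # last underscore-separated token
    return PRODUCT_SUFFIXES.get(suffix, "unknown product type")

print(product_type("jw01234001001_01101_00001_nrcb1_rate.fits"))
# -> countrate data, averaged over integrations
```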
@@ -119,17 +153,19 @@
"metadata": {},
"outputs": [],
"source": [
- "def create_image(data_2d, vmin, vmax, xpixel=None, ypixel=None, title=None):\n",
+ "def create_image(data_2d, title=None):\n",
" ''' Function to generate a 2D image of the data, \n",
" with an option to highlight a specific pixel.\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
" \n",
- " if xpixel and ypixel:\n",
- " plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')\n",
+ " if 'IMAGE' in data_2d.meta.exposure.type:\n",
+ " plt.imshow(data_2d.data, origin='lower', cmap='gray', vmin=0, vmax=10)\n",
+ " \n",
+ " elif 'WFSS' in data_2d.meta.exposure.type:\n",
+ " plt.imshow(data_2d.data, origin='lower', cmap='gray', vmin=-0.05, vmax=0.5) \n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
@@ -139,7 +175,7 @@
"\n",
" fig.tight_layout()\n",
" plt.subplots_adjust(left=0.15)\n",
- " plt.colorbar(label='DN')"
+ " plt.colorbar(label=data_2d.meta.bunit_data)"
]
},
{
@@ -148,21 +184,15 @@
"metadata": {},
"outputs": [],
"source": [
- "def create_slit_image(data_2d, slit_number, vmin=None, vmax=None, title=None):\n",
+ "def create_slit_image(data_2d, slit_number, title=None):\n",
" ''' Function to generate a 2D image of a particular slit.\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
" \n",
- " if vmin and vmax:\n",
- " plt.imshow(data_2d.slits[slit_number].data, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
- " plt.colorbar(label='DN/sec') \n",
- " else:\n",
- " minimum = data_2d.slits[slit_number].data.min()\n",
- " maximum = data_2d.slits[slit_number].data.max()\n",
- " plt.imshow(data_2d.slits[slit_number].data, origin='lower', cmap='gray', vmin=minimum, vmax=maximum)\n",
- " plt.colorbar(label='DN/sec') \n",
+ " plt.imshow(data_2d.slits[slit_number].data, origin='lower', cmap='gray',vmin=-0.1, vmax=0.3)\n",
+ " plt.colorbar(label=data_2d.meta.bunit_data) \n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
@@ -177,120 +207,6 @@
" plt.subplots_adjust(left=0.15)"
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def plot_ramp(groups, signal, xpixel=None, ypixel=None, title=None):\n",
- " ''' Function to generate the ramp for pixel.\n",
- " '''\n",
- " \n",
- " fig = plt.figure(figsize=(8, 8))\n",
- " ax = plt.subplot()\n",
- " if xpixel and ypixel:\n",
- " plt.plot(groups, signal, marker='o', label='Pixel ('+str(xpixel)+','+str(ypixel)+')') \n",
- " plt.legend(loc=2)\n",
- "\n",
- " else:\n",
- " plt.plot(groups, signal, marker='o')\n",
- " \n",
- " plt.xlabel('Groups')\n",
- " plt.ylabel('Signal (DN)')\n",
- " plt.subplots_adjust(left=0.15)\n",
- " \n",
- " if title:\n",
- " plt.title(title)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def plot_column(data_2d, column, title=None):\n",
- " ''' Function to generate a plot for one column in a dispersed image.\n",
- " '''\n",
- " \n",
- " fig = plt.figure(figsize=(10, 5))\n",
- " ax = plt.subplot()\n",
- " plt.plot(data_2d[:,column], label='Column '+str(column))\n",
- " \n",
- " plt.xlabel('Pixel row')\n",
- " plt.ylabel('Column values')\n",
- " plt.subplots_adjust(left=0.15)\n",
- " \n",
- " if title:\n",
- " plt.title(title)\n",
- " else:\n",
- " plt.title('WFSS plot of one column in dispersed image')\n",
- " \n",
- " plt.legend()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def plot_spectra(spec, number, median_filter=None, title=None):\n",
- " ''' Function to generate the spectrum for a slit.\n",
- " '''\n",
- " \n",
- " fig = plt.figure(figsize=(10, 5))\n",
- " ax = plt.subplot()\n",
- " \n",
- " if median_filter:\n",
- " plt.plot(spec.spec[number].spec_table['WAVELENGTH'], medfilt(spec.spec[number].spec_table['FLUX'],median_filter)) \n",
- " \n",
- " else: \n",
- " plt.plot(spec.spec[number].spec_table['WAVELENGTH'], spec.spec[number].spec_table['FLUX']) \n",
- "\n",
- " \n",
- " plt.xlabel('Wavelength (um)')\n",
- " plt.ylabel('Flux')\n",
- " \n",
- " plt.subplots_adjust(left=0.15)\n",
- " \n",
- " if title:\n",
- " plt.title(title)\n",
- " else:\n",
- " title='Spectrum for Source '+str(spec.spec[number].source_id)+', Spectral Order '+str(spec.spec[number].spectral_order)\n",
- " plt.title(title)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#title_ID)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "2.-Data in MAST \n",
- "------------------\n",
- "\n",
- "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
- "\n",
- "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
- "\n",
- "Standard science data files include:\n",
- "\n",
- "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
- "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
- "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal```\n",
- "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
- "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
- "\n",
- "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. "
- ]
- },
{
"cell_type": "markdown",
"metadata": {},
@@ -302,7 +218,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "3.-Example data for this exercise \n",
+ "2.-Example data for this exercise \n",
"------------------\n",
"\n",
"For this module, we will use calibrated NIRCam simulated imaging and wide field slitless spectroscopy (WFSS) exposures that are stored in Box. Let's grab the data:"
@@ -356,11 +272,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "4.-Data products: stage 1 (detector corrections)\n",
+ "3.-Data products: stage 1 (detector corrections)\n",
"------------------\n",
"\n",
"All JWST data, regardless of the instrument or mode (with the exception of a few specific engineering or calibration cases), is processed through the CALWEBB_DETECTOR1 module, which is Stage 1 of the pipeline. A number of instrument signatures are accounted for in this stage, such as bias corrections and cosmic ray flagging, and slopes are fit to the corrected ramps. More information can be found in the [JWST User Documentation for CALWEBB_DETECTOR1](https://jwst-docs.stsci.edu/jwst-data-reduction-pipeline/algorithm-documentation/stages-of-processing/calwebb_detector1). We also have a full list of data product types and the units of the data for each product [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/product_types.html#data-product-types). \n",
"\n",
+ "Detailed information about data products for this stage are in [the software Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_detector1.html#inputs).\n",
+ "\n",
"### A.-Input\n",
"\n",
"The inputs to this stage are listed below.\n",
@@ -398,14 +316,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### C.-Examining the products"
+ "### C.-Examining the pipeline products"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Let's take a closer look at some data products. As our first example, we can grab the ```rate``` and ```rateints``` data products for the simulation we used in module 1. Looking above, we can see that these types of data use the ImageModel and the CubeModel, respectively."
+ "Let's take a closer look at some data products. We've already explored uncalibrated data in part 1. Now, let's take a look at the output products for Stage 1 processing. As our first example, we can grab the ```rate``` and ```rateints``` data products for the simulation we used in module 1. Looking above, we can see that these types of data use the ImageModel and the CubeModel, respectively."
]
},
{
@@ -414,7 +332,16 @@
"metadata": {},
"outputs": [],
"source": [
- "# Load the data into models (use: rate_image, rateints_cube)\n"
+ "# Load the integration-averaged data into a model (use: rate_image)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the data for individual integrations into a model (use: rateints_image)\n"
]
},
{
@@ -430,7 +357,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Check out the structure of the rate file\n"
+ "# Check out the structure of the rate file using .info()\n"
]
},
{
@@ -449,7 +376,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "These can be accessed the same way we described before:"
+ "These can be accessed the same way we described in Part 1:"
]
},
{
@@ -470,15 +397,6 @@
"# Print the shape of the science data array for the rateints image\n"
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Grab the variance due to poisson noise \n"
- ]
- },
{
"cell_type": "code",
"execution_count": null,
@@ -488,61 +406,11 @@
"# Create an image of the rate data\n"
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Compare that to an image of one integration for the rateints data\n"
- ]
- },
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "We can also examine the ramp for a pixel with the bias drift removed using the optional 4D ```_ramp.fits``` file to revisit the up-the-ramp sampling, with detector corrections applied: "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Use: integration, pixel_y, pixel_x, group\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Load the ramp_file into the RampModel and set up arrays to plot (use: ramp_model, groups, signal_adu)\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Plot the ramp\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let's take a look at the metadata for our output products, but rather than using the standard FITS methods, let's use the data model to access the information. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The metadata has been updated after going through Stage 1 processing: "
+ "Now, let's take a look at the metadata for our output products. The metadata has been updated after going through Stage 1 processing: "
]
},
{
@@ -571,7 +439,8 @@
},
"outputs": [],
"source": [
- "# Find datamodel equivalent of the FITS keyword indicating that the linearity correction was done (S_LINEAR)\n"
+ "# Find datamodel equivalent of the FITS keyword indicating that the \n",
+ "# linearity correction was done (S_LINEAR)\n"
]
},
{
@@ -596,16 +465,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now, let's take a look at some spectroscopic data -- how about a NIRCam WFSS dispersed image? At this stage, the structure will be roughly the same as for our other image example."
+ "### D.-Exercise 1\n",
+ "Now, you try it!"
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "# Load the WFSS image into the appropriate model (use: wfss_image)\n"
+ "Let's take a look at some spectroscopic data -- how about a NIRCam WFSS dispersed image? At this stage, the structure will be roughly the same as for our other image example."
]
},
{
@@ -614,7 +482,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Create an image of the data\n"
+ "# Load the WFSS image into the appropriate model (hint: ImageModel)\n"
]
},
{
@@ -623,46 +491,16 @@
"metadata": {},
"outputs": [],
"source": [
- "# Look at the structure of the model\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#title_ID)"
+ "# What are the arrays associated with this data? (hint: .info())\n"
]
},
{
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "5.-Associations\n",
- "------------------\n",
- "\n",
- "Now that we're moving on to examine data products for Stage 2 and Stage 3 processing, it would be a good time to mention JWST associations, since the association files are a part of the JWST data products used to process data through Stage 2 and Stage 3. Associations are basically just lists of files, mostly exposures, that are related in some way. For JWST, associations have the following characteristics:\n",
- "\n",
- "* Relationships between multiple exposures are captured in an association.\n",
- "* An association is a means of identifying a set of exposures that belong together and may be dependent upon one another.\n",
- "* The association concept permits exposures to be calibrated, archived, retrieved, and reprocessed as a set rather than as individual objects.\n",
- "\n",
- "In general, it takes many exposures to make up a single observation, and an entire program is made up of a large number of observations. Given a set of exposures for a program, there is a tool that groups the exposures into individual associations. These associations are then used as input to the Stage 2 and 3 calibration steps to perform the transformation from exposure-based data to source-based, high(er) signal-to-noise data. The association used to process data is available in MAST as part of the \"Info\" data product category. You can read more about associations [here](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/index.html). "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "An example of a Stage 2 association is shown [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level2_asn_technical.html#example-association), along with a [Stage 3 association](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level3_asn_technical.html#example-association). Unless you are generating your own data or simulations, you will probably not need to create an association file, because you will have the option to retrieve association files from MAST along with your data for reprocessing. \n",
- "\n",
- "However, if you do want to create an association, there are also [command line tools](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/asn_from_list.html) included in the pipeline software that help with generating associations for manually running the pipeline. "
- ]
- },
- {
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "Now that that's out of the way, let's continue our data products journey. "
+ "# Create an image of the WFSS data using our create_image function\n"
]
},
{
@@ -676,12 +514,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "6.-Data products: stage 2 (calibrated exposures)\n",
+ "4.-Data products: stage 2 (calibrated exposures)\n",
"------------------\n",
"\n",
"The paths through the pipeline begin to diverge during Stage 2 for different observing modes. This stage applies physical corrections and calibrations to individual exposures to produce fully calibrated (unrectified) exposures, and the pipeline module used depends on the exposure type: either imaging or spectroscopy. More information can be found in the [JWST User Documentation](https://jwst-docs.stsci.edu/jwst-data-reduction-pipeline/algorithm-documentation/stages-of-processing). We also have a full list of data product types and the units of the data for each product [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/product_types.html#data-product-types). \n",
"\n",
- "## 6.1.-Imaging\n",
+ "Detailed information about imaging and spectroscopic data products for this stage is in the software Read-the-Docs:\n",
+ "* [Imaging](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_image2.html#inputs)\n",
+ "* [Spectroscopy](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_spec2.html#inputs)\n",
+ "\n",
+ "## 4.1.-Imaging\n",
"\n",
"Stage 2 image processing applies additional instrumental corrections and calibrations that result in a fully calibrated individual exposure. Non-time series exposures use the CALWEBB_IMAGE2 module, which applies all applicable steps to the data. The CALWEBB_TSO-IMAGE2 module, on the other hand, should be used for time series exposures, for which some steps are set to be skipped by default. Both modules call the Image2Pipeline; the only difference is which steps are applied.\n",
"\n",
@@ -735,15 +577,6 @@
"# Load the calibrated image into a model (use: cal_image)\n"
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Plot the image\n"
- ]
- },
{
"cell_type": "code",
"execution_count": null,
@@ -767,16 +600,7 @@
"* ```var_poisson```\n",
"* ```var_rnoise```\n",
"\n",
- "Also notice the ```bunit_data``` and ```bunit_err``` metadata values - those provide the units for the data. The metadata and data arrays can be accessed in the way we described before:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Grab the flat variance array\n"
+ "Also notice the ```bunit_data``` and ```bunit_err``` metadata values - those provide the units for the data. These have been updated for the Stage 2 data products."
]
},
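As a quick sanity check on how these arrays relate: the total ```err``` is documented as the square root of the summed variances. A toy single-pixel sketch (the numbers are invented):

```python
import math

# Invented single-pixel variance values for illustration; the pipeline's
# ERR array is the square root of the sum of the variance components.
var_poisson = 0.04    # shot-noise variance
var_rnoise = 0.01     # read-noise variance
var_flat = 0.0025     # flat-field variance (present after Stage 2)

err = math.sqrt(var_poisson + var_rnoise + var_flat)
```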
{
@@ -792,25 +616,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "You'll notice in the metadata that there is more information -- for example, the association file name, data units, and WCS information. We'll revisit the WCS in the last section of this module. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "# Check out the entire metadata list\n"
+ "You'll also notice in the metadata that there is more information -- for example, the association file name, data units, and WCS information. We'll revisit the WCS in the last section of this module. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 6.2.-Spectroscopy\n",
+ "## 4.2.-Spectroscopy\n",
"\n",
"Stage 2 spectroscopic processing applies additional instrumental corrections and calibrations to countrate products, resulting in fully calibrated individual exposures. There are two unique configurations (meaning, the steps applied and the order they are applied in) used to control this pipeline, depending on whether the data are to be treated as time series observations. Non-time series exposures use the CALWEBB_SPEC2 configuration, which applies all applicable steps to the data. The CALWEBB_TSO-SPEC2 configuration, on the other hand, should be used for time series exposures, for which some steps are skipped by default. Both configurations call the Spec2Pipeline module; the only difference is which steps are applied.\n",
"\n",
@@ -878,7 +691,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Load the calibrated WFSS data into a model (use: cal_wfss) \n"
+ "# Load the calibrated WFSS data into a MultiSpecModel (use: cal_wfss) \n"
]
},
{
@@ -887,7 +700,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Check out the structure\n"
+ "# Check out the structure using .info()\n"
]
},
{
@@ -897,17 +710,6 @@
"Here, we no longer have the ```data``` array, because the model contains extracted spectral data for one or more slits/sources."
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "# What's in slits?\n"
- ]
- },
{
"cell_type": "markdown",
"metadata": {},
@@ -923,7 +725,7 @@
},
"outputs": [],
"source": [
- "# Choose a slit, say slit #10, and check out all the meta data (use: slit_number)\n"
+ "# Choose a slit, say slit #12, and look at all the metadata (use: slit_number)\n"
]
},
{
@@ -932,7 +734,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Print the source ID, spectral order, bounding box, source position, and data mean\n"
+ "# Print the source ID and spectral order\n"
]
},
{
@@ -941,14 +743,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Look at the WCS information for a particular column and row (use: column, row, ra, dec, wavelength, order)\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "What does this slit look like? "
+ "# Print the source position on the detector using source_xpos, source_ypos\n"
]
},
{
@@ -957,16 +752,14 @@
"metadata": {},
"outputs": [],
"source": [
- "# Create an image of this slit\n"
+ "# Look at the WCS info for a column and row (100, 4) (use: ra, dec, wavelength, order)\n"
]
},
{
"cell_type": "markdown",
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
"source": [
- "Or plot one column of the dispersed image for our slit:"
+ "What does this slit look like? "
]
},
{
@@ -975,32 +768,23 @@
"metadata": {},
"outputs": [],
"source": [
- "# Plot one column of the slit\n"
+ "# Create an image of this slit\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "What about the 1D extracted spectral data product, the ```_x1d.fits``` file? At first glance using FITS, this file can appear very complicated because there is one extension for each source and spectral order:"
+ "Now, let's take a look at the 1D extracted spectral data product, the ```_x1d.fits``` file. At first glance using FITS, this file can appear very complicated because there is one extension for each extracted source and spectral order:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
- "outputs": [],
- "source": [
- "# Use FITS to examine the structure of this file \n"
- ]
- },
- {
- "cell_type": "markdown",
"metadata": {},
+ "outputs": [],
"source": [
- "However, the ```MultiSpecModel``` makes it much easier to work with this file:"
+ "# Use fits.info() to look at the WFSS x1d.fits file, which is \"wfss_x1d_file[1]\"\n"
]
},
{
@@ -1009,23 +793,24 @@
"metadata": {},
"outputs": [],
"source": [
- "# Switch to a datamodel (use: spec)\n"
+ "# Get the source ID and spectral order for extension 3 w/ FITS \n",
+ "# (use: headers, SOURCEID, SPORDER)\n"
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "#What's the shape of spec.spec?\n"
+ "Now, load the ```x1d.fits``` file above into a JWST data model. "
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "We can choose our source and the spectral order, and plot the spectrum:"
+ "# Open the same file using a MultiSpecModel (use: spec)\n"
]
},
{
@@ -1034,18 +819,16 @@
"metadata": {},
"outputs": [],
"source": [
- "# Get the source ID and spectral order, just use the same slit_number\n"
+ "# How many spectra are in the model? \n"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
"outputs": [],
"source": [
- "# Plot the spectrum\n"
+ "# Get the source ID and spectral order for slit 3 using the model \n"
]
},
{
@@ -1059,7 +842,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "7.-WCS deep dive\n",
+ "5.-WCS deep dive\n",
"------------------\n",
"\n",
"The first step in Stage 2 processing (\"Assign WCS\") is where the information to transfer the pixel coordinates to astronomical coordinates (e.g., RA and Dec) is added to the data. The WCS information and distortion model are provided by instrument- and detector-specific calibration reference files. The data itself is not modified by this step; it just associates a WCS object with each science exposure. The WCS object transforms positions in the detector frame to positions in a world coordinate frame - ICRS and wavelength. In general, there may be intermediate coordinate frames depending on the instrument. The WCS is saved in the ASDF extension of the FITS file and can be accessed as an attribute of the meta object when the FITS file is opened as a data model.\n",
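Before working with the real gwcs object, the forward/inverse idea can be sketched with a toy linear detector-to-world mapping (the plate scale and reference values are made up, and a real JWST WCS includes distortion and is not a simple linear transform):

```python
# Toy linear detector <-> world mapping; not a real tangent projection.
SCALE = 0.031 / 3600.0          # assumed ~NIRCam-like arcsec/pix, in degrees
CRPIX = (1024.5, 1024.5)        # made-up reference pixel
CRVAL = (80.48, -69.49)         # made-up reference sky position (deg)

def detector_to_world(x, y):
    """Forward transform: pixel coordinates -> (RA, Dec) in degrees."""
    ra = CRVAL[0] + (x - CRPIX[0]) * SCALE
    dec = CRVAL[1] + (y - CRPIX[1]) * SCALE
    return ra, dec

def world_to_detector(ra, dec):
    """Inverse transform: (RA, Dec) in degrees -> pixel coordinates."""
    x = CRPIX[0] + (ra - CRVAL[0]) / SCALE
    y = CRPIX[1] + (dec - CRVAL[1]) / SCALE
    return x, y

ra, dec = detector_to_world(3, 500)
x, y = world_to_detector(ra, dec)   # round-trips back to (3, 500)
```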
@@ -1095,7 +878,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Look at the WCS info in the calibrated image model \n"
+ "# Look at the WCS info in the calibrated image model (cal_image)\n"
]
},
{
@@ -1111,7 +894,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# How does this compare to the FITS wcs? (use: image_fits_wcs)\n"
+ "# How does this compare to the FITS WCS? (use: image_fits_wcs)\n"
]
},
{
@@ -1256,16 +1039,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Look at CRVAL and CRPIX in the datamodel\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Get the WCS info for multiple pixels: (200, 4) and (30, 3) -- (use: ra, dec, wavelength, order)\n"
+ "# Get the WCS info for multiple pixels: (200, 4), (30, 3) - (use: ra, dec, wavelength, order)\n"
]
},
{
@@ -1313,23 +1087,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The next and final module will discuss the Stage 3 data products, the last stage of processing in the pipeline."
+ "### A.-Exercise 2\n",
+ "Now, you try it!"
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "[Top of Page](#title_ID)"
+ "# Get the detector to world transform for our cal_image (hint: cal_image.meta.wcs)\n"
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "8.-Exercise\n",
- "--------------------------------------------------------------------\n",
- "Now, you try it!"
+ "# Do the detector to world transformation for pixel (3, 500)\n"
]
},
{
@@ -1338,7 +1115,7 @@
"metadata": {},
"outputs": [],
"source": [
- "#Load the exercise data using a model\n"
+ "# Now get the inverse transform from world to detector\n"
]
},
{
@@ -1347,7 +1124,62 @@
"metadata": {},
"outputs": [],
"source": [
- "# What instrument and mode are used here?\n"
+ "# Do the inverse transformation using your RA, Dec to get your pixel back\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[Top of Page](#title_ID)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "6.-Bonus: Associations\n",
+ "------------------\n",
+ "\n",
+ "Since association files are part of the JWST data products used to process data through Stage 2 and Stage 3, this is a good time to introduce JWST associations. Associations are basically just lists of files, mostly exposures, that are related in some way. For JWST, associations have the following characteristics:\n",
+ "\n",
+ "* Relationships between multiple exposures are captured in an association.\n",
+ "* An association is a means of identifying a set of exposures that belong together and may be dependent upon one another.\n",
+ "* The association concept permits exposures to be calibrated, archived, retrieved, and reprocessed as a set rather than as individual objects.\n",
+ "\n",
+ "In general, it takes many exposures to make up a single observation, and an entire program is made up of a large number of observations. Given a set of exposures for a program, there is a tool that groups the exposures into individual associations. These associations are then used as input to the Stage 2 and 3 calibration steps to perform the transformation from exposure-based data to source-based, high(er) signal-to-noise data. The association used to process data is available in MAST as part of the \"Info\" data product category. You can read more about associations [here](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/index.html). "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "An example of a Stage 2 association is shown [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level2_asn_technical.html#example-association), along with a [Stage 3 association](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level3_asn_technical.html#example-association). Unless you are generating your own data or simulations, you will probably not need to create an association file, because you will have the option to retrieve association files from MAST along with your data for reprocessing. \n",
+ "\n",
+ "However, if you do want to create an association, there are also [command line tools](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/asn_from_list.html) included in the pipeline software that help with generating associations for manually running the pipeline. "
+ ]
+ },
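To make "lists of files" concrete, here is a minimal, made-up association in the documented JSON layout (the program, product, and member names are invented; see the linked documentation for real examples):

```python
import json

# Minimal, invented Stage 3-style association: exposures grouped into one
# science product. Field names follow the documented layout; values are fake.
asn = {
    "asn_type": "image3",
    "program": "98765",
    "products": [
        {
            "name": "jw98765-o001_t001_nircam_f150w",
            "members": [
                {"expname": "jw98765001001_01101_00001_nrcb1_cal.fits",
                 "exptype": "science"},
                {"expname": "jw98765001001_01101_00002_nrcb1_cal.fits",
                 "exptype": "science"},
            ],
        }
    ],
}

asn_text = json.dumps(asn, indent=4)   # what the _asn.json file would hold
```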
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[Top of Page](#title_ID)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "7.-Exercise Solutions \n",
+ "--------------------------------------------------------------------\n",
+ "Below are the solutions for [Exercise 1](#exercise-1) and [Exercise 2](#exercise-2). "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 1"
]
},
{
@@ -1356,7 +1188,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# What are the data units? \n"
+ "# Load the WFSS image into the appropriate model (hint: ImageModel)\n",
+ "wfss_image = datamodels.ImageModel(wfss_rate_file[1])"
]
},
{
@@ -1365,7 +1198,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Which calibration steps were applied?\n"
+ "# What are the arrays associated with this data? (hint: .info())\n",
+ "wfss_image.info()"
]
},
{
@@ -1374,7 +1208,15 @@
"metadata": {},
"outputs": [],
"source": [
- "# Choose a pixel or pixels and get the WCS information\n"
+ "# Create an image of the WFSS data using our create_image function\n",
+ "create_image(wfss_image)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 2"
]
},
{
@@ -1383,7 +1225,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Now get the detector to world transform\n"
+ "# Get the detector to world transform for our cal_image (hint: cal_image.meta.wcs)\n",
+ "d2w = cal_image.meta.wcs.get_transform('detector','world') "
]
},
{
@@ -1392,7 +1235,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Do the transformormation\n"
+ "# Do the detector to world transformation for pixel (3, 500)\n",
+ "ra, dec = d2w(3, 500)"
]
},
{
@@ -1401,7 +1245,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Now get the inverse transform\n"
+ "# Now get the inverse transform from world to detector\n",
+ "w2d = cal_image.meta.wcs.get_transform('world','detector') "
]
},
{
@@ -1410,7 +1255,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Do the transformation - do you get your pixel back? \n"
+ "# Do the inverse transformation using your RA, Dec to get your pixel back\n",
+ "w2d(ra, dec)"
]
},
{
@@ -1437,7 +1283,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.2"
+ "version": "3.9.1"
}
},
"nbformat": 4,
diff --git a/pipeline_products_session/jwst-data-products-part2-static.ipynb b/pipeline_products_session/jwst-data-products-part2-solutions.ipynb
similarity index 74%
rename from pipeline_products_session/jwst-data-products-part2-static.ipynb
rename to pipeline_products_session/jwst-data-products-part2-solutions.ipynb
index 088d4c1..77c0fe2 100644
--- a/pipeline_products_session/jwst-data-products-part2-static.ipynb
+++ b/pipeline_products_session/jwst-data-products-part2-solutions.ipynb
@@ -7,27 +7,45 @@
"\n",
"# JWST Data Products: Calibrated Individual Exposures and WCS\n",
"--------------------------------------------------------------\n",
- "**Author**: Alicia Canipe (acanipe@stsci.edu) with exerpts from Espinoza, Sosey | **Latest update**: March 30, 2021.\n",
+ "**Author**: Alicia Canipe (acanipe@stsci.edu) with excerpts from Espinoza, Sosey | **Latest update**: April 26, 2021.\n",
"\n",
+ "\n",
+ "<div class=\"alert alert-block alert-info\">\n",
+ "<b>Notebook Goals</b>\n",
+ "<p>Using JWST data models, we will:</p>\n",
+ "<ul>\n",
+ " <li>Explore Stage 1 data products (detector corrections)</li>\n",
+ " <li>Examine Stage 2 imaging and spectroscopic data products (calibrated individual exposures)</li>\n",
+ " <li>Take a closer look at WCS information for JWST data</li>\n",
+ " <li>Bonus information: JWST associations</li>\n",
+ "</ul>\n",
+ "</div>"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
"## Table of contents\n",
"1. [Introduction](#intro)\n",
" 1. [Resources](#resources) \n",
- "2. [Data in MAST](#mast)\n",
- "3. [Example data for this exercise](#example)\n",
- "4. [Data products: stage 1 (detector corrections)](#stage1)\n",
+ " 2. [Data in MAST](#mast)\n",
+ "2. [Example data for this exercise](#example)\n",
+ "3. [Data products: stage 1 (detector corrections)](#stage1)\n",
" 1. [Input](#s1-input)\n",
" 2. [Output](#s1-output)\n",
- " 3. [Examining the products](#s1-examine)\n",
- "5. [Associations](#associations)\n",
- "6. [Data products: stage 2 (calibrated exposures)](#stage2)\n",
+ " 3. [Examining the pipeline products](#s1-examine)\n",
+ " 4. [Exercise 1](#exercise-1)\n",
+ "4. [Data products: stage 2 (calibrated exposures)](#stage2)\n",
" 1. [Imaging](#s2-imaging)\n",
" 1. [Input](#s2-imaging-input)\n",
" 2. [Output](#s2-imaging-output)\n",
" 2. [Spectroscopy](#s2-spectroscopy)\n",
" 1. [Input](#s2-spectroscopy-input)\n",
" 2. [Output](#s2-spectroscopy-output)\n",
- "7. [WCS deep dive](#wcs)\n",
- "8. [Exercise](#exercise)"
+ "5. [WCS deep dive](#wcs)\n",
+ " 1. [Exercise 2](#exercise-2) \n",
+ "6. [Bonus: Associations](#associations)\n",
+ "7. [Exercise Solutions](#solutions)"
]
},
{
@@ -41,11 +59,27 @@
"\n",
"### A.-Resources\n",
"\n",
- "Visit the [webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars) to find resources for:\n",
- "* The Mikulski Archive for Space Telescopes (MAST) \n",
- "* JWST Documentation (JDox) for JWST data products\n",
- "* The most up-to-date information about JWST data products in the pipeline readthedocs\n",
- "* Pipeline roadmaps for when to recalibrate your data\n",
+ "\n",
+ "* [STScI Webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars)\n",
+ "* [The Mikulski Archive for Space Telescopes (MAST)](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)\n",
+ "* [JWST Documentation (JDox) for JWST data products](https://jwst-docs.stsci.edu/obtaining-data)\n",
+ "* [The most up-to-date information about JWST data products in the pipeline readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/index.html)\n",
+ "\n",
+ "### B.-Data in MAST\n",
+ "\n",
+ "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
+ "\n",
+ "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
+ "\n",
+ "Standard science data files include:\n",
+ "\n",
+ "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
+ "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
+ "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal``` or ```calints```\n",
+ "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
+ "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
+ "\n",
+ "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. \n",
"\n",
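+ "Because the suffix is the last underscore-separated token of the file name, a tiny helper can identify the product type of a downloaded file. The helper and the example file name below are illustrative, not part of the pipeline:\n",

```python
def product_suffix(filename):
    """Return the product suffix ('uncal', 'rate', 'cal', ...) from a
    JWST-style file name."""
    stem = filename.rsplit(".", 1)[0]   # drop the .fits extension
    return stem.rsplit("_", 1)[-1]      # last underscore-separated token

# Invented file name following the documented naming pattern
suffix = product_suffix("jw98765001001_01101_00001_nrcb1_rate.fits")
```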
"Before we begin, import the libraries used in this notebook:"
]
@@ -119,17 +153,19 @@
"metadata": {},
"outputs": [],
"source": [
- "def create_image(data_2d, vmin, vmax, xpixel=None, ypixel=None, title=None):\n",
+ "def create_image(data_2d, title=None):\n",
" ''' Function to generate a 2D image of the data, \n",
" with scaling chosen by exposure type (imaging or WFSS).\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
" \n",
- " if xpixel and ypixel:\n",
- " plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')\n",
+ " if 'IMAGE' in data_2d.meta.exposure.type:\n",
+ " plt.imshow(data_2d.data, origin='lower', cmap='gray', vmin=0, vmax=10)\n",
+ " \n",
+ " elif 'WFSS' in data_2d.meta.exposure.type:\n",
+ " plt.imshow(data_2d.data, origin='lower', cmap='gray', vmin=-0.05, vmax=0.5) \n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
@@ -139,7 +175,7 @@
"\n",
" fig.tight_layout()\n",
" plt.subplots_adjust(left=0.15)\n",
- " plt.colorbar(label='DN')"
+ " plt.colorbar(label=data_2d.meta.bunit_data)"
]
},
{
@@ -148,21 +184,15 @@
"metadata": {},
"outputs": [],
"source": [
- "def create_slit_image(data_2d, slit_number, vmin=None, vmax=None, title=None):\n",
+ "def create_slit_image(data_2d, slit_number, title=None):\n",
" ''' Function to generate a 2D image of a particular slit.\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
" \n",
- " if vmin and vmax:\n",
- " plt.imshow(data_2d.slits[slit_number].data, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
- " plt.colorbar(label='DN/sec') \n",
- " else:\n",
- " minimum = data_2d.slits[slit_number].data.min()\n",
- " maximum = data_2d.slits[slit_number].data.max()\n",
- " plt.imshow(data_2d.slits[slit_number].data, origin='lower', cmap='gray', vmin=minimum, vmax=maximum)\n",
- " plt.colorbar(label='DN/sec') \n",
+ " plt.imshow(data_2d.slits[slit_number].data, origin='lower', cmap='gray', vmin=-0.1, vmax=0.3)\n",
+ " plt.colorbar(label=data_2d.meta.bunit_data) \n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
@@ -177,120 +207,6 @@
" plt.subplots_adjust(left=0.15)"
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def plot_ramp(groups, signal, xpixel=None, ypixel=None, title=None):\n",
- " ''' Function to generate the ramp for pixel.\n",
- " '''\n",
- " \n",
- " fig = plt.figure(figsize=(8, 8))\n",
- " ax = plt.subplot()\n",
- " if xpixel and ypixel:\n",
- " plt.plot(groups, signal, marker='o', label='Pixel ('+str(xpixel)+','+str(ypixel)+')') \n",
- " plt.legend(loc=2)\n",
- "\n",
- " else:\n",
- " plt.plot(groups, signal, marker='o')\n",
- " \n",
- " plt.xlabel('Groups')\n",
- " plt.ylabel('Signal (DN)')\n",
- " plt.subplots_adjust(left=0.15)\n",
- " \n",
- " if title:\n",
- " plt.title(title)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def plot_column(data_2d, column, title=None):\n",
- " ''' Function to generate a plot for one column in a dispersed image.\n",
- " '''\n",
- " \n",
- " fig = plt.figure(figsize=(10, 5))\n",
- " ax = plt.subplot()\n",
- " plt.plot(data_2d[:,column], label='Column '+str(column))\n",
- " \n",
- " plt.xlabel('Pixel row')\n",
- " plt.ylabel('Column values')\n",
- " plt.subplots_adjust(left=0.15)\n",
- " \n",
- " if title:\n",
- " plt.title(title)\n",
- " else:\n",
- " plt.title('WFSS plot of one column in dispersed image')\n",
- " \n",
- " plt.legend()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def plot_spectra(spec, number, median_filter=None, title=None):\n",
- " ''' Function to generate the spectrum for a slit.\n",
- " '''\n",
- " \n",
- " fig = plt.figure(figsize=(10, 5))\n",
- " ax = plt.subplot()\n",
- " \n",
- " if median_filter:\n",
- " plt.plot(spec.spec[number].spec_table['WAVELENGTH'], medfilt(spec.spec[number].spec_table['FLUX'],median_filter)) \n",
- " \n",
- " else: \n",
- " plt.plot(spec.spec[number].spec_table['WAVELENGTH'], spec.spec[number].spec_table['FLUX']) \n",
- "\n",
- " \n",
- " plt.xlabel('Wavelength (um)')\n",
- " plt.ylabel('Flux')\n",
- " \n",
- " plt.subplots_adjust(left=0.15)\n",
- " \n",
- " if title:\n",
- " plt.title(title)\n",
- " else:\n",
- " title='Spectrum for Source '+str(spec.spec[number].source_id)+', Spectral Order '+str(spec.spec[number].spectral_order)\n",
- " plt.title(title)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#title_ID)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "2.-Data in MAST \n",
- "------------------\n",
- "\n",
- "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
- "\n",
- "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
- "\n",
- "Standard science data files include:\n",
- "\n",
- "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
- "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
- "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal```\n",
- "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
- "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
- "\n",
- "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. "
- ]
- },
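The suffix conventions listed above lend themselves to a small lookup. The sketch below is purely illustrative — the `SUFFIX_TYPES` mapping and `product_type` helper are not part of any JWST library, and the filename shown is invented:

```python
# Illustrative only: classify a JWST product filename by the suffixes
# described above (uncal, rate, rateints, cal, i2d, s2d, x1d, c1d).
SUFFIX_TYPES = {
    "uncal": "uncalibrated raw data",
    "rate": "countrate image (integration-averaged)",
    "rateints": "countrate cube (per integration)",
    "cal": "calibrated single exposure",
    "i2d": "resampled imaging",
    "s2d": "resampled spectroscopy",
    "x1d": "extracted 1-D spectrum",
    "c1d": "combined 1-D spectrum",
}

def product_type(filename):
    """Return the product type for a name like 'jw..._nrca1_rate.fits'."""
    stem = filename.rsplit(".", 1)[0]     # drop the .fits extension
    suffix = stem.rsplit("_", 1)[-1]      # last underscore-separated token
    return SUFFIX_TYPES.get(suffix, "unknown")
```
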
{
"cell_type": "markdown",
"metadata": {},
@@ -302,7 +218,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "3.-Example data for this exercise \n",
+ "2.-Example data for this exercise \n",
"------------------\n",
"\n",
"For this module, we will use calibrated NIRCam simulated imaging and wide field slitless spectroscopy (WFSS) exposures that are stored in Box. Let's grab the data:"
@@ -356,11 +272,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "4.-Data products: stage 1 (detector corrections)\n",
+ "3.-Data products: stage 1 (detector corrections)\n",
"------------------\n",
"\n",
"All JWST data, regardless of the instrument or mode (with the exception of a few specific engineering or calibration cases), is processed through the CALWEBB_DETECTOR1 module, which is Stage 1 of the pipeline. A number of instrument signatures are accounted for in this stage, such as bias corrections and cosmic ray flagging, and slopes are fit to the corrected ramps. More information can be found in the [JWST User Documentation for CALWEBB_DETECTOR1](https://jwst-docs.stsci.edu/jwst-data-reduction-pipeline/algorithm-documentation/stages-of-processing/calwebb_detector1). We also have a full list of data product types and the units of the data for each product [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/product_types.html#data-product-types). \n",
"\n",
+ "Detailed information about data products for this stage are in [the software Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_detector1.html#inputs).\n",
+ "\n",
"### A.-Input\n",
"\n",
"The inputs to this stage are listed below.\n",
@@ -398,14 +316,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### C.-Examining the products"
+ "### C.-Examining the pipeline products"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Let's take a closer look at some data products. As our first example, we can grab the ```rate``` and ```rateints``` data products for the simulation we used in module 1. Looking above, we can see that these types of data use the ImageModel and the CubeModel, respectively."
+ "Let's take a closer look at some data products. We've already explored uncalibrated data in part 1. Now, let's take a look at the output products for Stage 1 processing. As our first example, we can grab the ```rate``` and ```rateints``` data products for the simulation we used in module 1. Looking above, we can see that these types of data use the ImageModel and the CubeModel, respectively."
]
},
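The `rate` products are the slopes fit to each pixel's corrected up-the-ramp samples during Stage 1. The following is only a conceptual sketch — a plain least-squares fit on invented group times and signal values, not the pipeline's actual ramp-fitting algorithm (which also accounts for cosmic-ray jumps and noise weighting):

```python
# Conceptual sketch of ramp fitting: fit a slope (counts/s) to the
# accumulating up-the-ramp samples of one pixel. All numbers are made up.
def fit_slope(times, signal):
    """Ordinary least-squares slope of signal vs. time."""
    n = len(times)
    t_mean = sum(times) / n
    s_mean = sum(signal) / n
    num = sum((t - t_mean) * (s - s_mean) for t, s in zip(times, signal))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

group_times = [10.7 * g for g in range(1, 6)]   # hypothetical group times (s)
ramp = [107.0 * g for g in range(1, 6)]         # a perfectly linear ramp (counts)
rate = fit_slope(group_times, ramp)             # ≈ 10.0 counts/s
```
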
{
@@ -414,9 +332,18 @@
"metadata": {},
"outputs": [],
"source": [
- "# Load the data into models (use: rate_image, rateints_cube)\n",
- "rate_image = datamodels.ImageModel(rate_file[1])\n",
- "rateints_cube = datamodels.CubeModel(rateints_file[1])"
+ "# Load the integration-averaged data into a model (use: rate_image)\n",
+ "rate_image = datamodels.ImageModel(rate_file[1])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the data for individual integrations into a model (use: rateints_image)\n",
+ "rateints_image = datamodels.CubeModel(rateints_file[1])"
]
},
{
@@ -432,7 +359,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Check out the structure of the rate file\n",
+ "# Check out the structure of the rate file using .info()\n",
"rate_image.info()"
]
},
@@ -452,7 +379,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "These can be accessed the same way we described before:"
+ "These can be accessed the same way we described in Part 1:"
]
},
{
@@ -472,18 +399,7 @@
"outputs": [],
"source": [
"# Print the shape of the science data array for the rateints image\n",
- "rateints_cube.data.shape"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Grab the variance due to poisson noise \n",
- "variance_poisson = rate_image.var_poisson\n",
- "variance_poisson"
+ "rateints_image.data.shape"
]
},
{
@@ -493,73 +409,14 @@
"outputs": [],
"source": [
"# Create an image of the rate data\n",
- "create_image(rate_image.data, 0, 10, title=\"2D image data product\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Compare that to an image of one integration for the rateints data\n",
- "create_image(rateints_cube.data[-1,:,:], 0, 10, title=\"3D cube data product\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We can also examine the ramp for a pixel with the bias drift removed using the optional 4D ```_ramp.fits``` file to revisit the up-the-ramp sampling, with detector corrections applied: "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Use: integration, pixel_y, pixel_x, group\n",
- "integration = 0\n",
- "pixel_y = 741\n",
- "pixel_x = 1798\n",
- "group = -1"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Load the ramp_file into the RampModel and set up arrays to plot (use: ramp_model, groups, signal_adu)\n",
- "ramp_model = datamodels.RampModel(ramp_file[1])\n",
- "groups = np.arange(0, ramp_model.meta.exposure.ngroups)\n",
- "signal_adu = ramp_model.data[integration, :, pixel_y, pixel_x]"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Plot the ramp\n",
- "plot_ramp(groups, signal_adu, xpixel=pixel_x, ypixel=pixel_y, title=\"Optional ramp data product\")"
+ "create_image(rate_image)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Let's take a look at the metadata for our output products, but rather than using the standard FITS methods, let's use the data model to access the information. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The metadata has been updated after going through Stage 1 processing: "
+ "Now, let's take a look at the metadata for our output products. The metadata has been updated after going through Stage 1 processing: "
]
},
{
@@ -589,7 +446,8 @@
},
"outputs": [],
"source": [
- "# Find datamodel equivalent of the FITS keyword indicating that the linearity correction was done (S_LINEAR)\n",
+ "# Find datamodel equivalent of the FITS keyword indicating that the \n",
+ "# linearity correction was done (S_LINEAR)\n",
"rate_image.find_fits_keyword('S_LINEAR') "
]
},
@@ -600,7 +458,7 @@
"outputs": [],
"source": [
"# Search the datamodel for information related to units \n",
- "rate_image.search_schema('unit')"
+ "rate_image.search(key='unit')"
]
},
{
@@ -617,17 +475,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now, let's take a look at some spectroscopic data -- how about a NIRCam WFSS dispersed image? At this stage, the structure will be roughly the same as for our other image example."
+ "### D.-Exercise 1\n",
+ "Now, you try it!"
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "# Load the WFSS image into the appropriate model (use: wfss_image)\n",
- "wfss_image = datamodels.ImageModel(wfss_rate_file[1])"
+ "Let's take a look at some spectroscopic data -- how about a NIRCam WFSS dispersed image? At this stage, the structure will be roughly the same as for our other image example."
]
},
{
@@ -636,8 +492,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Create an image of the data\n",
- "create_image(wfss_image.data, vmin=-0.05, vmax=0.5, title=\"2D WFSS data before Stage 2\")"
+ "# Load the WFSS image into the appropriate model (hint: ImageModel)\n"
]
},
{
@@ -646,47 +501,16 @@
"metadata": {},
"outputs": [],
"source": [
- "# Look at the structure of the model\n",
- "wfss_image.info()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#title_ID)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "5.-Associations\n",
- "------------------\n",
- "\n",
- "Now that we're moving on to examine data products for Stage 2 and Stage 3 processing, it would be a good time to mention JWST associations, since the association files are a part of the JWST data products used to process data through Stage 2 and Stage 3. Associations are basically just lists of files, mostly exposures, that are related in some way. For JWST, associations have the following characteristics:\n",
- "\n",
- "* Relationships between multiple exposures are captured in an association.\n",
- "* An association is a means of identifying a set of exposures that belong together and may be dependent upon one another.\n",
- "* The association concept permits exposures to be calibrated, archived, retrieved, and reprocessed as a set rather than as individual objects.\n",
- "\n",
- "In general, it takes many exposures to make up a single observation, and an entire program is made up of a large number of observations. Given a set of exposures for a program, there is a tool that groups the exposures into individual associations. These associations are then used as input to the Stage 2 and 3 calibration steps to perform the transformation from exposure-based data to source-based, high(er) signal-to-noise data. The association used to process data is available in MAST as part of the \"Info\" data product category. You can read more about associations [here](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/index.html). "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "An example of a Stage 2 association is shown [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level2_asn_technical.html#example-association), along with a [Stage 3 association](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level3_asn_technical.html#example-association). Unless you are generating your own data or simulations, you will probably not need to create an association file, because you will have the option to retrieve association files from MAST along with your data for reprocessing. \n",
- "\n",
- "However, if you do want to create an association, there are also [command line tools](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/asn_from_list.html) included in the pipeline software that help with generating associations for manually running the pipeline. "
+ "# What are the arrays associated with this data? (hint: .info())\n"
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "Now that that's out of the way, let's continue our data products journey. "
+ "# Create an image of the WFSS data using our create_image function\n"
]
},
{
@@ -700,12 +524,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "6.-Data products: stage 2 (calibrated exposures)\n",
+ "4.-Data products: stage 2 (calibrated exposures)\n",
"------------------\n",
"\n",
"The paths through the pipeline begin to diverge during Stage 2 for different observing modes. This stage applies physical corrections and calibrations to individual exposures to produce fully calibrated (unrectified) exposures, and the pipeline module used depends on the exposure type: either imaging or spectroscopy. More information can be found in the [JWST User Documentation](https://jwst-docs.stsci.edu/jwst-data-reduction-pipeline/algorithm-documentation/stages-of-processing). We also have a full list of data product types and the units of the data for each product [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/product_types.html#data-product-types). \n",
"\n",
- "## 6.1.-Imaging\n",
+ "Detailed information about imaging and spectroscopic data products for this stage are in the software Read-the-Docs:\n",
+ "* [Imaging](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_image2.html#inputs)\n",
+ "* [Spectroscopy](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_spec2.html#inputs)\n",
+ "\n",
+ "## 4.1.-Imaging\n",
"\n",
"Stage 2 image processing applies additional instrumental corrections and calibrations that result in a fully calibrated individual exposure. Non-time series exposures use the CALWEBB_IMAGE2 module, which applies all applicable steps to the data. The CALWEBB_TSO-IMAGE2 module, on the other hand, should be used for time series exposures, for which some steps are set to be skipped by default. Both modules call the Image2Pipeline; the only difference is which steps are applied.\n",
"\n",
@@ -760,16 +588,6 @@
"cal_image = datamodels.ImageModel(cal_file[1])"
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Plot the image\n",
- "create_image(cal_image.data[:,:], 0, 10, title=\"2D calibrated image\")"
- ]
- },
{
"cell_type": "code",
"execution_count": null,
@@ -794,17 +612,7 @@
"* ```var_poisson```\n",
"* ```var_rnoise```\n",
"\n",
- "Also notice the ```bunit_data``` and ```bunit_err``` metadata values - those provide the units for the data. The metadata and data arrays can be accessed in the way we described before:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Grab the flat variance array\n",
- "variance_flat = cal_image.var_flat"
+ "Also notice the ```bunit_data``` and ```bunit_err``` metadata values - those provide the units for the data. These have been updated for the Stage 2 data products."
]
},
{
@@ -821,26 +629,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "You'll notice in the metadata that there is more information -- for example, the association file name, data units, and WCS information. We'll revisit the WCS in the last section of this module. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "# Check out the entire metadata list\n",
- "cal_image.meta.instance"
+ "You'll also notice in the metadata that there is more information -- for example, the association file name, data units, and WCS information. We'll revisit the WCS in the last section of this module. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 6.2.-Spectroscopy\n",
+ "## 4.2.-Spectroscopy\n",
"\n",
"Stage 2 spectroscopic processing applies additional instrumental corrections and calibrations to countrate products that result in a fully calibrated individual exposures. There are two unique configurations (meaning, the steps applied and the order they are applied in) used to control this pipeline, depending on whether the data are to be treated as time series observations. Non-time series exposures use the CALWEBB_SPEC2 configuration, which applies all applicable steps to the data. The CALWEBB_TSO-SPEC2 configuration, on the other hand, should be used for time series exposures, which skips some steps by default. Both configurations call the Spec2Pipeline module; the only difference is which steps are applied.\n",
"\n",
@@ -908,7 +704,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Load the calibrated WFSS data into a model (use: cal_wfss) \n",
+ "# Load the calibrated WFSS data into a MultiSpecModel (use: cal_wfss) \n",
"cal_wfss = datamodels.MultiSpecModel(wfss_cal_file[1])"
]
},
@@ -918,7 +714,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Check out the structure\n",
+ "# Check out the structure using .info()\n",
"cal_wfss.info()"
]
},
@@ -929,18 +725,6 @@
"Here, we no longer have the ```data``` array, because the model contains extracted spectral data for one or more slits/sources."
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "# What's in slits?\n",
- "cal_wfss.slits"
- ]
- },
{
"cell_type": "markdown",
"metadata": {},
@@ -956,9 +740,9 @@
},
"outputs": [],
"source": [
- "# Choose a slit, say slit #10, and check out all the meta data (use: slit_number)\n",
- "slit_number = 12\n",
- "cal_wfss.slits[slit_number].meta.instance"
+ "# Choose a slit, say slit #12, and look at all the meta data (use: slit_number)\n",
+ "s_num = 12\n",
+ "cal_wfss.slits[s_num].meta.instance"
]
},
{
@@ -967,14 +751,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Print the source ID, spectral order, bounding box, source position, and data mean\n",
- "print('\\nSlit number: ', slit_number)\n",
- "print('Source ID: ', cal_wfss.slits[slit_number].source_id)\n",
- "print('Spectral order: ', cal_wfss.slits[slit_number].meta.wcsinfo.spectral_order)\n",
- "print('Bounding box: ', cal_wfss.slits[slit_number].meta.wcs.bounding_box)\n",
- "print('Source X position, Y position (full frame coordinates): ', cal_wfss.slits[slit_number].source_xpos,cal_wfss.slits[slit_number].source_ypos)\n",
- "print('Data average: ', cal_wfss.slits[slit_number].data.mean())\n",
- "print('\\n')"
+ "# Print the source ID and spectral order\n",
+ "cal_wfss.slits[s_num].source_id, cal_wfss.slits[s_num].meta.wcsinfo.spectral_order"
]
},
{
@@ -983,20 +761,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Look at the WCS information for a particular column and row (use: column, row, ra, dec, wavelength, order)\n",
- "column, row = 100, 4\n",
- "ra, dec, wavelength, order = cal_wfss.slits[slit_number].meta.wcs(column, row)\n",
- "print('RA: ', ra)\n",
- "print('Dec: ', dec)\n",
- "print('Wavelength: ',wavelength)\n",
- "print('Order: ',order)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "What does this slit look like? "
+ "# Print the source position on the detector using source_xpos, source_ypos\n",
+ "cal_wfss.slits[s_num].source_xpos,cal_wfss.slits[s_num].source_ypos"
]
},
{
@@ -1005,17 +771,16 @@
"metadata": {},
"outputs": [],
"source": [
- "# Create an image of this slit\n",
- "create_slit_image(cal_wfss, slit_number, vmin=-0.1, vmax=0.3)"
+ "# Look at the WCS info for a column and row (100, 4) (use: ra, dec, wavelength, order)\n",
+ "ra, dec, wavelength, order = cal_wfss.slits[s_num].meta.wcs(100, 4)\n",
+ "ra, dec, wavelength, order"
]
},
{
"cell_type": "markdown",
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
"source": [
- "Or plot one column of the dispersed image for our slit:"
+ "What does this slit look like? "
]
},
{
@@ -1024,39 +789,25 @@
"metadata": {},
"outputs": [],
"source": [
- "# Plot one column of the slit\n",
- "plot_column(cal_wfss.slits[slit_number].data, column)"
+ "# Create an image of this slit\n",
+ "create_slit_image(cal_wfss, s_num)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "What about the 1D extracted spectral data product, the ```_x1d.fits``` file? At first glance using FITS, this file can appear very complicated because there is one extension for each source and spectral order:"
+ "Now, let's take a look at the 1D extracted spectral data product, the ```_x1d.fits``` file. At first glance using FITS, this file can appear very complicated because there is one extension for each extracted source and spectral order:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
- "outputs": [],
- "source": [
- "# Use FITS to examine the structure of this file \n",
- "with fits.open(wfss_x1d_file[1]) as h:\n",
- " h.info()\n",
- " for i in np.arange(1,len(h)-1):\n",
- " print('\\nExtension: ',i)\n",
- " print('Source ID: ',h[i].header['SOURCEID']) \n",
- " print('Spectral Order: ',h[i].header['SPORDER'])"
- ]
- },
- {
- "cell_type": "markdown",
"metadata": {},
+ "outputs": [],
"source": [
- "However, the ```MultiSpecModel``` makes it much easier to work with this file:"
+ "# Use fits.info() to look at the WFSS x1d.fits file, which is \"wfss_x1d_file[1]\"\n",
+ "fits.info(wfss_x1d_file[1])"
]
},
{
@@ -1065,25 +816,28 @@
"metadata": {},
"outputs": [],
"source": [
- "# Switch to a datamodel (use: spec)\n",
- "spec = datamodels.MultiSpecModel(wfss_x1d_file[1])"
+ "# Get the source ID and spectral order for extension 3 w/ FITS \n",
+ "# (use: headers, SOURCEID, SPORDER)\n",
+ "headers = fits.getheader(wfss_x1d_file[1], 3)\n",
+ "headers['SOURCEID'], headers['SPORDER']"
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "#What's the shape of spec.spec?\n",
- "print(len(spec.spec))"
+ "Now, load the ```x1d.fits``` file above into a JWST data model. "
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "We can choose our source and the spectral order, and plot the spectrum:"
+ "# Open the same file using a MultiSpecModel (use: spec)\n",
+ "spec = datamodels.MultiSpecModel(wfss_x1d_file[1])\n",
+ "spec.info()"
]
},
{
@@ -1092,21 +846,18 @@
"metadata": {},
"outputs": [],
"source": [
- "# Get the source ID and spectral order, just use the same slit_number\n",
- "print('Source ID: ', spec.spec[slit_number].source_id)\n",
- "print('Spectral order: ', spec.spec[slit_number].spectral_order)"
+ "# How many spectra are in the model? \n",
+ "len(spec.spec)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
"outputs": [],
"source": [
- "# Plot the spectrum\n",
- "plot_spectra(spec, slit_number, median_filter=11) "
+ "# Get the source ID and spectral order for slit 3 using the model \n",
+ "spec.spec[3].source_id, spec.spec[3].spectral_order"
]
},
{
@@ -1120,7 +871,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "7.-WCS deep dive\n",
+ "5.-WCS deep dive\n",
"------------------\n",
"\n",
"The first step in Stage 2 processing (\"Assign WCS\") is where the information to transfer the pixel coordinates to astronomical coordinates (e.g., RA and Dec) is added to the data. The WCS information and distortion model are provided by instrument- and detector- specific calibration reference files. The data itself is not modified by this step, it just associates a WCS object with each science exposure. The WCS object transforms positions in the detector frame to positions in a world coordinate frame - ICRS and wavelength. In general, there may be intermediate coordinate frames depending on the instrument. The WCS is saved in the ASDF extension of the FITS file and can be accessed as an attribute of the meta object when the FITS file is opened as a data model.\n",
@@ -1156,7 +907,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Look at the WCS info in the calibrated image model \n",
+ "# Look at the WCS info in the calibrated image model (cal_image)\n",
"cal_image.meta.wcs"
]
},
@@ -1173,7 +924,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# How does this compare to the FITS wcs? (use: image_fits_wcs)\n",
+ "# How does this compare to the FITS WCS? (use: image_fits_wcs)\n",
"image_fits_wcs = cal_image.get_fits_wcs()\n",
"image_fits_wcs"
]
@@ -1314,7 +1065,7 @@
"outputs": [],
"source": [
"# What frames are available?\n",
- "cal_wfss.slits[slit_number].meta.wcs.available_frames"
+ "cal_wfss.slits[s_num].meta.wcs.available_frames"
]
},
{
@@ -1324,7 +1075,7 @@
"outputs": [],
"source": [
"# What was the bounding box used for the cutout?\n",
- "cal_wfss.slits[slit_number].meta.wcs.bounding_box"
+ "cal_wfss.slits[s_num].meta.wcs.bounding_box"
]
},
{
@@ -1333,19 +1084,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Look at CRVAL and CRPIX in the datamodel\n",
- "print('CRVAL: ', cal_wfss.meta.wcsinfo.crval1, cal_wfss.meta.wcsinfo.crval2)\n",
- "print('CRPIX: ', cal_wfss.meta.wcsinfo.crpix1, cal_wfss.meta.wcsinfo.crpix2)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Get the WCS info for multiple pixels: (200, 4) and (30, 3) -- (use: ra, dec, wavelength, order)\n",
- "ra, dec, wavelength, order = cal_wfss.slits[slit_number].meta.wcs([10, 4], [6, 3])\n",
+ "# Get the WCS info for multiple pixels: (200, 4), (30, 3) - (use: ra, dec, wavelength, order)\n",
+ "ra, dec, wavelength, order = cal_wfss.slits[s_num].meta.wcs([10, 4], [6, 3])\n",
"ra, dec, wavelength, order"
]
},
@@ -1363,7 +1103,7 @@
"outputs": [],
"source": [
"# Get the world to detector transform (use: world_to_detector_ss)\n",
- "world_to_detector_ss = cal_wfss.slits[slit_number].meta.wcs.get_transform('world','detector') "
+ "world_to_detector_ss = cal_wfss.slits[s_num].meta.wcs.get_transform('world','detector') "
]
},
{
@@ -1391,7 +1131,7 @@
"outputs": [],
"source": [
"# Now, do the inverse (use: detector_to_world_ss)\n",
- "detector_to_world_ss = cal_wfss.slits[slit_number].meta.wcs.get_transform('detector','world')\n",
+ "detector_to_world_ss = cal_wfss.slits[s_num].meta.wcs.get_transform('detector','world')\n",
"detector_to_world_ss(x0, y0, wave, order2)"
]
},
@@ -1399,23 +1139,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The next and final module will discuss the Stage 3 data products, the last stage of processing in the pipeline."
+ "### A.-Exercise 2\n",
+ "Now, you try it!"
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "[Top of Page](#title_ID)"
+ "# Get the detector to world transform for our cal_image (hint: cal_image.meta.wcs)\n"
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "8.-Exercise\n",
- "--------------------------------------------------------------------\n",
- "Now, you try it!"
+ "# Do the detector to world transformormation for pixel (3, 500)\n"
]
},
{
@@ -1424,9 +1167,7 @@
"metadata": {},
"outputs": [],
"source": [
- "#Load the exercise data using a model\n",
- "with datamodels.open(demo_ex_file[1]) as exercise_data:\n",
- " exercise_data.info()"
+ "# Now get the inverse transform from world to detector\n"
]
},
{
@@ -1435,8 +1176,62 @@
"metadata": {},
"outputs": [],
"source": [
- "# What instrument and mode are used here?\n",
- "exercise_data.meta.instrument.name, exercise_data.meta.exposure.type"
+ "# Do the inverse transformation using your RA, Dec to get your pixel back\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[Top of Page](#title_ID)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "6.-Bonus: Associations\n",
+ "------------------\n",
+ "\n",
+ "When you're learning about JWST data products for Stage 2 and Stage 3 processing, it is a good time to mention JWST associations, since the association files are a part of the JWST data products used to process data through Stage 2 and Stage 3. Associations are basically just lists of files, mostly exposures, that are related in some way. For JWST, associations have the following characteristics:\n",
+ "\n",
+ "* Relationships between multiple exposures are captured in an association.\n",
+ "* An association is a means of identifying a set of exposures that belong together and may be dependent upon one another.\n",
+ "* The association concept permits exposures to be calibrated, archived, retrieved, and reprocessed as a set rather than as individual objects.\n",
+ "\n",
+ "In general, it takes many exposures to make up a single observation, and an entire program is made up of a large number of observations. Given a set of exposures for a program, there is a tool that groups the exposures into individual associations. These associations are then used as input to the Stage 2 and 3 calibration steps to perform the transformation from exposure-based data to source-based, high(er) signal-to-noise data. The association used to process data is available in MAST as part of the \"Info\" data product category. You can read more about associations [here](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/index.html). "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "An example of a Stage 2 association is shown [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level2_asn_technical.html#example-association), along with a [Stage 3 association](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level3_asn_technical.html#example-association). Unless you are generating your own data or simulations, you will probably not need to create an association file, because you will have the option to retrieve association files from MAST along with your data for reprocessing. \n",
+ "\n",
+ "However, if you do want to create an association, there are also [command line tools](https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/asn_from_list.html) included in the pipeline software that help with generating associations for manually running the pipeline. "
+ ]
+ },
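Since association files are plain JSON, they can be inspected with standard tools. The sketch below parses a minimal Stage 3-style association, modeled loosely on the examples in the documentation linked above; the program ID, product name, and exposure file names are invented:

```python
import json

# A minimal, invented Stage 3-style association for illustration only.
asn_text = """
{
  "asn_type": "image3",
  "products": [
    {
      "name": "jw01234-o001_t001_nircam_f150w",
      "members": [
        {"expname": "jw01234001001_01101_00001_nrcb1_cal.fits", "exptype": "science"},
        {"expname": "jw01234001001_01101_00002_nrcb1_cal.fits", "exptype": "science"}
      ]
    }
  ]
}
"""

asn = json.loads(asn_text)

# Collect the science exposures that would be combined into one product.
science_members = [m["expname"]
                   for p in asn["products"]
                   for m in p["members"]
                   if m["exptype"] == "science"]
```
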
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[Top of Page](#title_ID)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "7.-Exercise Solutions \n",
+ "--------------------------------------------------------------------\n",
+ "Below are the solutions for [Exercise 1](#exercise-1) and [Exercise 2](#exercise-2). "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 1"
]
},
{
@@ -1445,8 +1240,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# What are the data units? \n",
- "exercise_data.meta.bunit_data"
+ "# Load the WFSS image into the appropriate model (hint: ImageModel)\n",
+ "wfss_image = datamodels.ImageModel(wfss_rate_file[1])"
]
},
{
@@ -1455,8 +1250,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Which calibration steps were applied?\n",
- "exercise_data.meta.cal_step.instance"
+ "# What are the arrays associated with this data? (hint: .info())\n",
+ "wfss_image.info()"
]
},
{
@@ -1465,10 +1260,15 @@
"metadata": {},
"outputs": [],
"source": [
- "# Choose a pixel or pixels and get the WCS information\n",
- "det_x, det_y = 10, 5\n",
- "ra, dec, wave, order = exercise_data.slits[0].meta.wcs(det_x, det_y)\n",
- "ra, dec, wave, order "
+ "# Create an image of the WFSS data using our create_image function\n",
+ "create_image(wfss_image)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 2"
]
},
{
@@ -1477,8 +1277,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Now get the detector to world transform\n",
- "d2w = exercise_data.slits[0].meta.wcs.get_transform('detector','world') "
+ "# Get the detector to world transform for our cal_image (hint: cal_image.meta.wcs)\n",
+ "d2w = cal_image.meta.wcs.get_transform('detector','world') "
]
},
{
@@ -1487,8 +1287,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Do the transformormation\n",
- "ra, dec, wavelength, order = d2w(det_x, det_y, wave, order)"
+ "# Do the detector to world transformormation for pixel (3, 500)\n",
+ "ra, dec = d2w(3, 500)"
]
},
{
@@ -1497,8 +1297,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Now get the inverse transform\n",
- "w2d = exercise_data.slits[0].meta.wcs.get_transform('world','detector') "
+ "# Now get the inverse transform from world to detector\n",
+ "w2d = cal_image.meta.wcs.get_transform('world','detector') "
]
},
{
@@ -1507,8 +1307,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Do the transformation - do you get your pixel back? \n",
- "w2d(ra, dec, wavelength, order)"
+ "# Do the inverse transformation using your RA, Dec to get your pixel back\n",
+ "w2d(ra, dec)"
]
},
{
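The exercise above round-trips a pixel through the WCS with `get_transform('detector', 'world')` and its inverse. As a minimal sketch of what that round trip means, assuming a purely linear WCS with hypothetical reference values (a real JWST gwcs object chains distortion and sky-projection models, so this is only an illustration):

```python
# Hypothetical linear WCS: reference pixel, reference sky position, pixel scale.
CRPIX = (1024.5, 1024.5)        # hypothetical reference pixel
CRVAL = (53.16, -27.79)         # hypothetical reference RA, Dec (degrees)
SCALE = 0.031 / 3600.0          # hypothetical pixel scale (degrees/pixel)

def d2w(x, y):
    """Detector (x, y) -> world (ra, dec), linear approximation."""
    ra = CRVAL[0] + (x - CRPIX[0]) * SCALE
    dec = CRVAL[1] + (y - CRPIX[1]) * SCALE
    return ra, dec

def w2d(ra, dec):
    """World (ra, dec) -> detector (x, y), the inverse transform."""
    x = CRPIX[0] + (ra - CRVAL[0]) / SCALE
    y = CRPIX[1] + (dec - CRVAL[1]) / SCALE
    return x, y

ra, dec = d2w(3, 500)
x, y = w2d(ra, dec)   # recovers (3, 500) up to floating-point rounding
```

Applying the inverse transform to the RA, Dec should return the original pixel, which is a quick sanity check that the two transforms are consistent.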
diff --git a/pipeline_products_session/jwst-data-products-part3-live.ipynb b/pipeline_products_session/jwst-data-products-part3-live.ipynb
index 44d3de3..f0915c2 100644
--- a/pipeline_products_session/jwst-data-products-part3-live.ipynb
+++ b/pipeline_products_session/jwst-data-products-part3-live.ipynb
@@ -7,35 +7,52 @@
"\n",
"# JWST Data Products: Ensemble Processing Products\n",
"--------------------------------------------------------------\n",
- "**Author**: Alicia Canipe (acanipe@stsci.edu) | **Latest update**: March 24, 2021.\n",
+ "**Author**: Alicia Canipe (acanipe@stsci.edu) | **Latest update**: April 26, 2021.\n",
"\n",
+ "\n",
+ "<div class=\"alert alert-block alert-info\">\n",
+ "<h3>Notebook Goals</h3>\n",
+ "<p>Using the final data products from the pipeline, we will:</p>\n",
+ "<ul>\n",
+ "    <li>Take a look at a JWST source catalog</li>\n",
+ "    <li>Examine our final mosaicked image</li>\n",
+ "    <li>Look at our final spectral data product</li>\n",
+ "</ul>\n",
+ "</div>"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
"## Table of contents\n",
"1. [Introduction](#intro)\n",
" 1. [Resources](#resources) \n",
- "2. [Data in MAST](#mast)\n",
- "3. [Example data for this exercise](#example)\n",
- "4. [Data products: stage 3 (combined, rectified exposures)](#stage3)\n",
+ "2. [Example data for this exercise](#example)\n",
+ "3. [Data products: stage 3 (combined, rectified exposures)](#stage3)\n",
" 1. [Imaging](#s3-imaging)\n",
" 1. [Input](#s3-imaging-input)\n",
" 2. [Output](#s3-imaging-output)\n",
" 2. [Spectroscopy](#s3-spectroscopy)\n",
" 1. [Input](#s3-spectroscopy-input)\n",
" 2. [Output](#s3-spectroscopy-output)\n",
- " 3. [Aperture Masking Interferometry (AMI)](#s3-ami)\n",
+ "4. [Examining the products](#examine)\n",
+ " 1. [Catalogs](#catalogs)\n",
+ " 1. [Exercise 1](#exercise-1)\n",
+ " 2. [Combined image](#comb-image)\n",
+ " 3. [Combined spectrum](#comb-spec)\n",
+ " 1. [Exercise 2](#exercise-2) \n",
+ "5. [Bonus: Other observing modes](#modes)\n",
+ " 1. [Aperture Masking Interferometry (AMI)](#s3-ami)\n",
" 1. [Input](#s3-ami-input)\n",
" 2. [Output](#s3-ami-output)\n",
- " 4. [Coronagraphy](#s3-coronagraphy)\n",
+ " 2. [Coronagraphy](#s3-coronagraphy)\n",
" 1. [Input](#s3-coronagraphy-input)\n",
" 2. [Output](#s3-coronagraphy-output)\n",
- " 5. [Time Series Observation (TSO)](#s3-tso)\n",
+ " 3. [Time Series Observation (TSO)](#s3-tso)\n",
" 1. [Input](#s3-tso-input)\n",
" 2. [Output](#s3-tso-output)\n",
- "5. [Examining the products](#examine)\n",
- " 1. [Catalogs](#catalogs)\n",
- " 2. [Combined image](#comb-image)\n",
- " 3. [Combined spectrum](#comb-spec)\n",
"6. [The end](#bye-bye)\n",
- "7. [Exercise](#exercise)"
+ "7. [Exercise solutions](#solutions)"
]
},
{
@@ -50,11 +67,26 @@
"### A.-Resources\n",
"\n",
"\n",
- "Visit the [webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars) to find resources for:\n",
- "* The Mikulski Archive for Space Telescopes (MAST) \n",
- "* JWST Documentation (JDox) for JWST data products\n",
- "* The most up-to-date information about JWST data products in the pipeline readthedocs\n",
- "* Pipeline roadmaps for when to recalibrate your data\n",
+ "* [STScI Webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars)\n",
+ "* [The Mikulski Archive for Space Telescopes (MAST)](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)\n",
+ "* [JWST Documentation (JDox) for JWST data products](https://jwst-docs.stsci.edu/obtaining-data)\n",
+ "* [The most up-to-date information about JWST data products in the pipeline readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/index.html)\n",
+ "\n",
+ "### B.-Data in MAST\n",
+ "\n",
+ "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
+ "\n",
+ "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
+ "\n",
+ "Standard science data files include:\n",
+ "\n",
+ "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
+ "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
+ "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal``` or ```calints```\n",
+ "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
+ "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
+ "\n",
+ "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. \n",
"\n",
"Before we begin, import the libraries used in this notebook:"
]
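The suffix conventions listed above lend themselves to a small lookup helper. This is purely an illustrative sketch: the dictionary and function below are hypothetical, not part of MAST or the `jwst` package.

```python
# Hypothetical helper mapping a JWST product filename to the suffix
# descriptions listed above (illustration only).
SUFFIX_DESCRIPTIONS = {
    "uncal": "uncalibrated raw data",
    "rate": "countrate data (averaged over integrations)",
    "rateints": "countrate data (per integration)",
    "cal": "calibrated single exposure",
    "calints": "calibrated single exposure (per integration)",
    "i2d": "resampled/combined 2D image",
    "s2d": "resampled 2D spectral data",
    "x1d": "extracted 1D spectroscopic data",
    "c1d": "combined 1D spectroscopic data",
}

def product_type(filename):
    """Return a description for the suffix of a JWST product filename."""
    stem = filename.rsplit(".", 1)[0]    # drop the .fits extension
    suffix = stem.rsplit("_", 1)[-1]     # the suffix is the last underscore field
    return SUFFIX_DESCRIPTIONS.get(suffix, "unknown product type")

print(product_type("jw01234001001_01101_00001_nrcb1_i2d.fits"))
```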
@@ -130,48 +162,14 @@
"metadata": {},
"outputs": [],
"source": [
- "def download_file(url):\n",
- " \"\"\"Download into the current working directory the\n",
- " file from Box given the direct URL\n",
- " \n",
- " Parameters\n",
- " ----------\n",
- " url : str\n",
- " URL to the file to be downloaded\n",
- " \n",
- " Returns\n",
- " -------\n",
- " download_filename : str\n",
- " Name of the downloaded file\n",
- " \"\"\"\n",
- " response = requests.get(url, stream=True)\n",
- " if response.status_code != 200:\n",
- " raise RuntimeError(\"Wrong URL - {}\".format(url))\n",
- " download_filename = response.headers['Content-Disposition'].split('\"')[1]\n",
- " with open(download_filename, 'wb') as f:\n",
- " for chunk in response.iter_content(chunk_size=1024):\n",
- " if chunk:\n",
- " f.write(chunk)\n",
- " return download_filename"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def create_image(data_2d, vmin, vmax, xpixel=None, ypixel=None, title=None):\n",
+ "def create_image(data_2d, title=None):\n",
- " ''' Function to generate a 2D image of the data, \n",
- " with an option to highlight a specific pixel.\n",
+ " ''' Function to generate a 2D image of the data.\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
- " \n",
- " if xpixel and ypixel:\n",
- " plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')\n",
+ " plt.imshow(data_2d.data, origin='lower', cmap='gray', vmin=0, vmax=0.1)\n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
@@ -181,7 +179,7 @@
"\n",
" fig.tight_layout()\n",
" plt.subplots_adjust(left=0.15)\n",
- " plt.colorbar(label='MJy/sr')"
+ " plt.colorbar(label=data_2d.meta.bunit_data)"
]
},
{
@@ -190,24 +188,19 @@
"metadata": {},
"outputs": [],
"source": [
- "def create_image_with_cat(data_2d, catalog, vmin, vmax, flux_limit=None, title=None):\n",
+ "def create_image_with_cat(data_2d, catalog, title=None):\n",
" ''' Function to generate a 2D image of the data, \n",
" with sources overlaid.\n",
" '''\n",
+ " flux_limit = 1e-7\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
+ " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=0, vmax=0.1)\n",
" \n",
" for row in catalog:\n",
- " if flux_limit:\n",
- " if np.isnan(row['aper_total_flux']):\n",
- " pass\n",
- " else:\n",
- " if row['aper_total_flux'] > flux_limit:\n",
- " plt.plot(row['xcentroid'], row['ycentroid'], marker='x', markersize='3', color='red')\n",
- " else:\n",
- " plt.plot(row['xcentroid'], row['ycentroid'], marker='x', markersize='3', color='red')\n",
+ " if row['aper_total_flux'] > flux_limit:\n",
+ " plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize=3, color='red')\n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
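One subtlety in the simplified source-overlay loop above: the explicit NaN check from the earlier version can be dropped because any comparison against NaN evaluates to False, so NaN-flux catalog rows are skipped automatically. A small sketch with hypothetical flux values:

```python
# Comparisons with NaN are always False, so NaN-flux rows fall out of a
# simple threshold cut without an explicit math.isnan() check.
import math

flux_limit = 1e-7
fluxes = [2e-7, math.nan, 5e-8]          # hypothetical catalog fluxes
kept = [f for f in fluxes if f > flux_limit]
print(kept)                              # only the one flux above the limit
```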
@@ -263,36 +256,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "2.-Data in MAST \n",
- "------------------\n",
- "\n",
- "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
- "\n",
- "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
- "\n",
- "Standard science data files include:\n",
- "\n",
- "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
- "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
- "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal```\n",
- "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
- "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
- "\n",
- "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#title_ID)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "3.-Example data for this exercise \n",
+ "2.-Example data for this exercise \n",
"------------------\n",
"\n",
"For this module, we will use calibrated NIRCam simulated imaging and wide field slitless spectroscopy (WFSS) exposures that are stored in Box. Let's grab the data:"
@@ -306,17 +270,26 @@
},
"outputs": [],
"source": [
- "# # For the catalog file:\n",
+ "# For the catalog file:\n",
"catalog_file_link = 'https://stsci.box.com/shared/static/272qsdpoax1cchy0gox96mobjpol780n.ecsv'\n",
"output_catalog = download_file(catalog_file_link)\n",
"\n",
"# For the NIRCam combined 2D image:\n",
"combined_i2d_file_link = 'https://stsci.box.com/shared/static/1xjdi28u5o1lmkmau0wuojdyv8fnre5n.fits'\n",
- "combined_i2d_file = download_file(combined_i2d_file_link)\n",
+ "combined_i2d = download_file(combined_i2d_file_link)\n",
+ "combined_i2d_file = \"example_nircam_imaging_i2d.fits\"\n",
"\n",
"# For the NIRCam WFSS 1D file:\n",
"final_c1d_file_link = 'https://stsci.box.com/shared/static/ixfnu50ju78vs40dcec8i7w0u6kwtoli.fits'\n",
- "final_c1d_file = download_file(final_c1d_file_link)"
+ "final_c1d = download_file(final_c1d_file_link)\n",
+ "final_c1d_file = \"example_nircam_wfss_c1d.fits\"\n",
+ "\n",
+ "# Save the files so that we can use them later\n",
+ "with fits.open(combined_i2d, ignore_missing_end=True) as f:\n",
+ " f.writeto(combined_i2d_file, overwrite=True)\n",
+ " \n",
+ "with fits.open(final_c1d, ignore_missing_end=True) as f:\n",
+ " f.writeto(final_c1d_file, overwrite=True) "
]
},
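For reference, here is a stdlib-only sketch of what a `download_file` helper like the one used above might look like. The notebooks' own helper uses `requests`; the `urllib` variant and the filename-parsing function below are illustrative assumptions, not the notebook's actual implementation.

```python
# Stdlib-only sketch of a Box download helper: fetch the URL, take the
# local filename from the Content-Disposition header, write the bytes out.
import urllib.request

def filename_from_disposition(header_value):
    """Pull the quoted filename out of a Content-Disposition header."""
    return header_value.split('"')[1]

def download_file(url):
    """Download url into the working directory; return the local filename."""
    with urllib.request.urlopen(url) as response:
        disposition = response.headers["Content-Disposition"]
        local_name = filename_from_disposition(disposition)
        with open(local_name, "wb") as f:
            f.write(response.read())
    return local_name
```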
{
@@ -330,22 +303,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "4.-Data products: stage 3 (combined, rectified exposures)\n",
+ "3.-Data products: stage 3 (combined, rectified exposures)\n",
"------------------\n",
"\n",
"Stage 3 processing includes routines that work with multiple associated exposures to produce some kind of combined (mosaicked), rectified (aligned in a common output frame) product. There are unique pipeline modules for imaging, spectroscopic, coronagraphic, AMI, and TSO observations, and each produces specific outputs for the particular observing mode. The exposure level products are updated at this stage to provide the highest quality data products that include the results of ensemble processing (e.g., updated WCS, matching backgrounds, and a second pass at outlier detection). These products are available in MAST, along with the unrectified 2D and rectified 2D products. More information can be found in the [JWST User Documentation](https://jwst-docs.stsci.edu/jwst-data-reduction-pipeline/algorithm-documentation/stages-of-processing). We also have a full list of data product types and the units of the data for each product [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/product_types.html#data-product-types). \n",
"\n",
- "We'll start by going through the various inputs and outputs for the mode-specific pipeline modules, and finish by revisiting our simulated data used in the previous notebooks. "
+ "We'll start by going through the various inputs and outputs for the imaging and spectroscopic pipeline modules, and finish by revisiting our simulated data used in the previous notebooks. If there is time, we can check out the data products for other observing modes in the [Bonus](#modes) section. "
]
},
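As noted above, Stage 3 pipelines consume an association file: a JSON document listing the exposures to process together. A rough sketch of the shape (the field names follow the general `jwst` association layout; the product name and exposure filenames here are hypothetical):

```python
# Minimal sketch of a Stage 3 imaging association: one product built from
# two calibrated science exposures (values are hypothetical).
import json

asn = {
    "asn_type": "image3",
    "products": [
        {
            "name": "my_target",
            "members": [
                {"expname": "exposure1_cal.fits", "exptype": "science"},
                {"expname": "exposure2_cal.fits", "exptype": "science"},
            ],
        }
    ],
}
asn_text = json.dumps(asn, indent=4)
print(asn_text)
```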
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 4.1.-Imaging\n",
+ "## 3.1.-Imaging\n",
"\n",
"Stage 3 processing for direct imaging observations combines the calibrated data from multiple exposures (e.g., dithers or mosaics) into a single, rectified, distortion corrected product. Before being combined, the exposures receive additional corrections for astrometric alignment, background matching, and outlier detection. Coronagraphic imaging and time series imaging have their own separate pipeline modules. \n",
"\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_image3.html#inputs) for detailed information.\n",
+ "\n",
"### A.-Input\n",
"\n",
"The inputs to this stage are listed below.\n",
@@ -380,10 +355,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 4.2.-Spectroscopy\n",
+ "## 3.2.-Spectroscopy\n",
"\n",
"Stage 3 processing for spectroscopic observations combines the calibrated data from multiple exposures (e.g., dithers or nods) into a single combined 2D or 3D spectral data product and a combined 1D spectrum. Before being combined, exposures may receive additional corrections for background matching and subtraction, and outlier rejection. Time series data will go through the separate time series pipeline module. \n",
"\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_spec3.html#inputs) for detailed information.\n",
+ "\n",
"### A.-Input\n",
"\n",
"The inputs to this stage are listed below.\n",
@@ -429,136 +406,6 @@
" * **Description**: For NIRCam and NIRISS WFSS, and NIRISS SOSS, the 1D spectral combination step combines multiple 1D spectra for a given source into a final spectrum. "
]
},
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 4.3.-Aperture Masking Interferometry (AMI)\n",
- "\n",
- "Stage 3 processing for calibrated NIRISS AMI observations computes fringe parameters for individual exposures, averages the fringe results from multiple exposures, and, optionally, corrects science target fringe parameters using the fringe results from reference PSF targets.\n",
- "\n",
- "### A.-Input\n",
- "\n",
- "The inputs to this stage are listed below.\n",
- "\n",
- "* **2D calibrated images**\n",
- " * **Data model**: ImageModel\n",
- " * **File suffix**: ```_cal```\n",
- " * **Description**: Inputs need to be in the form of an association file that lists multiple science target exposures, and, optionally, reference PSF exposures. Individual exposures must be in the form of calibrated (```_cal```) data products from Stage 2 processing. \n",
- " \n",
- "\n",
- "### B.-Output\n",
- "\n",
- "The outputs of this stage are listed below.\n",
- "\n",
- "* **Fringe parameter tables**\n",
- " * **Data model**: AmiLgModel\n",
- " * **File suffix**: ```_ami```\n",
- " * **Description**: For every input exposure, fringe parameters and closure phases caculated by the ```ami_analyze``` step are saved to a FITS table containing the fringe parameters and closure phases.\n",
- "\n",
- "* (optional) **Averaged fringe parameters table**\n",
- " * **Data model**: AmiLgModel\n",
- " * **File suffix**: ```_amiavg``` or ```_psf-amiavg```\n",
- " * **Description**: If multiple target or reference PSF exposures are used as input and the ```–save_averages``` parameter is set to True, the ```ami_average``` step will save averaged results for the target in an ```_amiavg``` product and for the reference PSF in a ```_psf-amiavg``` product. \n",
- " \n",
- "* **Normalized fringe parameters table**\n",
- " * **Data model**: AmiLgModel\n",
- " * **File suffix**: ```_aminorm```\n",
- " * **Description**: If reference PSF exposures are included in the input association, the averaged AMI results for the target will be normalized by the averaged AMI results for the reference PSF and will be saved to an ```_aminorm``` product file. This file has the same FITS table format as the ```_ami``` products. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 4.4.-Coronagraphy\n",
- "\n",
- "Stage 3 coronagraphic processing is applied to associations of calibrated NIRCam coronagraphic and MIRI Lyot and 4QPM exposures, and is used to produce PSF-subtracted, resampled, combined images of the source object.\n",
- "\n",
- "### A.-Input\n",
- "\n",
- "The inputs to this stage are listed below.\n",
- "\n",
- "* **3D calibrated images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_calints```\n",
- " * **Description**: The input to this stage must be in the form of an association file that lists one or more exposures of a science target and one or more reference PSF targets. The individual target and reference PSF exposures should be in the form of 3D calibrated (```_calints```) data products from Stage 2 processing. Each pipeline step will loop over the 3D stack of per-integration images contained in each exposure. \n",
- " \n",
- "\n",
- "### B.-Output\n",
- "\n",
- "The outputs of this stage are listed below.\n",
- "\n",
- "* **CR-flagged images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_crfints```\n",
- " * **Description**: If the ```outlier_detection``` step is applied, a new version of each input calibrated exposure is created with a data quality array that is updated to flag pixels detected as outliers. These files use the ```_crfints``` (CR-Flagged per integration) file suffix and include the association candidate ID as a new field in the original product root file name.\n",
- "\n",
- "* **3D stacked PSF images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_psfstack```\n",
- " * **Description**: The data from each input PSF reference exposure are concatenated into a single combined 3D stack for use by subsequent steps. The stacked PSF data are written to a ```_psfstack``` product. \n",
- " \n",
- "* **4D aligned PSF images**\n",
- " * **Data model**: QuadModel\n",
- " * **File suffix**: ```_psfalign```\n",
- " * **Description**: For each science target exposure, all of the reference PSF images in the ```_psfstack``` product are aligned to each science target integration and saved to a 4D ```_psfalign``` product. \n",
- " \n",
- "* **3D PSF-subtracted images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_psfsub```\n",
- " * **Description**: For each science target exposure, the ```klip``` step applies PSF fitting and subtraction for each integration, resulting in a 3D stack of PSF-subtracted images. \n",
- " \n",
- "* **2D resampled image**\n",
- " * **Data model**: DrizProductModel\n",
- " * **File suffix**: ```_i2d```\n",
- " * **Description**: The ```resample``` step is applied to the CR-flagged products to create a single resampled and combined product for the science target. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 4.5.-Time Series Observation (TSO)\n",
- "\n",
- "Stage 3 TSO processing is applied to associations of calibrated TSO exposures (e.g. NIRCam TS imaging, NIRCam TS grism, NIRISS SOSS, NIRSpec BrightObj, MIRI LRS Slitless) and is used to produce calibrated time-series photometry or spectra of the source object.\n",
- "\n",
- "\n",
- "\n",
- "### A.-Input\n",
- "\n",
- "The inputs to this stage are listed below.\n",
- "\n",
- "* **3D calibrated images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_calints```\n",
- " * **Description**: The input is in the form of an association file listing multiple exposures or exposure segments of a science target. Individual inputs should be in the form of 3D calibrated (```_calints```) products from Stage 2 (either imaging or spectroscopic) processing. These products contain 3D stacks of per-integration images, and each pipeline step will loop over all of the integrations in each input. Many TSO exposures may contain a large number of integrations that make their individual exposure products too large (in terms of file size on disk) to be able to handle conveniently. In these cases, the uncalibrated raw data (```_uncal```) for a given exposure are split into multiple “segmented” products, each of which is identified with a segment number (see [segmented products](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/file_naming.html#segmented-files)). The input association file includes all ```_calints``` exposure segments. The ```outlier_detection``` step processes a single segment at a time, creating one output ```_crfints``` product per segment. The remaining steps will process each segment and concatenate the results into a single output product that contains results for all exposures and segments listed in the association.\n",
- "\n",
- "### B.-Output\n",
- "\n",
- "The outputs of this stage are listed below.\n",
- "\n",
- "* **CR-flagged images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_crfints```\n",
- " * **Description**: If the ```outlier_detection``` step is applied, a new version of each input calibrated exposure is created with a data quality array that is updated to flag pixels detected as outliers. These files use the ```_crfints``` (CR-Flagged per integration) file suffix and include the association candidate ID as a new field in the original product root file name.\n",
- "\n",
- "* **Imaging photometry catalog**\n",
- " * **Data model**: N/A\n",
- " * **File suffix**: ```_phot```\n",
- " * **Description**: For imaging TS observations, a source catalog containing photometry results from all of the ```_crfints``` products is produced, organized as a function of integration time stamps.\n",
- " \n",
- "* **1D extracted spectral data**\n",
- " * **Data model**: MultiSpecModel\n",
- " * **File suffix**: ```_x1dints```\n",
- " * **Description**: For spectroscopic TS observations, the 1D spectral extraction step is applied to all ```_crfints``` products to create a single ```_x1dints``` product containing 1D extracted spectral data for all integrations contained in the input exposures. \n",
- " \n",
- "* **Spectroscopic white-light catalog**\n",
- " * **Data model**: N/A\n",
- " * **File suffix**: ```_whtlt```\n",
- " * **Description**: For spectroscopic TS observations, the ```white_light``` step is applied to all of the 1D extracted spectral data in the ```_x1dints``` product to produce an ASCII catalog in ```ecsv``` format containing the wavelength-integrated white-light photometry of the source. The catalog lists the integrated white-light flux as a function of time, based on the integration time stamps. "
- ]
- },
{
"cell_type": "markdown",
"metadata": {},
@@ -570,7 +417,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "5.-Examining the products\n",
+ "4.-Examining the products\n",
"------------------\n",
"\n",
"Whew! That was a lot of information. Hopefully by now you're catching on to the pattern -- \n",
@@ -613,9 +460,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 5.1.-Catalogs\n",
+ "## 4.1.-Catalogs\n",
"\n",
- "Here, we'll focus on the catalog output from Stage 3 image processing, but other catalogs will have a similar ASCII format and file name (```.ecsv```). You can read more about the ```ecsv``` format [in the Astropy documentation here](https://docs.astropy.org/en/stable/io/ascii/write.html#ecsv-format). In short, it provides a convenient way to handle tables and associated metadata.\n",
+ "Here, we'll focus on the catalog output from Stage 3 image processing (which combines images and generates catalogs), but other catalogs will have a similar ASCII format and file extension (```.ecsv```). You can read more about the ```ecsv``` format [in the Astropy documentation here](https://docs.astropy.org/en/stable/io/ascii/write.html#ecsv-format). In short, it provides a convenient way to handle tables and associated metadata.\n",
"\n",
"We can open the table using Astropy's ```Table```:"
]
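Under the hood, an ECSV file is ordinary CSV preceded by a YAML header carried in `# ` comment lines, which is where the column units and metadata live; Astropy's `Table` parses both parts for you. A stdlib-only sketch of the layout, with hypothetical columns:

```python
# Sketch of the ECSV layout: YAML header in comments, then plain CSV.
# Columns and values are hypothetical; Table.read handles this natively.
import csv, io

ecsv_text = """\
# %ECSV 1.0
# ---
# datatype:
# - {name: xcentroid, datatype: float64}
# - {name: aper_total_flux, unit: Jy, datatype: float64}
xcentroid,aper_total_flux
10.5,2.3e-07
88.1,5.0e-08
"""

# Skipping the commented header leaves a CSV body any reader can parse.
data_lines = [line for line in ecsv_text.splitlines() if not line.startswith("#")]
rows = list(csv.DictReader(io.StringIO("\n".join(data_lines))))
print(rows[0]["aper_total_flux"])
```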
@@ -652,7 +499,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# What's it look like?\n"
+ "# What does the catalog look like?\n"
]
},
{
@@ -663,7 +510,7 @@
},
"outputs": [],
"source": [
- "# What's included in the information?\n"
+ "# See all of the table information with .info\n"
]
},
{
@@ -679,7 +526,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Print the columns\n"
+ "# Print the columns (colnames)\n"
]
},
{
@@ -688,7 +535,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Get the meta data\n"
+ "# Get the meta data with .meta\n"
]
},
{
@@ -713,7 +560,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# For our flux column, get the column name, unit, data type, and data\n"
+ "# What are the units for this column?\n"
]
},
{
@@ -738,16 +585,15 @@
"metadata": {},
"outputs": [],
"source": [
- "# Show Dec in deg\n"
+ "# Now show the RA in degrees\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 5.2.-Combined image\n",
- "\n",
- "Here, we'll take a look at the final combined imaging data product from Stage 3, the ```_i2d``` file. Let's use the imaging simulation from our previous modules. "
+ "### A.-Exercise 1\n",
+ "Now, you try it!"
]
},
{
@@ -756,7 +602,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Load the final combined 2D image (use: image)\n"
+ "# For our flux column, get the column name and description (hint: look at the .)\n"
]
},
{
@@ -765,18 +611,25 @@
"metadata": {},
"outputs": [],
"source": [
- "# Check out the model structure\n"
+ "# Now, print the data type, and data for our flux column (hint: dtype, data)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": true
- },
+ "metadata": {},
"outputs": [],
"source": [
- "# Check out the meta data\n"
+ "# Show the Dec column data in degrees\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 4.2.-Combined image\n",
+ "\n",
+ "Here, we'll take a look at the final combined imaging data product from Stage 3, the ```_i2d``` file. Let's use the imaging simulation from our previous modules. "
]
},
{
@@ -785,7 +638,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# What's the shape?\n"
+ "# Load the final combined 2D image (use: image)\n"
]
},
{
@@ -794,14 +647,16 @@
"metadata": {},
"outputs": [],
"source": [
- "# Create an image of the final product\n"
+ "# Check out the model structure with .info()\n"
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "Let's overlay the catalog, as an example:"
+ "# What's the data shape?\n"
]
},
{
@@ -810,7 +665,14 @@
"metadata": {},
"outputs": [],
"source": [
- "# First, get a sigma-clipped mean of the flux to determine the scale for our image (use: clipped flux)\n"
+ "# Create an image of the final product\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's overlay the catalog, as an example:"
]
},
{
@@ -828,7 +690,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 5.3.-Combined spectrum\n",
+ "## 4.3.-Combined spectrum\n",
"\n",
"Here, we'll take a look at the final combined 1D spectrum from Stage 3 spectroscopic processing (```_c1d```). Let's use the WFSS simulation from our previous modules. "
]
@@ -839,7 +701,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Load the final combined 1D file (use: spectrum)\n"
+ "# Load the final combined 1D file into a CombinedSpecModel (use: spectrum)\n"
]
},
{
@@ -850,29 +712,53 @@
},
"outputs": [],
"source": [
- "# Look at the model structure\n"
+ "# Look at the model structure with .info()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
- "scrolled": true
+ "scrolled": false
},
"outputs": [],
"source": [
- "# Look at the metadata\n"
+ "# Show the table values \n"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Plot the spectrum with a median filter of 11\n"
+ ]
+ },
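A "median filter of 11" replaces each flux sample with the median of the 11 samples centered on it (for instance via `scipy.signal.medfilt(flux, 11)`), which rejects isolated spikes while preserving broad spectral features. A pure-Python sketch on a toy spectrum:

```python
# Sliding-window median filter, illustrated on a flat toy spectrum
# with a single hot-pixel spike (values are hypothetical).
from statistics import median

def median_filter(values, size=11):
    """Sliding-window median with the window clamped at the array edges."""
    half = size // 2
    out = []
    for i in range(len(values)):
        window = values[max(0, i - half):i + half + 1]
        out.append(median(window))
    return out

flux = [1.0] * 20
flux[10] = 50.0                    # a single hot-pixel spike
smoothed = median_filter(flux, size=11)
print(max(smoothed))               # the spike is rejected by the median
```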
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### A.-Exercise 2\n",
+ "Now, you try it!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Where do I find the spectral order for my combined spectrum?\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
"outputs": [],
"source": [
- "# Show me the table values \n"
+ "# Where is the WCS information for the CombinedSpec model? \n"
]
},
{
@@ -881,7 +767,158 @@
"metadata": {},
"outputs": [],
"source": [
- "# Plot it up!\n"
+ "# What columns are in the spec_table? \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[Top of Page](#title_ID)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "5.-Bonus: Other observing modes\n",
+ "------------------\n",
+ "\n",
+ "Below are descriptions of the data products for other JWST observing modes that we may not have time to cover in this JWebbinar. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5.1.-Aperture Masking Interferometry (AMI)\n",
+ "\n",
+ "Stage 3 processing for calibrated NIRISS AMI observations computes fringe parameters for individual exposures, averages the fringe results from multiple exposures, and, optionally, corrects science target fringe parameters using the fringe results from reference PSF targets.\n",
+ "\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_ami3.html#inputs) for detailed information.\n",
+ "\n",
+ "### A.-Input\n",
+ "\n",
+ "The inputs to this stage are listed below.\n",
+ "\n",
+ "* **2D calibrated images**\n",
+ " * **Data model**: ImageModel\n",
+ " * **File suffix**: ```_cal```\n",
+ " * **Description**: Inputs need to be in the form of an association file that lists multiple science target exposures, and, optionally, reference PSF exposures. Individual exposures must be in the form of calibrated (```_cal```) data products from Stage 2 processing. \n",
+ " \n",
+ "\n",
+ "### B.-Output\n",
+ "\n",
+ "The outputs of this stage are listed below.\n",
+ "\n",
+ "* **Fringe parameter tables**\n",
+ " * **Data model**: AmiLgModel\n",
+ " * **File suffix**: ```_ami```\n",
+ "    * **Description**: For every input exposure, fringe parameters and closure phases calculated by the ```ami_analyze``` step are saved to a FITS table.\n",
+ "\n",
+ "* (optional) **Averaged fringe parameters table**\n",
+ " * **Data model**: AmiLgModel\n",
+ " * **File suffix**: ```_amiavg``` or ```_psf-amiavg```\n",
+ "    * **Description**: If multiple target or reference PSF exposures are used as input and the ```--save_averages``` parameter is set to True, the ```ami_average``` step will save averaged results for the target in an ```_amiavg``` product and for the reference PSF in a ```_psf-amiavg``` product. \n",
+ " \n",
+ "* **Normalized fringe parameters table**\n",
+ " * **Data model**: AmiLgModel\n",
+ " * **File suffix**: ```_aminorm```\n",
+ " * **Description**: If reference PSF exposures are included in the input association, the averaged AMI results for the target will be normalized by the averaged AMI results for the reference PSF and will be saved to an ```_aminorm``` product file. This file has the same FITS table format as the ```_ami``` products. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5.2.-Coronagraphy\n",
+ "\n",
+ "Stage 3 coronagraphic processing is applied to associations of calibrated NIRCam coronagraphic and MIRI Lyot and 4QPM exposures, and is used to produce PSF-subtracted, resampled, combined images of the source object.\n",
+ "\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_coron3.html#inputs) for detailed information.\n",
+ "\n",
+ "### A.-Input\n",
+ "\n",
+ "The inputs to this stage are listed below.\n",
+ "\n",
+ "* **3D calibrated images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_calints```\n",
+ " * **Description**: The input to this stage must be in the form of an association file that lists one or more exposures of a science target and one or more reference PSF targets. The individual target and reference PSF exposures should be in the form of 3D calibrated (```_calints```) data products from Stage 2 processing. Each pipeline step will loop over the 3D stack of per-integration images contained in each exposure. \n",
+ " \n",
+ "\n",
+ "### B.-Output\n",
+ "\n",
+ "The outputs of this stage are listed below.\n",
+ "\n",
+ "* **CR-flagged images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_crfints```\n",
+ " * **Description**: If the ```outlier_detection``` step is applied, a new version of each input calibrated exposure is created with a data quality array that is updated to flag pixels detected as outliers. These files use the ```_crfints``` (CR-Flagged per integration) file suffix and include the association candidate ID as a new field in the original product root file name.\n",
+ "\n",
+ "* **3D stacked PSF images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_psfstack```\n",
+ " * **Description**: The data from each input PSF reference exposure are concatenated into a single combined 3D stack for use by subsequent steps. The stacked PSF data are written to a ```_psfstack``` product. \n",
+ " \n",
+ "* **4D aligned PSF images**\n",
+ " * **Data model**: QuadModel\n",
+ " * **File suffix**: ```_psfalign```\n",
+ " * **Description**: For each science target exposure, all of the reference PSF images in the ```_psfstack``` product are aligned to each science target integration and saved to a 4D ```_psfalign``` product. \n",
+ " \n",
+ "* **3D PSF-subtracted images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_psfsub```\n",
+ " * **Description**: For each science target exposure, the ```klip``` step applies PSF fitting and subtraction for each integration, resulting in a 3D stack of PSF-subtracted images. \n",
+ " \n",
+ "* **2D resampled image**\n",
+ " * **Data model**: DrizProductModel\n",
+ " * **File suffix**: ```_i2d```\n",
+ " * **Description**: The ```resample``` step is applied to the CR-flagged products to create a single resampled and combined product for the science target. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5.3.-Time Series Observation (TSO)\n",
+ "\n",
+ "Stage 3 TSO processing is applied to associations of calibrated TSO exposures (e.g. NIRCam TS imaging, NIRCam TS grism, NIRISS SOSS, NIRSpec BrightObj, MIRI LRS Slitless) and is used to produce calibrated time-series photometry or spectra of the source object.\n",
+ "\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_tso3.html#inputs) for detailed information.\n",
+ "\n",
+ "### A.-Input\n",
+ "\n",
+ "The inputs to this stage are listed below.\n",
+ "\n",
+ "* **3D calibrated images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_calints```\n",
+ " * **Description**: The input is in the form of an association file listing multiple exposures or exposure segments of a science target. Individual inputs should be in the form of 3D calibrated (```_calints```) products from Stage 2 (either imaging or spectroscopic) processing. These products contain 3D stacks of per-integration images, and each pipeline step will loop over all of the integrations in each input. Many TSO exposures may contain a large number of integrations that make their individual exposure products too large (in terms of file size on disk) to be able to handle conveniently. In these cases, the uncalibrated raw data (```_uncal```) for a given exposure are split into multiple “segmented” products, each of which is identified with a segment number (see [segmented products](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/file_naming.html#segmented-files)). The input association file includes all ```_calints``` exposure segments. The ```outlier_detection``` step processes a single segment at a time, creating one output ```_crfints``` product per segment. The remaining steps will process each segment and concatenate the results into a single output product that contains results for all exposures and segments listed in the association.\n",
+ "\n",
+ "### B.-Output\n",
+ "\n",
+ "The outputs of this stage are listed below.\n",
+ "\n",
+ "* **CR-flagged images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_crfints```\n",
+ " * **Description**: If the ```outlier_detection``` step is applied, a new version of each input calibrated exposure is created with a data quality array that is updated to flag pixels detected as outliers. These files use the ```_crfints``` (CR-Flagged per integration) file suffix and include the association candidate ID as a new field in the original product root file name.\n",
+ "\n",
+ "* **Imaging photometry catalog**\n",
+ " * **Data model**: N/A\n",
+ " * **File suffix**: ```_phot```\n",
+ " * **Description**: For imaging TS observations, a source catalog containing photometry results from all of the ```_crfints``` products is produced, organized as a function of integration time stamps.\n",
+ " \n",
+ "* **1D extracted spectral data**\n",
+ " * **Data model**: MultiSpecModel\n",
+ " * **File suffix**: ```_x1dints```\n",
+ " * **Description**: For spectroscopic TS observations, the 1D spectral extraction step is applied to all ```_crfints``` products to create a single ```_x1dints``` product containing 1D extracted spectral data for all integrations contained in the input exposures. \n",
+ " \n",
+ "* **Spectroscopic white-light catalog**\n",
+ " * **Data model**: N/A\n",
+ " * **File suffix**: ```_whtlt```\n",
+ " * **Description**: For spectroscopic TS observations, the ```white_light``` step is applied to all of the 1D extracted spectral data in the ```_x1dints``` product to produce an ASCII catalog in ```ecsv``` format containing the wavelength-integrated white-light photometry of the source. The catalog lists the integrated white-light flux as a function of time, based on the integration time stamps. "
]
},
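The TSO input description above mentions segmented products, each identified by a segment number in the file name (see the linked file-naming docs, which use a ```-segNNN``` marker). A small sketch of recovering and ordering those segment numbers, using illustrative (not real) file names:

```python
import re

def segment_number(filename):
    """Return the NNN from a '-segNNN' marker, or None for unsegmented files."""
    match = re.search(r"-seg(\d{3})_", filename)
    return int(match.group(1)) if match else None

# Illustrative segmented calints products (observation IDs invented)
files = [
    "jw00042001001_01101_00001-seg002_nrca1_calints.fits",
    "jw00042001001_01101_00001-seg001_nrca1_calints.fits",
]
ordered = sorted(files, key=segment_number)
```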
{
@@ -912,10 +949,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "7.-Exercise\n",
+ "7.-Exercise solutions\n",
"------------------\n",
"\n",
- "Now you try it!"
+ "Below are the solutions for [Exercise 1](#exercise-1) and [Exercise 2](#exercise-2). "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 1"
]
},
{
@@ -924,7 +968,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# How many sources are in the source catalog? \n"
+ "# For our flux column, get the column name and description (hint: look at the .name and .description attributes)\n",
+ "total_flux.name, total_flux.description"
]
},
{
@@ -933,7 +978,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# How many sources are identified by the pipeline as stars? \n"
+ "# Now, print the data type and the data for our flux column (hint: dtype, data)\n",
+ "total_flux.dtype, total_flux.data"
]
},
{
@@ -942,7 +988,15 @@
"metadata": {},
"outputs": [],
"source": [
- "# Where can you find information about the aperture corrections used for the catalog?\n"
+ "# Show the Dec column data in degrees\n",
+ "image_catalog['sky_centroid'].dec.deg"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 2"
]
},
{
@@ -951,7 +1005,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Where do I find the spectral order for my combined spectrum?\n"
+ "# Where do I find the spectral order for my combined spectrum?\n",
+ "spectrum.spectral_order"
]
},
{
@@ -962,7 +1017,8 @@
},
"outputs": [],
"source": [
- "# Where is the WCS information for the CombinedSpec model? \n"
+ "# Where is the WCS information for the CombinedSpec model? \n",
+ "spectrum.meta.wcs"
]
},
{
@@ -973,7 +1029,9 @@
},
"outputs": [],
"source": [
- "# What columns are in the spec_table? \n"
+ "# What columns are in the spec_table? \n",
+ "colnames = Table(spectrum.spec_table).colnames\n",
+ "colnames"
]
},
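The Exercise 2 solutions above read values through dotted attribute paths (`spectrum.spectral_order`, `spectrum.meta.wcs`): JWST data models expose their metadata tree as nested attributes. That access pattern can be mimicked in plain Python with `SimpleNamespace`; the tree and values below are invented, standing in for a real CombinedSpecModel:

```python
from types import SimpleNamespace

def to_namespace(tree):
    """Recursively wrap nested dicts so keys read as dotted attributes."""
    if isinstance(tree, dict):
        return SimpleNamespace(**{key: to_namespace(val) for key, val in tree.items()})
    return tree

# A toy metadata tree standing in for a CombinedSpecModel (values invented)
spectrum = to_namespace({
    "spectral_order": 1,
    "meta": {"wcs": "gwcs-object-placeholder"},
})
```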
{
diff --git a/pipeline_products_session/jwst-data-products-part3-static.ipynb b/pipeline_products_session/jwst-data-products-part3-solutions.ipynb
similarity index 83%
rename from pipeline_products_session/jwst-data-products-part3-static.ipynb
rename to pipeline_products_session/jwst-data-products-part3-solutions.ipynb
index ed0bb33..5c3f959 100644
--- a/pipeline_products_session/jwst-data-products-part3-static.ipynb
+++ b/pipeline_products_session/jwst-data-products-part3-solutions.ipynb
@@ -7,35 +7,52 @@
"\n",
"# JWST Data Products: Ensemble Processing Products\n",
"--------------------------------------------------------------\n",
- "**Author**: Alicia Canipe (acanipe@stsci.edu) | **Latest update**: March 24, 2021.\n",
+ "**Author**: Alicia Canipe (acanipe@stsci.edu) | **Latest update**: April 26, 2021.\n",
"\n",
+ "<div class=\"alert alert-block alert-info\">\n",
+ "<h3>Notebook Goals</h3>\n",
+ "<p>Using the final data products from the pipeline, we will:</p>\n",
+ "<ol>\n",
+ "  <li>Take a look at a JWST source catalog</li>\n",
+ "  <li>Examine our final mosaicked image</li>\n",
+ "  <li>Look at our final spectral data product</li>\n",
+ "</ol>\n",
+ "</div>"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
"## Table of contents\n",
"1. [Introduction](#intro)\n",
" 1. [Resources](#resources) \n",
- "2. [Data in MAST](#mast)\n",
- "3. [Example data for this exercise](#example)\n",
- "4. [Data products: stage 3 (combined, rectified exposures)](#stage3)\n",
+ "2. [Example data for this exercise](#example)\n",
+ "3. [Data products: stage 3 (combined, rectified exposures)](#stage3)\n",
" 1. [Imaging](#s3-imaging)\n",
" 1. [Input](#s3-imaging-input)\n",
" 2. [Output](#s3-imaging-output)\n",
" 2. [Spectroscopy](#s3-spectroscopy)\n",
" 1. [Input](#s3-spectroscopy-input)\n",
" 2. [Output](#s3-spectroscopy-output)\n",
- " 3. [Aperture Masking Interferometry (AMI)](#s3-ami)\n",
+ "4. [Examining the products](#examine)\n",
+ " 1. [Catalogs](#catalogs)\n",
+ " 1. [Exercise 1](#exercise-1)\n",
+ " 2. [Combined image](#comb-image)\n",
+ " 3. [Combined spectrum](#comb-spec)\n",
+ " 1. [Exercise 2](#exercise-2) \n",
+ "5. [Bonus: Other observing modes](#modes)\n",
+ " 1. [Aperture Masking Interferometry (AMI)](#s3-ami)\n",
" 1. [Input](#s3-ami-input)\n",
" 2. [Output](#s3-ami-output)\n",
- " 4. [Coronagraphy](#s3-coronagraphy)\n",
+ " 2. [Coronagraphy](#s3-coronagraphy)\n",
" 1. [Input](#s3-coronagraphy-input)\n",
" 2. [Output](#s3-coronagraphy-output)\n",
- " 5. [Time Series Observation (TSO)](#s3-tso)\n",
+ " 3. [Time Series Observation (TSO)](#s3-tso)\n",
" 1. [Input](#s3-tso-input)\n",
" 2. [Output](#s3-tso-output)\n",
- "5. [Examining the products](#examine)\n",
- " 1. [Catalogs](#catalogs)\n",
- " 2. [Combined image](#comb-image)\n",
- " 3. [Combined spectrum](#comb-spec)\n",
"6. [The end](#bye-bye)\n",
- "7. [Exercise](#exercise)"
+ "7. [Exercise solutions](#solutions)"
]
},
{
@@ -50,11 +67,26 @@
"### A.-Resources\n",
"\n",
"\n",
- "Visit the [webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars) to find resources for:\n",
- "* The Mikulski Archive for Space Telescopes (MAST) \n",
- "* JWST Documentation (JDox) for JWST data products\n",
- "* The most up-to-date information about JWST data products in the pipeline readthedocs\n",
- "* Pipeline roadmaps for when to recalibrate your data\n",
+ "* [STScI Webpage for JWebbinars](https://www.stsci.edu/jwst/science-execution/jwebbinars)\n",
+ "* [The Mikulski Archive for Space Telescopes (MAST)](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)\n",
+ "* [JWST Documentation (JDox) for JWST data products](https://jwst-docs.stsci.edu/obtaining-data)\n",
+ "* [The most up-to-date information about JWST data products in the pipeline readthedocs](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/index.html)\n",
+ "\n",
+ "### B.-Data in MAST\n",
+ "\n",
+ "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
+ "\n",
+ "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
+ "\n",
+ "Standard science data files include:\n",
+ "\n",
+ "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
+ "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
+ "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal``` or ```calints```\n",
+ "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
+ "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
+ "\n",
+ "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. \n",
"\n",
"Before we begin, import the libraries used in this notebook:"
]
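The standard product list above keys everything off the suffix just before the file extension. A small lookup table sketching that convention (descriptions paraphrased from the list above; the file names are invented):

```python
SUFFIX_MEANINGS = {
    "uncal": "uncalibrated raw data",
    "rate": "countrate data (integrations averaged)",
    "rateints": "countrate data (per integration)",
    "cal": "calibrated single exposure",
    "calints": "calibrated exposure (per integration)",
    "i2d": "resampled/combined image",
    "s2d": "resampled 2-D spectral data",
    "x1d": "extracted 1-D spectrum",
    "c1d": "combined 1-D spectrum",
}

def product_type(filename):
    """Classify a JWST product by the suffix just before the extension."""
    stem = filename.rsplit(".", 1)[0]
    return SUFFIX_MEANINGS.get(stem.rsplit("_", 1)[-1], "unrecognized suffix")
```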
@@ -130,48 +162,14 @@
"metadata": {},
"outputs": [],
"source": [
- "def download_file(url):\n",
- " \"\"\"Download into the current working directory the\n",
- " file from Box given the direct URL\n",
- " \n",
- " Parameters\n",
- " ----------\n",
- " url : str\n",
- " URL to the file to be downloaded\n",
- " \n",
- " Returns\n",
- " -------\n",
- " download_filename : str\n",
- " Name of the downloaded file\n",
- " \"\"\"\n",
- " response = requests.get(url, stream=True)\n",
- " if response.status_code != 200:\n",
- " raise RuntimeError(\"Wrong URL - {}\".format(url))\n",
- " download_filename = response.headers['Content-Disposition'].split('\"')[1]\n",
- " with open(download_filename, 'wb') as f:\n",
- " for chunk in response.iter_content(chunk_size=1024):\n",
- " if chunk:\n",
- " f.write(chunk)\n",
- " return download_filename"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def create_image(data_2d, vmin, vmax, xpixel=None, ypixel=None, title=None):\n",
+ "def create_image(data_2d, title=None):\n",
" ''' Function to generate a 2D image of the data, \n",
" with an option to highlight a specific pixel.\n",
" '''\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
- " \n",
- " if xpixel and ypixel:\n",
- " plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')\n",
+ " plt.imshow(data_2d.data, origin='lower', cmap='gray', vmin=0, vmax=0.1)\n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
@@ -181,7 +179,7 @@
"\n",
" fig.tight_layout()\n",
" plt.subplots_adjust(left=0.15)\n",
- " plt.colorbar(label='MJy/sr')"
+ " plt.colorbar(label=data_2d.meta.bunit_data)"
]
},
{
@@ -190,24 +188,19 @@
"metadata": {},
"outputs": [],
"source": [
- "def create_image_with_cat(data_2d, catalog, vmin, vmax, flux_limit=None, title=None):\n",
+ "def create_image_with_cat(data_2d, catalog, title=None):\n",
" ''' Function to generate a 2D image of the data, \n",
" with sources overlaid.\n",
" '''\n",
+ " flux_limit = 1e-7\n",
" \n",
" fig = plt.figure(figsize=(8, 8))\n",
" ax = plt.subplot()\n",
- " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)\n",
+ " plt.imshow(data_2d, origin='lower', cmap='gray', vmin=0, vmax=0.1)\n",
" \n",
" for row in catalog:\n",
- " if flux_limit:\n",
- " if np.isnan(row['aper_total_flux']):\n",
- " pass\n",
- " else:\n",
- " if row['aper_total_flux'] > flux_limit:\n",
- " plt.plot(row['xcentroid'], row['ycentroid'], marker='x', markersize='3', color='red')\n",
- " else:\n",
- " plt.plot(row['xcentroid'], row['ycentroid'], marker='x', markersize='3', color='red')\n",
+ " if row['aper_total_flux'] > flux_limit:\n",
+ " plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize='3', color='red')\n",
"\n",
" plt.xlabel('Pixel column')\n",
" plt.ylabel('Pixel row')\n",
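The rewritten `create_image_with_cat` above keeps only sources brighter than a hard-coded flux limit; the explicit NaN guard from the old version could be dropped because a NaN flux compares False against any limit. A minimal sketch of that filtering with toy catalog rows as dicts (all values invented):

```python
def bright_sources(catalog, flux_limit=1e-7):
    """Rows whose aperture flux exceeds the limit; NaN fluxes drop out
    automatically because NaN comparisons are always False."""
    return [row for row in catalog if row["aper_total_flux"] > flux_limit]

# Toy catalog rows (values invented)
catalog = [
    {"xcentroid": 10, "ycentroid": 20, "aper_total_flux": 5e-7},
    {"xcentroid": 30, "ycentroid": 40, "aper_total_flux": float("nan")},
    {"xcentroid": 50, "ycentroid": 60, "aper_total_flux": 1e-9},
]
kept = bright_sources(catalog)
```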
@@ -263,36 +256,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "2.-Data in MAST \n",
- "------------------\n",
- "\n",
- "The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files generated by the pipeline. The exact type and number of products depends on the instrument, its configuration, and observing mode. Observers should consult the [MAST documentation for information about standard data products](https://jwst-docs.stsci.edu/obtaining-data/data-discovery#DataDiscovery-Dataproducttypes). \n",
- "\n",
- "Of the many different data products produced by the calibration pipeline, most observers will find the science data files in MAST to be sufficient for their analysis. However, other data products such as guide star data, associations, and engineering data are also available. \n",
- "\n",
- "Standard science data files include:\n",
- "\n",
- "* [uncalibrated raw data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#uncalibrated-raw-data-uncal), identified by the suffix ```uncal```\n",
- "* [countrate data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#countrate-data-rate-and-rateints) produced by applying the Stage 1 (detector-level) corrections in order to compute count rates from the original accumulating signal ramps, identified by the suffix ```rate``` or ```rateints```\n",
- "* [calibrated single exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#calibrated-data-cal-and-calints), identified by the suffix ```cal```\n",
- "* [resampled and/or combined exposures](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#resampled-2-d-data-i2d-and-s2d), identified by the suffixes ```i2d``` or ```s2d```\n",
- "* [extracted spectroscopic 1D data](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#extracted-1-d-spectroscopic-data-x1d-and-x1dints), identified by the suffixes ```x1d``` or ```c1d```\n",
- "\n",
- "In addition, there are also [several other products depending on the observing mode](https://jwst-pipeline.readthedocs.io/en/stable/jwst/data_products/science_products.html#source-catalog-cat), such as source and photometry catalogs, stacked PSF data, and NIRISS AMI derived data. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "[Top of Page](#title_ID)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "3.-Example data for this exercise \n",
+ "2.-Example data for this exercise \n",
"------------------\n",
"\n",
"For this module, we will use calibrated NIRCam simulated imaging and wide field slitless spectroscopy (WFSS) exposures that are stored in Box. Let's grab the data:"
@@ -306,17 +270,26 @@
},
"outputs": [],
"source": [
- "# # For the catalog file:\n",
+ "# For the catalog file:\n",
"catalog_file_link = 'https://stsci.box.com/shared/static/272qsdpoax1cchy0gox96mobjpol780n.ecsv'\n",
"output_catalog = download_file(catalog_file_link)\n",
"\n",
"# For the NIRCam combined 2D image:\n",
"combined_i2d_file_link = 'https://stsci.box.com/shared/static/1xjdi28u5o1lmkmau0wuojdyv8fnre5n.fits'\n",
- "combined_i2d_file = download_file(combined_i2d_file_link)\n",
+ "combined_i2d = download_file(combined_i2d_file_link)\n",
+ "combined_i2d_file = \"example_nircam_imaging_i2d.fits\"\n",
"\n",
"# For the NIRCam WFSS 1D file:\n",
"final_c1d_file_link = 'https://stsci.box.com/shared/static/ixfnu50ju78vs40dcec8i7w0u6kwtoli.fits'\n",
- "final_c1d_file = download_file(final_c1d_file_link)"
+ "final_c1d = download_file(final_c1d_file_link)\n",
+ "final_c1d_file = \"example_nircam_wfss_c1d.fits\"\n",
+ "\n",
+ "# Save the files so that we can use them later\n",
+ "with fits.open(combined_i2d, ignore_missing_end=True) as f:\n",
+ " f.writeto(combined_i2d_file, overwrite=True)\n",
+ " \n",
+ "with fits.open(final_c1d, ignore_missing_end=True) as f:\n",
+ " f.writeto(final_c1d_file, overwrite=True) "
]
},
{
@@ -330,22 +303,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "4.-Data products: stage 3 (combined, rectified exposures)\n",
+ "3.-Data products: stage 3 (combined, rectified exposures)\n",
"------------------\n",
"\n",
"Stage 3 processing includes routines that work with multiple associated exposures to produce some kind of combined (mosaicked), rectified (aligned in a common output frame) product. There are unique pipeline modules for imaging, spectroscopic, coronagraphic, AMI, and TSO observations, and each produces specific outputs for the particular observing mode. The exposure level products are updated at this stage to provide the highest quality data products that include the results of ensemble processing (e.g., updated WCS, matching backgrounds, and a second pass at outlier detection). These products are available in MAST, along with the unrectified 2D and rectified 2D products. More information can be found in the [JWST User Documentation](https://jwst-docs.stsci.edu/jwst-data-reduction-pipeline/algorithm-documentation/stages-of-processing). We also have a full list of data product types and the units of the data for each product [in the documentation](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/product_types.html#data-product-types). \n",
"\n",
- "We'll start by going through the various inputs and outputs for the mode-specific pipeline modules, and finish by revisiting our simulated data used in the previous notebooks. "
+ "We'll start by going through the various inputs and outputs for the imaging and spectroscopic pipeline modules, and finish by revisiting our simulated data used in the previous notebooks. If there is time, we can check out the data products for other observing modes in the [Bonus](#modes) section. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 4.1.-Imaging\n",
+ "## 3.1.-Imaging\n",
"\n",
"Stage 3 processing for direct imaging observations combines the calibrated data from multiple exposures (e.g., dithers or mosaics) into a single, rectified, distortion corrected product. Before being combined, the exposures receive additional corrections for astrometric alignment, background matching, and outlier detection. Coronagraphic imaging and time series imaging have their own separate pipeline modules. \n",
"\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_image3.html#inputs) for detailed information.\n",
+ "\n",
"### A.-Input\n",
"\n",
"The inputs to this stage are listed below.\n",
@@ -380,10 +355,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 4.2.-Spectroscopy\n",
+ "## 3.2.-Spectroscopy\n",
"\n",
"Stage 3 processing for spectroscopic observations combines the calibrated data from multiple exposures (e.g., dithers or nods) into a single combined 2D or 3D spectral data product and a combined 1D spectrum. Before being combined, exposures may receive additional corrections for background matching and subtraction, and outlier rejection. Time series data will go through the separate time series pipeline module. \n",
"\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_spec3.html#inputs) for detailed information.\n",
+ "\n",
"### A.-Input\n",
"\n",
"The inputs to this stage are listed below.\n",
@@ -429,136 +406,6 @@
" * **Description**: For NIRCam and NIRISS WFSS, and NIRISS SOSS, the 1D spectral combination step combines multiple 1D spectra for a given source into a final spectrum. "
]
},
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 4.3.-Aperture Masking Interferometry (AMI)\n",
- "\n",
- "Stage 3 processing for calibrated NIRISS AMI observations computes fringe parameters for individual exposures, averages the fringe results from multiple exposures, and, optionally, corrects science target fringe parameters using the fringe results from reference PSF targets.\n",
- "\n",
- "### A.-Input\n",
- "\n",
- "The inputs to this stage are listed below.\n",
- "\n",
- "* **2D calibrated images**\n",
- " * **Data model**: ImageModel\n",
- " * **File suffix**: ```_cal```\n",
- " * **Description**: Inputs need to be in the form of an association file that lists multiple science target exposures, and, optionally, reference PSF exposures. Individual exposures must be in the form of calibrated (```_cal```) data products from Stage 2 processing. \n",
- " \n",
- "\n",
- "### B.-Output\n",
- "\n",
- "The outputs of this stage are listed below.\n",
- "\n",
- "* **Fringe parameter tables**\n",
- " * **Data model**: AmiLgModel\n",
- " * **File suffix**: ```_ami```\n",
- " * **Description**: For every input exposure, fringe parameters and closure phases caculated by the ```ami_analyze``` step are saved to a FITS table containing the fringe parameters and closure phases.\n",
- "\n",
- "* (optional) **Averaged fringe parameters table**\n",
- " * **Data model**: AmiLgModel\n",
- " * **File suffix**: ```_amiavg``` or ```_psf-amiavg```\n",
- " * **Description**: If multiple target or reference PSF exposures are used as input and the ```–save_averages``` parameter is set to True, the ```ami_average``` step will save averaged results for the target in an ```_amiavg``` product and for the reference PSF in a ```_psf-amiavg``` product. \n",
- " \n",
- "* **Normalized fringe parameters table**\n",
- " * **Data model**: AmiLgModel\n",
- " * **File suffix**: ```_aminorm```\n",
- " * **Description**: If reference PSF exposures are included in the input association, the averaged AMI results for the target will be normalized by the averaged AMI results for the reference PSF and will be saved to an ```_aminorm``` product file. This file has the same FITS table format as the ```_ami``` products. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 4.4.-Coronagraphy\n",
- "\n",
- "Stage 3 coronagraphic processing is applied to associations of calibrated NIRCam coronagraphic and MIRI Lyot and 4QPM exposures, and is used to produce PSF-subtracted, resampled, combined images of the source object.\n",
- "\n",
- "### A.-Input\n",
- "\n",
- "The inputs to this stage are listed below.\n",
- "\n",
- "* **3D calibrated images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_calints```\n",
- " * **Description**: The input to this stage must be in the form of an association file that lists one or more exposures of a science target and one or more reference PSF targets. The individual target and reference PSF exposures should be in the form of 3D calibrated (```_calints```) data products from Stage 2 processing. Each pipeline step will loop over the 3D stack of per-integration images contained in each exposure. \n",
- " \n",
- "\n",
- "### B.-Output\n",
- "\n",
- "The outputs of this stage are listed below.\n",
- "\n",
- "* **CR-flagged images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_crfints```\n",
- " * **Description**: If the ```outlier_detection``` step is applied, a new version of each input calibrated exposure is created with a data quality array that is updated to flag pixels detected as outliers. These files use the ```_crfints``` (CR-Flagged per integration) file suffix and include the association candidate ID as a new field in the original product root file name.\n",
- "\n",
- "* **3D stacked PSF images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_psfstack```\n",
- " * **Description**: The data from each input PSF reference exposure are concatenated into a single combined 3D stack for use by subsequent steps. The stacked PSF data are written to a ```_psfstack``` product. \n",
- " \n",
- "* **4D aligned PSF images**\n",
- " * **Data model**: QuadModel\n",
- " * **File suffix**: ```_psfalign```\n",
- " * **Description**: For each science target exposure, all of the reference PSF images in the ```_psfstack``` product are aligned to each science target integration and saved to a 4D ```_psfalign``` product. \n",
- " \n",
- "* **3D PSF-subtracted images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_psfsub```\n",
- " * **Description**: For each science target exposure, the ```klip``` step applies PSF fitting and subtraction for each integration, resulting in a 3D stack of PSF-subtracted images. \n",
- " \n",
- "* **2D resampled image**\n",
- " * **Data model**: DrizProductModel\n",
- " * **File suffix**: ```_i2d```\n",
- " * **Description**: The ```resample``` step is applied to the CR-flagged products to create a single resampled and combined product for the science target. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 4.5.-Time Series Observation (TSO)\n",
- "\n",
- "Stage 3 TSO processing is applied to associations of calibrated TSO exposures (e.g. NIRCam TS imaging, NIRCam TS grism, NIRISS SOSS, NIRSpec BrightObj, MIRI LRS Slitless) and is used to produce calibrated time-series photometry or spectra of the source object.\n",
- "\n",
- "\n",
- "\n",
- "### A.-Input\n",
- "\n",
- "The inputs to this stage are listed below.\n",
- "\n",
- "* **3D calibrated images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_calints```\n",
- " * **Description**: The input is in the form of an association file listing multiple exposures or exposure segments of a science target. Individual inputs should be in the form of 3D calibrated (```_calints```) products from Stage 2 (either imaging or spectroscopic) processing. These products contain 3D stacks of per-integration images, and each pipeline step will loop over all of the integrations in each input. Many TSO exposures may contain a large number of integrations that make their individual exposure products too large (in terms of file size on disk) to be able to handle conveniently. In these cases, the uncalibrated raw data (```_uncal```) for a given exposure are split into multiple “segmented” products, each of which is identified with a segment number (see [segmented products](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/file_naming.html#segmented-files)). The input association file includes all ```_calints``` exposure segments. The ```outlier_detection``` step processes a single segment at a time, creating one output ```_crfints``` product per segment. The remaining steps will process each segment and concatenate the results into a single output product that contains results for all exposures and segments listed in the association.\n",
- "\n",
- "### B.-Output\n",
- "\n",
- "The outputs of this stage are listed below.\n",
- "\n",
- "* **CR-flagged images**\n",
- " * **Data model**: CubeModel\n",
- " * **File suffix**: ```_crfints```\n",
- " * **Description**: If the ```outlier_detection``` step is applied, a new version of each input calibrated exposure is created with a data quality array that is updated to flag pixels detected as outliers. These files use the ```_crfints``` (CR-Flagged per integration) file suffix and include the association candidate ID as a new field in the original product root file name.\n",
- "\n",
- "* **Imaging photometry catalog**\n",
- " * **Data model**: N/A\n",
- " * **File suffix**: ```_phot```\n",
- " * **Description**: For imaging TS observations, a source catalog containing photometry results from all of the ```_crfints``` products is produced, organized as a function of integration time stamps.\n",
- " \n",
- "* **1D extracted spectral data**\n",
- " * **Data model**: MultiSpecModel\n",
- " * **File suffix**: ```_x1dints```\n",
- " * **Description**: For spectroscopic TS observations, the 1D spectral extraction step is applied to all ```_crfints``` products to create a single ```_x1dints``` product containing 1D extracted spectral data for all integrations contained in the input exposures. \n",
- " \n",
- "* **Spectroscopic white-light catalog**\n",
- " * **Data model**: N/A\n",
- " * **File suffix**: ```_whtlt```\n",
- " * **Description**: For spectroscopic TS observations, the ```white_light``` step is applied to all of the 1D extracted spectral data in the ```_x1dints``` product to produce an ASCII catalog in ```ecsv``` format containing the wavelength-integrated white-light photometry of the source. The catalog lists the integrated white-light flux as a function of time, based on the integration time stamps. "
- ]
- },
{
"cell_type": "markdown",
"metadata": {},
@@ -570,7 +417,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "5.-Examining the products\n",
+ "4.-Examining the products\n",
"------------------\n",
"\n",
"Whew! That was a lot of information. Hopefully by now you're catching on to the pattern -- \n",
@@ -613,9 +460,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 5.1.-Catalogs\n",
+ "## 4.1.-Catalogs\n",
"\n",
- "Here, we'll focus on the catalog output from Stage 3 image processing, but other catalogs will have a similar ASCII format and file name (```.ecsv```). You can read more about the ```ecsv``` format [in the Astropy documentation here](https://docs.astropy.org/en/stable/io/ascii/write.html#ecsv-format). In short, it provides a convenient way to handle tables and associated metadata.\n",
+ "Here, we'll focus on the catalog output from Stage 3 image processing (which combines images and generates catalogs), but other catalogs will have a similar ASCII format and file name (```.ecsv```). You can read more about the ```ecsv``` format [in the Astropy documentation here](https://docs.astropy.org/en/stable/io/ascii/write.html#ecsv-format). In short, it provides a convenient way to handle tables and associated metadata.\n",
"\n",
"We can open the table using Astropy's ```Table```:"
]
@@ -627,7 +474,7 @@
"outputs": [],
"source": [
"# Load the catalog (use: image_catalog)\n",
- "image_catalog = Table.read(output_catalog)"
+ "image_catalog = Table.read(output_catalog, format='ascii.ecsv')"
]
},
{
@@ -653,7 +500,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# What's it look like?\n",
+ "# What does the catalog look like?\n",
"image_catalog"
]
},
@@ -665,7 +512,7 @@
},
"outputs": [],
"source": [
- "# What's included in the information?\n",
+ "# See all of the table information with .info\n",
"image_catalog.info"
]
},
@@ -682,7 +529,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Print the columns\n",
+ "# Print the columns (colnames)\n",
"image_catalog.colnames"
]
},
@@ -692,7 +539,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Get the meta data\n",
+ "# Get the meta data with .meta\n",
"image_catalog.meta"
]
},
@@ -719,12 +566,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# For our flux column, get the column name, unit, data type, and data\n",
- "print('\\nColumn description: ', total_flux.description)\n",
- "print('Column name: ', total_flux.name)\n",
- "print('Column units: ', total_flux.unit)\n",
- "print('Column data type: ', total_flux.dtype)\n",
- "print('Column data: ', total_flux.data)"
+ "# What are the units for this column?\n",
+ "total_flux.unit"
]
},
{
@@ -750,17 +593,16 @@
"metadata": {},
"outputs": [],
"source": [
- "# Show Dec in deg\n",
- "image_catalog['sky_centroid'].dec.deg"
+ "# Now show the RA in degrees\n",
+ "image_catalog['sky_centroid'].ra.deg"
]
},
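As an aside, `.ra.deg` and `.dec.deg` return plain decimal degrees. A pure-Python sketch of the sexagesimal-to-decimal conversion that Astropy performs for you (the helper names here are hypothetical, for illustration only):

```python
def hms_to_deg(h, m, s):
    """Right ascension in hours:minutes:seconds -> decimal degrees.

    One hour of RA corresponds to 15 degrees (360 deg / 24 hr).
    """
    return 15.0 * (h + m / 60.0 + s / 3600.0)

def dms_to_deg(d, m, s):
    """Declination in degrees:arcminutes:arcseconds -> decimal degrees,
    preserving the sign of the degrees field."""
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

# Example: 5h 34m 31.94s RA and -5d 27' 34.4" Dec
ra_deg = hms_to_deg(5, 34, 31.94)
dec_deg = dms_to_deg(-5, 27, 34.4)
```

In practice you would let the `SkyCoord` objects in the `sky_centroid` column handle this, including edge cases such as coordinates just below zero degrees.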
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 5.2.-Combined image\n",
- "\n",
- "Here, we'll take a look at the final combined imaging data product from Stage 3, the ```_i2d``` file. Let's use the imaging simulation from our previous modules. "
+ "### A.-Exercise 1\n",
+ "Now, you try it!"
]
},
{
@@ -769,8 +611,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Load the final combined 2D image (use: image)\n",
- "image = datamodels.ImageModel(combined_i2d_file)"
+ "# For our flux column, get the column name and description (hint: look at the .)\n"
]
},
{
@@ -779,20 +620,25 @@
"metadata": {},
"outputs": [],
"source": [
- "# Check out the model structure\n",
- "image.info()"
+ "# Now, print the data type, and data for our flux column (hint: dtype, data)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": true
- },
+ "metadata": {},
"outputs": [],
"source": [
- "# Check out the meta data\n",
- "image.meta.instance"
+ "# Show the Dec column data in degrees\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 4.2.-Combined image\n",
+ "\n",
+ "Here, we'll take a look at the final combined imaging data product from Stage 3, the ```_i2d``` file. Let's use the imaging simulation from our previous modules. "
]
},
{
@@ -801,8 +647,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# What's the shape?\n",
- "image.data.shape"
+ "# Load the final combined 2D image (use: image)\n",
+ "image = datamodels.ImageModel(combined_i2d_file)"
]
},
{
@@ -811,15 +657,18 @@
"metadata": {},
"outputs": [],
"source": [
- "# Create an image of the final product\n",
- "create_image(image.data, vmin=0, vmax=0.2, title=\"Final combined NIRCam image\")"
+ "# Check out the model structure with .info()\n",
+ "image.info()"
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "Let's overlay the catalog, as an example:"
+ "# What's the data shape?\n",
+ "image.data.shape"
]
},
{
@@ -828,9 +677,15 @@
"metadata": {},
"outputs": [],
"source": [
- "# First, get a sigma-clipped mean of the flux to determine the scale for our image (use: clipped flux)\n",
- "clipped_flux = sigma_clip(total_flux, sigma=2, maxiters=5, cenfunc=np.nanmean)\n",
- "np.mean(clipped_flux)"
+ "# Create an image of the final product\n",
+ "create_image(image)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's overlay the catalog, as an example:"
]
},
{
@@ -842,14 +697,14 @@
"outputs": [],
"source": [
"# Create our image with the catalog overlaid\n",
- "create_image_with_cat(image.data, image_catalog, vmin=0, vmax=0.2, title=\"Final image w/ catalog overlaid\")"
+ "create_image_with_cat(image.data, image_catalog)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 5.3.-Combined spectrum\n",
+ "## 4.3.-Combined spectrum\n",
"\n",
"Here, we'll take a look at the final combined 1D spectrum from Stage 3 spectroscopic processing (```_c1d```. Let's use the WFSS simulation from our previous modules. "
]
@@ -860,7 +715,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Load the final combined 1D file (use: spectrum)\n",
+ "# Load the final combined 1D file into a CombinedSpecModel (use: spectrum)\n",
"spectrum = datamodels.CombinedSpecModel(final_c1d_file)"
]
},
@@ -872,7 +727,7 @@
},
"outputs": [],
"source": [
- "# Look at the model structure\n",
+ "# Look at the model structure with .info()\n",
"spectrum.info()"
]
},
@@ -880,24 +735,30 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "scrolled": true
+ "scrolled": false
},
"outputs": [],
"source": [
- "# Look at the metadata\n",
- "spectrum.meta.instance"
+ "# Show the table values \n",
+ "spectrum.spec_table"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
"outputs": [],
"source": [
- "# Show me the table values \n",
- "spectrum.spec_table"
+ "# Plot the spectrum with a median filter of 11\n",
+ "plot_spectra(spectrum, median_filter=11)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### A.-Exercise 2\n",
+ "Now, you try it!"
]
},
{
@@ -906,8 +767,176 @@
"metadata": {},
"outputs": [],
"source": [
- "# Plot it up!\n",
- "plot_spectra(spectrum, median_filter=11)"
+ "# Where do I find the spectral order for my combined spectrum?\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Where is the WCS information for the CombinedSpec model? \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# What columns are in the spec_table? \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[Top of Page](#title_ID)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "5.-Bonus: Other observing modes\n",
+ "------------------\n",
+ "\n",
+ "Below are descriptions of the data products for other JWST observing modes that we may not have time to cover in this JWebbinar. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5.1.-Aperture Masking Interferometry (AMI)\n",
+ "\n",
+ "Stage 3 processing for calibrated NIRISS AMI observations computes fringe parameters for individual exposures, averages the fringe results from multiple exposures, and, optionally, corrects science target fringe parameters using the fringe results from reference PSF targets.\n",
+ "\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_ami3.html#inputs) for detailed information.\n",
+ "\n",
+ "### A.-Input\n",
+ "\n",
+ "The inputs to this stage are listed below.\n",
+ "\n",
+ "* **2D calibrated images**\n",
+ " * **Data model**: ImageModel\n",
+ " * **File suffix**: ```_cal```\n",
+ " * **Description**: Inputs need to be in the form of an association file that lists multiple science target exposures, and, optionally, reference PSF exposures. Individual exposures must be in the form of calibrated (```_cal```) data products from Stage 2 processing. \n",
+ " \n",
+ "\n",
+ "### B.-Output\n",
+ "\n",
+ "The outputs of this stage are listed below.\n",
+ "\n",
+ "* **Fringe parameter tables**\n",
+ " * **Data model**: AmiLgModel\n",
+ " * **File suffix**: ```_ami```\n",
+ " * **Description**: For every input exposure, fringe parameters and closure phases caculated by the ```ami_analyze``` step are saved to a FITS table containing the fringe parameters and closure phases.\n",
+ "\n",
+ "* (optional) **Averaged fringe parameters table**\n",
+ " * **Data model**: AmiLgModel\n",
+ " * **File suffix**: ```_amiavg``` or ```_psf-amiavg```\n",
+ " * **Description**: If multiple target or reference PSF exposures are used as input and the ```–save_averages``` parameter is set to True, the ```ami_average``` step will save averaged results for the target in an ```_amiavg``` product and for the reference PSF in a ```_psf-amiavg``` product. \n",
+ " \n",
+ "* **Normalized fringe parameters table**\n",
+ " * **Data model**: AmiLgModel\n",
+ " * **File suffix**: ```_aminorm```\n",
+ " * **Description**: If reference PSF exposures are included in the input association, the averaged AMI results for the target will be normalized by the averaged AMI results for the reference PSF and will be saved to an ```_aminorm``` product file. This file has the same FITS table format as the ```_ami``` products. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5.2.-Coronagraphy\n",
+ "\n",
+ "Stage 3 coronagraphic processing is applied to associations of calibrated NIRCam coronagraphic and MIRI Lyot and 4QPM exposures, and is used to produce PSF-subtracted, resampled, combined images of the source object.\n",
+ "\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_coron3.html#inputs) for detailed information.\n",
+ "\n",
+ "### A.-Input\n",
+ "\n",
+ "The inputs to this stage are listed below.\n",
+ "\n",
+ "* **3D calibrated images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_calints```\n",
+ " * **Description**: The input to this stage must be in the form of an association file that lists one or more exposures of a science target and one or more reference PSF targets. The individual target and reference PSF exposures should be in the form of 3D calibrated (```_calints```) data products from Stage 2 processing. Each pipeline step will loop over the 3D stack of per-integration images contained in each exposure. \n",
+ " \n",
+ "\n",
+ "### B.-Output\n",
+ "\n",
+ "The outputs of this stage are listed below.\n",
+ "\n",
+ "* **CR-flagged images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_crfints```\n",
+ " * **Description**: If the ```outlier_detection``` step is applied, a new version of each input calibrated exposure is created with a data quality array that is updated to flag pixels detected as outliers. These files use the ```_crfints``` (CR-Flagged per integration) file suffix and include the association candidate ID as a new field in the original product root file name.\n",
+ "\n",
+ "* **3D stacked PSF images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_psfstack```\n",
+ " * **Description**: The data from each input PSF reference exposure are concatenated into a single combined 3D stack for use by subsequent steps. The stacked PSF data are written to a ```_psfstack``` product. \n",
+ " \n",
+ "* **4D aligned PSF images**\n",
+ " * **Data model**: QuadModel\n",
+ " * **File suffix**: ```_psfalign```\n",
+ " * **Description**: For each science target exposure, all of the reference PSF images in the ```_psfstack``` product are aligned to each science target integration and saved to a 4D ```_psfalign``` product. \n",
+ " \n",
+ "* **3D PSF-subtracted images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_psfsub```\n",
+ " * **Description**: For each science target exposure, the ```klip``` step applies PSF fitting and subtraction for each integration, resulting in a 3D stack of PSF-subtracted images. \n",
+ " \n",
+ "* **2D resampled image**\n",
+ " * **Data model**: DrizProductModel\n",
+ " * **File suffix**: ```_i2d```\n",
+ " * **Description**: The ```resample``` step is applied to the CR-flagged products to create a single resampled and combined product for the science target. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5.3.-Time Series Observation (TSO)\n",
+ "\n",
+ "Stage 3 TSO processing is applied to associations of calibrated TSO exposures (e.g. NIRCam TS imaging, NIRCam TS grism, NIRISS SOSS, NIRSpec BrightObj, MIRI LRS Slitless) and is used to produce calibrated time-series photometry or spectra of the source object.\n",
+ "\n",
+ "See the [Read-the-Docs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/pipeline/calwebb_tso3.html#inputs) for detailed information.\n",
+ "\n",
+ "### A.-Input\n",
+ "\n",
+ "The inputs to this stage are listed below.\n",
+ "\n",
+ "* **3D calibrated images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_calints```\n",
+ " * **Description**: The input is in the form of an association file listing multiple exposures or exposure segments of a science target. Individual inputs should be in the form of 3D calibrated (```_calints```) products from Stage 2 (either imaging or spectroscopic) processing. These products contain 3D stacks of per-integration images, and each pipeline step will loop over all of the integrations in each input. Many TSO exposures may contain a large number of integrations that make their individual exposure products too large (in terms of file size on disk) to be able to handle conveniently. In these cases, the uncalibrated raw data (```_uncal```) for a given exposure are split into multiple “segmented” products, each of which is identified with a segment number (see [segmented products](https://jwst-pipeline.readthedocs.io/en/latest/jwst/data_products/file_naming.html#segmented-files)). The input association file includes all ```_calints``` exposure segments. The ```outlier_detection``` step processes a single segment at a time, creating one output ```_crfints``` product per segment. The remaining steps will process each segment and concatenate the results into a single output product that contains results for all exposures and segments listed in the association.\n",
+ "\n",
+ "### B.-Output\n",
+ "\n",
+ "The outputs of this stage are listed below.\n",
+ "\n",
+ "* **CR-flagged images**\n",
+ " * **Data model**: CubeModel\n",
+ " * **File suffix**: ```_crfints```\n",
+ " * **Description**: If the ```outlier_detection``` step is applied, a new version of each input calibrated exposure is created with a data quality array that is updated to flag pixels detected as outliers. These files use the ```_crfints``` (CR-Flagged per integration) file suffix and include the association candidate ID as a new field in the original product root file name.\n",
+ "\n",
+ "* **Imaging photometry catalog**\n",
+ " * **Data model**: N/A\n",
+ " * **File suffix**: ```_phot```\n",
+ " * **Description**: For imaging TS observations, a source catalog containing photometry results from all of the ```_crfints``` products is produced, organized as a function of integration time stamps.\n",
+ " \n",
+ "* **1D extracted spectral data**\n",
+ " * **Data model**: MultiSpecModel\n",
+ " * **File suffix**: ```_x1dints```\n",
+ " * **Description**: For spectroscopic TS observations, the 1D spectral extraction step is applied to all ```_crfints``` products to create a single ```_x1dints``` product containing 1D extracted spectral data for all integrations contained in the input exposures. \n",
+ " \n",
+ "* **Spectroscopic white-light catalog**\n",
+ " * **Data model**: N/A\n",
+ " * **File suffix**: ```_whtlt```\n",
+ " * **Description**: For spectroscopic TS observations, the ```white_light``` step is applied to all of the 1D extracted spectral data in the ```_x1dints``` product to produce an ASCII catalog in ```ecsv``` format containing the wavelength-integrated white-light photometry of the source. The catalog lists the integrated white-light flux as a function of time, based on the integration time stamps. "
]
},
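To make the ```white_light``` output concrete, here is a minimal numpy sketch (made-up wavelengths and fluxes) of integrating each per-integration spectrum over wavelength with the trapezoidal rule:

```python
import numpy as np

# Made-up spectral time series: row i is the extracted 1D spectrum for
# integration i (5 integrations x 200 wavelength bins)
wavelength = np.linspace(2.4, 4.0, 200)          # microns, illustrative only
depths = np.array([1.0, 0.99, 0.98, 0.99, 1.0])  # a toy transit shape
flux = np.ones((5, 200)) * depths[:, None]

# Wavelength-integrated "white light" flux per integration (trapezoid rule),
# analogous in spirit to what the white_light step tabulates against time
dlam = np.diff(wavelength)
white_light = ((flux[:, 1:] + flux[:, :-1]) / 2.0 * dlam).sum(axis=1)
```

Plotted against the integration time stamps, `white_light` would trace the toy transit: a dip at the middle integrations.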
{
@@ -938,10 +967,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "7.-Exercise\n",
+ "7.-Exercise solutions\n",
"------------------\n",
"\n",
- "Now you try it!"
+ "Below are the solutions for [Exercise 1](#exercise-1) and [Exercise 2](#exercise-2). "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 1"
]
},
{
@@ -950,8 +986,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# How many sources are in the source catalog? \n",
- "len(image_catalog)"
+ "# For our flux column, get the column name and description (hint: look at the .)\n",
+ "total_flux.name, total_flux.description"
]
},
{
@@ -960,8 +996,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# How many sources are identified by the pipeline as stars? \n",
- "np.sum(image_catalog['is_star']==False)"
+ "# Now, print the data type, and data for our flux column (hint: dtype, data)\n",
+ "total_flux.dtype, total_flux.data"
]
},
{
@@ -970,8 +1006,15 @@
"metadata": {},
"outputs": [],
"source": [
- "# Where can you find information about the aperture corrections used for the catalog?\n",
- "image_catalog.meta"
+ "# Show the Dec column data in degrees\n",
+ "image_catalog['sky_centroid'].dec.deg"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercise 2"
]
},
{
diff --git a/pipeline_products_session/test_data_download.ipynb b/pipeline_products_session/test_data_download.ipynb
deleted file mode 100644
index cfaca80..0000000
--- a/pipeline_products_session/test_data_download.ipynb
+++ /dev/null
@@ -1,102 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Test data download\n",
- "\n",
- "Using the download cells from notebook 2."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Module with functions to get information about objects:\n",
- "import os\n",
- "import inspect\n",
- "import asdf \n",
- "import pprint\n",
- "\n",
- "# Numpy library:\n",
- "import numpy as np\n",
- "\n",
- "# Scipy tools\n",
- "from scipy.signal import medfilt\n",
- "\n",
- "# Astropy tools:\n",
- "from astropy.utils.data import download_file\n",
- "from astropy.io import fits\n",
- "\n",
- "# The JWST models:\n",
- "from jwst import datamodels"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "rate_file = [\"https://stsci.box.com/shared/static/h30hhwhu4ihlhqjnlhbblx07wnitoytd.fits\", \n",
- " \"example_nircam_imaging_rate.fits\"]\n",
- "rateints_file = [\"https://stsci.box.com/shared/static/jh937bjqodqhfobhpemnbqt4jax6d6j4.fits\", \n",
- " \"example_nircam_imaging_rateints.fits\"]\n",
- "ramp_file = [\"https://stsci.box.com/shared/static/x7d0ldm7bp683p5yyi2buvphjcckujbe.fits\",\n",
- " \"example_nircam_imaging_ramp.fits\"]\n",
- "wfss_rate_file = [\"https://stsci.box.com/shared/static/d5k9z5j05dgfv6ljgie483w21kmpevni.fits\",\n",
- " \"example_nircam_wfss_rate.fits\"]\n",
- "cal_file = [\"https://stsci.box.com/shared/static/8g15cxb3nri47l3bx22mjtdw3yt8xxiv.fits\",\n",
- " \"example_nircam_imaging_cal.fits\"]\n",
- "wfss_cal_file = [\"https://stsci.box.com/shared/static/pqgt98wsjz16av3768756ierahzqn8w7.fits\",\n",
- " \"example_nircam_wfss_cal.fits\"]\n",
- "wfss_x1d_file = [\"https://stsci.box.com/shared/static/fjzq3dm2kgp2ttoptxwe9yfghmxxxz89.fits\",\n",
- " \"example_nircam_wfss_x1d.fits\"]\n",
- "demo_ex_file = [\"https://stsci.box.com/shared/static/6vn402728z12cyx6czdt5hpaxa071aek.fits\",\n",
- " \"example_exercise_cal.fits\"]\n",
- "\n",
- "all_files = [rate_file, rateints_file, ramp_file, cal_file,\n",
- " wfss_rate_file, wfss_cal_file, wfss_x1d_file,\n",
- " demo_ex_file]\n",
- "\n",
- "for file in all_files:\n",
- " demo_file = download_file(file[0])\n",
- " \n",
- " # Save the file so that we can use it later\n",
- " with fits.open(demo_file) as f:\n",
- " f.writeto(file[1], overwrite=True)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.8"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}