The IR data reduction process begins with the raw IR image file. This file contains all the non-destructive readouts from an exposure, stored in reverse time order, so that the first extension corresponds to the last array read. Most of the calibration steps are applied independently to each readout. For example, the DQICORR, NLINCORR, and FLATCORR steps apply the same bad pixel flags, non-linearity correction coefficients, and flat-field image, respectively, to each readout. The CRCORR step, on the other hand, which performs the up-the-ramp fit and removes the effects of cosmic-ray hits, uses the values from all readouts of an individual pixel simultaneously. Detailed descriptions of each step are provided in the following sections.
All steps up through UNITCORR are applied to an in-memory image stack that contains all the readouts. The CRCORR step produces an additional single image that gives the best-fit count rate for each pixel. The remaining steps in the process, FLATCORR and image statistics, are then applied both to the full stack of readouts and to the single image produced by CRCORR. A schematic representation of all the IR calibration steps is given in the accompanying figure; the steps are also briefly summarized below, in the order in which they are performed, with the corresponding calibration switch keyword in parentheses:
At the beginning of an IR observation the detector pixels are reset and then read out to record the bias level. An interval of approximately 2.9 seconds elapses between the time each pixel is reset and the time it is read. Because the IR channel does not have a shutter, signal from the field of view under observation, as well as persistent signal from previous observations, accumulates during that 2.9 second interval. When the initial (or ‘zeroth’) read is later subtracted from subsequent readouts, any signal in the zeroth read is also subtracted. Because the linearity correction and saturation checking performed in the NLINCORR step (described below) both depend on the absolute signal level in a pixel at the time it was read, the signal in the zeroth read from bright sources can be large enough that, if neglected in the NLINCORR calibration step, it leads to inaccurate linearity corrections and to the failure to detect saturation. The ZSIGCORR step estimates the amount of source signal in the zeroth read and supplies this estimate to the NLINCORR step. Pixels that contain more signal than ZTHRESH × noise are flagged (flag value = 2048), and the estimated zeroth-read signal is passed to the NLINCORR step, which accounts for that signal when applying linearity corrections and saturation checking to the zeroth-read-subtracted images. Pixels with signal below ZTHRESH × noise are ignored.
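The thresholding logic above can be sketched as follows. This is an illustrative stand-in, not the calwf3 implementation: the function name, the per-pixel noise values, and the ZTHRESH value of 5.0 are all hypothetical; only the 2048 flag value comes from the text.

```python
import numpy as np

# Illustrative sketch of the ZSIGCORR thresholding described above.
# zsig: estimated zeroth-read source signal; noise: per-pixel noise estimate;
# zthresh: the ZTHRESH multiplier. All numeric values here are made up.
ZSIG_FLAG = 2048  # DQ flag value for detected zeroth-read signal

def flag_zeroth_read_signal(zsig, noise, dq, zthresh=5.0):
    """Flag pixels whose zeroth-read signal exceeds zthresh * noise."""
    detected = zsig > zthresh * noise
    dq = dq.copy()
    dq[detected] |= ZSIG_FLAG               # set the 2048 flag
    passed = np.where(detected, zsig, 0.0)  # estimate passed on to NLINCORR
    return dq, passed

zsig  = np.array([  5.0, 120.0, 40.0])
noise = np.array([ 10.0,  10.0, 10.0])
dq0   = np.zeros(3, dtype=np.int16)
dq, passed = flag_zeroth_read_signal(zsig, noise, dq0, zthresh=5.0)
# Only the 120-count pixel exceeds 5 * 10 = 50, so only it is flagged.
```

Pixels below the threshold are simply ignored, so their passed-on estimate is zero.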
The NLINFILE reference file has an extension with saturation values for each pixel, which is referenced here. Pixels that are saturated in the zeroth or first reads are flagged in the DQ array, and the number of saturated pixels found is reported.
The ZSIGCORR step estimates the source signal in the science zeroth read by subtracting the super zero read from the science zeroth read, instead of calculating an estimated signal from the first and zeroth reads and the estimated exposure time between them (as was the case prior to March 2011). This way the difference in readout time for subarrays is not an issue, and dark current subtraction is no longer necessary for the signal estimate (the DARKFILE is no longer used by this step).
The BLEVCORR step uses the reference pixels located around the perimeter of the IR detector to track and remove changes in the bias level that occur during an exposure. For each raw readout, the average signal level of the reference pixels is computed (using a resistant mean algorithm), subtracted from the image, and recorded in the MEANBLEV keyword in the SCI header.
As with the UVIS overscan correction, the boundaries of the reference pixel regions that are used in the computation are defined in the OSCNTAB reference table, in the BIASSECT* columns. The BIASSECTA[1,2] values indicate the starting and ending column numbers for the reference pixels on the left edge of the image, and the BIASSECTB[1,2] values give the corresponding columns for the right edge.
The reference file for bias level correction, OSCNTAB, is selected based on the value of the DETECTOR keyword.
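The reference-pixel correction can be sketched in a few lines. This is a minimal illustration, not the calwf3 code: the sigma-clipping parameters and the five-column slices standing in for the BIASSECTA/BIASSECTB ranges are hypothetical.

```python
import numpy as np

# Sketch of the reference-pixel bias correction: compute a resistant
# (sigma-clipped) mean of the left and right reference columns and subtract
# it from the readout. Slices and clipping threshold are hypothetical.
def resistant_mean(values, sigma=3.0, iters=3):
    """Iteratively reject outliers beyond `sigma` standard deviations."""
    vals = np.asarray(values, dtype=float)
    for _ in range(iters):
        mean, std = vals.mean(), vals.std()
        keep = np.abs(vals - mean) <= sigma * std
        if keep.all():
            break
        vals = vals[keep]
    return vals.mean()

def subtract_reference_bias(readout, left=slice(0, 5), right=slice(-5, None)):
    refpix = np.concatenate([readout[:, left].ravel(), readout[:, right].ravel()])
    meanblev = resistant_mean(refpix)     # value recorded in MEANBLEV
    return readout - meanblev, meanblev

img = np.full((20, 20), 100.0)
img[3, 1] = 5000.0                        # e.g. a hit in one reference pixel
corrected, meanblev = subtract_reference_bias(img)
```

The resistant mean rejects the single outlying reference pixel, so the recovered bias level is unaffected by it; a plain mean would be biased high.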
The ZOFFCORR step subtracts the zeroth read from all readouts in the exposure, including the zeroth read itself, leaving a zeroth-read image that is identically zero for the remainder of processing. The zeroth-read image is propagated through the remaining processing steps and included in the output products, so that a complete history of error estimates and data quality (DQ) flags is preserved.
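The operation itself is a one-line array subtraction; the tiny stack below is a hypothetical three-read, one-pixel example.

```python
import numpy as np

# Sketch of the zeroth-read subtraction: the zeroth read is subtracted from
# every readout in the stack, including itself, so the zeroth-read image
# becomes identically zero. Values are hypothetical.
stack = np.array([[[5.0]], [[15.0]], [[25.0]]])  # reads 0, 1, 2 of one pixel
stack = stack - stack[0]                         # subtract the zeroth read
```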
Header Switch: NOISCORR (not listed explicitly in the image header; see text)
This step computes an estimate of the errors associated with the raw science data, based on a noise model for the detector. The NOISCORR keyword is not user-accessible and is always set to PERFORM. Currently the noise model is a simple combination of detector read noise (RN) and Poisson noise in the signal, such that:

ERR = sqrt(RN² + counts × gain) / gain

where the read noise is in units of electrons, gain is the analog-to-digital conversion gain factor (in electrons DN⁻¹), and counts is the signal in a science image pixel in units of DN. The detector read noise and gain are read from the CCDTAB reference file, using separate values for the particular amplifier quadrant with which each pixel is associated.
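A worked example of the noise model, with illustrative numbers (these are not actual CCDTAB values): the Poisson variance of the signal in electrons equals counts × gain, which is added in quadrature to the read-noise variance and converted back to DN.

```python
import math

# Worked example of the read-noise + Poisson noise model described above.
# Numbers are illustrative, not actual CCDTAB values.
read_noise = 20.0   # electrons
gain = 2.5          # electrons per DN
counts = 1000.0     # signal in DN

# Poisson variance in electrons² equals the signal in electrons (counts*gain);
# add the read-noise variance in quadrature and convert back to DN.
err_dn = math.sqrt(read_noise**2 + counts * gain) / gain
```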
Throughout the remaining calibration steps the ERR image is processed in lock-step with the science (SCI) image and updated as appropriate. Errors are propagated through combinations in quadrature. The ERR array for the final calibrated flt image is populated by the CRCORR step, based on the calculated uncertainty of the count-rate fit to the MultiAccum samples (see the CRCORR description below). The CCDTAB reference file used in this step is selected based on the value of the DETECTOR keyword.
The correction has the form Fc = (1 + c1 + c2·F + c3·F² + c4·F³) × F, where c1, c2, c3, and c4 are the correction coefficients, F is the uncorrected flux in DN, and Fc is the corrected flux. The current form of the correction uses a third-order polynomial, but the algorithm can handle an arbitrary number of coefficients. The number of coefficients and error terms are given by the values of the NCOEFF keywords in the header of the NLINFILE.
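A sketch of evaluating such a polynomial correction, assuming the form Fc = (1 + c1 + c2·F + c3·F² + c4·F³) × F; the coefficient values below are made up for illustration and are not NLINFILE values.

```python
import numpy as np

# Sketch of a third-order linearity correction polynomial, assuming the form
# Fc = (1 + c1 + c2*F + c3*F**2 + c4*F**3) * F. Coefficients are made up.
def correct_nonlinearity(flux_dn, coeffs):
    """coeffs = (c1, c2, c3, c4); flux_dn is the uncorrected flux in DN."""
    c1, c2, c3, c4 = coeffs
    F = np.asarray(flux_dn, dtype=float)
    return (1.0 + c1 + c2 * F + c3 * F**2 + c4 * F**3) * F

# With all coefficients zero the flux is unchanged; a small positive c2
# boosts bright pixels more than faint ones, as a real correction would.
unchanged = correct_nonlinearity(1000.0, (0.0, 0.0, 0.0, 0.0))
boosted = correct_nonlinearity(1000.0, (0.0, 1e-6, 0.0, 0.0))
```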
The signal in the zeroth read is temporarily added back to the zeroth-read image of the science data before the linearity correction is applied and before saturation is judged; once the correction has been applied, the signal is removed again. This occurs only if the ZSIGCORR step is set to PERFORM. Saturation values for each pixel are stored in the NODE extension of the NLINFILE. After each group is corrected, the routine also sets saturation flags in the next group for those pixels that are flagged as saturated in the current group. This is necessary because the SCI image value of a saturated pixel will sometimes start to decrease again in the reads following saturation, which means it could go unflagged by normal checking techniques. The SAMP arrays are not modified during this step. The NLINFILE reference file is selected based on the value of the DETECTOR keyword.
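The forward propagation of saturation flags can be sketched as below. This is an illustrative stand-in, not the calwf3 routine; only the 256 flag value and the propagation rule come from the text.

```python
import numpy as np

# Sketch of propagating saturation flags forward through the MultiAccum
# stack: once a pixel is flagged saturated in one read, it stays flagged in
# all later reads, even if its measured value drops back down.
SATURATED = 256

def propagate_saturation(dq_stack):
    """dq_stack: (nreads, ny, nx) int array of DQ flags, in time order."""
    dq = dq_stack.copy()
    for read in range(1, dq.shape[0]):
        sat_before = (dq[read - 1] & SATURATED) != 0
        dq[read][sat_before] |= SATURATED
    return dq

dq = np.zeros((4, 1, 1), dtype=np.int16)
dq[1, 0, 0] = SATURATED      # pixel saturates in the second read
out = propagate_saturation(dq)
```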
The DARKCORR step subtracts the detector dark current from the science data. The reference file listed under the DARKFILE header keyword is used to subtract the dark current from each sample. The DARKFILE reference file must have the same values for the DETECTOR, SAMP_SEQ, and SUBTYPE keywords as the science image. Due to potential non-linearities in some of the signal components, such as reset-related effects in the first one or two reads of an exposure, the dark current subtraction is not applied by simply scaling a generic reference dark image by the exposure time and subtracting it. Instead, a library of dark current images is maintained that includes darks taken in each of the available predefined MULTIACCUM sample sequences, as well as in the available subarray readout modes. The dark reference file is subtracted read-by-read from the stack of science image readouts, so that there is an exact match in the timings and other characteristics of the dark image and the science image. The subtraction does not include the reference pixels. The ERR and DQ arrays from the reference dark file are combined with the ERR and DQ arrays from the science image, but the SAMP arrays are unchanged. The mean of the dark image is saved to the MEANDARK keyword in the output science image header.
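A sketch of the read-by-read subtraction with quadrature error propagation and DQ combination, using tiny hypothetical stacks (the function name and array values are illustrative, not the calwf3 code):

```python
import numpy as np

# Sketch of the read-by-read dark subtraction: each science readout has the
# matching dark readout subtracted, errors combine in quadrature, and DQ
# flags are OR-ed together. Tiny hypothetical 3-read, 2x2-pixel stacks.
def subtract_dark(sci, err, dq, dark_sci, dark_err, dark_dq):
    out_sci = sci - dark_sci
    out_err = np.sqrt(err**2 + dark_err**2)   # quadrature propagation
    out_dq = dq | dark_dq                     # combine DQ flags
    meandark = dark_sci.mean()                # saved to the MEANDARK keyword
    return out_sci, out_err, out_dq, meandark

sci = np.full((3, 2, 2), 50.0)
err = np.full((3, 2, 2), 3.0)
dq = np.zeros((3, 2, 2), dtype=np.int16)
dark_sci = np.full((3, 2, 2), 10.0)
dark_err = np.full((3, 2, 2), 4.0)
dark_dq = np.zeros((3, 2, 2), dtype=np.int16)
out_sci, out_err, out_dq, meandark = subtract_dark(
    sci, err, dq, dark_sci, dark_err, dark_dq)
```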
PHOTFLAM: the inverse sensitivity in units of erg cm⁻² Å⁻¹ electron⁻¹
PHOTFNU: the inverse sensitivity in units of Jy sec electron⁻¹
PHOTPLAM: the bandpass pivot wavelength in Å
PHOTBW: the bandpass RMS width in Å
This step converts the science data from a time-integrated signal to a signal rate by dividing the SCI arrays for each readout by the TIME array. No reference file is needed. The BUNIT keyword in the output data header reflects the appropriate data units. This step is skipped if the BUNIT value is already COUNTS/S. The flat-fielding process (performed if FLATCORR is set to PERFORM) further changes the BUNIT by multiplying the image by the gain; the final BUNIT value therefore depends on the values of both UNITCORR and FLATCORR, as illustrated in the accompanying table.
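The conversion can be sketched as below; the function name and array values are hypothetical, and only the SCI/TIME division, the BUNIT update, and the skip condition come from the text.

```python
import numpy as np

# Sketch of the count-to-count-rate conversion: divide each readout's SCI
# array by its integration time from the TIME array and update BUNIT.
def convert_to_rate(sci_stack, time_stack, bunit):
    if bunit == "COUNTS/S":            # already a rate: step is skipped
        return sci_stack, bunit
    rate = np.where(time_stack > 0, sci_stack / time_stack, 0.0)
    return rate, "COUNTS/S"

sci = np.array([[100.0]], ndmin=3)     # one readout, one pixel
time = np.array([[25.0]], ndmin=3)
rate, bunit = convert_to_rate(sci, time, "COUNTS")
same_rate, _ = convert_to_rate(rate, time, "COUNTS/S")  # skipped: unchanged
```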
The CRCORR step combines the data from all readouts into a single image and, in the process, identifies and flags pixels suspected of containing cosmic-ray (CR) hits. The method is described extensively in Fixsen et al. (2000). The data from all readouts are analyzed pixel-by-pixel, iteratively computing a linear fit to the accumulating counts-versus-exposure-time relation. Samples flagged as bad in the DQ arrays, such as when saturation occurs midway through the exposure, are rejected from the fitting process. CR hits are identified by searching for outliers from the fit results. The rejection threshold is set by the value in the CRSIGMAS column of the cosmic-ray rejection parameters reference table, CRREJTAB, and has a default value of 4. When a CR hit is detected, a linear fit is then performed independently for the sets of readouts before and after the hit; if a CR hit is identified as having occurred during a sample, the value measured for that sample is included in the ‘after’ ramp segment. The fitting results are then checked again for outliers, and the process is iterated until no new CRs are detected.
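The idea can be illustrated with a toy ramp fit. This is a simplified stand-in for the Fixsen et al. (2000) algorithm, not the calwf3 implementation: it handles a single hit without iteration, averages the segment slopes instead of optimally weighting them, and the noise and threshold values are made up.

```python
import numpy as np

# Toy up-the-ramp fit with cosmic-ray rejection: fit a line to counts vs.
# time, find the sample-to-sample jump most inconsistent with that slope,
# and if it exceeds the threshold, refit the segments before and after the
# jump independently. Simplified illustration only.
def fit_ramp(times, counts, sigma=4.0, noise=1.0):
    slope = np.polyfit(times, counts, 1)[0]
    jumps = np.diff(counts) - slope * np.diff(times)  # residual jump per interval
    hit = int(np.argmax(np.abs(jumps)))
    if np.abs(jumps[hit]) > sigma * noise:            # CR detected in interval `hit`
        segments = [(times[: hit + 1], counts[: hit + 1]),
                    (times[hit + 1:], counts[hit + 1:])]
        slopes = [np.polyfit(t, c, 1)[0] for t, c in segments if len(t) >= 2]
        return float(np.mean(slopes)), hit + 1        # sample where the hit lands
    return float(slope), None

t = np.arange(6, dtype=float)    # read times in seconds
c = 10.0 * t                     # a clean 10 counts/s ramp...
c[3:] += 50.0                    # ...with a cosmic ray landing in sample 3
rate, cr_sample = fit_ramp(t, c)
```

Both segments of the corrupted ramp have the true slope, so the recovered rate is unaffected by the hit.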
So-called “negative CR hits,” in which the signal jumps downward between reads, are also identified in the CRCORR step and flagged with the SPIKE DQ flag (value = 1024). Appendix B of WFC3 ISR 2009-40 gives a possible explanation for a sub-class of such events: they are normal cosmic rays that traverse the detector, but instead of hitting the photo-sensitive HgCdTe pixel bulk, they hit other parts of the pixel (e.g. the electronics) that are sensitive to the charged particles. These events are clearly identifiable as CRs because a physical trail is visible in the raw images: a trail that goes from negative (where the CR passes through the electronics), to undetectable (where it passes through layers that are not affected), to positive (where it releases charge in the active HgCdTe region of the pixel), as shown in the accompanying figure. Other negative spikes are sometimes observed in isolated pixels and are attributed to “burst noise” (also called popcorn noise or random telegraph signal).
The basic rule of thumb is that, in order for a DQ value to propagate into the flt file, it needs to be present in all the reads of the ima file. The 8192 flag does not get propagated because it records where during the ramp the cosmic ray appeared; by the time a user looks at the flt file, calwf3 has already accounted for the effects of the cosmic ray. If a user really needs the information about those cosmic rays, the ima files are available and contain the complete record of when and where each cosmic ray hit the detector.
A similar propagation scheme occurs for the saturation flag (DQ = 256). If, e.g., a pixel is saturated in the last two reads of a ramp, then those two reads are flagged with 256 in the ima file, and calwf3
ignores them during line-fitting. The DQ value put into the flt file is 0 because calwf3
has already dealt with the saturation and the effects are not in the flt. If saturation occurs in the first read of a ramp, the SCI extension of the flt file for that pixel contains an estimate of the flux equal to the value in the input zeroth read image, but the DQ extension of the flt does not get a 256 value added to it. If the zeroth read is also saturated, the flt file still contains the same flux estimate as in the first-read saturation case, but in this case the DQ flag 256 gets carried over to the output flt DQ extension.
For pixels where calwf3
finds 4 or more CRs up the ramp, it flags the pixel with the UNSTABLE flag value = 32, which does propagate to the flt. In that case, the thinking is that four large signal jumps for a given pixel in a single ramp are a warning sign about the behavior of that pixel, and its measurements should not be trusted. DQ values from any sample are carried through to the flt file if a pixel has no good samples.
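The basic rule of thumb above is, in effect, a bitwise AND across the reads: a flag survives into the flt only if it is set in every read. The sketch below illustrates just that rule with hypothetical DQ values; it does not cover the special cases described above (saturation handling, the UNSTABLE flag, or pixels with no good samples).

```python
import numpy as np

# Sketch of the rule of thumb: a DQ flag propagates from the ima reads into
# the flt only when it is present in every read, i.e. a bitwise AND across
# the stack. Special-case flags follow their own rules (see text).
def propagate_dq(dq_stack):
    """dq_stack: (nreads, ny, nx) DQ arrays; returns the flt-style DQ."""
    out = dq_stack[0].copy()
    for dq in dq_stack[1:]:
        out &= dq                  # keep only flags present in all reads
    return out

CR_HIT, BAD = 8192, 4
dq = np.zeros((3, 1, 1), dtype=np.int16)
dq[:, 0, 0] = [BAD, BAD | CR_HIT, BAD]   # 8192 set in just one read
flt_dq = propagate_dq(dq)                # 8192 dropped, 4 kept
```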
The FLATCORR step corrects for pixel-to-pixel and large-scale sensitivity variations across the detector by dividing the science images by one or more flat-field images. A combined flat is created within calwf3 using up to three flat-field reference files: the pixel-to-pixel flat (PFLTFILE), the low-order flat (LFLTFILE), and the delta flat (DFLTFILE). This step also multiplies the science data by the detector gain, using the mean gain from all the amplifiers; the calibrated data will therefore be in units of electrons per second (or electrons if UNITCORR is not performed). The PFLTFILE is a pixel-to-pixel flat-field correction file containing the small-scale flat-field variations, and it is always used in the calibration pipeline, while the other two flats are optional. The LFLTFILE is a low-order flat that corrects for any large-scale sensitivity variations across the detector. This file can be stored as a binned image, which is expanded when being applied by calwf3. Finally, the DFLTFILE is a delta flat containing any needed changes to the small-scale PFLTFILE. If the LFLTFILE and DFLTFILE are not specified in the SCI header, only the PFLTFILE is used for the flat-field correction. If two or more reference files are specified, they are read in and multiplied together to form a combined flat-field correction image.
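Building the combined flat can be sketched as below. This is a hypothetical illustration: the expansion of the binned low-order flat here is simple block replication (via `np.kron`), whereas the actual pipeline expansion method may differ, and the binning factor and flat values are made up.

```python
import numpy as np

# Sketch of building the combined flat: multiply whichever of the
# pixel-to-pixel, low-order, and delta flats are present, expanding a binned
# low-order flat to full size first. Expansion by block replication here is
# an assumption; binning factors and values are hypothetical.
def combine_flats(pflt, lflt=None, dflt=None):
    flat = pflt.copy()
    if lflt is not None:
        ry = pflt.shape[0] // lflt.shape[0]
        rx = pflt.shape[1] // lflt.shape[1]
        flat *= np.kron(lflt, np.ones((ry, rx)))  # expand binned low-order flat
    if dflt is not None:
        flat *= dflt
    return flat

pflt = np.full((4, 4), 1.0)
lflt = np.array([[0.9, 1.1], [1.1, 0.9]])  # low-order flat stored binned 2x2
combined = combine_flats(pflt, lflt)
```

The science image would then be divided by `combined`; with no LFLTFILE or DFLTFILE supplied, the function simply returns the pixel-to-pixel flat.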
All flat-field reference images are selected based on the DETECTOR and FILTER used for the observation. A subarray science image uses the same reference file(s) as a full-size image; calwf3 extracts the appropriate region from the reference file(s) and applies it to the subarray input image.
This step computes several useful image statistics using the “good” pixels, i.e. those with a DQ value of 0, and updates several keywords in the image header. This operation is performed for every readout in the calibrated MultiAccum stack, as well as for the final (CRCORR-produced) calibrated image. The updated keywords are the minimum, mean, and maximum good-pixel values (GOODMIN, GOODMEAN, and GOODMAX, respectively), as well as the minimum, mean, and maximum signal-to-noise ratios, i.e. the ratio of the SCI and ERR pixel values (SNRMIN, SNRMEAN, and SNRMAX, respectively). The number of good pixels, NGOODPIX, is also recorded. All these quantities are updated in the SCI image headers. The minimum, mean, and maximum statistics are also computed for the ERR arrays.
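The good-pixel statistics can be sketched as follows; the arrays are hypothetical, and the dictionary keys simply mirror the GOODMIN/SNRMIN keyword naming pattern described above.

```python
import numpy as np

# Sketch of the good-pixel statistics: restrict to DQ == 0 pixels and
# compute the keyword values described above. Arrays are hypothetical.
def good_pixel_stats(sci, err, dq):
    good = dq == 0
    s, e = sci[good], err[good]
    snr = s / e
    return {
        "NGOODPIX": int(good.sum()),
        "GOODMIN": float(s.min()),
        "GOODMEAN": float(s.mean()),
        "GOODMAX": float(s.max()),
        "SNRMIN": float(snr.min()),
        "SNRMEAN": float(snr.mean()),
        "SNRMAX": float(snr.max()),
    }

sci = np.array([[10.0, 20.0], [30.0, 999.0]])
err = np.array([[ 2.0,  4.0], [ 5.0,   1.0]])
dq  = np.array([[0, 0], [0, 4]], dtype=np.int16)  # one flagged pixel excluded
stats = good_pixel_stats(sci, err, dq)
```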
Associations with more than one member, which have been associated using REPEAT-OBS, are combined using wf3rej (see the wf3rej documentation for more details). CR-SPLIT is not used for the IR channel. The task uses the same statistical detection algorithm developed for ACS (acsrej), STIS (ocrreject), and WFPC2 (crrej), providing a well-tested and robust procedure. For all associations (including dithered observations), the DRZ products are created by AstroDrizzle, which performs cosmic-ray detection (in addition to wf3rej, for REPEAT-OBS observations) and corrects for geometric distortion.