Some of the major challenges for achieving high quality NICMOS data
reduction arise from difficulties in removing additive components of the instrumental signature that are present in a raw NICMOS image. For the purpose of discussion here, we will divide these additive components into two categories, bias and dark, according to whether or not the signal is noiseless and purely electronic in origin (bias), or noisy and arising from thermal or luminous sources (dark). In practice, the NICMOS bias and dark signals each consist of several different components which exhibit a range of different behaviors.
In the past, the reference files used for processing NICMOS data had
the dark and bias components combined into a single dark image that was used in the DARKCORR step of the calnica pipeline processing. NICMOS dark images (really dark + bias) are highly dependent on the readout history of the array since it was last reset, and therefore cannot simply be rescaled to the exposure time of the science data (as is done with most conventional CCD data). Each science file must be calibrated with a dark frame of equal exposure time and number of readouts.
Such "composite" dark calibration reference files for all
MULTIACCUM readout sequences have been constructed using as basic data the on-orbit darks obtained from calibration (dark monitor) programs. The filenames of the "composite" darks are written to the DARKFILE header keyword. These composite darks have now been superseded by newly created temperature-dependent dark calibration reference files (see Section 4.2.3
for more details). The file names of these "*_tdd" files are written to the TEMPFILE header keyword. If both the TEMPFILE and DARKFILE keywords are populated, the former will be used.
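The TEMPFILE-over-DARKFILE precedence can be expressed as a short selection routine. The following is an illustrative Python sketch, not the actual calnica implementation; the header dictionary and file names are hypothetical:

```python
def select_dark_reference(header):
    """Prefer the temperature-dependent dark (TEMPFILE) over the
    older composite dark (DARKFILE), as calnica does when both
    keywords are populated."""
    tempfile = header.get("TEMPFILE", "").strip()
    if tempfile and tempfile.upper() != "N/A":
        return tempfile
    darkfile = header.get("DARKFILE", "").strip()
    return darkfile or None

# Hypothetical header values for illustration only:
hdr = {"TEMPFILE": "example_tdd.fits", "DARKFILE": "example_drk.fits"}
chosen = select_dark_reference(hdr)   # -> "example_tdd.fits"
```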
Unfortunately, some components of the NICMOS bias and dark have
turned out to be unstable or unpredictable, making it difficult or impossible to remove them using the standard reference files. In order to do a good job removing additive dark and bias signatures, it is important to understand their origin and behavior. Here we describe the various components of NICMOS biases and darks in some detail, highlighting their stability or lack thereof, and describing (briefly) how they are incorporated into the standard STScI dark reference images. In Section 4.2.4
below we describe methods and tools for measuring and removing residual dark and bias artifacts from NICMOS images.
The true, thermal dark current is the detector current when no external
signal is present. This component grows linearly with integration time:

    signal(x,y) = dc(x,y) × t

where signal(x,y) is the observed signal in a given readout, t is the time since reset, and dc(x,y) is the dark current count rate. At the operating temperatures used for NICMOS in Cycle 7, the mean dark current for all three cameras was of order 0.1 e-/sec. In Cycle 11, when operating at warmer temperatures with the NCS, the dark current has some two-dimensional structure, and is roughly a factor of two higher in the corners than at the center. At the higher temperature, the dark current has a “salty” appearance due to a large number of high-countrate pixels. In addition, particularly in NIC3, there are large areas of higher dark current across the chip (see Figure 4.1).
Each quadrant of a NICMOS detector has its own readout amplifier,
which is situated close to an exterior corner of the detector. When a readout is made, the amplifier emits radiation which is recorded by the detector, an effect known as amplifier glow (Figure 4.2
). This signal is largest close to the corners of the detector where the amplifiers are situated, and falls off rapidly towards the center of the array. The signal is present only while the readout amplifiers are powered during a readout, but is repeated for each readout (e.g., in a MULTIACCUM sequence or an ACCUM with multiple initial and final reads). Typically the extra signal is about 20-30 DN at the corners of the detector and 2-3 DN at the center for each readout. The signal is highly repeatable and linearly dependent on the number of reads; it is also temperature dependent. The amplifier glow further depends on the length of time for which the amplifiers are switched on, which is slightly shorter (by ~14%) in ACCUM mode.
The amplifier glow is a real, detected signal and is subject to photon
statistics, so it is a source of noise in NICMOS exposures. In the processing pipeline and calibration reference files, it is considered to be a component of the dark signal, although its physical origin and temporal dependence is quite different than that of the thermal dark current. Thanks to the repeatability of the signal, images calibrated with the appropriate dark frames (same MULTIACCUM sequence or same exposure time for ACCUM images) will have the amplifier glow removed in the darkcorr step in calnica
. This component grows with the number of readouts:

    glow(x,y) = amp(x,y,T) × NR

where glow(x,y) is the cumulative signal due to the glow in a sequence, amp(x,y,T) is the amplifier glow signal per readout (a function of the pixel location (x,y), the amp-on time, and the temperature T), and NR is the total number of readouts of the array since the last reset. In the corners of a full, 26-readout MULTIACCUM sequence there will be of order 500-800 DN due to amplifier glow, as well as the associated Poisson noise from this signal. This nominal Poisson noise is propagated into the ERR array of the NICMOS calibrated images by calnica. Note that the temperature dependence of the amplifier glow is corrected only when using the TEMPFILE dark reference file.
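The accumulation of glow and its Poisson noise can be sketched numerically. This is a toy illustration; the per-readout glow values below are invented, chosen only to match the quoted 20-30 DN corner and 2-3 DN center levels:

```python
import numpy as np

def ampglow(amp_per_read, n_reads):
    """Cumulative amplifier glow after n_reads readouts:
    glow(x,y) = amp(x,y,T) * NR, plus its Poisson noise."""
    glow = amp_per_read * n_reads
    # The glow is real detected signal, so its shot noise enters
    # the error budget (the ERR array propagated by calnica).
    noise = np.sqrt(glow)
    return glow, noise

# Illustrative per-readout glow image (DN): bright corner, faint center
amp = np.array([[30.0, 10.0],
                [10.0,  2.5]])
glow, noise = ampglow(amp, n_reads=26)
# Corner pixel: 26 * 30 = 780 DN, consistent with the 500-800 DN quoted
```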
There are three readily identifiable (but not necessarily physically
distinct) components of the NICMOS bias: the detector reset level, shading, and variable quadrant bias or “pedestal.”
First, a net DC bias with a large, negative value (of order -21000 ADU)
is introduced when the detector is reset. This bias is different in each readout quadrant, but is essentially a constant within each quadrant, plus a fixed pattern representing the inherent pixel-to-pixel reset levels. In standard MULTIACCUM processing, this is removed by subtracting the so-called “zeroth readout”
of the image from all subsequent readouts, e.g. in the ZOFFCORR step of calnica
. It is therefore not a component of any calibration reference file, but is removed using the zeroth readout of the science image itself.
Shading is a noiseless signal gradient, a pixel-dependent bias, which changes in the direction of the pixel clocking during a readout. This bias change is caused by a temperature dependence of the readout amplifier bias. The amplifier temperature changes rapidly during the readout of the array. The result is a bias which changes considerably between the time the first and last pixels of a quadrant are read. Visually, this appears as a ripple and a signal gradient across a given quadrant of an uncorrected image (Figure 4.3
). The amplitude of the shading can be as large as several hundred electrons across a quadrant in NIC2, with smaller amplitudes in NIC1 and NIC3. The shading exhibits all the characteristics of a bias change, including lack of noise.
The shading signal is not the same for each readout, but depends
primarily on the time interval since the last readout (not reset) of a pixel. For each readout in a NICMOS MULTIACCUM sequence, this time interval is recorded in the FITS header of each imset by the keyword deltatime. If the time Δt
between reads remains constant, the bias level introduced by the shading remains constant, but if Δt
varies (e.g. logarithmically, as in some MULTIACCUM sample sequences), then the bias level changes with each successive read, and thus the overall shading pattern evolves throughout the sequence.
In addition to the DELTATIM dependence, the shading amplitude and
shape also depend on the mean temperature of the detectors, which slowly warmed as the cryogen sublimated over Cycle 7, and which has also been shown to vary since the installation of the NCS. See Section 4.2.3
for a discussion of dark reference files that correct for this temperature dependence. Subtle temperature changes during a MULTIACCUM exposure can also lead to shading changes. A sequence with many long DELTATIMEs (such as a SPARS256) can cool between the first and last reads, resulting in a "DELTATIME=256s" shading that is different in the 25th read than it was in the 4th read. Numerically, the shading is of the form:

    shading = s(x, y, Δt, T)

where the shading s is a function of the pixel location (x,y), the time since last read (DELTATIME) Δt, and the detector temperature T.
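Putting the three additive components together, the signal model for one readout sequence can be sketched as below. This is an illustrative sketch, not pipeline code; the function and variable names are hypothetical, and the shading lookup stands in for the per-DELTATIME shading images stored in the reference files:

```python
import numpy as np

def synthetic_dark(dc, amp_per_read, shading, delta_times):
    """Model the additive signal for each readout of a sequence:
    thermal dark dc(x,y)*t, amp glow amp(x,y,T)*NR, and the
    shading s(x,y,dt,T) looked up by DELTATIME.

    dc           : dark-current rate image (e-/s)
    amp_per_read : amplifier-glow image per readout (e-)
    shading      : dict mapping DELTATIME (s) -> shading bias image (e-)
    delta_times  : time since the previous read for each readout (s)
    """
    models, t = [], 0.0
    for n_read, dt in enumerate(delta_times, start=1):
        t += dt                                  # cumulative time since reset
        model = dc * t                           # linear thermal dark
        model = model + amp_per_read * n_read    # cumulative amp glow
        model = model + shading[dt]              # shading for this interval
        models.append(model)
    return models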
In addition to the net quadrant bias introduced at array reset, there is
some additional offset which is time-variable and, to some degree, stochastic. This variable quadrant bias has been described as the “pedestal effect”
in many discussions of NICMOS data, although we note here that the term “pedestal” has also been applied to other aspects of NICMOS array behavior. The variable quadrant bias is usually constant over a given array quadrant, but different from one quadrant to another. Its amplitude varies from readout to readout, sometimes drifting gradually, but occasionally with sharp changes from one readout to another (not always seen in all quadrants simultaneously).
On 22 August 1997, a modification was made to the NICMOS flight
software which reduced but did not eliminate the pedestal effect. Data taken before that date is, in general, severely affected by variable bias levels, and requires careful handling in order to achieve high quality data reductions. However, essentially all NICMOS data, even after the flight software change, are impacted by pedestal to one degree or another.
The variable quadrant bias has two major effects on NICMOS
MULTIACCUM data. The first (and generally less important) effect is that the signal in a given pixel, which should normally accumulate linearly with time over the course of an integration (after other sources of bias and dark current are removed, and when intrinsic array non-linearity is corrected), can instead vary irregularly as the bias level in a quadrant changes “underneath” the astronomical signal from source + background. The CRIDCALC step of the calnica
pipeline fits a linear ramp (counts vs. time) to the accumulating signal in the MULTIACCUM to derive the source + background count rate, with a rejection procedure designed to eliminate transient cosmic ray events (see Section 3.3.2
and Section 4.9
). A varying bias level can improperly trigger the CRIDCALC cosmic ray rejection or reduce its sensitivity to real cosmic ray events.
Secondly, the net bias change over the course of the exposure results in
an additive offset (different in each quadrant) when the MULTIACCUM sequence is reduced to a single count rate image (the *_cal.fits
file) by CRIDCALC. When the image is flat-fielded, this undesired, additive offset is then modulated by the flat-field, and appears as an inverse flat-field pattern in the final, reduced data. For illustration, consider an image where the incident astronomical flux (sources plus sky background) is given by S(x,y)
. This is modulated by the spatially dependent quantum efficiency, or flat field, Q(x,y)
. To this is added a quadrant bias offset Bq
, which may be different in each quadrant. Here we neglect all other sources of bias and dark current, assuming that they can be adequately removed by standard processing. The recorded raw image is then:

    I(x,y) = S(x,y) × Q(x,y) + Bq
If this image were then divided by the flat field (or, to follow the STScI
pipeline convention, multiplied by the inverse flat field), the result would be:
I(x,y) × Q^-1(x,y) = S(x,y) + Bq × Q^-1(x,y).
Thus, the desired image S(x,y)
is recovered, but an additive, inverse flat-field pattern is also present, with an amplitude that may be different for each quadrant. These inverse flat patterns, along with discontinuities between quadrants, are the typical hallmarks of a pedestal problem in processed NICMOS data (see the example in Figure 4.4. Left: image processed normally with calnica; note the quadrant intensity offsets, and also the residual flat-field pattern imprinted on the data, due to the unremoved bias being multiplied by the inverse flat. Right: image after processing through pedsky.)
It is important to note here that a residual flat-fielding pattern may also
arise from reasons completely unrelated to pedestal. In particular, the NICMOS flat fields have a strong color dependence, and the spectrum of the background (especially at longer wavelengths where thermal emission dominates) does not necessarily match that of the lamps used to create the flat fields. Residual patterns may therefore sometimes result from division by standard internal lamp flats, again especially at longer wavelengths in the medium and broad band filters. We return to this point in Section 4.2.4
in the discussion of the pedsky
software routine and again in Section 4.6.2
. Unremoved shading also introduces a bias offset (but a positionally dependent one) which, when multiplied through by the inverse flat field, will create a pedestal-like effect.
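The imprinting of an inverse flat-field pattern by an unremoved bias is easy to demonstrate numerically. This is a toy illustration of the algebra above, with arbitrary values:

```python
import numpy as np

rng = np.random.default_rng(42)
S = 10.0                             # uniform sky signal (counts)
Q = rng.uniform(0.8, 1.2, (4, 4))    # flat field (relative QE)
Bq = 5.0                             # unremoved quadrant bias offset

raw = S * Q + Bq                     # recorded image: I = S*Q + Bq
flattened = raw / Q                  # = S + Bq * Q**-1
residual = flattened - S
# The residual is the inverse flat scaled by the bias, Bq / Q:
# exactly the pedestal signature seen in reduced NICMOS data.
```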
The unpredictable nature of this variable quadrant bias means that it is
not possible to remove it with standard reference frames. (In passing, we note that it also considerably complicates the task of generating “clean” calibration reference files of any sort in the first place.) The user must attempt to determine the bias level from the data themselves and subtract it before flat fielding the data. The difficulty, then, is determining the bias level independent of the sky + source signal present in the data. No one method has been developed which does this equally well for all types of NICMOS data. The methods which have been tried depend on the nature of the target being observed, e.g. sparse fields consisting mostly of blank sky are treated differently from images containing large, extended objects or crowded fields. We discuss pedestal removal techniques in Section 4.2.4.
Occasionally, spatial bias jumps (sometimes called bands
) are seen in NICMOS images (Figure 4.5
). These are apparently caused by a bias change when the amplifiers of one NICMOS camera are being used at the same time as another is reading out. They are very commonly seen in the last readout of a MULTIACCUM sequence, but may occasionally occur in intermediate readouts as well. A flight software change was made prior to Cycle 11 to help mitigate this problem and as a result it is rarely seen in data taken after January 1, 2000 (Cycle 11 and onwards).
A new set of temperature-dependent dark reference files have been
created that incorporate all aspects of the dark current. These files consist of several extensions, each containing one component of the total dark current: separate extensions hold the linear dark component, the amp glow, and the shading component for each of 12 different DELTATIMEs. The temperature-dependent, comprehensive dark current reference file names are recorded in the science header keyword TEMPFILE. For further updates please refer to the NICMOS Web page at:
calnica versions 4.2 or later use these files and are also backwards compatible with older non-temperature-dependent reference files. See the calnica
helpfile (DARKCORR section) in STSDAS
for details. Users can recalibrate their data using calnica
and the latest reference files by using either the STSDAS
package in IRAF
, or by requesting the calibrated data products from the HST data archive. The OTFR will use the appropriate dark reference files as the data is extracted from the archive.
Historically, the mounting cup temperatures were used to represent the
best known temperature of the detectors. This temperature is stored in the ndwtmp11 (for NIC1 and NIC2) and the ndwtmp13 (for NIC3) keywords in the *_spt.fits files. The use of the mounting cup temperature for determining appropriate dark components has been superseded by the use of the bias-derived temperature. See Section 3.3.1.
In the center of the NICMOS arrays, where the effects of shading and
amplifier glow are smallest, the uncertainties in the dark reference files are dominated by the readout noise, and to some extent the higher dark current at the NCS operating temperature. The older STScI synthetic darks were typically based on an average of about 15 measurements per readout sample per pixel. Therefore the estimated pixel-to-pixel uncertainties in the dark reference files are of the order of 1 DN (about 5 electrons). In the corners of the arrays the amplifier glow is the largest source of noise, increasing as a function of the number of readouts. For the largest number of readouts (26) the estimated uncertainty is of the order of 5 DN (about 27 electrons). It is important to note that the effect of these "random uncertainties" in the calibration files on science data is not actually "random," however. The pixel-to-pixel noise pattern in the dark reference files is systematically imprinted on all science images from which they are subtracted. This can introduce a sort of "pattern noise" in the images, which is apparently random but actually affects the pixel-to-pixel statistics of reduced data in a systematic way. In general, this is not a limiting source of noise in NICMOS data, but it can set a limit to the pixel-to-pixel noise achievable with images reduced by calnica
using the standard reference files.
In the newer, temperature-dependent dark reference files (TEMPFILE),
a much larger number of dark exposures has been averaged to produce the final product, thus reducing this pixel-to-pixel component of the dark frame "noise" to a lower level.
The dark current pedestal adds some uncertainty to the darks, since
on-orbit dark frames are used to generate the calibration reference files. In essence, the pedestal makes it difficult to establish the absolute DC level of the dark current. However, every effort was made to minimize the effects of the pedestal when making the reference files currently in the database. Persistence (charge trapping, see Section 4.7
) from the amplifier glow following long periods of auto flushing when the detectors are not exposing (such as during Earth occultations) can often be seen faintly in some exposures. This is most often seen as residual "brighter" corners in the first exposure of an orbit if that exposure is long enough to detect it. Usually by the second half of the orbit this has decayed enough that it is not seen. See Section 4.2.4
below for information on mitigating this effect.
All dark images taken with NICMOS are available through the HST
Archive. However, observers wishing to use on-orbit dark data obtained at conditions as similar as possible to the science data may retrieve darks for the camera of interest with the appropriate MULTIACCUM sequence and the correct number of readouts (NSAMP), process them by subtracting the zeroth readout (ZOFFCORR), and combine them on a readout-by-readout basis with some suitable rejection scheme to eliminate cosmic rays (e.g. median or sigma clipping).
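A sketch of such a readout-by-readout combination, using a robust median/MAD clip to reject cosmic rays, might look like the following. This is an illustrative recipe, not an STScI-supplied tool; it assumes the darks are already zeroth-read subtracted and stacked into a single array:

```python
import numpy as np

def combine_darks(dark_stack, nsigma=4.0):
    """Combine on-orbit darks readout-by-readout.

    dark_stack : array (n_exposures, n_readouts, ny, nx), already
                 zeroth-read subtracted (ZOFFCORR).
    Returns the clipped mean dark, shape (n_readouts, ny, nx).
    """
    stack = np.asarray(dark_stack, dtype=float)
    med = np.median(stack, axis=0)
    # MAD-based scatter estimate: robust against a cosmic-ray hit
    # contaminating the per-pixel statistics across exposures.
    mad = 1.4826 * np.median(np.abs(stack - med), axis=0)
    ok = np.abs(stack - med) <= nsigma * np.maximum(mad, 1e-12)
    clipped = np.where(ok, stack, np.nan)
    return np.nanmean(clipped, axis=0)
```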
In practice there are several difficulties when doing this. First, on-orbit
darks are affected by pedestal effects. Care must be taken when averaging frames, particularly with sigma rejection schemes, since the DC bias level of a given quadrant in a given readout may vary considerably from image to image. The average dark image will still have some mean pedestal value in it. Second, one should be careful to examine all dark frames used for the average, and to discard images which are adversely affected by bright object and SAA-induced persistent signal (see Section 4.8.2
below). Finally, because shading is temperature dependent, care should be taken to combine darks taken at the same detector temperature (within approximately 0.1 K) as the observations for which they will be used. All of the effects described under the random and systematic uncertainties of the temperature-dependent darks will generally apply when making dark reference files directly from observed dark frames.
In principle, ACCUM mode allows the user to specify any of a large
number (173, to be precise) of possible exposure times, ranging from 0.57 to 3600 seconds, and up to 25 initial and final readouts (NREAD), even though only values of 1 and 9 are supported by STScI. As was discussed above, the various components of the dark reference files (e.g., bias shading, linear dark current, and amplifier glow) depend not only on the integration time, but on the number of readouts and the readout delta time intervals. Therefore each and every combination of ACCUM exposure time and NREAD requires a unique dark image for calibration, and it was not practical to calibrate all of these on orbit. In addition, as has also been noted, the shading (particularly for NIC2) also depends on the instrument temperature.
At the present time, there are no "standard" dark calibration reference
files available from the HST Calibration Database for use with ACCUM mode data. The dark reference files used for processing ACCUM mode images in the OPUS pipeline were dummies. In principle it is possible to create ACCUM mode darks from the TEMPFILE (temperature-dependent dark) reference files using a procedure similar to that which has been used for MULTIACCUM data by interpolating the shading between the measured delta times in the standard sequences and properly scaling the ampglow. In addition, many individual on-orbit ACCUM mode dark exposures are available from the Archive, and it is not unlikely that for any given ACCUM mode science exposure there will be darks available with the right exposure time (if not necessarily the right temperature). If you need to calibrate ACCUM mode science images, you should search the Archive to see if suitable darks are available, or discuss the matter with your Contact Scientist.
In general, BRIGHTOBJ mode exposures are so short that true linear
dark current is negligible. Moreover, by design they are generally used only for very bright targets, and most sources of dark and bias are relatively unimportant compared to the object signal. The BRIGHTOBJ exposures that have been taken to date show low-level alternating column or row striping (depending on camera) which subtracts well by using either a dark exposure of corresponding length or by creating a median or column/row collapse median (see Figure 4.6
). A set of dark reference files have been created for the exposure time/camera combinations that have been used so far, namely 1, 2, 5, and 10 milliseconds for both NIC1 and NIC2. There are no NIC3 BRIGHTOBJ observations to date. These darks are available upon request and are installed in the calibration database. It should also be noted that BRIGHTOBJ mode is known to be highly nonlinear and a characterization of that nonlinearity is underway, but the mode should currently be considered photometrically unusable. BRIGHTOBJ exposures are however useful for acquisitions of very bright sources (e.g. for the NIC2 coronagraph), as PSF centroiding is still possible, and this has been done successfully in the past.
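The column/row-collapse median subtraction mentioned above can be sketched as follows. This is an illustrative implementation, not an STScI tool; the function name is hypothetical:

```python
import numpy as np

def destripe(image, axis=0):
    """Remove low-level alternating column (axis=0) or row (axis=1)
    striping by subtracting a column/row-collapse median pattern."""
    img = np.asarray(image, dtype=float)
    stripe = np.median(img, axis=axis, keepdims=True)
    # Subtract only each stripe's deviation from the global median,
    # so the overall background level is preserved.
    return img - (stripe - np.median(img))
```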
Several components of the bias/dark may not be adequately removed by
the standard pipeline dark correction for a variety of reasons. In particular, residual shading in NIC2 data, bias drifts and jumps, ampglow persistence and the net pedestal may still be present after standard processing. Here we describe ways of handling each of these.
Even when using the new temperature-dependent dark reference files
some shading and linear dark current residuals can be left in the calibrated data. There are a number of ways to handle temperature-driven residuals in calibrated data. The shading residual is likely to be very small given the relatively stable temperature and the small shading-versus-temperature amplitude. In most cases the delta-shading can be assumed to be constant across the array; any shape change will be a second-order effect. The STSDAS tools described in the next section are a good way to remove it, essentially treating the residual shading as a “pedestal” term.
Residual dark current is more difficult to handle, as it is extremely
sensitive to small temperature changes. Most of the linear dark component is due to thermal generation and recombination in the bulk HgCdTe. Each pixel can be thought of as an independent diode, and as such each will have its own personalized temperature dependence which varies exponentially with 1/T. In addition, elevated dark current near the corners of the array is likely due to a combination of charge trapped amp-glow signal (ampglow persistence) and localized thermal heating by the output amplifiers. Because of this it is usually not possible to linearly scale and subtract an image of the dark current to minimize residuals. A scale factor that corrects one pixel will be incorrect to apply to a neighboring pixel. It may be possible to model the dark current per-pixel, but limitations in the instantaneous temperature determination and unknown prior readout history make this difficult at present.
A number of alternate techniques have proven useful. For example if a
dataset has a large number of dithers and few sources, or has a set of dithered “sky” images that were taken contemporaneously with the science data, a source-masked median sky image can be constructed from the calibrated data itself. This sky image will then contain both a map of the sky background and the median residual dark current for that set of observations. Beware, though, that the timescale over which the dark current is changing may be shorter than that over which the data were taken. Also, if the number of images available for making such a sky image is small, it may be noise limited and its application might actually degrade image quality rather than improve it. The same is true of darks taken as part of a science program, and in that case the darks must have been taken some time before or after the observations themselves. As always with sky images constructed from science data, care must be taken not to subtract data from itself.
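One possible sketch of building such a source-masked median sky from a stack of dithered, calibrated frames is shown below. This is illustrative only; the clipping threshold and iteration count are arbitrary choices, and the function name is hypothetical:

```python
import numpy as np

def median_sky(frames, nsigma=3.0, n_iter=3):
    """Median sky image from dithered frames, with sources masked.

    Iteratively flags pixels that deviate upward from each frame's
    own level (i.e. sources), then medians the stack pixel-by-pixel.
    """
    masked = np.asarray(frames, dtype=float).copy()
    for _ in range(n_iter):
        med = np.nanmedian(masked, axis=(1, 2), keepdims=True)
        sig = np.nanstd(masked, axis=(1, 2), keepdims=True)
        # Mask positive outliers (sources); sky pixels survive.
        masked = np.where(masked - med > nsigma * sig, np.nan, masked)
    return np.nanmedian(masked, axis=0)
```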
Some success has been achieved by trying to match darks to
observations based on previous observational history. For example, if a program is taking two long exposures per orbit, it may be possible to find a set of dark (or sky) images that were taken in an identical manner and construct two separate darks (or skys) - one for the first half of the orbit and one for the second half. This at least gives a correction over two time domains instead of averaging over the entire orbit. This tends to work well at removing the large spatial scale dark current residuals, but the warmest pixels are still poorly corrected because of their higher temperature sensitivity. Only programs with certain observational constraints can take advantage of this.
Another option for dealing with the “salty” and most temperature
sensitive high-countrate pixels is to simply treat them as bad pixels. If the science data was dithered, these pixels can be masked before combination (when averaging or drizzling) and be rejected just as cosmic ray hits would be. The large number of such pixels (see Figure 4.1
) may make this impractical for programs that are sparsely dithered.
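Flagging the temperature-sensitive high-countrate pixels can be sketched as a robust threshold on a dark-current map. This is an illustrative sketch; the 5-sigma cut is an arbitrary choice:

```python
import numpy as np

def hot_pixel_mask(dark_rate, nsigma=5.0):
    """Return True where a pixel's dark-current rate is anomalously
    high ('salty'), so it can be masked before dither combination."""
    med = np.median(dark_rate)
    # MAD-based scatter: robust against the hot pixels themselves.
    mad = 1.4826 * np.median(np.abs(dark_rate - med))
    return dark_rate > med + nsigma * np.maximum(mad, 1e-12)
```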
Variable quadrant bias or “pedestal” is not removed by standard
processing. STScI has distributed several STSDAS
tools for removing this variable bias level. Here we briefly describe the tools that are currently available in the stsdas.hst_calib.nicmos
package.
The biaseq task in the stsdas.hst_calib.nicmos
package is designed to remove the changes in quadrant bias level from readout to readout during the course of a MULTIACCUM exposure. It adjusts the bias levels in each NICMOS quadrant so that the net counts in that quadrant increase linearly with time. This “bias equalization” procedure does not
remove the net bias offset that produces the pedestal effect. Essentially, it removes any temporally non-linear components of the bias drift, i.e., the second and higher order time derivatives of the bias, but leaves an unknown linear term in the bias drift (i.e., the first time derivative). The program cannot distinguish between this linear bias drift and an actual, linearly accumulating astronomical signal, and thus leaves the linear bias term in the data so that it must be removed by some other method (see “pedsky”
below). In principle, biaseq
will work on any NICMOS MULTIACCUM image, regardless of the nature of the astronomical sources present, provided that there are enough MULTIACCUM samples available. As a by-product, biaseq
can also attempt to identify and remove bias jumps or bands (see Section 4.2.2
above) from individual readouts.
The biaseq task must be run on an intermediate image file (*_ima.fits
) which is produced by partially processing a raw NICMOS data set through only the first few processing steps of calnica
. The pipeline processing steps BIASCORR, ZOFFCORR, ZSIGCORR, MASKCORR, NOISCALC, NLINCORR, DARKCORR and BARSCORR should be performed before running biaseq
, but not FLATCORR, UNITCORR or CRIDCALC, i.e., the image should not be flat fielded and should be in units of counts (not counts per second). The nicpipe
task in the stsdas.hst_calib.nicmos
package provides a convenient way to carry out the partial calnica
processing needed as preparation for biaseq
(see example below, and also Section 5.1).
The biaseq task assumes that the astronomical signal (sky plus sources) accumulates linearly with time, and that any non-linear behavior is due to changing bias levels that are constant within each array quadrant, except perhaps for bias jumps. If these assumptions are not correct, then the task may not work properly. For example, if the sky background is changing with time, either because it is dominated by variable thermal emission or because of scattered earthlight (see Section 4.10
), then the routine may not function correctly. The pstats task in the nicmos
package may be used to compute and graph data statistics versus time or readout number, which can help to identify time-varying background levels. Objects which saturate the NICMOS array will also no longer accumulate signal linearly with time. However, in this case, unless a large fraction of the pixels in a quadrant are saturated, there should be no noticeable effect on biaseq
. Also, residual shading may result in a bias which changes from readout to readout but is not constant across the quadrant, and this may also cause problems for biaseq
. In particular, residual shading can improperly trigger the bias jump finding algorithm. If there is residual shading in the images, it should be removed with a temperature-dependent shading correction (see “Residual Dark Current and Shading”
) before running biaseq
. If biaseq
is run without doing this, the user should at least disable the bias jump finding option. The biaseq
help pages give further information about this task and its parameters.
For NICMOS images of relatively blank fields, free of very bright or
large sources which fill a substantial portion of the field of view, the pedsky
task may be used to measure and remove an estimate of the sky background and quadrant-dependent residual bias (or “pedestal”). The task depends on having a large fraction of the image filled by “blank” sky, and thus may not work well for images of large, extended objects or very crowded fields.
The pedsky task runs on a single science image (i.e., not on all the separate readouts of a NICMOS MULTIACCUM file). It operates only on the [SCI,1]
extension, which is appropriate when the task is run on, e.g., the *_cal.fits
images that are the final product of the calnica
calibration pipeline. The task’s internal algorithms operate on an unflat-fielded image, but a fully-calibrated (including flat fielding) image may be used as input, as the task will check the status of the flat fielding (via the value of the FLATDONE keyword in the input image header) and will temporarily remove and, at the end of processing, reapply the flat field if necessary. Note, however, that the FLATFILE used for the processing must
be available locally in order to run pedsky. Therefore if you wish to use this task on reduced data taken from the HST
Archive, be sure to retrieve the appropriate flat-field reference file as well.
Following the discussion in Section 4.2.2
above, let us say that a NICMOS image I(x,y)
may be described as

I(x,y) = S(x,y) × Q(x,y) + Bq

where S(x,y) is the incident astronomical flux (sources plus sky background), Q(x,y)
is the flat field, and Bq
is the quadrant-dependent bias offset. The pedsky
task works by minimizing the quantity
X2 = Σxy (Ixy − S × Qxy − Bq)²
which is a measure of the total image variance, as a function of the sky level S and four quadrant bias levels Bq
. Here we have made the simplifying assumption that the “true” incident flux S(x,y)
in a relatively blank-field image can be approximated as a constant sky background S
, i.e. that there are no sources present. In real data where there are real sources, the quantity X2
includes a contribution due to the presence of actual objects in the image, above the assumed uniform, constant sky level S
. The impact of these sources is minimized, however, by using sigma-clipped statistics which exclude pixels with strongly deviant values, e.g. those due to actual astronomical sources, bad pixels, etc. Additionally, the user may apply a ring median filter to the image when computing X2
. This can effectively remove compact or point-like sources, and may help the task perform better for moderately crowded fields, but considerably slows the computation speed. The user may wish to experiment by trying pedsky
both with and without the ring median option. In practice, a certain amount of “source noise” contribution to X2
is tolerable to the pedsky
algorithm. It acts as an offset to the amplitude of X2
, but generally has no effect on the location of the minimum value for X2
relative to S. The pedsky task works in both interactive and non-interactive modes. Alternatively, the user can supply a sky value to be subtracted, in which case the remaining quadrant-dependent pedestal is estimated and subtracted. After pedsky
processing, the remaining standard calibration steps, including flat fielding, can then be easily applied using another call to the script nicpipe.
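The minimization described above can be sketched in a few lines: for a fixed trial sky level S, the best-fit quadrant biases Bq are simply the sigma-clipped means of I − S × Q over each quadrant, so only a one-dimensional search over S is needed. This is an illustrative reimplementation, not the actual pedsky code; the quadrant geometry (four 128 × 128 quadrants of a 256 × 256 array) is the only NICMOS-specific assumption.

```python
import numpy as np

def quadrants(im):
    """Yield the four 128x128 quadrants of a 256x256 NICMOS-style array."""
    for ys in (slice(0, 128), slice(128, 256)):
        for xs in (slice(0, 128), slice(128, 256)):
            yield im[ys, xs]

def clipped(a, nsig=3.0, iters=3):
    """Simple iterative sigma clipping; returns the surviving pixel values."""
    a = a.ravel()
    for _ in range(iters):
        m, s = a.mean(), a.std()
        a = a[np.abs(a - m) < nsig * s]
    return a

def fit_sky_and_bias(image, flat, sky_grid):
    """Minimize X2 = sum (I - S*Q - Bq)^2 over S and the four Bq.

    For a fixed trial S, the best-fit Bq is the clipped mean of
    I - S*Q in each quadrant, so a 1-D search over S suffices.
    """
    best = None
    for s in sky_grid:
        resid = image - s * flat
        biases = [clipped(q).mean() for q in quadrants(resid)]
        x2 = sum(((clipped(q) - b) ** 2).sum()
                 for q, b in zip(quadrants(resid), biases))
        if best is None or x2 < best[0]:
            best = (x2, s, biases)
    return best[1], list(best[2])

# Synthetic check: structured flat, sky level 10, quadrant biases 1..4
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:256, 0:256]
flat = 1.0 + 0.2 * np.sin(xx / 2.0) * np.cos(yy / 2.0)
img = 10.0 * flat + rng.normal(0.0, 0.05, flat.shape)
for b, q in zip((1.0, 2.0, 3.0, 4.0), quadrants(img)):
    q += b                      # quadrant views: adds each bias in place
fit_s, fit_b = fit_sky_and_bias(img, flat, np.arange(8.0, 12.5, 0.5))
print(float(fit_s), [float(round(b, 2)) for b in fit_b])
```

Note that the sigma clipping plays the role described in the text: strongly deviant pixels (sources, bad pixels) are excluded so that they offset X2 without moving the location of its minimum.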
Note that, in principle, any image may be used to represent the spatial
structure of the sky, i.e., you do not need to use a standard NICMOS flat-field reference file. In particular, for some NICMOS images the two-dimensional structure of the sky may not exactly resemble that of the flat field. This may happen for at least two reasons. First, as will be discussed below (Section 4.6.3
), the NICMOS flat fields are strongly color dependent, and the spectrum of the internal flat-field lamps does not necessarily match that of the sky background, especially at longer wavelengths (> 1.8 μm) where thermal emission begins to dominate over the zodiacal sky. The result is that the spatial structure of the sky may not be quite the same as that of the flat field, and that the sky multiplied by the inverse flat reference file may have some residual structure which correlates with the flat-field pattern and contributes to the total image variance X2
measured by pedsky
. This can confuse pedsky
, resulting in improper sky and pedestal measurement. This problem is most important for images dominated by thermal background, but may also affect shorter wavelength data, especially for NIC3 data where the ratio of background counts to quadrant bias offset amplitude is larger than for the other two cameras. Second, at longer wavelengths, the thermal background generated within the telescope may illuminate the detector differently than does the zodiacal sky, and thus the overall background may not match the QE response pattern measured in the flat field.
In cases like these, you may want to provide your own “sky” images for
use by pedsky
, rather than rely on the flat-field reference files to represent the shape of the sky. One possibility would be to use sky frames constructed from the median of many dithered science or background exposures. This can sometimes improve the quality of the sky + pedestal fitting even for data taken at shorter wavelengths. This was the approach taken for STScI reductions of the HDF-South NICMOS data, for example. Another possibility, especially for long wavelength data, might be to use a specially-constructed color-dependent flat field (see Section 4.6.3
). The pedsky
help pages give further information about this task and its parameters, including guidance for how to use images other than the flat field as the sky model.
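As an illustration of the median-sky approach mentioned above, the following sketch builds a normalized sky image from a stack of dithered, background-dominated exposures. It is a generic example, not an STScI-distributed tool; real data would also require bad-pixel masking and more robust source rejection.

```python
import numpy as np

def make_sky(frames):
    """Median-combine dithered exposures into a unit-normalized sky image.

    Each frame is first scaled by its own median sky level so that varying
    background brightness does not bias the stack; because real sources
    move between dither positions, the per-pixel median rejects them.
    """
    frames = np.asarray(frames, dtype=float)
    levels = np.median(frames, axis=(1, 2))      # per-frame sky estimate
    return np.median(frames / levels[:, None, None], axis=0)

# Three fake 64x64 backgrounds at different levels, each with one bright
# "source" that lands on a different pixel in each dither position
rng = np.random.default_rng(2)
stack = []
for level, pos in [(5.0, 10), (7.0, 30), (9.0, 50)]:
    f = level * (1.0 + rng.normal(0.0, 0.01, (64, 64)))
    f[pos, pos] += 100.0
    stack.append(f)
sky_image = make_sky(stack)        # sources rejected; ~1 everywhere
```

The resulting unit-normalized image can then be passed to pedsky in place of the flat-field reference file as the model for the spatial structure of the sky.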
As described above, the pedsky
task requires lots of “blank” sky to be effective, and will only work on relatively sparse NICMOS images. The pedsub
task provides an alternative method for images which contain larger objects that fill the field of view. The basic methodology for pedsub
is essentially the same as that of pedsky
, modeling the image as the sum of a constant (per quadrant) pedestal offset plus an astronomical signal (sky + objects) that is modulated by the flat field, and then looping over a range of trial values for the pedestal signal, searching for the amplitude which minimizes pixel-to-pixel variations. Unlike pedsky
, however, pedsub
can optionally apply a spatial filtering function to each trial image in order to remove unwanted features or spatial frequencies (i.e., the signal from objects) that might bias the calculation of the pixel value spread. The filtering options are “median” and “mask,” which essentially carry out low-pass and high-pass filtering of spatial frequencies. The “mask” option removes all large-scale structure from the trial image, leaving the RMS minimization process to operate only on the small-scale, pixel-to-pixel component of the flat-field signal. This option can be effective when trying to measure and remove pedestal from, e.g., images of large galaxies.
The pedsub task has many other parameters and options which are fully described in the STSDAS
help pages for the task. These should be consulted carefully before using the task, and the user may wish to experiment with combinations of these parameters in order to achieve the best results.
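The trial-and-error scheme described above can be sketched as follows: for each trial pedestal value, remove it, divide out the flat field, strip the large-scale structure with a high-pass filter (in the spirit of the “mask” option), and keep the value that minimizes the remaining pixel-to-pixel RMS. This is a simplified, single-quadrant, noiseless illustration under the image model I = S × Q + B used above, not the actual pedsub implementation.

```python
import numpy as np

def highpass(im, block=16):
    """Remove large-scale structure by subtracting a coarse block-mean model."""
    h, w = im.shape
    coarse = im.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return im - np.kron(coarse, np.ones((block, block)))

def find_pedestal(image, flat, trials):
    """Scan trial pedestal values for one quadrant.

    Returns the trial value whose removal minimizes the pixel-to-pixel RMS
    of the high-passed, flat-fielded image: a wrong pedestal leaves residual
    flat-field structure that inflates the RMS.
    """
    best = None
    for b in trials:
        rms = highpass((image - b) / flat).std()
        if best is None or rms < best[0]:
            best = (rms, b)
    return best[1]

# Synthetic quadrant: smooth "object" gradient, structured flat, pedestal 3
yy, xx = np.mgrid[0:128, 0:128]
flat = 1.0 + 0.2 * np.sin(xx / 2.0) * np.cos(yy / 2.0)
sky = 10.0 + 0.005 * xx                 # large-scale astronomical signal
img = sky * flat + 3.0                  # quadrant pedestal B = 3
print(find_pedestal(img, flat, np.arange(0.0, 6.5, 0.5)))   # -> 3.0
```

The high-pass step is what lets the method tolerate large objects filling the field: the smooth “sky” gradient is removed before the RMS is computed, so only the small-scale flat-field imprint of the residual pedestal drives the minimization.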
Here we illustrate the use of the nicpipe
tasks with one example. A similar sequence could be used with pedsub
substituted for pedsky
. The raw data frame is n4ux23x0q_raw.fits
. This is a NIC3 image taken with the SPARS64 readout sequence and NSAMP=25. We might start by using the sampinfo
task (see Section 5.1
) to look at the details of the readout sequence.
We see that the readouts with linearly spaced DELTATIME values
(SAMPNUMs 4 through 24) are in imsets 1 to 21. The last readout (imset 1) often has bias jumps (see “Bias Jumps or Bands”
), so we may want to exclude it when feeding the desired range of sky samples to biaseq
for use in constructing the “clean” sky image.
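The imset-to-readout bookkeeping used above follows from the fact that MULTIACCUM files store readouts in reverse time order: imset 1 holds the final readout and the last imset holds the zeroth read. A small helper (hypothetical, but consistent with the numbers quoted above) makes the mapping explicit:

```python
def sampnum(imset, nsamp):
    """SAMPNUM of a given imset in a NICMOS MULTIACCUM file.

    Imsets are stored in reverse time order: imset 1 is the final
    readout (SAMPNUM = NSAMP - 1) and imset NSAMP is the zeroth read.
    """
    if not 1 <= imset <= nsamp:
        raise ValueError("imset must be between 1 and NSAMP")
    return nsamp - imset

# SPARS64 / NSAMP=25 example from the text:
print(sampnum(1, 25), sampnum(21, 25))   # -> 24 4
```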
The output image n4ux23x0q_beq.fits
has been corrected for non-linear bias drifts and for spatial bias jumps as well.
Next, we complete the pipeline processing using nicpipe
, doing the FLATCORR, UNITCORR, and CRIDCALC steps to prepare for pedsky.
We now run pedsky
non-interactively, letting it fit for the sky level and four quadrant biases on its own.
The end product, n4ux23x0q_ped.fits,
is the fully processed and pedestal corrected image.
In addition, there are other, “freelance” packages for NICMOS data
reduction. One example is Brian McLeod’s NICRED
package (McLeod 1997, in the proceedings for the 1997 HST
Calibration Workshop, ed. S. Casertano et al., p. 281), which offers a general suite of NICMOS data reduction tools, including routines which estimate and subtract pedestal. Ultimately, there are similarities in the pedestal removal methods used by pedsky
, pedsub, and the NICRED
algorithms, and therefore it should be possible, in principle, to unify them in a single task.