Hubble Space Telescope NICMOS STScI Analysis Newsletter 7
- NICMOS NEWS
+ New NICMOS Information on the WWW
+ Change in the FOM ytilt for NIC3
+ CALNICA Version 3.1 Now Available
- DATA ANALYST POSITIONS AVAILABLE AT STScI
- APPENDIX: NICMOS CONTACTS
New NICMOS Information on the WWW
As with all Space Telescope instruments, the NICMOS home page on
the World Wide Web is updated with all instrument developments. The
NICMOS home page can be found by visiting the STScI WWW page
(http://www.stsci.edu/) and following the Observing links to the NICMOS
pages. Since last month, the following items have been posted on the
ADVISORIES and other pages:
* The NICMOS Reference Files List page has been updated (15 January 1998).
* A NICMOS Instrument Status page has been added to the NICMOS pages.
* An AAS January 1998 -- NICMOS Posters page has been added to the NICMOS pages.
Since last month, the following Instrument Science Reports have been made
available on the NICMOS documentation Web Page:
* NICMOS Distortion Correction.
* NICMOS SAA Contour Test: Results from SMOV data.
* NICMOS Camera 2 Coronagraphic Acquisition.
* Analysis Tools for the New Instruments: NICMOS and STIS images. I.
Change in the FOM ytilt for NIC3
On day 97.342:00:00 UT (8 Dec 1997), the nominal FOM ytilt for NIC3 was
changed from 0 to +16" to reduce the effect of the "warm vignetting"
near the -Y edge of the detector. When this change was made, a
bug in the ground system that populates the astrometry information in the
image headers (the World Coordinate System, or WCS) caused all NIC3 (prime)
headers to reflect a pointing that is 16" off in the +Y direction from
where the spacecraft was actually pointed. The bug was corrected and
the fix went into the data pipeline on day 98.009:15:46 UT (9 Jan 1998).
If your NIC3 data was taken between 8 Dec 1997 00:00 UT and 9 Jan 1998
15:46 UT, then the WCS information in your headers is incorrect by 16".
To correct this error in the WCS of a given NIC3 image, simply add
16.0/0.203078 = 78.787 pixels to the CRPIX2 keyword value in the image
header. This shifts the Y value of the reference pixel (where CRVAL1 and
CRVAL2 are defined) to remove the 16" error in the header astrometry.
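For batches of affected images, the same correction can be scripted. The sketch below is a minimal Python illustration, assuming a dict-like stand-in for the FITS header (a real FITS library's header object would be accessed similarly); the arithmetic is exactly the 16.0/0.203078 shift described above.

```python
# NIC3 plate scale and the size of the WCS error from the text above
NIC3_SCALE = 0.203078   # arcsec per pixel
WCS_ERROR = 16.0        # arcsec, +Y offset introduced by the ground-system bug

def fix_nic3_wcs(header):
    """Shift CRPIX2 to remove the 16" pointing error in the header WCS.

    `header` is treated as a dict-like object holding FITS keywords;
    adapt the access syntax to whatever FITS library you use.
    """
    header["CRPIX2"] = header["CRPIX2"] + WCS_ERROR / NIC3_SCALE
    return header

hdr = {"CRPIX1": 128.0, "CRPIX2": 128.0}
fix_nic3_wcs(hdr)
# CRPIX2 is now shifted by 16.0/0.203078 = 78.787 pixels
```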
CALNICA Version 3.1 Now Available
Over the past couple of months two upgraded versions of the CALNICA calibration
software have been implemented in the routine OPUS processing of NICMOS data.
CALNICA version 3.0 was installed in OPUS on 11 November 1997, and version 3.1
was installed on 23 December 1997. You can determine which version of CALNICA
was used to process your data by looking at the value of the "CAL_VER" keyword
in the primary header of your _ima or _cal data files. The upgrades offer
substantial improvements over CALNICA v2.3, so if you have older data you are
encouraged to download the new software from STScI and reprocess your data.
Instructions for retrieving and installing CALNICA v3.1 can be found at
http://ra.stsci.edu/calnica3_1.html, or follow the links from the main STScI
web site to the STSDAS pages.
Version 3.0 contained several substantial changes to the calibration
processing. Briefly, we have incorporated a correction for signal deposited
in the zeroth readout of MultiAccum sequences by very bright sources, modified
the cosmic-ray rejection routine (CRIDCALC) for MultiAccum exposures to
eliminate the spurious effects that were still present in version 2.3, and
added command-line arguments that allow users to set the cosmic-ray
rejection and zeroth-read correction thresholds. Version 3.1
contains only a minor change, which was needed to support images produced by
the new camera 2 target acquisition strategy. Each of these changes is
described in more detail below.
First, a new calibration step has been added which corrects for signal from
bright sources that is present in the zeroth readout of a MultiAccum exposure.
Because there is an interval of 0.203 seconds between the time the detector
pixels are reset and the time that the zeroth read occurs, some amount of
signal can be accumulated in the zeroth read from very bright sources. This can
cause a couple of problems for subsequent calibration steps. Once the zeroth
read image is subtracted from all other readouts (by the CALNICA ZOFFCORR step),
the record of the absolute level of charge in each pixel at the time of each
readout is gone. All that is left is the amount of charge accumulated between
the zeroth read and the subsequent readouts. This can lead to incorrect
linearization correction (in the NLINCORR step), which depends on the absolute
charge level in each pixel.
The new calibration step, known as ZSIGCORR (Zeroth read SIGnal CORRection),
is applied to MultiAccum exposures before the zeroth read image is subtracted.
The ZSIGCORR step computes an estimate of the signal present in the zeroth
read and saves that information for later use by the NLINCORR step. The
ZSIGCORR step estimates the zeroth read signal by forming a difference image
of the first and zeroth readouts, dark-subtracting the difference image (to
remove shading and amp glow), and then scaling the resulting counts by the
ratio of the exposure times for the zeroth and first reads. This produces an
image of the estimated number of counts that would have been present in the
zeroth readout. Regions of this difference image which are below the 5-sigma
noise level are blanked out, so that only those pixels that contain real signal
are retained. This image is then used in the NLINCORR step by temporarily
adding the amount of zeroth read signal to the pixel value in each subsequent
(now zeroth-read subtracted) readout before applying the linearization
correction. After applying the linearization correction, the offset of the
zeroth read signal is again removed so that all counts are again relative to
the time since the zeroth readout.
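The add/linearize/subtract sequence can be pictured with a short fragment. This is an illustrative Python sketch, not the CALNICA implementation: the `linearize` function is a hypothetical stand-in for the NLINCORR correction, the per-pixel loop is collapsed to scalars, and subtracting the uncorrected zeroth-read signal back out at the end is a simplification.

```python
def nlincorr_with_zsig(counts, zsig, linearize):
    """Apply a linearity correction that depends on absolute charge.

    counts    -- zeroth-read-subtracted counts in one readout
    zsig      -- estimated signal already present in the zeroth read
    linearize -- hypothetical stand-in for the NLINCORR correction,
                 a function of the absolute count level
    """
    absolute = counts + zsig          # restore the absolute charge level
    corrected = linearize(absolute)   # correction uses the absolute level
    return corrected - zsig           # re-reference to the zeroth read
```

With an identity correction the input counts come back unchanged, which shows the offset is purely temporary.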
The other area where the signal in the zeroth read can cause problems is for
those targets that are so bright that the signal is already approaching
saturation in the zeroth or first reads. The NICMOS detectors tend to behave
in such a way that pixel values begin to decrease after the onset of
saturation, so if saturation is already occurring in either the zeroth or
first reads, the subtraction of the zeroth read (ZOFFCORR step) can lead to
very small or even negative pixel values in subsequent readouts. Since the
saturation checking that occurs in the NLINCORR step again depends on the
absolute number of counts in a pixel, pixels with erroneously small values do
not get flagged as saturated. Therefore the ZSIGCORR step also performs a
check of the absolute pixel values (i.e. before subtracting the zeroth read)
in both the zeroth and first reads and will flag as saturated any that have
values above their defined saturation limits recorded in the NLINFILE
[node,2] image extension. The number of pixels detected as saturated in the
zeroth and first reads by the ZSIGCORR step is reported as part of the CALNICA
processing log (_trl file).
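The saturation check amounts to a per-pixel comparison against the limits in the reference file. The fragment below is illustrative only: CALNICA works on full image arrays and reads the limits from the NLINFILE, and the flag value here is a made-up placeholder rather than the actual NICMOS data-quality bit.

```python
SATURATED = 64  # illustrative DQ flag value; the real NICMOS bit may differ

def flag_zsig_saturation(zeroth, first, satlimit, dq):
    """Flag pixels whose absolute counts reach the saturation limit in
    either the zeroth or first read (i.e. before zeroth-read subtraction),
    and return the count of flagged pixels."""
    nsat = 0
    for i, (z, f, s) in enumerate(zip(zeroth, first, satlimit)):
        if z >= s or f >= s:
            dq[i] |= SATURATED
            nsat += 1
    return nsat  # this count is what CALNICA reports in the _trl file
```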
The process of saturation checking in the zeroth and first reads requires the
use of a new image extension, "ZSCI", in the NLINFILE reference file.
Therefore in order to use CALNICA v3.0 or higher you must also retrieve the
latest NLINFILEs which contain this new extension. You can retrieve these
files from the NICMOS Calibration Resources area on the STScI web site.
NOTE: At the current time the ZSIGCORR step is NOT controlled by a calibration
switch keyword in the _raw image header in the way that all other steps are
controlled. It is automatically applied whenever a MultiAccum dataset is
processed with the NLINCORR step set to PERFORM. We will soon be adding the
appropriate "ZSIGCORR" and "ZSIGDONE" keywords to NICMOS image headers so that
this step can be turned on or off as desired. In the meantime, if you are
interested in seeing what sort of zeroth read signal is being computed by
CALNICA, you can create a rough estimate of the ZSIGCORR zeroth read image by
doing the following. First, rerun the dataset through CALNICA with only the
ZOFFCORR, MASKCORR, BIASCORR, NOISCALC, and DARKCORR steps set to PERFORM; set
NLINCORR, FLATCORR, UNITCORR, and CRIDCALC to OMIT. Now perform the following
operations in IRAF:
cl> imcopy _ima.fits[sci,&lt;NSAMP-1&gt;] _zsig.fits
cl> imreplace _zsig.fits 0 upper=27
cl> imarith _zsig.fits * 0.67 _zsig.fits
The imcopy operation simply copies the science image from the first
readout (which has an EXTVER value of NSAMP-1) into a separate working
image. The imreplace step sets to zero all pixels in that image that are
at or below a value of 27 DN. This is equivalent to ZSIGCORR blanking out
all pixels that are below the 5-sigma noise level. The value of 27 DN comes
from the 30-electron read noise, which is equivalent to about 5.5 DN,
multiplied by 5. The imarith step scales the counts in the zsig image by the
ratio of the exposure times for the zeroth and first reads, which for all
MultiAccum sequences except "SCAMRR" is 0.203/0.303 = 0.67. The resulting
image is an approximation of what ZSIGCORR would have calculated for the
counts in the zeroth read.
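The same three IRAF steps can be reproduced in a few lines of Python; this is just a translation of the thresholding and scaling described above, with the image array simplified to a flat list of pixel values.

```python
ZSIG_THRESH = 27.0            # DN; about 5 x the ~5.5 DN read noise
ZSIG_SCALE = 0.203 / 0.303    # = 0.67, zeroth/first read exposure-time ratio

def estimate_zsig(first_read_counts):
    """Rough ZSIGCORR-style zeroth-read estimate from first-read counts:
    zero out pixels at or below the threshold (as imreplace upper=27 does),
    then scale the survivors by the exposure-time ratio (as imarith does)."""
    return [0.0 if c <= ZSIG_THRESH else c * ZSIG_SCALE
            for c in first_read_counts]
```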
The second substantive change in CALNICA v3.0 was to the CRIDCALC algorithm
applied to MultiAccum exposures. While the changes that were incorporated in
v2.3 were an improvement, the routine was still giving poor results in some
situations by incorrectly identifying, and subsequently rejecting, good data
as bad. The previous version of the routine would first compute the differences
of one readout from the next in a MultiAccum sequence, and identify as outliers
any samples that were n-sigma away from the weighted mean of the differences.
Once it was done identifying outliers, the remaining good samples were added
back together to once again form a sequence of accumulated counts vs. exposure
time, and then a linear fit was performed to this accumulated data to compute
the final countrate for each pixel. The new version of the routine eliminates
the step of computing differences of all the samples, and instead performs the
linear fit directly to the "uncleaned" samples of accumulated counts versus
exposure time. Outliers are identified relative to this fit, removed from the
accumulated counts for subsequent samples, and the fit is repeated. This
continues until no new samples are rejected and the final fit is used to
compute the countrate for each pixel.
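The new CRIDCALC logic can be sketched as an iterative clipped fit. The fragment below is a simplified illustration, not the flight code: it fits a straight line to the accumulated counts versus time, removes the worst positive outlier's excess charge from that sample and all later ones (a cosmic ray adds charge to every subsequent read), and refits until nothing exceeds the threshold. The constant-sigma noise model and the jump estimator are assumptions made for the sketch.

```python
def fit_countrate(times, counts, noise, crthresh=4.0):
    """Iteratively fit counts vs. time, rejecting cosmic-ray jumps."""
    y = list(counts)
    n = len(y)

    def lsq(ts, ys):
        # ordinary least-squares slope and intercept
        tbar = sum(ts) / n
        ybar = sum(ys) / n
        sxy = sum((t - tbar) * (v - ybar) for t, v in zip(ts, ys))
        stt = sum((t - tbar) ** 2 for t in ts)
        slope = sxy / stt
        return slope, ybar - slope * tbar

    while True:
        slope, icept = lsq(times, y)
        resid = [v - (icept + slope * t) for t, v in zip(times, y)]
        # worst positive outlier relative to the current fit
        i = max(range(1, n), key=lambda k: resid[k])
        if resid[i] <= crthresh * noise:
            return slope
        # estimate the jump as the excess over the fitted slope and
        # remove it from this sample and all subsequent ones
        jump = y[i] - y[i - 1] - slope * (times[i] - times[i - 1])
        for k in range(i, n):
            y[k] -= jump
```

On a clean ramp no samples are rejected and the fit is an ordinary least-squares slope; with a large jump injected partway through, the loop converges to a rate near the underlying one.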
The third major change in v3.0 is the addition of the command-line arguments
"crthresh" and "zsthresh", which can be used to set the cosmic-ray rejection
threshold and the zeroth-read signal detection threshold, respectively. The
default values are 4.0 and 5.0 sigma, respectively. In order to use an
alternative value for the cosmic-ray threshold, for example, execute CALNICA
from within IRAF as follows:
cl> calnica crthresh=6.5
or, if you don't want to specify an output name,
cl> calnica "" crthresh=6.5
The rejection threshold used in the CRIDCALC step will be reported in the
CALNICA processing log (_trl file).
One other change that many users may notice is that the processing order for
MultiAccum readouts has been reversed, in order to support some of the other
modifications. Previously, the MultiAccum readouts were processed in reverse
chronological order, starting with the last readout and ending with the zeroth
readout (the arrow of time had been reversed!). In version 3.0, processing
begins with the zeroth read and ends with the final readout.
The only change from v3.0 to v3.1 was the recognition of 2 new filename
suffixes which are now being used for some of the images produced by Camera 2
target acquisition observations. In addition to the 2 ACCUM images normally
produced, the new sequence also obtains 2 internal flat field ACCUM images
and 2 sky background ACCUM images. The normal target images are written to
the standard _raw.fits file, while the flatfield and background
images are written to _rwf.fits and _rwb.fits files,
respectively. CALNICA v3.1 recognizes the new "rwf" and "rwb" input filename
suffixes and produces calibrated output files with corresponding suffixes of
"clf" and "clb", respectively.
Data Analyst Positions Available at STScI
The Space Telescope Science Institute currently has openings for
Data Analysts. Data Analysts in the Science Support Division help General
Observers and Archive Researchers analyze HST data, work with Instrument
Scientists in calibrating the HST instruments, and work with STScI staff
on grant-supported research projects. These research projects span a range
of size scales from comets and planets to the large scale structure of
the universe and a range of wavelengths from radio to X-ray astronomy.
Applicants should possess a B.S. degree (M.S. degree a plus) in astronomy
or physics, or equivalent; experience with astronomical research; familiarity
with scientific computing; expertise in data analysis; knowledge of IRAF, IDL
or other software packages for astronomical data analysis; and programming
ability. Additional mathematical, statistical, and computer skills are
desirable. Candidates should have the ability to work with a minimum of
direction, enjoy research, and possess skills to develop excellent working
relationships. Candidates should send a cover letter with current curriculum
vitae and the names of three references to:
Human Resources Manager
Space Telescope Science Institute
3700 San Martin Dr.
Baltimore, MD 21218
Women and minorities are strongly urged to apply. AAE/EOE.
APPENDIX: NICMOS Contacts
Any questions about the scheduling of your observations should be
addressed to your Program Coordinator. Post-Observation questions can
be addressed to your Contact Scientist. If you do not know who these
persons are, you can find the information on the WWW.
Analysis, STSDAS or any other HST-related questions can also be
addressed to email@example.com.
To subscribe or unsubscribe send a message to firstname.lastname@example.org with
the Subject: line blank and the following in the body:
[un]subscribe nicmos_news YOUR NAME
Comments, questions, suggestions, etc. can be e-mailed to email@example.com.
The Space Telescope Science Institute is operated by the Association of
Universities for Research in Astronomy, Inc., under NASA contract