Note: For non-polarimetry cases use only AIS_CORR; if both are set to PERFORM, AIS_CORR overrides FLX_CORR as a safeguard. For polarimetry use FLX_CORR; here it will override AIS_CORR should both be set to PERFORM.
From this point on, the error data are processed in lock-step with the spectral data. Errors caused by sky and background subtraction, as well as those from flatfields and inverse sensitivity files, are not included in the error estimate. At the end of the processing, the calibrated error data will be written to the .c2h file.
31.4.3 Data Quality Initialization
The initial values of the data quality information are the data quality entries from the spacecraft as recorded in the .q0h file. This step of the processing adds values from the data quality reference files to the initial values in the .q0h file. The routine uses the data quality initialization reference file DQ1HFILE listed in the .d0h file. A second file, DQ2HFILE, is necessary for paired-aperture and spectropolarimetry observations. These reference files contain flags for intermittent noisy and dead channels (data quality values 170 and 160, respectively). The data quality values are carried along throughout the remaining processing steps where subsequent routines add values corresponding to other problem conditions. Only the highest (most severe) data quality value is retained for each pixel. At the end of the processing the final data quality values are written to the .cqh file.
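The "keep only the most severe value" rule can be sketched as follows. This is an illustrative sketch, not the calfos source; the function name and arrays are invented, and the DQ values follow the text (160 = dead channel, 170 = noisy channel).

```python
import numpy as np

def combine_dq(current_dq, new_dq):
    """Keep only the highest (most severe) DQ value per pixel."""
    return np.maximum(current_dq, new_dq)

initial = np.array([0, 0, 160, 0])       # initial values from the .q0h file
from_ref = np.array([170, 0, 0, 160])    # flags from DQ1HFILE/DQ2HFILE
final = combine_dq(initial, from_ref)
print(final.tolist())  # [170, 0, 160, 160]
```

Each later processing step can call the same per-pixel maximum, so a pixel's final DQ in the .cqh file is the most severe condition it encountered anywhere in the pipeline.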
The noisy and dead channels in the data quality files were often out of date; the dead diode table (DDTHFILE) contains the most accurate list of dead and disabled diodes. Noisy diodes are not flagged in routine processing. Normally, an offending diode was disabled after three reports of noisy activity; as a result, diodes with fewer than three noisy reports are not flagged in the data quality file.
31.4.4 Conversion to Count Rates (CNT_CORR)
At this step, the raw counts per pixel are converted to count rates by dividing by the exposure time of each pixel. Filled data (data quality = 800) are set to zero. A correction for disabled diodes is also included at this point. If the keyword DEFDDTBL in the .d0h file is set to TRUE, the list of disabled diodes is read from the unique data log (.ulh) file. Otherwise the list is read from the disabled diode reference file, DDTHFILE, named in the .d0h file. In pipeline calibration the DDTHFILE was more commonly used for the disabled diode information.
The count rate spectral data are written to the .c4h file at this point. Note that the S/N and the computed statistical errors in a given pixel are appropriate to the actually observed, not the corrected, count rate.
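The conversion can be sketched as below. This is a minimal illustration of the step as described, not the calfos implementation; the function name and arrays are invented.

```python
import numpy as np

def counts_to_rates(counts, exptime, dq):
    """Convert raw counts to count rates; zero out filled data (DQ = 800)."""
    rates = counts / exptime        # divide by each pixel's exposure time
    rates[dq == 800] = 0.0          # filled data are set to zero
    return rates

counts = np.array([100.0, 250.0, 50.0])
exptime = np.array([10.0, 10.0, 5.0])   # per-pixel exposure times (seconds)
dq = np.array([0, 800, 0])
print(counts_to_rates(counts, exptime, dq).tolist())  # [10.0, 0.0, 10.0]
```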
A unique correction is determined for each data group based on the orbital position of the spacecraft at the mid-point of the observation time for each group. While the correction is calculated to sub-pixel accuracy, it is applied as an integer value and is therefore accurate only to the nearest integral pixel. This is done to avoid resampling the data in the calibration process. Furthermore, the pipeline correction (OFF_CORR) is applied only in the x-direction (i.e., the dispersion direction).
The correction is applied by simply shifting pixel values from one array location to another. As a typical example, if the amount of the correction for a particular data group is calculated to be +2.38 pixels, the data point originally at pixel location 1 is shifted to pixel 3, pixel 2 shifted to pixel 4, pixel 3 to pixel 5, and so on. Pixel locations at the ends of the array that are left vacant by this process (e.g., pixels 1 and 2 in the example above) are set to a value of zero and are assigned a data quality value of 700.
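The integer-pixel shift can be sketched as follows, using the +2.38 pixel example from the text. This is an illustration under the stated behavior (shift to the nearest whole pixel, vacated end pixels zeroed and flagged DQ = 700), not the calfos source.

```python
import numpy as np

def gim_shift(data, dq, correction):
    """Shift pixel values by the nearest-integer GIM correction."""
    shift = int(round(correction))       # applied only to the nearest pixel
    out = np.zeros_like(data)            # vacated pixels are set to zero...
    out_dq = np.full_like(dq, 700)       # ...and assigned DQ 700
    if shift >= 0:
        out[shift:] = data[:data.size - shift]
        out_dq[shift:] = dq[:dq.size - shift]
    else:
        out[:shift] = data[-shift:]
        out_dq[:shift] = dq[-shift:]
    return out, out_dq

data = np.array([5.0, 6.0, 7.0, 8.0, 9.0])
dq = np.zeros(5, dtype=int)
shifted, new_dq = gim_shift(data, dq, 2.38)
print(shifted.tolist())  # [0.0, 0.0, 5.0, 6.0, 7.0]
print(new_dq.tolist())   # [700, 700, 0, 0, 0]
```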
Special handling is required for data obtained in ACCUM mode since each data frame contains the sum of all frames up to that point. In order to apply a unique correction to each frame, data taken in ACCUM mode are first unraveled into separate frames. Each frame is then corrected individually, and the corrected frames are recombined.
The pipeline processing GIM correction (OFF_CORR) is not applied to target acquisition data, image mode data, and polarimetry data. The correction can be applied to IMAGE mode spectral data by setting header keyword OFF_CORR to PERFORM prior to running calfos.
The onboard GIM correction is calculated and applied every 30 seconds for all observations except ACQ/PEAK observations.
If an observed background was used (rarely the case), it is first repaired; bad points (i.e., points at which the data are flagged as lost or garbled in the telemetry process) are filled by linearly interpolating between good neighbors. Next, the background is smoothed with a median filter, followed by a mean filter. The median and mean filter widths are stored in reference table CCS3. No smoothing is done to the background reference file, if used, since the file is already a smoothed approximation to the background. Spectral data at pixel locations corresponding to repaired background data are assigned a data quality value of 120. Finally, the repaired background data are subtracted.
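The repair-and-smooth sequence can be sketched as below. This is a hedged illustration, not the calfos implementation: the function name is invented, and the filter width (here 3) stands in for the values that would come from the CCS3 reference table.

```python
import numpy as np

def repair_and_smooth(bkg, bad_mask, width=3):
    """Interpolate over bad points, then apply a median and a mean filter."""
    x = np.arange(bkg.size)
    good = ~bad_mask
    repaired = bkg.copy()
    # fill bad points by linear interpolation between good neighbors
    repaired[bad_mask] = np.interp(x[bad_mask], x[good], bkg[good])
    pad = width // 2
    # median filter
    padded = np.pad(repaired, pad, mode='edge')
    med = np.array([np.median(padded[i:i + width]) for i in range(repaired.size)])
    # mean filter
    padded = np.pad(med, pad, mode='edge')
    return np.array([padded[i:i + width].mean() for i in range(med.size)])

bkg = np.array([1.0, 1.0, 9.0, 1.0, 1.0])   # one garbled telemetry point
bad = np.array([False, False, True, False, False])
print(repair_and_smooth(bkg, bad).tolist())  # [1.0, 1.0, 1.0, 1.0, 1.0]
```

In the real pipeline, pixels at repaired locations would additionally be flagged with DQ = 120 in the spectral data.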
Although this step is called background subtraction, it is really a dark-count subtraction.
The pipeline scattered light correction is applied only for those gratings that produce spectra in which the detector had regions of zero sensitivity to dispersed light or that did not fully illuminate all the science diodes (Table 31.6). The values listed in the table apply to spectra with FCHNL=0, NCHNLS=512, NXSTEPS=4, and OVERSCAN=5, i.e., the default FOS observing mode. For the listed combinations, these dark diodes can be used to estimate the scattered light illuminating all of the diodes: the average count rate over the dark diodes is determined and subtracted from the whole data array, including the dark pixels themselves. If the dark pixels were excluded from readout by the use of a restricted wavelength range, no scattered light correction is made.
The correction applied in this way is only a wavelength-independent first-order approximation. The calfos task reports (via the standard output) whether it performs this step, along with the subtracted value. Group parameter SCT_VAL gives the value subtracted from each group. This information is also provided in the paper products and, if you have a dataset from the pipeline, is in the trailer file.
Table 31.6: Pixel regions with zero sensitivity to dispersed light.

| Detector | Grating | Minimum Pixel Number | Maximum Pixel Number | Total Pixels |
|---|---|---|---|---|
| Blue | G130H | 31 | 130 | 100 |
| Blue | G160L | 901 | 1200 | 300 |
| Blue | Prism | 1861 | 2060 | 200 |
| Red | G190H | 2041 | 2060 | 20 |
| Red | G780H | 11 | 150 | 140 |
| Red | G160L | 601 | 900 | 300 |
| Red | G650L | 1101 | 1200 | 100 |
| Red | Prism | 1 | 900 | 900 |
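The first-order correction can be sketched as below, averaging the count rate over a dark-pixel range like those in Table 31.6 and subtracting it everywhere. This is an illustration, not the calfos source; the function name is invented, and the toy pixel range does not correspond to a real disperser.

```python
import numpy as np

def subtract_scattered_light(data, dark_min, dark_max):
    """Subtract the mean dark-diode count rate from the whole array."""
    dark = data[dark_min - 1:dark_max]   # table uses 1-based pixel numbers
    sct_val = dark.mean()                # reported as group parameter SCT_VAL
    return data - sct_val, sct_val

data = np.array([10.0, 10.0, 2.0, 2.0, 10.0, 10.0])
corrected, sct_val = subtract_scattered_light(data, 3, 4)
print(sct_val)               # 2.0
print(corrected.tolist())    # [8.0, 8.0, 0.0, 0.0, 8.0, 8.0]
```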
Since the scattered light characteristics of the FOS are now well understood, a scattered light model is available at STScI. It is available for use as a post-observation parametric analysis tool (bspec) in STSDAS to estimate the amount of scattered light affecting a given observation (see FOS ISR 127). The amount of scattered light depends on the spectral energy distribution across the whole detector wavelength range of the object being observed and on the sensitivity of the detector. For cool objects the number of scattered light photons can dominate the dispersed spectrum in the UV. Thus, in order to model the scattered light in the FOS appropriately, the red part of the source spectrum has to be very well known. For an atlas of predicted scattered light as a function of object type and FOS disperser and additional guidelines for modeling FOS grating scatter with bspec, see FOS ISR 151.
31.4.9 Flatfield Correction (FLT_CORR)
This step removes the diode-to-diode sensitivity variations and fine structure (typically on size scales of ten diodes or less) from the object, error, and sky spectra by multiplying each by the inverse flatfield response as stored in the FL1HFILE reference file. A second flatfield file, FL2HFILE, is required for paired-aperture or spectropolarimetry observations. No new data quality values are assigned in this step.

31.4.10 Sky Subtraction (SKY_CORR)
If the sky was observed, the flatfielded sky spectrum is repaired in the same fashion as described above for an observed background spectrum. The spectrum is then smoothed once with a median filter and twice with a mean filter, except in regions of known emission lines, which are masked out. The CCS2 reference table contains the pairs of starting and ending pixel positions for masking the sky emission lines. The sky spectrum is then scaled by the ratio of the object and sky aperture areas, and then shifted in pixel space (to the nearest integer pixel) so that the wavelength scales of the object and sky spectra match. The sky spectrum is then subtracted from the object spectra and the resulting sky-subtracted object spectrum is written to the .c8h file. Pixel locations in the sky-subtracted object spectrum that correspond to repaired locations in the sky spectrum are assigned a data quality value of 120.
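The scale-shift-subtract sequence can be sketched as follows. This is a hedged illustration of the step as described, not the calfos implementation; note that `np.roll` wraps values around the array ends, whereas the real pipeline handles unmatched end pixels differently, so the wrap is acceptable only for this toy example.

```python
import numpy as np

def subtract_sky(obj, sky, obj_area, sky_area, pixel_offset):
    """Scale sky by aperture-area ratio, align to nearest pixel, subtract."""
    scaled = sky * (obj_area / sky_area)
    shifted = np.roll(scaled, int(round(pixel_offset)))  # nearest integer pixel
    return obj - shifted

obj = np.array([10.0, 10.0, 10.0, 10.0])
sky = np.array([2.0, 2.0, 2.0, 2.0])
print(subtract_sky(obj, sky, 1.0, 1.0, 0.3).tolist())  # [8.0, 8.0, 8.0, 8.0]
```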
For OBJ-SKY (or STAR-SKY) observations, half the integration time is spent on the sky. The only science observations made of the sky were taken by mistake and were not required for proposal science. Additionally, the CCS2 table values were never confirmed.
Note that, especially for extended objects, paired-aperture observations could be obtained in the so-called "OBJ-OBJ" mode, in which no sky subtractions were performed (see "Paired Aperture Calibration Anomaly" on page 31-25).
31.4.11 Computing the Wavelength Scale (WAV_CORR)
A vacuum wavelength scale at all wavelengths is computed for each object or sky spectrum. Wavelengths are computed using dispersion coefficients, corresponding to each grating and aperture combination, stored in reference table CCS6. Corrections for telescope motion or motion of the Earth are not made in the standard pipeline calibration. The computed wavelength array is written to the .c0h file.
Note that the dispersion relation determines the wavelength at each diode position x. This must be converted to pixels using NXSTEPS. For example, if NXSTEPS=4, the values for x are given as 0, 0.25, 0.5, 0.75, 1, etc., for pixels 1, 2, 3, 4, 5, etc.
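The diode-to-pixel conversion can be sketched as below. The conversion itself follows the text; the dispersion polynomial and its coefficients are invented for illustration and are not real CCS6 values.

```python
import numpy as np

def diode_coordinate(pixel, nxsteps):
    """Map a 1-based pixel number to its diode coordinate x."""
    return (pixel - 1) / nxsteps

nxsteps = 4
pixels = np.arange(1, 6)
x = diode_coordinate(pixels, nxsteps)
print(x.tolist())  # [0.0, 0.25, 0.5, 0.75, 1.0]

# hypothetical dispersion coefficients (NOT real CCS6 values)
coeffs = [1150.0, 1.0]                 # lambda = 1150.0 + 1.0 * x
wavelengths = np.polyval(coeffs[::-1], x)
print(wavelengths.tolist())  # [1150.0, 1150.25, 1150.5, 1150.75, 1151.0]
```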
Each aperture affected the throughput of light onto the photocathode. To prepare the object data for absolute flux calibration, the object data must be normalized to the throughput as would be seen through a pre-determined reference aperture (the 4.3 aperture is always used). The normalization is calibrated as a second-order polynomial and is a function of wavelength. The polynomial is evaluated over the object wavelength range and divided into the object data. The coefficients are found in the CCSB reference table.
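The normalization can be sketched as follows. This is an illustration of the described step, not the calfos source; the polynomial coefficients are invented stand-ins for the CCSB table values.

```python
import numpy as np

def normalize_to_reference_aperture(flux, wavelengths, c0, c1, c2):
    """Divide out a second-order polynomial throughput model in wavelength."""
    throughput = c0 + c1 * wavelengths + c2 * wavelengths ** 2
    return flux / throughput

wl = np.array([1000.0, 2000.0])
flux = np.array([10.0, 10.0])
# toy coefficients: a constant 50% throughput relative to the 4.3 aperture
norm = normalize_to_reference_aperture(flux, wl, 0.5, 0.0, 0.0)
print(norm.tolist())  # [20.0, 20.0]
```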
Once the object data has been normalized, the throughput is compensated for variations in sensitivity due to focus changes. The CCSA table contains a list of dates and focus values. The sensitivity variation is modeled as a function of wavelength and focus, the coefficients of which are found in the CCSC table. This model is evaluated and divided into the object data. (Although post-COSTAR focus-dependent corrections are unity, this step still must be performed for proper calibration).
AIS_CORR: This step is functionally no different than FLX_CORR except for the way in which absolute flux is calibrated. The absolute flux calibration is based on data from all calibration observation epochs. An average sensitivity function for the entire pre- or post-COSTAR period for the 4.3 reference aperture is contained in the AISHFILE reference file for each combination of detector and disperser. As necessary, TIM_CORR factors (see following sub-section) correct the sensitivity to the date of observation and APR_CORR factors (see preceding sub-section) correct for the throughput of the aperture employed. The calibrated spectral data are written to the .c1h file, and the calibrated error data are written to the .c2h file. The final data quality values are written to the .cqh file.
FLX_CORR: Now used for spectropolarimetry flux calibration only. The inverse sensitivity data are read from the IV1HFILE reference file. A second inverse sensitivity file, IV2HFILE, is required for paired-aperture or spectropolarimetry observations. Individual reference files are required for every combination of detector, disperser, and aperture. Time-dependencies are, in principle, tracked via multiple reference files with different USEAFTER dates.
For both flux calibration methods, points where the inverse sensitivity is zero (i.e., not defined) are flagged with a data quality value of 200. The calibrated spectral data are written to the .c1h file, and the calibrated error data are written to the .c2h file. The final data quality values are written to the .cqh file.
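The common final operation of both paths can be sketched as below: multiply by the inverse sensitivity and flag undefined points. This is an illustration, not the calfos implementation; names and values are invented.

```python
import numpy as np

def apply_inverse_sensitivity(rates, inv_sens, dq):
    """Calibrate count rates to flux; flag zero inverse sensitivity (DQ 200)."""
    flux = rates * inv_sens
    dq = np.maximum(dq, np.where(inv_sens == 0, 200, 0))
    return flux, dq

rates = np.array([100.0, 50.0, 80.0])
ivs = np.array([1e-15, 0.0, 2e-15])     # toy inverse sensitivity values
flux, dq = apply_inverse_sensitivity(rates, ivs, np.zeros(3, dtype=int))
print(dq.tolist())  # [0, 200, 0]
```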
This is the final step of processing for ACCUM mode observations.
RAPID Mode: For RAPID mode, the total flux, integrated over all pixels, is computed for each readout, and the statistical errors for each frame are summed in quadrature.
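The RAPID-mode summary quantities can be sketched as follows. This is an illustration of the description above, not the calfos source.

```python
import numpy as np

def rapid_totals(flux_frame, err_frame):
    """Total flux over all pixels, and errors summed in quadrature."""
    total_flux = flux_frame.sum()
    total_err = np.sqrt((err_frame ** 2).sum())
    return total_flux, total_err

flux = np.array([1.0, 2.0, 3.0])
err = np.array([3.0, 4.0, 0.0])
t, e = rapid_totals(flux, err)
print(float(t), float(e))  # 6.0 5.0
```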
POLARIMETRY Mode: For the POLARIMETRY mode, the data from individual waveplate positions are combined to calculate the Stokes I, Q, U, and V parameters, as well as the linear and circular polarizations and polarization position angle spectra (for details of calculating the Stokes parameters see FOS ISR 078). Four sets of Stokes parameter and polarization spectra are computed. The first two sets are for each of the separate pass directions, the third for the combined pass direction data, and the fourth for the combined data corrected for interference and instrumental orientation.
PERIOD Mode: For PERIOD mode, the pixel-by-pixel average of all slices (NSLICES separate memory locations) is computed, along with the differences from the average for each slice of the last frame.
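The PERIOD-mode reduction can be sketched as below. This is an illustration of the description above, not the calfos source.

```python
import numpy as np

def period_average_and_diffs(slices):
    """Pixel-by-pixel average over slices, and each slice's difference."""
    avg = slices.mean(axis=0)    # average over NSLICES memory locations
    diffs = slices - avg         # per-slice differences from the average
    return avg, diffs

slices = np.array([[1.0, 2.0],   # NSLICES = 2 slices of 2 pixels each
                   [3.0, 4.0]])
avg, diffs = period_average_and_diffs(slices)
print(avg.tolist())    # [2.0, 3.0]
print(diffs.tolist())  # [[-1.0, -1.0], [1.0, 1.0]]
```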
stevens@stsci.edu Copyright © 1997, Association of Universities for Research in Astronomy. All rights reserved. Last updated: 01/14/98 14:47:13