
31.4 Details of the FOS Pipeline Process

This section describes in detail the STScI pipeline calibration or re-calibration (calfos) procedures. Each step of the processing is selected by the value of a keyword switch in the science data header file; a schematic sketch of this switch-driven dispatch follows the step list below. All FOS observations undergo pipeline processing to some extent: target acquisition and IMAGE mode data are processed only through step 6 (paired-pulse correction), skipping the GIM correction (step 5); ACCUM data are processed through step 14 (absolute flux calibration); and RAPID, PERIOD, and POLARIMETRY data are processed through step 15 (special mode processing). The steps in the FOS calibration process are:

  1. Read the raw data.
  2. Calculate statistical errors (ERR_CORR).
  3. Initialize data quality.
  4. Convert to count rates including dead diode correction (CNT_CORR).
  5. Perform GIM correction (OFF_CORR).
  6. Do paired-pulse correction (PPC_CORR).
  7. Subtract background (BAC_CORR).
  8. Subtract scattered light (SCT_CORR).
  9. Do flatfield correction (FLT_CORR).
  10. Subtract sky (SKY_CORR).
  11. Correct aperture throughput and focus effects (APR_CORR).
  12. Compute wavelengths (WAV_CORR).
  13. Correct time-dependent sensitivity variations (TIM_CORR).
  14. Perform absolute calibration (FLX_CORR); superseded by AIS_CORR.
  15. Do special mode processing (MOD_CORR) if RAPID, PERIOD, or spectropolarimetry.

These steps are described in detail in the following sections. A basic flowchart is provided in Figure 31.1.
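Conceptually, calfos runs each step only when the corresponding switch keyword is set to PERFORM. The following is a minimal, hypothetical sketch of that dispatch logic in Python; the switch names are the actual header keywords, but the step functions, header dictionary, and array size are illustrative stand-ins, not the real calfos code:

    import numpy as np

    # Illustrative stand-ins for the calibration steps (not the real routines).
    def err_corr(data): return data   # compute statistical errors
    def cnt_corr(data): return data   # convert to count rates
    def off_corr(data): return data   # GIM correction
    def ppc_corr(data): return data   # paired-pulse correction

    # Steps are attempted in pipeline order, each gated by its switch keyword.
    PIPELINE = [("ERR_CORR", err_corr), ("CNT_CORR", cnt_corr),
                ("OFF_CORR", off_corr), ("PPC_CORR", ppc_corr)]

    def run_pipeline(header, data):
        for switch, step in PIPELINE:
            if header.get(switch) == "PERFORM":   # any other value: step is skipped
                data = step(data)
        return data

    header = {"ERR_CORR": "PERFORM", "CNT_CORR": "PERFORM",
              "OFF_CORR": "OMIT", "PPC_CORR": "PERFORM"}
    result = run_pipeline(header, np.zeros(2064))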


Note: For non-polarimetry observations use only AIS_CORR; if both switches are set to PERFORM, AIS_CORR overrides FLX_CORR as a safeguard. For polarimetry use FLX_CORR, which overrides AIS_CORR should both be set to PERFORM.

31.4.1 Reading the Raw Data

The raw data, stored in the .d0h file, are the starting point of the pipeline data reduction and calibration process. The raw science data are read from the .d0h file and the initial data quality information is read from the .q0h file. If science trailer (.d1h) and trailer data quality (.q1h) files exist, these are also read at this time.

31.4.2 Calculating Statistical Errors (ERR_CORR)

The noise in the raw data is photon (Poisson) noise, and errors are estimated by simply calculating the square root of the raw counts per pixel. An error value of zero is assigned to filled data, i.e., pixels that have a data quality value of 800 (data quality values are described in Table 30.2). For all observing modes except polarimetry, an error value of zero is assigned to pixels that have zero raw counts. Polarimetry data that have zero raw counts are assigned an error value of one.
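These rules translate directly into array operations. A minimal numpy sketch, with illustrative array names (raw counts and data quality arrays of equal length):

    import numpy as np

    def initial_errors(raw_counts, dq, polarimetry=False):
        # Poisson estimate: square root of the raw counts per pixel.
        err = np.sqrt(raw_counts.astype(float))
        if polarimetry:
            err[raw_counts == 0] = 1.0   # polarimetry: zero counts -> error of one
        else:
            err[raw_counts == 0] = 0.0   # all other modes: zero counts -> zero error
        err[dq == 800] = 0.0             # filled data always receive zero error
        return err

    err = initial_errors(np.array([100, 0, 9]), np.array([0, 0, 800]))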

From this point on, the error data are processed in lock-step with the spectral data. Errors caused by sky and background subtraction, as well as those from flatfields and inverse sensitivity files, are not included in the error estimate. At the end of the processing, the calibrated error data will be written to the .c2h file.

31.4.3 Data Quality Initialization

The initial values of the data quality information are the data quality entries from the spacecraft as recorded in the .q0h file. This step of the processing adds values from the data quality reference files to the initial values in the .q0h file. The routine uses the data quality initialization reference file DQ1HFILE listed in the .d0h file. A second file, DQ2HFILE, is necessary for paired-aperture and spectropolarimetry observations. These reference files contain flags for intermittently noisy channels and dead channels (data quality values 170 and 160, respectively). The data quality values are carried along throughout the remaining processing steps, where subsequent routines add values corresponding to other problem conditions. Only the highest (most severe) data quality value is retained for each pixel. At the end of the processing the final data quality values are written to the .cqh file.
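Because only the most severe flag survives at each pixel, merging the telemetry flags with the reference-file flags reduces to an element-wise maximum. A minimal sketch with illustrative values:

    import numpy as np

    def merge_dq(q0h_dq, ref_dq):
        # Keep the highest (most severe) data quality value per pixel.
        return np.maximum(q0h_dq, ref_dq)

    merged = merge_dq(np.array([0, 800, 0, 100]),
                      np.array([170, 0, 160, 0]))   # -> [170, 800, 160, 100]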


The noisy and dead channels in the data quality files were often out of date, but the dead diode table (DDTHFILE) contains the most accurate list of dead and disabled diodes. Noisy diodes are not flagged in routine processing. Normally, after three reports of noisy activity an offending diode was disabled. As a result, diodes that had fewer than three reports as noisy are not flagged in the data quality file.

31.4.4 Conversion to Count Rates (CNT_CORR)

At this step, the raw counts per pixel are converted to count rates by dividing by the exposure time of each pixel. Filled data (data quality = 800) are set to zero. A correction for disabled diodes is also included at this point. If the keyword DEFDDTBL in the .d0h file is set to TRUE, the list of disabled diodes is read from the unique data log (.ulh) file. Otherwise the list is read from the disabled diode reference file, DDTHFILE, named in the .d0h file. In pipeline calibration the DDTHFILE was more commonly used for the disabled diode information.

For re-calibration purposes, DEFDDTBL should always be set to FALSE so that the FOS closeout calibration dead diode tables are used for the proper dead diode correction.

The actual process by which the correction for dead diodes is accomplished is as follows. First, recall that because of the use of the OVERSCAN function, each pixel in the observed spectrum actually contains contributions from several neighboring diodes (see "Data Acquisition Fundamentals" on page 29-15). Therefore, if one or more of the diodes that contributed to a given output pixel are dead or disabled, there will still be some signal due to the contribution of the remaining live diodes in the group. We can correct the observed signal in that pixel back to the level it would have had if all diodes were live by dividing by the relative fraction of live diodes. The corrected pixel value is zero if all the diodes that contribute to that pixel are dead or disabled; otherwise, the value is given by the equation:

\[ c'_i = c_i / (n_i / N_i) \]

Where:

  c'_i is the corrected count rate in pixel i,
  c_i is the observed count rate in pixel i,
  n_i is the number of live diodes that contributed to pixel i, and
  N_i is the total number of diodes (live plus dead or disabled) that contributed to pixel i.
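A minimal numpy sketch of this scaling, assuming a per-pixel array of live-diode fractions (n_i/N_i) has already been derived from the disabled diode list; the names are illustrative:

    import numpy as np

    def dead_diode_correct(count_rate, live_fraction):
        # Scale each pixel back to its all-diodes-live level; pixels whose
        # contributing diodes are all dead or disabled are set to zero.
        corrected = np.zeros_like(count_rate, dtype=float)
        live = live_fraction > 0
        corrected[live] = count_rate[live] / live_fraction[live]
        return corrected

    rate = np.array([10.0, 8.0, 0.0])
    frac = np.array([1.0, 0.8, 0.0])   # fraction of contributing diodes alive
    dead_diode_correct(rate, frac)     # -> [10., 10., 0.]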

31.4.5 GIM Correction (OFF_CORR)

Data obtained prior to April 5, 1993, do not have an onboard geomagnetically induced image motion (GIM) correction applied and therefore require a GIM correction in the pipeline calibration. Some observations obtained after April 5, 1993, also lack the onboard GIM correction, because its application depended on when the proposal was completely processed for scheduling on the spacecraft. The keywords YFGIMPEN and YFGIMPER indicate, respectively, whether the onboard GIM correction was enabled and whether any error occurred in its implementation. The GIM correction is determined by scaling a model of the strength of the geomagnetic field at the location of the spacecraft. The model scale factors are read from the CCS7 reference table. The correction is applied to the spectral data, the error data, and the data quality values.

A unique correction is determined for each data group based on the orbital position of the spacecraft at the mid-point of the observation time for each group. While the correction is calculated to sub-pixel accuracy, it is applied as an integer value and is therefore accurate only to the nearest integral pixel. This is done to avoid resampling the data in the calibration process. Furthermore, the pipeline correction (OFF_CORR) is applied only in the x-direction (i.e., the dispersion direction).

The correction is applied by simply shifting pixel values from one array location to another. For example, if the correction for a particular data group is calculated to be +2.38 pixels, the data point originally at pixel location 1 is shifted to pixel 3, pixel 2 to pixel 4, pixel 3 to pixel 5, and so on. Pixel locations at the ends of the array that are left vacant by this process (e.g., pixels 1 and 2 in the example above) are set to a value of zero and are assigned a data quality value of 700.
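A minimal sketch of this integral shift, including the data quality flagging of vacated end pixels (function and array names are illustrative):

    import numpy as np

    def gim_shift(data, dq, shift_pixels):
        # Round the computed correction to the nearest whole pixel.
        n = int(round(shift_pixels))              # e.g., +2.38 -> +2
        out, out_dq = np.zeros_like(data), np.zeros_like(dq)
        if n > 0:
            out[n:], out_dq[n:] = data[:-n], dq[:-n]
            out_dq[:n] = 700                      # vacated leading pixels
        elif n < 0:
            out[:n], out_dq[:n] = data[-n:], dq[-n:]
            out_dq[n:] = 700                      # vacated trailing pixels
        else:
            out[:], out_dq[:] = data, dq
        return out, out_dq

    shifted, dq = gim_shift(np.arange(1.0, 9.0), np.zeros(8, int), 2.38)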

Special handling is required for data obtained in ACCUM mode since each data frame contains the sum of all frames up to that point. In order to apply a unique correction to each frame, data taken in ACCUM mode are first unraveled into separate frames. Each frame is then corrected individually, and the corrected frames are recombined.
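A minimal sketch of the unravel-correct-recombine sequence for ACCUM data, assuming a (frames x pixels) array of cumulative readouts and an arbitrary per-frame correction function (names are illustrative):

    import numpy as np

    def correct_accum(cumulative, correct_frame):
        # Unravel cumulative readouts into individual frames...
        frames = np.diff(cumulative, axis=0,
                         prepend=np.zeros((1, cumulative.shape[1])))
        # ...correct each frame individually...
        corrected = np.stack([correct_frame(f, k)
                              for k, f in enumerate(frames)])
        # ...and re-accumulate.
        return np.cumsum(corrected, axis=0)

    acc = np.cumsum(np.ones((3, 8)), axis=0)    # fake cumulative readouts
    out = correct_accum(acc, lambda f, k: f)    # identity per-frame correction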


The pipeline GIM correction (OFF_CORR) is not applied to target acquisition, IMAGE mode, or polarimetry data. The correction can be applied to IMAGE mode spectral data by setting the header keyword OFF_CORR to PERFORM prior to running calfos.

The onboard GIM correction is applied on a much finer grid than integral pixels and is made within the FOS, so that data are recorded by the detector with the corrections already included. It is applied both along the direction of the diode array and in the perpendicular direction: in the x-direction in units of 1/32 of the diode width, and in the y-direction in units of 1/256 of the diode height.


The onboard GIM correction is calculated and applied every 30 seconds, and is applied to all observations except for ACQ/PEAK observations.

31.4.6 Paired Pulse Correction (PPC_CORR)

This step corrects the data for saturation in the detector electronics. The dead time constants q0, q1, and F are read from the reference table CCG2. The values of these dead time constants in the CCG2 table are q0 = 9.62e-6 seconds, q1 = 1.826e-10 sec²/count, and F = 52,000 counts per second. These quantities were determined in laboratory measurements prior to launch and were never modified (FOS ISRs 25 and 45). The following equation is used to estimate the true count rate:

\[ x = y / (1 - y\,\tau) \]

with τ = q_0 for y ≤ F, and τ = q_0 + q_1 (y - F) for y > F.

Where:

  y is the observed count rate in counts per second,
  x is the estimated true count rate, and
  τ is the effective dead time per detected event.
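A minimal numpy sketch of this correction using the CCG2 constants quoted above (names are illustrative):

    import numpy as np

    Q0, Q1, F = 9.62e-6, 1.826e-10, 5.2e4   # CCG2 dead time constants

    def paired_pulse_correct(y):
        # Effective dead time grows linearly above the threshold F.
        tau = np.where(y <= F, Q0, Q0 + Q1 * (y - F))
        return y / (1.0 - y * tau)           # estimated true count rate

    paired_pulse_correct(np.array([100.0, 1.0e4, 6.0e4]))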

31.4.7 Background Subtraction (BAC_CORR)

This step subtracts the background (i.e., the particle-induced dark count) from object and sky (if present) spectra. If no background spectrum was obtained with the observation (the situation for nearly all FOS exposures), a default background reference file, BACHFILE, is used, scaled to the mean count rate expected for the geomagnetic position of the spacecraft at the time of the observation. The scaling parameters are stored in the reference table CCS8. The scaled background reference spectrum is written to the .c7h file for later examination.

If an observed background was used (rarely the case), it is first repaired; bad points (i.e., points at which the data are flagged as lost or garbled in the telemetry process) are filled by linearly interpolating between good neighbors. Next, the background is smoothed with a median filter, followed by a mean filter. The median and mean filter widths are stored in reference table CCS3. No smoothing is done to the background reference file, if used, since the file is already a smoothed approximation to the background. Spectral data at pixel locations corresponding to repaired background data are assigned a data quality value of 120. Finally, the repaired background data are subtracted.
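A minimal sketch of the repair-and-smooth sequence for an observed background; the filter widths here are illustrative placeholders, not the CCS3 values:

    import numpy as np

    def repair(spec, bad):
        # Fill flagged points by linear interpolation between good neighbors.
        x = np.arange(spec.size)
        out = spec.copy()
        out[bad] = np.interp(x[bad], x[~bad], spec[~bad])
        return out

    def smooth(spec, med_width=7, mean_width=5):
        # Median filter first, then a boxcar (mean) filter.
        pad = med_width // 2
        padded = np.pad(spec, pad, mode="edge")
        med = np.array([np.median(padded[i:i + med_width])
                        for i in range(spec.size)])
        return np.convolve(med, np.ones(mean_width) / mean_width, mode="same")

    bad = np.zeros(512, dtype=bool)
    bkg = smooth(repair(np.random.poisson(5, 512).astype(float), bad))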


Although this is called background subtraction, it is really a dark count subtraction.

31.4.8 Scattered Light Correction (SCT_CORR)

Scattered light observed in FOS data is produced by the diffraction patterns of the FOS gratings, the entrance apertures, and the micro-roughness of the gratings.

The routine pipeline scattered light correction is applied only for those gratings that produce spectra in which the detector had regions of zero sensitivity to dispersed light, i.e., regions that were not fully illuminated by the spectrum (Table 31.6). The values listed in the table apply to spectra with FCHNL=0, NCHNLS=512, NXSTEPS=4, and OVERSCAN=5, i.e., the default FOS observing mode. For the listed combinations, these dark diodes can be used to estimate the scattered light illuminating all of the diodes: the average count rate for these diodes is determined and subtracted from the whole data array, including the dark pixels. If the dark pixels were excluded from readout by the use of a restricted wavelength range, no scattered light correction is made.

The correction applied in this way is only a wavelength-independent, first-order approximation. The calfos task reports via standard output whether it performs this step, along with the subtracted value; the group parameter SCT_VAL gives the value subtracted from each group. This information is also provided in the paper products and, for datasets processed in the pipeline, in the trailer file.

For details of the correction please see FOS ISR 103. Note that the scattered light correction is in addition to the background subtraction. Effectively, the scattered light correction serves to remove residual background, as well.


Table 31.6: Regions Used for Scattered Light Subtraction

  Detector   Grating   Minimum Pixel Number   Maximum Pixel Number   Total Pixels
  Blue       G130H                       31                    130            100
  Blue       G160L                      901                   1200            300
  Blue       Prism                     1861                   2060            200
  Red        G190H                     2041                   2060             20
  Red        G780H                       11                    150            140
  Red        G160L                      601                    900            300
  Red        G650L                     1101                   1200            100
  Red        Prism                        1                    900            900
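Using the Blue G130H row of Table 31.6 as an example, the first-order correction amounts to subtracting the mean count rate of the dark pixels from the entire array. A minimal sketch (the table limits are 1-indexed and inclusive; names are illustrative):

    import numpy as np

    def scattered_light_correct(spec, dark_min=31, dark_max=130):
        # Mean count rate over the unilluminated (dark) pixel region.
        sct_val = spec[dark_min - 1:dark_max].mean()
        # Subtract from the whole array, dark pixels included.
        return spec - sct_val, sct_val       # corrected spectrum, SCT_VAL

    spec = np.random.poisson(20, 2064).astype(float)
    corrected, sct_val = scattered_light_correct(spec)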

Since the scattered light characteristics of the FOS are now well understood, a scattered light model is available at STScI as a post-observation parametric analysis tool (bspec) in STSDAS to estimate the amount of scattered light affecting a given observation (see FOS ISR 127). The amount of scattered light depends on the spectral energy distribution of the observed object across the whole detector wavelength range and on the sensitivity of the detector. For cool objects the number of scattered light photons can dominate the dispersed spectrum in the UV; thus, to model FOS scattered light appropriately, the red part of the source spectrum must be very well known. For an atlas of predicted scattered light as a function of object type and FOS disperser, and for additional guidelines on modeling FOS grating scatter with bspec, see FOS ISR 151.

31.4.9 Flatfield Correction (FLT_CORR)

This step removes the diode-to-diode sensitivity variations and fine structure (typically on size scales of ten diodes or less) from the object, error, and sky spectra by multiplying each by the inverse flatfield response as stored in the FL1HFILE reference file. A second flatfield file, FL2HFILE, is required for paired aperture or spectropolarimetry observations. No new data quality values are assigned in this step.

Different locations on the FOS photocathodes displayed quite different flatfield characteristics so that FOS flats were aperture-dependent. Additionally, FOS flatfields for some dispersers were quite time-variable. Care must be taken to use the correct flatfield reference file for the date and aperture of observation.

31.4.10 Sky Subtraction (SKY_CORR)

If the sky was observed, the flatfielded sky spectrum is repaired in the same fashion as described above for an observed background spectrum. The spectrum is then smoothed once with a median filter and twice with a mean filter, except in regions of known emission lines, which are masked out. The CCS2 reference table contains the pairs of starting and ending pixel positions for masking the sky emission lines. The sky spectrum is then scaled by the ratio of the object and sky aperture areas, and then shifted in pixel space (to the nearest integer pixel) so that the wavelength scales of the object and sky spectra match. The sky spectrum is then subtracted from the object spectra and the resulting sky-subtracted object spectrum is written to the .c8h file. Pixel locations in the sky-subtracted object spectrum that correspond to repaired locations in the sky spectrum are assigned a data quality value of 120.

This routine requires table CCS3 containing the filter widths, the aperture size table CCS0, the emission line position table CCS2, and the sky shift table CCS5.
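A minimal sketch of the final scaling, alignment, and subtraction, assuming the repair, smoothing, and emission-line masking have already been done; the aperture areas and shift are stand-ins for the CCS0 and CCS5 values:

    import numpy as np

    def subtract_sky(obj, sky, obj_area, sky_area, shift_pixels):
        # Scale the sky by the ratio of object to sky aperture areas.
        scaled = sky * (obj_area / sky_area)
        # Align the wavelength scales to the nearest integer pixel.
        n = int(round(shift_pixels))
        aligned = np.zeros_like(scaled)
        if n >= 0:
            aligned[n:] = scaled[:scaled.size - n]
        else:
            aligned[:n] = scaled[-n:]
        return obj - aligned

    obj, sky = np.ones(2064), 0.1 * np.ones(2064)
    result = subtract_sky(obj, sky, obj_area=1.0, sky_area=1.0, shift_pixels=3.0)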


For OBJ-SKY (or STAR-SKY) observations, half the integration time is spent on the sky. The only science observations made of the sky were taken by mistake and were not required for proposal science. Additionally, the CCS2 table values were never confirmed.

Note that, especially for extended objects, paired aperture observations could be obtained in the so-called "OBJ-OBJ" mode, in which no sky subtractions were performed (see "Paired Aperture Calibration Anomaly" on page 31-25).

31.4.11 Computing the Wavelength Scale (WAV_CORR)

A vacuum wavelength scale is computed for each object or sky spectrum. Wavelengths are computed using dispersion coefficients, stored in reference table CCS6, corresponding to each grating and aperture combination. Corrections for telescope motion or motion of the Earth are not made in the standard pipeline calibration. The computed wavelength array is written to the .c0h file.

For the gratings, the wavelengths are computed as a linear function of diode position:

\[ \lambda(x) = a_0 + a_1 x \]

For the prism, wavelengths are computed as a polynomial in inverse powers of the offset from a reference diode position:

\[ \lambda(x) = a_0 + a_1/(x - x_0) + a_2/(x - x_0)^2 + \cdots \]

Where:

  λ(x) is the vacuum wavelength at diode position x, and the coefficients a_k and the reference position x_0 are read from the CCS6 reference table for each disperser and aperture combination.


Note that the above equations determine the wavelength at each diode position. Diode positions must be converted to pixel positions using NXSTEPS: for example, if NXSTEPS=4, the values of x are 0, 0.25, 0.5, 0.75, 1, etc., for pixels 1, 2, 3, 4, 5, etc.
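A minimal sketch of the diode-to-pixel conversion and a grating (linear) wavelength evaluation; the dispersion coefficients are placeholders, not CCS6 values:

    import numpy as np

    NXSTEPS = 4
    pixels = np.arange(2064)        # 0-indexed; handbook pixel 1 is index 0
    x = pixels / NXSTEPS            # diode positions: 0, 0.25, 0.5, 0.75, 1, ...

    a0, a1 = 1150.0, 1.0            # placeholder dispersion coefficients
    wavelengths = a0 + a1 * x       # grating case: linear in diode position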

For multigroup data, as in either rapid-readout or spectropolarimetry mode, there are separate wavelength calculations for each group. These wavelengths may be identical or slightly offset, depending on the observation mode.

31.4.12 Aperture Throughput Correction (APR_CORR)

This calibration step consists of two parts: normalizing throughputs to a reference aperture and correcting throughputs for focus changes. Both parts are relevant only if the average inverse sensitivity files are used (see AIS_CORR in the next sub-section).

Each aperture affected the throughput of light onto the photocathode. To prepare the object data for absolute flux calibration, they must be normalized to the throughput that would be seen through a pre-determined reference aperture (the 4.3 aperture is always used). The normalization is calibrated as a second-order polynomial in wavelength, which is evaluated over the object wavelength range and divided into the object data. The coefficients are found in the CCSB reference table.
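A minimal sketch of the normalization: evaluate the second-order polynomial over the object wavelengths and divide it into the data. The coefficients are placeholders, not CCSB values:

    import numpy as np

    def aperture_normalize(flux, wavelengths, c=(1.02, -3.0e-6, 1.0e-10)):
        # Second-order polynomial in wavelength mapping the observed
        # aperture onto the 4.3 reference aperture throughput.
        throughput = c[0] + c[1] * wavelengths + c[2] * wavelengths**2
        return flux / throughput

    wl = np.linspace(1150.0, 1600.0, 2064)
    normalized = aperture_normalize(np.ones(2064), wl)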

Once the object data have been normalized, the throughput is compensated for variations in sensitivity due to focus changes. The CCSA table contains a list of dates and focus values. The sensitivity variation is modeled as a function of wavelength and focus, the coefficients of which are found in the CCSC table. This model is evaluated and divided into the object data. (Although the post-COSTAR focus-dependent corrections are unity, this step must still be performed for proper calibration.)

31.4.13 Absolute Flux Calibration (AIS_CORR and FLX_CORR)

This step multiplies object (and error) spectra by the appropriate inverse sensitivity vector to convert from count rates per pixel to absolute flux units (erg s⁻¹ cm⁻² Å⁻¹). Two different methods of performing this calibration were used. The pipeline used the so-called FLX_CORR method from the time of HST launch until March 19, 1996, when the processing method for non-polarimetric observations was changed to AIS_CORR. AIS_CORR calibration files are available for all FOS observing epochs, and AIS_CORR is the only recommended method for the flux calibration (or re-calibration) of non-polarimetric FOS observations. Spectropolarimetric observations, on the other hand, will continue to be processed via the FLX_CORR method.

AIS_CORR: This step is functionally no different from FLX_CORR except in the way the absolute flux calibration is derived. The calibration is based on data from all calibration observation epochs: an average sensitivity function for the entire pre- or post-COSTAR period for the 4.3 reference aperture is contained in the AISHFILE reference file for each combination of detector and disperser. As necessary, TIM_CORR factors (see the following sub-section) correct the sensitivity to the date of observation, and APR_CORR factors (see the preceding sub-section) correct for the throughput of the aperture employed.

FLX_CORR: Now used for spectropolarimetry flux calibration only. The inverse sensitivity data are read from the IV1HFILE reference file. A second inverse sensitivity file, IV2HFILE, is required for paired-aperture or spectropolarimetry observations. Individual reference files are required for every combination of detector, disperser, and aperture. Time dependencies are, in principle, tracked via multiple reference files with different USEAFTER dates.

For both flux calibration methods, points where the inverse sensitivity is zero (i.e., not defined) are flagged with a data quality value of 200. The calibrated spectral data are written to the .c1h file, and the calibrated error data are written to the .c2h file. The final data quality values are written to the .cqh file.

31.4.14 Time Correction (TIM_CORR)

This step corrects the absolute flux for variations in the sensitivity of the instrument over time and is an important component of the AIS_CORR flux calibration. The correction factor is a function of time and wavelength; it is calculated by linear interpolation at the observation time over the observed wavelength coverage and is divided into the object absolute flux. The coefficients are found in table CCSD. TIM_CORR is used only with AIS_CORR calibration.
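A minimal sketch of the time correction, linearly interpolating a tabulated sensitivity factor to the observation date and dividing it into the flux; the grid values are placeholders for the CCSD contents:

    import numpy as np

    # Placeholder grid: sensitivity factors versus time (MJD) at one wavelength.
    mjd_grid = np.array([49000.0, 49500.0, 50000.0])
    factor_grid = np.array([1.00, 0.97, 0.94])

    def tim_correct(flux, obs_mjd):
        # Interpolate the factor to the observation time, divide into the flux.
        factor = np.interp(obs_mjd, mjd_grid, factor_grid)
        return flux / factor

    corrected = tim_correct(np.ones(2064), obs_mjd=49750.0)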

This is the final step of processing for ACCUM mode observations.

31.4.15 Special Mode Processing (MOD_CORR)

Data acquired in the rapid-readout, time-resolved, or spectropolarimetry modes receive specialized processing in this step. All data resulting from this additional processing are stored in the .c3h file. See the discussions of output data products for each of these modes on pages 30-40, 30-42, and 30-46.

RAPID Mode: For RAPID mode, the total flux, summed over all pixels, is computed for each readout; the statistical errors for each frame are also propagated as a quadrature sum. The following equations are used in the computation:

\[ F_k = \sum_i f_{i,k}, \qquad \sigma_k = \left( \sum_i \epsilon_{i,k}^2 \right)^{1/2} \]

Where:

  f_{i,k} is the calibrated flux in pixel i of readout k,
  ε_{i,k} is its statistical error, and
  F_k and σ_k are the total flux and total error for readout k.
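A minimal numpy sketch of these per-readout totals for a (readouts x pixels) pair of flux and error arrays (names are illustrative):

    import numpy as np

    def rapid_totals(flux, err):
        total_flux = flux.sum(axis=1)               # sum over all pixels
        total_err = np.sqrt((err**2).sum(axis=1))   # quadrature error sum
        return total_flux, total_err

    flux = np.ones((10, 2064))        # 10 readouts of 2064 pixels each
    err = 0.1 * np.ones_like(flux)
    totals, errors = rapid_totals(flux, err)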


