
37.2 Spectrum Level and the Sensitivity Function

The sensitivity functions quantify the relationship between the flux observed from a point source and the count rate detected by the GHRS. For calibration purposes the fluxes of the reference stars are expressed in the cgs units traditional to astronomy: erg cm⁻² s⁻¹ Å⁻¹. Raw GHRS data have units of counts per diode per second. The sensitivity functions are simply the ratio of these quantities, with no other constants, scale factors, or transformations included. The post-COSTAR sensitivity functions for all GHRS gratings are listed in Chapter 8 of the GHRS Instrument Handbook¹. (See GHRS ISR 060 for the pre-COSTAR sensitivity functions.) For planning purposes, a known or estimated flux can be multiplied by the sensitivity to estimate the GHRS count rate for a particular grating. During data reduction, an observed count rate can be calibrated into flux units by setting the FLX_CORR switch to PERFORM in the .d0h header, which divides the data by the sensitivity function.
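The two uses of the sensitivity function described above can be sketched as follows. This is a minimal illustration, assuming (as in the Handbook tables) that "sensitivity" is tabulated as count rate per unit flux; the numerical values are illustrative, not actual GHRS sensitivities.

```python
# Hypothetical sketch of the flux <-> count rate relation; all numbers
# are illustrative placeholders, not real GHRS calibration values.

def predicted_count_rate(flux, sensitivity):
    """Planning: flux (erg cm^-2 s^-1 A^-1) times sensitivity gives the
    expected count rate (counts s^-1 diode^-1)."""
    return flux * sensitivity

def calibrated_flux(count_rate, sensitivity):
    """Reduction: dividing the observed count rate by the sensitivity
    recovers the flux, as FLX_CORR=PERFORM does in the pipeline."""
    return count_rate / sensitivity

flux = 2.0e-12   # erg cm^-2 s^-1 A^-1 (illustrative)
sens = 5.0e13    # (counts s^-1 diode^-1) per (erg cm^-2 s^-1 A^-1)
rate = predicted_count_rate(flux, sens)
```

Dividing the predicted rate by the same sensitivity returns the original flux, which is all the pipeline flux calibration does at this level.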

The sensitivity functions depend on several factors. The telescope contributes its unobscured geometrical collecting area, the reflectivity of both mirrors, and the fraction of the light that passes through the instrument's entrance aperture. The GHRS optics introduce a finite reflectivity at each surface, a finite transmission at each filter and window, and the blaze efficiency and linear dispersion of the gratings. The detectors have an overall quantum efficiency (QE) at each wavelength, spatial gradients related to vignetting or real QE variations, isolated scratches and blemishes, pixel-scale irregularities, finite sampling by the diodes, and diode-to-diode gain variations. As we noted, to simplify this problem the calibration is broken into several components. The basic function relates flux and count rate measured at the center of the diode array, for a star centered in the LSA, at a range of wavelengths for each grating. The echelle blaze function is quantified separately for each order. Gradients of sensitivity across the diode array are described by vignetting functions, which vary with wavelength for each grating or echelle order. The SSA throughput is measured relative to the LSA at several wavelengths and is assumed to be independent of grating mode. Blemishes are tabulated as departures from the local sensitivity for each grating. Diode response functions are tabulated as detector properties, which depend on threshold settings but not on optical modes. Finally, the pixel-to-pixel granularity can be identified and suppressed as photometric noise.

Each of these factors that affects the flux measurement has some uncertainty, which we will discuss below.

37.2.1 Standard Stars

The stars used for flux references are members of the set of standards established and maintained by STScI. Targets for specific observations are selected primarily on the basis of UV brightness and nonvariability, since we want to obtain count rates high enough to achieve adequate S/N (>50) in short exposure times.

We used the ultraviolet standard stars BD+28D4211 and µ Columbae as our primary flux references. We have also used BD+75D325 and AGK+81D266 on occasion. None of these is known to exhibit significant variability in the ultraviolet. BD+28D4211 is a hot white dwarf that was used for the sensitivity monitors for the low- and medium-dispersion gratings. The corresponding observations for the echelle, and other long-term monitoring programs, were done with the bright late-O supergiant µ Col. An example of a Cycle 4 BD+28D4211 observation is shown in Figure 37.1.

Sensitivity determination is done by comparing an observed spectrum to the reference spectrum for that star. The reference spectrum is the current best estimate of the true flux versus wavelength for a particular star, and therefore represents "truth." The flux scale for the GHRS and other HST instruments has been modified such that observations of the star G191-B2B match theoretical models.

37.2.2 Procedures for Determining Sensitivity and Vignetting Functions

A series of observations for the characterization of the post-COSTAR GHRS was made shortly after SMOV, in 1994. A similar series was done early after launch for the pre-COSTAR instrument. To illustrate how these observations were used to determine sensitivity and vignetting, we borrow from GHRS ISR 085, which discusses G140L sensitivity.

If it were possible to record the full useful wavelength range of a grating in a single exposure, then it would also be possible to determine a single "sensitivity function" for that grating. That function, which we will denote by S, would have units of flux per count rate², or, more physically, S is in units of (erg cm⁻² s⁻¹ Å⁻¹) per (counts s⁻¹ diode⁻¹), where knowledge of the instrument's properties indicates the appropriate wavelength at a given diode. We will denote the flux by F and the count rate by C, so that S = F/C.

The work would be much easier if we could observe a "perfect" star, by which we mean one with a flat or nearly-flat spectrum which is not variable in time and which is largely free of any structure (such as absorption lines). We also wish we had a "perfect" detector to work with, which would be one with a flat response across its face, that response being independent of wavelength, spatial position, or time.

Real stars, in particular those used as standards for UV flux calibration, have many spectrum features, and those lie at astrophysically critical wavelengths. The biggest of these, like Lyman-α, are "potholes" that must be worked around carefully. There are also weaker features (the "barbs") that make it difficult to divide one spectrum by another. Real detectors, like those in the GHRS, have response functions that vary with wavelength and, to some degree, with time. What is particularly difficult to treat is the very steep decline in sensitivity below Lyman-α.

The gratings of the GHRS can be positioned to almost any wavelength within their nominal ranges. When this is done, other effects must be taken into account. In particular, GHRS spectra have a vignetting correction applied after the initial sensitivity calibration. This "vignetting" is only partly vignetting in the classic optical sense; it includes several effects that lead to variations of a few percent in the spectrum over scales of tens of pixels. "Sensitivity," on the other hand, refers to the gross dependence of observed count rate on stellar flux. Thus "sensitivity" should not depend on how a spectrum is placed on the detector; those spatially-dependent effects come under "vignetting." Variations on even finer scales (one to a few diodes) also occur and are due to diode-to-diode gain variations, granularity, and the like; they will not be treated here.

The underlying concepts used to determine S and separate it from V, the vignetting function, are simple:

  1. Observe a standard star over the full useful range of the grating, ensuring that any given wavelength is observed twice by stepping the grating by half its bandwidth. This produces the observed standard star spectrum, in count rate units, C(O), as a function of wavelength.
  2. Plot these overlapping spectra in the units they are observed (count rate) versus wavelength. The central region of each spectrum should form an upper envelope to what is seen.
  3. Compare this upper envelope to a reference flux spectrum for the standard star, F(R). The reference spectrum is the official version of what the flux at each wavelength of the standard star is supposed to be.
  4. The initial estimate of sensitivity is then S1 = F(R)/C(O).
  5. Use this initial estimate S1 to derive deviations for each individual spectrum. These are V1, determined as a function of position on the detector photocathode, the detector being the presumed source of the variations ascribed to vignetting.
  6. Iterate S and V until satisfactory closure occurs, yielding S_final and V_final. The goal is to determine vignetting to within about 1%. In practice, of course, determining S and V is not so simple, and involves some estimates and compromises. For example, we accepted the existing CDBS G140L vignetting files as final versions and did not rederive the vignetting: the residuals in overlapping regions of the spectra were within acceptable limits.

    Figure 37.1 shows the first step in this process. The upper frame shows the observed spectrum, C(O), of the standard star BD+28 in units of count rate, while the bottom frame shows the reference spectrum, F(R). Note that this reference spectrum already takes into account the proper white dwarf flux scale; no secondary correction is needed.
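The initial estimates in steps 1 through 5 can be sketched in a few lines. This is a toy illustration, assuming each grating setting yields count rates on a common wavelength grid with NaN outside its coverage; the names and values are illustrative, and the real derivation iterates S and V to closure at the ~1% level.

```python
import numpy as np

def derive_s1_and_v1(C_settings, F_ref):
    # Steps 2-3: the upper envelope of the overlapping spectra
    # approximates the unvignetted count rate at each wavelength.
    envelope = np.nanmax(C_settings, axis=0)
    # Step 4: initial sensitivity estimate, S1 = F(R)/C(O).
    S1 = F_ref / envelope
    # Step 5: per-setting deviations from the envelope give the initial
    # vignetting estimate V1 (unity at each spectrum center).
    V1 = C_settings / envelope
    return S1, V1

wave = np.linspace(1100.0, 1700.0, 61)
F_ref = np.full_like(wave, 1e-12)           # flat reference flux (toy)
C1 = np.where(wave < 1500, 100.0, np.nan)   # two overlapping settings,
C2 = np.where(wave > 1300, 100.0, np.nan)   # stepped by half a bandwidth
S1, V1 = derive_s1_and_v1(np.vstack([C1, C2]), F_ref)
```

With idealized flat inputs, S1 is constant and V1 is unity wherever a setting has data; with real spectra, the structure left in V1 is what gets fit as a smooth function of detector position.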

Problems and Limitations

The procedure outlined above quickly produces satisfactory results. Some nagging problems remain, however.

Far-UV Reference Fluxes

The existing reference spectrum of our standard star, BD+28°4211, is based on FOS observations and is on the fundamental UV flux scale of G191-B2B. However, the GHRS can observe further into the UV than the FOS could, which means that we lack a reference spectrum below about 1140 Å.

Lyman-α and Other Major Features

It is difficult to determine the calibration in the region of Lyman-α because it is such a large feature. Also, Lyman-α lies on a portion of the observed spectrum with a steep slope (Figure 37.1). Lyman-α is very broad, and, moreover, there is little spectrum left shortward of Lyman-α, where the sensitivity is declining rapidly. As a result, the intrinsic uncertainty in fluxes in the region of Lyman-α is higher than at other wavelengths.

Various features show up in all parts of the spectrum of the standard star, but they are moderate in effect, making it possible to form a reliable estimate of the spectrum. Recall that the reference spectrum is defined in terms of the total stellar flux within some bandpass, meaning that it is not the flux in the continuum. The reference spectrum has its origins in FOS observations, at a resolution lower than that of the GHRS. Therefore we resample both spectra so that the GHRS spectra are brought to a resolution similar to that of the reference spectrum.

Small Science Aperture (SSA)

The amount of light seen through the Small Science Aperture (SSA) is sensitive to the centering of the object in the SSA. The baseline SSA sensitivity curve was created by multiplying the baseline LSA sensitivity curve by the SSA to LSA ratio. The process used is discussed in greater detail later in this document.

Effects of Time

Ratios of our regular sensitivity monitoring data to the baseline SMOV data show changes in GHRS Side 1 sensitivity over time since the installation of COSTAR. Each time the monitor was run, the current data were compared to the SMOV BD+28D4211 data at the same wavelength. A ratio and errors were calculated every 10 Å from ~1100 Å to ~1630 Å. While the sensitivity below Lyman-α decreased, an apparent increase occurred in sensitivity from ~1200 to 1350 Å, before it declined again. We do not understand this behavior, but it has remained fairly constant with time.

In addition to rederiving the baseline post-COSTAR sensitivity, we have also created time-dependent curves for the date of each sensitivity monitor based on the ratio of the counts from the monitor to the SMOV baseline data. Observers will need to interpolate between sensitivity curves to get a correction appropriate for the date of their observations.
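The interpolation between dated sensitivity curves can be done linearly in time, per wavelength bin. The function and the curve values below are hypothetical illustrations; the dates are two of the monitor epochs, but the sensitivity ratios are toy numbers.

```python
import numpy as np
from datetime import date

# Hypothetical sketch: linear interpolation in time between two dated
# sensitivity curves sampled on the same wavelength grid.

def interp_sensitivity(obs_date, date0, sens0, date1, sens1):
    """Interpolate each wavelength bin linearly to the observation date."""
    t = (obs_date - date0).days / (date1 - date0).days
    return (1.0 - t) * sens0 + t * sens1

sens_jun = np.array([1.00, 0.98, 0.97])   # curve for June 14, 1994 (toy)
sens_aug = np.array([0.98, 0.96, 0.95])   # curve for August 1, 1994 (toy)
s = interp_sensitivity(date(1994, 7, 8),
                       date(1994, 6, 14), sens_jun,
                       date(1994, 8, 1), sens_aug)
```

An observation midway between two monitor epochs simply gets the average of the two curves, which is the correction the text asks observers to construct.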

37.2.3 Sensitivity Monitoring

Sensitivity monitoring was done for Side 1 and Side 2 separately. The Side 1 post-COSTAR monitor consisted of a series of visits to the ultraviolet standard BD+28D4211, done with an identical instrumental configuration each time, except that the exposure times were increased at later dates to achieve better signal-to-noise. The target was acquired into the Large Science Aperture with a 5 x 5 spiral search using mirror N1, followed by a peak-up. The science observations were done with grating G140L in ACCUM mode at two central wavelengths: 1200 Å and 1500 Å. For the Side 2 observations, BD+28D4211 was acquired into the LSA with a 3 x 3 spiral search using mirror N2, followed by a peak-up. Centering was confirmed by taking an image with the LSA. A series of spectra in ACCUM mode was taken with gratings G160M (centered at 1200 and 1500 Å), G200M (2000 Å), and G270M (2500 and 3000 Å). This sequence was repeated approximately every three months.

These monitoring programs have shown that prior to COSTAR and the 1993 Servicing Mission, the GHRS was stable in sensitivity, with no perceptible changes. However, after the Servicing Mission we could see distinct declines in sensitivity, especially on Side 1 below Lyman-α. These changes were suspected to be due to contamination on the COSTAR mirrors for the GHRS, and a special measurement was planned just before the second Servicing Mission to verify this. Unfortunately, the GHRS experienced a catastrophic failure one week before SM2, so these measurements were never made.

In any case, the declines in the post-COSTAR sensitivity of the GHRS are well characterized. Typical errors in the ratios are less than 1% for wavelengths above 1200 Å and around 1 to 2% for wavelengths below 1200 Å (up to 3 to 4% at the 1100 Å point at the earliest dates, which had the least exposure time). Provision has been made for the decline by providing calibration reference files that apply to specific time periods. These periods are three months long, short enough that no significant changes occurred within any single period. The dates of the successive sensitivity files are listed in Table 37.2 below.

Table 37.2: Differences between Successive G140L Time-Dependent Sensitivities

Reference file dates: June 14, 1994; August 1, 1994; October 21, 1994; January 14, 1995; April 17, 1995; June 25, 1995; September 17, 1995; January 4, 1996; May 2, 1996; August 30, 1996; November 22, 1996; January 24, 1997.

Figure 37.2 shows the Side 1 sensitivity decline and Figure 37.3 shows the decline for Side 2. The Side 1 figure represents fits to the sensitivity monitor ratios for grating G140L. Illustrated are cubic-spline fits to the ratios of an observed spectrum to the one observed during SMOV. These fits are the basis for the time-variable G140L sensitivity files. Figure 37.4 and Figure 37.5 show details of the time variability for the two worst wavelength regions.

Figure 37.2: Side 1 Sensitivity Decline

Figure 37.3: Side 2 Sensitivity Since COSTAR

In Figure 37.3, the five panels are for central wavelengths of 1200 Å, 1500 Å, 2000 Å, 2500 Å, and 3000 Å. Each point represents the ratio of the median counts measured over 20 Å relative to the counts measured on 30 April 1994 over the same 20 Å (the first data point). The error on individual data points is 1%. Time is represented in days, using the date we consider COSTAR to have been aligned and focused for the GHRS (February 4, 1994) as the zero point. The vertical dashed lines represent one-year intervals.

An example of the improvement possible from using the time-dependent files is shown in Figure 37.6. In this figure, the top plot is flux-corrected with an appropriate time-variable sensitivity file; the bottom plot is the same data calibrated by the pipeline (PODPS).

Figure 37.4: Time Variability for GHRS G140L Below Ly-α

Figure 37.5: GHRS G140L Change in Sensitivity Monitor Ratio

Figure 37.6: Ratio of Monitor Data to Reference Star

37.2.4 Calibrated Flux Quality

Absolute and Relative Fluxes

The foregoing discussion of the process used to create the components of the flux calibration illustrates some of the factors that influence the quality and reproducibility of flux calibrations in GHRS observations. You want to know how precise and accurate the flux values are, and that varies by situation.

For example, if the same point-source object was observed repeatedly at the same wavelength with the same grating, then many variables are removed. This situation is essentially that for BD+28 in the monitoring program. Figure 37.3, for example, shows that once the long-term trends are removed, Side 2 LSA fluxes for this best-case scenario are reproducible to within about 1%. The primary source of this uncertainty is error in positioning a star in the LSA. The uncertainty with the SSA will be significantly larger because the throughput of the SSA depends critically on centering the star, while the LSA is much less sensitive to that.

Now suppose that the same instrumental setup is used throughout (grating and wavelength do not change) but that different stars are being compared. If the stars have similar spectra, then the previous situation pertains. If the spectra differ significantly, then the convolution of that spectrum with the sensitivity and vignetting functions will introduce additional uncertainty. Near the center of the spectrum these effects will be minimal and the intercomparability should be to within 1 to 2%.

The same kind of additional uncertainty arises if the same or different objects are being compared but with a different grating or central wavelength. For example, one star may have been observed with G140L and the other with G140M, or perhaps G140L was used for both, but at different settings. In this instance some of the shape effects we described will apply; in the worst cases the uncertainty can be about 4%, but 2 to 3% is more typical.

In other cases, you might want to know the quality of the flux on an absolute basis, perhaps to compare to models. This uncertainty is impossible to measure fully, but a comparison of observations from different instruments at different times indicates that fluxes on an absolute scale are reliable to about 5%. The absolute flux scale used by the GHRS is tied to the system of STScI observatory standards. In May 1994, we switched to the new, revised absolute flux scale established from observations of the white dwarf G191-B2B (see GHRS ISR 062). The new scale differs from the old by up to 10%, depending on wavelength. Archived data obtained prior to May 1994 have not been recalibrated with the new flux scale. Therefore, spectra of the same star calibrated before and after May 1994 are on fundamentally different flux scales; recalibrating pre-May 1994 data with the latest GHRS reference files will produce calibrated data on the white dwarf flux scale. Bohlin et al. have compared FOS observations of white dwarfs to models and find consistency to within 2%. They also estimate that various systematic effects may lead to an overall error in absolute fluxes of about 5%, as we noted, but these errors are common to all the systems, meaning that the uncertainty in comparing HST fluxes to, say, IUE fluxes is much lower, comparable to that of comparing HST fluxes with one another.

The absolute throughput of the LSA is not well known. We estimate, based on models of the point spread function, that approximately 90 to 95% of the light of a point source that reaches the GHRS focal plane is encompassed by the LSA.

The relative throughput of the SSA with respect to the LSA was determined before and after the installation of COSTAR (see GHRS ISR 062). Post-COSTAR values are shown in Figure 37.7. LSA throughput is assumed to be 100% in this figure. The relative throughput of the SSA is wavelength dependent, with higher values measured at longer wavelengths. The pipeline automatically corrects for the point source differential aperture throughput.

In Figure 37.7, the circles are observations of µ Col; all five data points are based on a single SSA ACQ/PEAKUP. The crosses are for Lup; the first and last points are from a single ACQ/PEAKUP, and the cluster of three points near 1950 Å is from another. The diamonds are for AGK+81D266; each point is based on an individual ACQ/PEAKUP. The solid line is a straight-line fit to the µ Col and Lup data.

Figure 37.7: Ratio of Count Rates for Post-COSTAR SSA to LSA
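A straight-line fit of the kind used for Figure 37.7 can be sketched as below. The ratio values here are illustrative placeholders, not the measured SSA/LSA points; only the general trend (higher throughput at longer wavelengths) follows the text.

```python
import numpy as np

# Toy data standing in for the measured SSA/LSA count-rate ratios.
wave = np.array([1300.0, 1700.0, 1950.0, 2300.0, 2800.0])   # Angstroms
ratio = np.array([0.65, 0.70, 0.73, 0.77, 0.83])            # SSA/LSA (toy)

# Least-squares straight line, as in the figure's solid line.
slope, intercept = np.polyfit(wave, ratio, 1)

def ssa_throughput(w):
    """SSA throughput relative to the LSA (taken as 1.0) at wavelength w."""
    return slope * w + intercept
```

Evaluating the fitted line at an observation's central wavelength gives the relative aperture correction that the pipeline applies automatically.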

Sensitivity Changes Over Time-Scales of Months to Years

We noted earlier the evidence for changes in GHRS sensitivity with time since the first servicing mission. Corrections for the effects of these changes have been incorporated into CDBS so that you should get back the appropriate sensitivity reference file for the time an individual observation was taken.

Side 1

The largest effects are seen for grating G140L, especially below Lyman-α. From the monitoring data, a ratio and errors were calculated every 10 Å from ~1100 Å to ~1630 Å. While the sensitivity below Lyman-α decreased, an apparent increase occurred in sensitivity from ~1200 to 1350 Å, before it declined again. We do not understand this behavior, but it has remained fairly constant with time. In addition to rederiving the baseline post-COSTAR G140L sensitivity, we have also created time-dependent G140L curves for the date of each sensitivity monitor, based on the ratio of the counts from the monitor to the SMOV baseline data. Observers will need to interpolate between sensitivity curves to get a correction appropriate for the date of their observations. The derivation of these Side 1 changes is described in GHRS ISR 085.

Side 2

The results of the Side 2 GHRS medium-resolution sensitivity monitor suggest that since COSTAR was installed, the GHRS sensitivity changes between 1200 Å and 3000 Å do not exceed about 5%. We find evidence for a time dependence of the sensitivity, with a decline rate of about 2% per year. As an example, we showed in Figure 37.3 count rate ratios of BD+28D4211 obtained since COSTAR installation, focus, and alignment, referenced to the beginning of Cycle 4. (Details of the Cycle 4 observations are in GHRS ISR 071.) The sensitivity files used by calhrs reflect the state at the beginning of Cycle 4. The changes seen for Side 2 are described in GHRS ISR 089.

Note that the calibrated science data in the .c1h file take into account the different throughput for point sources of the LSA and SSA before and after the installation of COSTAR. Therefore a star observed before and after the installation of COSTAR will have the same flux although its count rate will be lower before COSTAR.

Decreasing Counts During an Orbit

A series of short-exposure spectra of a star taken over many orbits in ACCUM mode appeared to show a regular decline of roughly 10% in the observed counts over the course of each orbit. This is described in GHRS ISR 073, together with some possible explanations; the best guess is that the phenomenon is due to telescope "breathing." This effect can obviously contribute to flux uncertainty.

Correction for Extended Sources

When calhrs photometrically calibrates your observations, it assumes you have observed a point source, and adjusts the flux in your spectrum to account for light loss due to the PSF outside of the aperture, i.e., it returns the flux you would have seen if all of the flux from your point source fell within the aperture. Therefore, the absolute fluxes of point sources measured through the LSA and SSA should be the same. Of course, the count rates will be lower for the SSA observation but calhrs will automatically apply a different sensitivity function to the SSA observation to account for the light loss. The properties of the GHRS apertures are presented in Table 37.3.

calhrs always assumes a point source is observed and effectively applies a correction factor for the light lost outside the aperture. If you observed an extended source, your source does not fill the aperture the way a point source does, and the flux calibration from calhrs will be inappropriate. To obtain a rough estimate of the specific intensity, multiply the observed flux by 0.95±0.02 for observations taken through the LSA and divide by the area of the aperture in square arcseconds. This assumes that the extended source completely and evenly fills the aperture. For pre-COSTAR observations, the correction factor is 0.725 (see GHRS ISR 061 for details).

The absolute fluxes for extended sources obtained with calhrs are incorrect.
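The rough extended-source estimate above can be worked through as follows. The aperture area assumed here is the square of the projected post-COSTAR LSA size (1.74 arcsec); the input flux is an illustrative value.

```python
# Rough specific-intensity estimate for an extended source that evenly
# fills the LSA, post-COSTAR. Assumes a square 1.74 x 1.74 arcsec
# projected aperture; the input flux below is illustrative only.

LSA_AREA = 1.74 * 1.74   # square arcseconds (post-COSTAR projection)

def specific_intensity_lsa(calhrs_flux, area=LSA_AREA):
    """Multiply by 0.95 to undo the point-source aperture correction
    applied by calhrs, then divide by the aperture area to get flux
    per square arcsecond."""
    return calhrs_flux * 0.95 / area

intensity = specific_intensity_lsa(3.0e-13)  # erg cm^-2 s^-1 A^-1 arcsec^-2
```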

Table 37.3: Properties of GHRS Apertures

Aperture   Pre-COSTAR Size   Post-COSTAR Size
LSA        2.0 arcsec        1.74 arcsec
SSA        0.25 arcsec       0.22 arcsec

Correction for Background Counts

The background level, or dark current, of both GHRS detectors was very low: typically about 0.01 counts per second per diode when well away from the South Atlantic Anomaly. However, for very faint objects the dark level could dominate the signal, and accurate correction for the background is vital.

Because of this, some provision was made in the GHRS commanding software for features that would allow for lower net noise rates compared to standard observing modes. One of these modes used the CENSOR option, and the other used a parameter called FLYLIM. These will not be detailed here as they were rarely used. They are described in the GHRS Instrument Handbook.

For archival data there are several options for correcting for background; these were described in the previous chapter (see "Calibration Steps Explained" on page 36-2). Investigate this issue carefully before choosing a method, especially if your target was faint. You may wish to consider, for example, how many counts were collected in the background spectra obtained as part of the stepping pattern, and whether your object was observed in the vicinity of the SAA. GHRS ISR 070 discusses measurements of the background for Side 2 in detail, and GHRS ISR 085 describes the model used for estimating background counts.
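As a minimal illustration of the size of this correction, the sketch below subtracts the typical quiescent dark rate quoted above from an observed count rate. Real reductions should use the measured background or the model described in GHRS ISR 085; the faint-source rate here is an illustrative value.

```python
# Minimal background (dark) subtraction sketch for a faint target,
# using the typical quiescent rate quoted in the text.

QUIESCENT_DARK = 0.01   # counts s^-1 diode^-1, well away from the SAA

def dark_subtract(count_rate, dark_rate=QUIESCENT_DARK):
    """Subtract the dark rate from an observed count rate, clipping at
    zero so noise cannot produce a negative net rate."""
    return max(count_rate - dark_rate, 0.0)

net = dark_subtract(0.03)   # faint source: the dark is a third of the signal
```

For a source this faint, a 10 to 20% error in the assumed dark rate translates directly into a several-percent error in the net flux, which is why the choice of background method matters.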


¹ The values in the GHRS Instrument Handbook were for observation planning and are neither the final nor the best information available. The CDBS files are up to date and based on our cumulative experience since launch.

² More properly this is the inverse sensitivity. We will ignore the distinction here.

Copyright © 1997, Association of Universities for Research in Astronomy. All rights reserved. Last updated: 01/14/98 15:56:00