STScI_Python Version 2.10 Release Notes

June 2010

This release includes new versions of PyRAF, PyFITS, pysynphot, pytools,
Multidrizzle, and related tasks.  This release also includes
instrument-specific packages to support WFPC2, NICMOS, STIS and COS.

The convolve, image, and ndimage packages have been included with
this release to ensure compatibility with the rest of our code.

Platform Support 
================
Normally, platform support issues do not apply to Python tasks,
as most Python code will run on any platform on which Python has
been installed. This distribution was tested to install correctly
on Linux, Mac OS X 10.5 (Leopard), and Solaris, and is also
provided for installation under Windows. The single exception is
that the text-based epar functionality now available in PyRAF (in
addition to the already existing GUI-based epar) is not available
under Solaris, and likely never will be. As of the 2010 release,
PyRAF no longer requires the installation of IRAF. Note that the
only IRAF distribution for Windows runs under Cygwin, but no
testing of PyRAF has been conducted under Cygwin.
 
Documentation
=============
Documentation for these tasks has been consolidated into a
single location complete with an index viewable as a local web
page. The documentation includes any available user guides and
API documentation generated using 'EPYDOC' for the modules or
packages. This index can be accessed using:
 
 --> import stscidocs
 --> stscidocs.viewdocs()
 --> stscidocs.help()
 
This will automatically bring up the default web browser application
to display the index page. All the available documentation for
software packaged with this distribution can be accessed through this
interface, as this documentation resides locally upon installation.


Python Environment
================== 
This release has been tested using Python 2.5.4 and requires
numpy 1.3. This release has also been tested under Python 2.6, but only 
under Linux. 

Future Division support
-----------------------
    - Future division support has been added to all of our code in
    preparation for the transition from Python 2.x to Python 3.0, as
    illustrated below.
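
    For example (a minimal sketch of the behavior this enables):

        >>> from __future__ import division
        >>> 1 / 2        # true division, even between integers
        0.5
        >>> 1 // 2       # floor division must now be requested explicitly
        0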
    

PyRAF Version 1.9
-----------------
PyRAF v1.9 supports Python 2.5, 2.6 and 2.7-beta.  Since the 1.8.1 bundled release, the following enhancements have been made:

   - Matplotlib graphics kernel speed improvement: PyRAF v1.6 brought
   us the optional matplotlib graphics kernel (ticket #80). It provided
   cleanly resizable fonts and much smoother looking graphics than
   the default Tk kernel, but it was slower. This new version makes
   some assumptions about the GKI command stream from the IRAF task
   which allows it to severely limit the amount of drawing to the
   matplotlib canvas, resulting in improved performance. In some cases,
   graphics render as much as 5 to 10 times faster. (ticket #122)

   - PyRAF without IRAF: PyRAF can now run in "No-IRAF" mode, where
   the PyRAF Python shell will run, and - although no IRAF tasks will
   be found (imheader, implot, etc.) - the basic capabilities of the
   PyRAF command-line are still supported.  Some examples of working
   features are:

       - parameter handling (e.g. epar/tpar/lpar/dpar)
       - CL script conversion to Python (except of course any IRAF tasks called)
       - both Python and CL syntax on the command-line
       - minimum matching
       - tab-completion
       - aliases
       - command history (up-arrow), which is persistent after exit
       - access to shell variables and commands
       - spawning native executables
        - behind-the-scenes conversion to parentheses, e.g. "sleep 3" -> "sleep(3)"
       - the full span of Python's capabilities and Tkinter graphics

      Please see PyRAF FAQ #1.3 for a full description. (ticket #107)

   - Command history persistence was added to PyRAF. Upon starting a
   PyRAF session, the up-arrow can be used to recall commands from
   the previous session(s), similar to ipython or your favorite
   shell. (ticket #115)

   - X11 graphics initialization has been changed to help Linux users
   (see FAQ #5.7). By setting an environment variable within the
   Python process, PyRAF now preempts the majority of issues with
   the X11 composite extension. (ticket #123)

   - Mouse-wheels are now supported in EPAR (and TEAL) for those
   Python installations with Tk 8.5 or better. (ticket #120)

   - Commonly used invocation options may now be specified in the
   environment variable named 'PYRAF_ARGS', as in the example below.
   (ticket #114)
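
     For example, under csh one might set (the '-s' silent-startup
     option is shown here as an assumed example; any options normally
     given on the pyraf command line may be used):

         setenv PYRAF_ARGS "-s"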

   - For PyRAF-only users (as opposed to those downloading all of
   stsci_python), installation instructions have been created on the
   web site. A README file was also created to direct such users to
   that page. (ticket #121)

The following bugs have been fixed:

   - Testing under Python 2.7-beta revealed an import error which
   was due to a change in Python's own UserDict type. (ticket #126)

   - PyRAF on OSX 64-bit Python had an issue where mouse-warping
   during graphics tasks left the mouse cursor in the top left corner
   of the screen. (ticket #110)

   - PyRAF initialization code was refactored so that PyRAF will
   perform no graphics setup when run in non-graphics mode (e.g. setenv
   PYRAF_NO_DISPLAY 1). (r1092-1096)

   - A bug in "tpar" was causing control characters to be shown on the
   screen (making navigation difficult) upon the second call within
   the same PyRAF session (OSX). This has been fixed. (ticket #117)

   - Also on the Mac, the "help" command was emitting BadWindow
   X errors when called from PyRAF running inside the Terminal
   application. This was related to X11 focus-switching which was
   not actually necessary in that specific situation. (ticket #124)

   - sleep: version 1.8.1 included the cleaning up of some command-line
   functions' signatures. The sleep() function had been changed to
   require an argument (seconds), since a call with no arguments
   was equivalent to "sleep 0" which does not seem useful. However,
   there are existing IRAF CL scripts with such "sleep" calls, so
   PyRAF was changed in 1.8.2 to re-allow this. (r1120)


PyFITS Version 2.3.1
--------------------
Updates described in this release include all revisions and bug fixes
implemented since the last public release of STScI_Python which included
PyFITS Version 2.2.2 (12-October-2009).  PyFITS now only supports the
NUMPY array package.  Support for NUMARRAY has been eliminated from PyFITS.

The following enhancements were made:

    - Reworked documentation to use Sphinx.

    - Added method HDUList.filename() to return the name of the file
      associated with the HDUList object.
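
      A short sketch of its use (assuming an existing file 'input.fits'):

      >>> hdul = pyfits.open('input.fits')
      >>> hdul.filename()
      'input.fits'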

    - Support the Python 2.5 'with' statement when opening files as in the
      following:

      >>> with pyfits.open("input.fits") as hdul:
      ...     hdul.info()  # process hdul; the file is closed automatically

    - Added support for writing FITS data to file-like objects that do not
      support the random access methods seek() and tell().  Most PyFITS 
      functions or methods will treat these file-like objects as an empty file
      that cannot be read, only written.  It is also expected that the
      file-like object is in a writable condition (i.e. opened) when passed into
      a PyFITS function or method.  The following methods and functions will
      allow writing to a non-random access file-like object: HDUList.writeto(),
      HDUList.flush(), pyfits.writeto(), and pyfits.append().  The pyfits.open()
      convenience function may be used to create an HDUList object that is
      associated with the provided file-like object.

      An illustration of the new capabilities follows.  In this example FITS
      data is written to standard output which is associated with a file opened
      in write-only mode:

      >>> import pyfits
      >>> import numpy as np
      >>> import sys
      >>>
      >>> hdu = pyfits.PrimaryHDU(np.arange(100,dtype=np.int32))
      >>> hdul = pyfits.HDUList()
      >>> hdul.append(hdu)
      >>> tmpfile = open('tmpfile.py','w')
      >>> sys.stdout = tmpfile
      >>> hdul.writeto(sys.stdout, clobber=True)
      >>> sys.stdout = sys.__stdout__
      >>> tmpfile.close()
      >>> pyfits.info('tmpfile.py')
      Filename: tmpfile.py
      No.    Name         Type      Cards   Dimensions   Format
      0    PRIMARY     PrimaryHDU       5  (100,)        int32

    - Support for reading unsigned integer 16 values from an ImageHDU was
      extended to include unsigned integer 32 and unsigned integer 64 values.
      ImageHDU data is considered to be unsigned integer 16, 32, or 64 when
      the data type is the corresponding signed integer type, BSCALE is equal
      to 1, and BZERO is equal to 2**15 (32768), 2**31, or 2**63,
      respectively.  An optional keyword argument (uint) was added to the
      open convenience function for this purpose.  Supplying a value of True
      for this argument will cause data of any of these types to be read in
      and scaled into the appropriate unsigned integer array (uint16, uint32,
      or uint64) instead of into the normal float32 or float64 array.  If an
      HDU associated with a file that was opened with the 'uint' option
      contains unsigned integer 16, 32, or 64 data and is written to a file,
      the data will be reverse-scaled into a signed integer 16, 32, or 64
      array and written out to the file along with the appropriate
      BSCALE/BZERO header cards.  Note that for backward compatibility, the
      'uint16' keyword argument will still be accepted in the open function
      when handling unsigned integer 16 conversion.  A short sketch follows.
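
      A minimal sketch (assuming 'uint_image.fits' stores signed integer 16
      data with BZERO=32768 and BSCALE=1):

      >>> import pyfits
      >>> hdul = pyfits.open('uint_image.fits', uint=True)
      >>> hdul[0].data.dtype
      dtype('uint16')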

    - Added the capability to access the data for a column of a FITS table by
      indexing the table using the column name.  This is consistent with Record
      Arrays in NUMPY (array with fields).  The following example will
      illustrate this:

      >>> import pyfits
      >>> hdul = pyfits.open('input.fits')
      >>> table = hdul[1].data
      >>> table.names
      ['c1','c2','c3','c4']
      >>> print table.field('c2') # this is the data for column 2
      ['abc' 'xy']
      >>> print table['c2'] # this is also the data for column 2
      array(['abc', 'xy '],
            dtype='|S3')
      >>> print table[1] # this is the data for row 1
      (2, 'xy', 6.6999997138977054, True)

    - Provided support for slicing a FITS_record object.  The FITS_record
      object represents the data from a row of a table.  PyFITS now supports
      the slice syntax to retrieve values from the row.  The following
      illustrates this new syntax:

      >>> hdul = pyfits.open('table.fits')
      >>> row = hdul[1].data[0]
      >>> row
      ('clear', 'nicmos', 1, 30, 'clear', 'idno= 100')
      >>> a, b, c, d, e = row[0:5]
      >>> a
      'clear'
      >>> b
      'nicmos'
      >>> c
      1
      >>> d
      30
      >>> e
      'clear'
 
    - Added the capability to create a BinTableHDU directly from a numpy
      Record Array (array with fields).  The new capability includes table
      creation, writing a numpy Record Array directly to a FITS file using
      the pyfits.writeto and pyfits.append convenience functions, and reading
      the data for a BinTableHDU from a FITS file directly into a numpy
      Record Array using the pyfits.getdata convenience function.  The
      following illustrates these new capabilities:

      >>> import pyfits
      >>> import numpy

      >>> # Create a numpy Record Array with fields
      >>> t=numpy.zeros(5,dtype=[('x','f4'),('y','2i4')])

      >>> # Create a Binary Table HDU directly from the Record Array
      >>> hdu = pyfits.BinTableHDU(t)
      >>> print hdu.data
      [(0.0, array([0, 0], dtype=int32))
       (0.0, array([0, 0], dtype=int32))
       (0.0, array([0, 0], dtype=int32))
       (0.0, array([0, 0], dtype=int32))
       (0.0, array([0, 0], dtype=int32))]

      >>> # Write the HDU to a file
      >>> hdu.writeto('test1.fits',clobber=True)
      >>> pyfits.info('test1.fits')
      Filename: test1.fits
      No.    Name         Type      Cards   Dimensions   Format
      0    PRIMARY     PrimaryHDU       4  ()            uint8
      1                BinTableHDU     12  5R x 2C       [E, 2J]

      >>> # Write the Record Array directly to a file
      >>> pyfits.writeto('test.fits', t, clobber=True)

      >>> # Append another Record Array to the file
      >>> pyfits.append('test.fits', t)
      >>> pyfits.info('test.fits')
      Filename: test.fits
      No.    Name         Type      Cards   Dimensions   Format
      0    PRIMARY     PrimaryHDU       4  ()            uint8
      1                BinTableHDU     12  5R x 2C       [E, 2J]
      2                BinTableHDU     12  5R x 2C       [E, 2J]

      >>> # Get the first extension from the file as a FITS_rec
      >>> d=pyfits.getdata('test.fits',ext=1)
      >>> print type(d)
      <class 'pyfits.core.FITS_rec'>
      >>> print d
      [(0.0, array([0, 0], dtype=int32))
       (0.0, array([0, 0], dtype=int32))
       (0.0, array([0, 0], dtype=int32))
       (0.0, array([0, 0], dtype=int32))
       (0.0, array([0, 0], dtype=int32))]

      >>> # Get the first extension from the file as a numpy Record Array
      >>> d=pyfits.getdata('test.fits',ext=1,view=numpy.ndarray)
      >>> print type(d)
      <type 'numpy.ndarray'>
      >>> print d
      [(0.0, [0, 0]) (0.0, [0, 0]) (0.0, [0, 0]) (0.0, [0, 0])
       (0.0, [0, 0])]
      >>> print d.dtype
      [('x', '>f4'), ('y', '>i4', 2)]

      >>> # Force the Record Array field names to be in upper case
      >>> # regardless of how they are stored in the file
      >>> d=pyfits.getdata('test.fits',ext=1,upper=True, view=pyfits.FITS_rec)
      >>> print d.dtype
      [('X', '>f4'), ('Y', '>i4', 2)]

    - Allow the assignment of a row value for a PyFITS table using a tuple or a
      list as input.  The following example illustrates this new feature:

      >>> import numpy as num
      >>> import pyfits
      >>>
      >>> c1=pyfits.Column(name='target',format='10A')
      >>> c2=pyfits.Column(name='counts',format='J',unit='DN')
      >>> c3=pyfits.Column(name='notes',format='A10')
      >>> c4=pyfits.Column(name='spectrum',format='5E')
      >>> c5=pyfits.Column(name='flag',format='L')
      >>> coldefs=pyfits.ColDefs([c1,c2,c3,c4,c5])
      >>>
      >>> tbhdu=pyfits.new_table(coldefs, nrows = 5)
      >>>
      >>> # Assigning data to a table's row using a tuple
      >>> tbhdu.data[2] = ('NGC1',312,'A Note',
      ... num.array([1.1,2.2,3.3,4.4,5.5],dtype=num.float32), True)
      >>>
      >>> # Assigning data to a table's row using a list
      >>> tbhdu.data[3] = ['JIM1','33','A Note',
      ... num.array([1.,2.,3.,4.,5.],dtype=num.float32),True]

    - Allow the creation of a Variable Length Format (P format) column from a
      list of data.  The following example illustrates this new feature:

      >>> a = [num.array([7.2e-20,7.3e-20]),num.array([0.0]), num.array([0.0])]
      >>> acol = pyfits.Column(name='testa',format='PD()',array=a)
      >>> acol.array
      _VLF([[  7.20000000e-20   7.30000000e-20], [ 0.], [ 0.]],
       dtype=object)

    - Allow the assignment of multiple rows in a table using the slice syntax.
      The following example illustrates this new feature:

      >>> counts = num.array([312,334,308,317])
      >>> names = num.array(['NGC1','NGC2','NGC3','NCG4'])
      >>> c1=pyfits.Column(name='target',format='10A',array=names)
      >>> c2=pyfits.Column(name='counts',format='J',unit='DN',
      ... array=counts)
      >>> c3=pyfits.Column(name='notes',format='A10')
      >>> c4=pyfits.Column(name='spectrum',format='5E')
      >>> c5=pyfits.Column(name='flag',format='L',array=[1,0,1,1])
      >>> coldefs=pyfits.ColDefs([c1,c2,c3,c4,c5])
      >>>
      >>> tbhdu1=pyfits.new_table(coldefs)
      >>>
      >>> counts = num.array([112,134,108,117])
      >>> names = num.array(['NGC5','NGC6','NGC7','NCG8'])
      >>> c1=pyfits.Column(name='target',format='10A',array=names)
      >>> c2=pyfits.Column(name='counts',format='J',unit='DN',
      ... array=counts)
      >>> c3=pyfits.Column(name='notes',format='A10')
      >>> c4=pyfits.Column(name='spectrum',format='5E')
      >>> c5=pyfits.Column(name='flag',format='L',array=[0,1,0,0])
      >>> coldefs=pyfits.ColDefs([c1,c2,c3,c4,c5])
      >>>
      >>> tbhdu=pyfits.new_table(coldefs)
      >>> tbhdu.data[0][3] = num.array([1.,2.,3.,4.,5.],
      ... dtype=num.float32)
      >>>
      >>> tbhdu2=pyfits.new_table(tbhdu1.data, nrows=9)
      >>>
      >>> # Assign the 4 rows from the second table to rows 5 thru
      >>> # 8 of the new table.  Note that the last row of the new
      >>> # table will still be initialized to the default values.
      >>> tbhdu2.data[4:] = tbhdu.data
      >>>
      >>> print tbhdu2.data
      [ ('NGC1', 312, '0.0', array([ 0.,  0.,  0.,  0.,  0.],
      dtype=float32), True)
        ('NGC2', 334, '0.0', array([ 0.,  0.,  0.,  0.,  0.],
      dtype=float32), False)
        ('NGC3', 308, '0.0', array([ 0.,  0.,  0.,  0.,  0.],
      dtype=float32), True)
        ('NCG4', 317, '0.0', array([ 0.,  0.,  0.,  0.,  0.],
      dtype=float32), True)
        ('NGC5', 112, '0.0', array([ 1.,  2.,  3.,  4.,  5.],
      dtype=float32), False)
        ('NGC6', 134, '0.0', array([ 0.,  0.,  0.,  0.,  0.],
      dtype=float32), True)
        ('NGC7', 108, '0.0', array([ 0.,  0.,  0.,  0.,  0.],
      dtype=float32), False)
        ('NCG8', 117, '0.0', array([ 0.,  0.,  0.,  0.,  0.],
      dtype=float32), False)
        ('0.0', 0, '0.0', array([ 0.,  0.,  0.,  0.,  0.],
      dtype=float32), False)]

The following bugs were fixed:

    - Corrected bugs in HDUList.append and HDUList.insert to correctly handle
      the situation where you want to insert or append a Primary HDU as
      something other than the first HDU in an HDUList and the situation where
      you want to insert or append an Extension HDU as the first HDU in an
      HDUList.

    - Corrected a bug involving scaled images (both compressed and not
      compressed) that include a BLANK, or ZBLANK card in the header.  When the
      image values match the BLANK or ZBLANK value, the value should be
      replaced with NaN after scaling.  Instead, PyFITS was scaling the BLANK
      or ZBLANK value and returning it.

    - Corrected a bug involving reading and writing compressed image data.
      When written, the header keyword card ZTENSION will always have the
      value 'IMAGE', and when read, if the ZTENSION value is not 'IMAGE' the
      user will receive a warning, but the data will still be treated as
      image data.

    - Corrected a byteswapping bug that occurs when writing certain column data.

    - Corrected a bug that occurs when creating a column from a chararray when
      one or more elements are shorter than the specified format length.  The
      bug wrote nulls instead of spaces to the file.

    - Corrected a bug in the HDU verification software to ensure that the
      header contains no NAXISn cards where n > NAXIS.

    - Corrected a bug that restricted the ability to create a custom HDU class
      and use it with PyFITS.  The bug fix will allow something like this:

      >>> import pyfits
      >>> class MyPrimaryHDU(pyfits.PrimaryHDU):
      ...     def __init__(self, data=None, header=None):
      ...         pyfits.PrimaryHDU.__init__(self, data, header)
      ...
      ...     def _summary(self):
      ...         """
      ...         Reimplement a method of the class.
      ...         """
      ...         s = pyfits.PrimaryHDU._summary(self)
      ...         # change the behavior to suit me.
      ...         s1 = 'MyPRIMARY ' + s[11:]
      ...         return s1
      ...
      >>> hdul=pyfits.open("pix.fits",
      ... classExtensions={pyfits.PrimaryHDU: MyPrimaryHDU})

      >>> hdul.info()
      Filename: pix.fits
      No.    Name         Type      Cards   Dimensions   Format
      0    MyPRIMARY  MyPrimaryHDU     59  (512, 512)    int16
      >>>

    - Modified ColDefs.add_col so that instead of returning a new ColDefs
      object with the column added to the end, it simply appends the new
      column to the current ColDefs object in place, as in the sketch below.
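
      A one-line sketch (coldefs is an existing ColDefs object and c6 a
      hypothetical pyfits.Column):

      >>> coldefs.add_col(c6)  # appends c6 to coldefs itself, in place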

    - Corrected a bug in ColDefs.del_col which raised a KeyError exception when
      deleting a column from a ColDefs object.

    - Modified the open convenience function so that when a file is opened
      in readonly mode and the file contains no HDUs, an IOError is raised.

    - Modified _TableBaseHDU to ensure that all locations where data is
      referenced in the object actually reference the same ndarray, instead of
      copies of the array.

    - Corrected a bug in the Column class that failed to initialize data when
      the data is a boolean array.

    - Corrected a bug that caused an exception to be raised when creating a
      variable length format column from character data (PA format).

    - Modified installation code so that when installing on Windows and a
      C++ compiler compatible with the Python binary is not found, the
      installation completes with a warning that all optional extension
      modules failed to build.  Previously, an error was issued and the
      installation stopped.
 
    - Replaced code in the Compressed Image HDU extension which was covered
      under a GNU General Public License with code that is covered under a BSD
      License.  This change allows the distribution of PyFITS under a BSD
      License.


pysynphot Version 0.8
---------------------
This version has the following dependencies:
  - numpy v1.0 or higher
  - PyFITS v1.1 or higher
  - matplotlib v0.9 or higher


Changes from version 0.7 include:

  - support added for subtracting spectra
  - partial support for new keywords to countrate:
      - range to specify integration range
      - force=True to force the integration when the specified range 
        only partially overlaps the observed spectrum
  - .thermback() method on the ObsBandpass class to provide user access to
     the basic SYNPHOT.thermback functionality (see the sketch after this list)
  - bugfix to units.py so changes to setref(area) properly take effect
  - bugfix to observation.sample() 
  - improved docstrings for extinction and analytic spectra
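
  A sketch of the thermback usage (the obsmode shown is an assumed example;
  thermal component data must be available for the bandpass):

      >>> import pysynphot as S
      >>> bp = S.ObsBandpass('wfc3,ir,f160w')
      >>> print bp.thermback()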

More details can be seen in the changelog at

https://svn.stsci.edu/trac/ssb/astrolib/log

============
Applications
============

Imagestats Version 1.3
-----------------------

  The computation of the median is now performed directly by numpy,
  after applying any clipping to the data as specified by the user.
  The previous computation of the median has been renamed 'midpt' to
  match exactly what IRAF's imstatistics task returns (both in terms
  of the result name and the algorithm used for the computation).
  Unlike IRAF's binned computation of 'midpt', the new median result
  relies on numpy to return a true median every time.  A usage
  sketch follows.
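
  A sketch of typical usage (arr is any numpy array; the fields and
  clipping shown are assumed examples):

      >>> import numpy
      >>> from imagestats import ImageStats
      >>> arr = numpy.random.normal(100., 5., (512, 512)).astype(numpy.float32)
      >>> i = ImageStats(arr, fields="mean,stddev,median,midpt", nclip=3)
      >>> i.printStats()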

wfpc2tools
-----------

wfpc2destreak 2.2 (2010 March 12):
  - Support was added for GEIS, waivered FITS, and multi-extension FITS 
  formatted input.

STWCS
----------------
  This is the first release of an HST instrument-specific library
  which provides detector-to-world coordinate transformations
  including all available WCS corrections. It is based on the more
  general PyWCS and WCSLIB libraries.  Its primary use is to support
  the newly redesigned MultiDrizzle (betadrizzle), although it can
  also be used interactively. STWCS consists of two subpackages:
  UPDATEWCS and WCSUTIL.
      
STWCS.UPDATEWCS:
  - Updates the WCS of individual HST files based on information in
  reference files and header keywords. In general it will be run in
  the pipeline in order to provide archive users with science files
  carrying the best available WCS. It can also be run interactively,
  as sketched below.
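
    A sketch of an interactive run (the filename is a hypothetical
    example):

      >>> from stwcs import updatewcs
      >>> updatewcs.updatewcs('j94f05bgq_flt.fits')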
      
STWCS.WCSUTIL:
  - Defines an HST instrument-specific WCS object with access to all
  instrument-specific distortions and corrections. It provides
  methods for transforming from detector to world coordinates with
  all available corrections applied.  This can be done either
  directly in one step or in a multi-step interactive fashion, as
  sketched below.  Examples on how to do this can be found in this
  draft document:
       http://stsdas.stsci.edu/multidrizzle/download/Implementation_of_the_Distortion_Correction_in_FITS_Headers.pdf
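
    A sketch of the one-step transformation (the filename is a
    hypothetical example; all_pix2sky is inherited from the underlying
    PyWCS object):

      >>> from stwcs import wcsutil
      >>> w = wcsutil.HSTWCS('j94f05bgq_flt.fits', ext=1)
      >>> ra, dec = w.all_pix2sky([100., 200.], [100., 200.], 1)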
        
Pytools
-------
fileutil:
  - A new function 'interpretDQvalue()' was added to split a DQ value
  into its component bit values, as sketched below.
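
    A sketch of the intended usage (the return format shown is
    illustrative; 48 = 16 + 32):

      >>> from pytools import fileutil
      >>> fileutil.interpretDQvalue(48)
      [16, 32]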
  
check_files:
  - A new function, checkPhotKeywords, was added to pytools.check_files
  to ensure that the photometry keywords are present in the science
  extensions of all file copies. This function will copy the keywords
  from the PRIMARY header if necessary.
  

Multidrizzle v3.3.7
-------------------
This version of MultiDrizzle relies on PyDrizzle Version 6.3.5, whose
updates are included here as they directly support changes in MultiDrizzle.

  - The code will now look for the 'PFLTFILE' keyword for the
  flat-field reference file for WFC3 IR data, instead of looking for
  'FLATFILE' as used by NICMOS.

  - The use of the flat-field with WFC3 IR data was updated to reflect
  the convention in place for WFC3 data, where it, like ACS, uses
  a flat that is divided into the science data, as opposed to the
  inverse-flat convention used by NICMOS.

  - Velocity aberration correction will now be applied to all HST
  data that specifies the VAFACTOR keyword, instead of applying the
  correction to ACS data only. This ensures that WFC3 data will be
  properly corrected for velocity aberration.

  - The PHOTFLAM keyword now gets scaled by the gain depending on
  whether the final output units were requested to be ELECTRONS
  or COUNTS.

  - The 'group' parameter now gets used to determine which FITS header
  should be used as the basis for the output product's header.

  - Updates to PyDrizzle to support use of PRIMARY header keywords
  from IDCTAB for computing the TDD coefficients for ACS/WFC data. If
  no keywords are present, the same hard-coded values will be used
  as before for ACS/WFC data only.

CALCOS v2.12
-------------
  - The algorithm for wavecal processing was significantly improved.  More
  information is printed regarding the wavecal shifts, including the FPOFFSET
  for the exposure and an error estimate for the shift in the dispersion
  direction.  Emission lines in the wavecal spectrum that are truncated by
  the edge of the NUV detector can interfere with finding the shift, so a test
  has been added for this case.  For G140L data, the wavecal shift information
  from segment A is now copied and used to correct the data for segment B.
  Improvements were made to the code to find the location of the wavecal
  spectrum in the cross-dispersion direction.
  
  - A WAVELENGTH column was added to the corrtag table, and a GCOUNTS column
  (gross counts, as opposed to count rate) was added to the x1d and x1dsum
  tables.
  
  - Keywords pertaining to the deadtime correction are now written to the
  extension header instead of the primary header.
  
  - A new GTI table (good time intervals) will only be added to the corrtag
  file if the GTI list was actually modified (e.g. due to FUV bursts).

  - There were cases where the flagged regions in the data quality extension
  were much wider than they should have been, resulting in lost data in the
  x1dsum tables.  This has been fixed.