STScI_Python Version 2.8 Release Notes
11 May 2009
This release includes new versions of PyRAF, PyFITS, pysynphot, pytools,
MultiDrizzle, and related tasks. This release also includes more
instrument-specific packages to support WFPC2, NICMOS, STIS, and COS.
The convolve, image, and ndimage packages have been included with
this release to ensure compatibility with the rest of our code. A
distribution of configobj has also been included in this release for
the first time to ensure compatibility with the new parameter editor
GUI, TEAL.
Platform Support
================
Normally, platform support issues do not apply to Python tasks,
since most Python code runs on any platform where Python has been
installed. This distribution was tested to install correctly on
Linux, Mac OS X, and Solaris, and is also provided for installation
under Windows. The one exception is that the text-based epar
functionality now available in PyRAF (in addition to the already
existing GUI-based epar) is not available under Solaris, and likely
never will be. In addition, PyRAF currently requires an IRAF
installation, yet the only IRAF distribution for Windows runs under
Cygwin, and PyRAF has not been tested under Cygwin.
Documentation
=============
Documentation for these tasks has been consolidated into a
single location, complete with an index viewable as a local web
page. The documentation includes any available user guides and
API documentation generated using epydoc for the modules or
packages. This index can be accessed using:
--> import stscidocs
--> stscidocs.viewdocs()
--> stscidocs.help()
This will automatically bring up the default web browser application
to display the index page. All the available documentation for
software packaged with this distribution can be accessed through this
interface, as this documentation resides locally upon installation.
Python Environment
==================
This release has been tested with Python 2.5.1 and 2.5.2 and
numpy 1.1.0. It may work on versions of Python as old as
Python 2.4, depending on the module being used. Some modules that
rely heavily on numpy may require features available only in newer
versions of numpy, which are not compatible with, or available for,
the older versions of Python.
PyRAF Version 1.7.1
-------------------
Since the last full release of PyRAF (v1.7), only a few bug fixes and
enhancements have been made. The majority of the PyRAF development fell
under the umbrella of a refactoring effort so that the PyRAF software dealing
with task parameters could be used more generally outside of PyRAF. A large
set of code was generalized and moved to "Pytools", including the backbone of
the EPAR GUI.
EPAR itself is still available within PyRAF, but has been enhanced as a result
of this effort. The Load and Save-As capabilities (ticket #87) have had their
finishing touches applied. For example, the list of Load file choices is
now pre-populated and shown in a pull-down menu at the top of the GUI.
EPAR's issue with the scrollbar slider (jumping from first to last parameter
upon clicking Unlearn) has been fixed, the help text has been corrected,
and on native OSX, EPAR now has a default color (besides black and white).
In addition, the following bugs have been fixed:
- Corrected the bug which raised an exception with "pyraf -h" (ticket #96)
- Fixed an undocumented bug in TPAR which would raise an exception
unnecessarily when checking parameter values (r961)
- Changed gki_psikern_tests.py tests so that they run much more reliably
on Solaris (r946-9)
- Corrected the definition of the color blue for plotting and overplotting
so that it now appears blue, not black (thanks to Erik Tollerud) (r994)
PyFITS Version 2.1.1 (April 22, 2009)
--------------------------------------
Updates described in this release are only supported in the NUMPY version of PyFITS.
The following bugs were fixed:
* Extensive changes were made to the tiled image compression code
to support the latest enhancements to this convention made in
CFITSIO version 3.13.
* Eliminated a memory leak in the tiled image compression code.
* Corrected a bug in the FITS_record.__setitem__ method which
raised a NameError exception when attempting to set a value in
a FITS_record object.
* Corrected a bug that caused a TypeError exception to be
raised when reading fits files containing large table HDU's
(> 2 GB).
* Corrected a bug that caused a TypeError exception to be raised
for all calls to the warnings module when running under Python
2.6. The formatwarning method in the warnings module was changed
in Python 2.6 to include a new argument.
* Corrected the behavior of the membership (in) operator in the
Header class to check against header card keywords instead of
card values.
* Corrected the behavior of iteration on a Header object. The
new behavior iterates over the unique card keywords instead of
the card values.
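For example (assuming 'header' is a pyfits Header object):
>>> 'NAXIS' in header                 # now tests card keywords, not values
True
>>> keys = [key for key in header]    # iterates over the unique keywords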
* Corrected a bug that caused an exception to be raised when
closing a file that had been opened for append, after an HDU had
been appended and data had been accessed from the file. This
exception was only raised when running on a Windows platform.
* Updated the installation scripts, compression source code, and
benchmark test scripts to properly install, build, and execute
on a Windows platform.
* Eliminated a memory leak when reading Table HDU's from a fits file.
* Corrected a bug that occurred when appending a binary table HDU
to a fits file: data was not being byteswapped on little endian
machines.
* Corrected a bug that occurred when trying to write an ImageHDU
that is missing the required PCOUNT card in the header. An
UnboundLocalError exception, complaining that the local variable
'insert_pos' was referenced before assignment, was raised in
the method _ValidHDU.req_cards. The code was modified to
issue a more meaningful ValueError exception describing which
required card is missing from the header.
* Eliminated a redundant warning message about the PCOUNT card
when validating an ImageHDU header with a PCOUNT card that is
missing or has a value other than 0.
The following enhancements were made:
* Added new tdump and tcreate capabilities to pyfits.
o The new tdump convenience function allows the contents
of a binary table HDU to be dumped to a set of three files
in ASCII format. One file will contain column definitions,
the second will contain header parameters, and the third
will contain the table data.
o The new tcreate convenience function allows the creation
of a binary table HDU from the three files dumped by the
tdump convenience function.
o The primary use for the tdump/tcreate functions is to allow
the binary table data and parameters to be edited in a
standard text editor.
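o A minimal sketch of a round trip (the file names are
illustrative, and the keyword argument names used here are an
assumption not spelled out in these notes):
>>> import pyfits
>>> # dump the first extension to three ASCII files
>>> pyfits.tdump('table.fits', datafile='table_data.txt',
...              cdfile='table_cols.txt', hfile='table_hdr.txt', ext=1)
>>> # ...edit the ASCII files, then rebuild the binary table HDU
>>> hdu = pyfits.tcreate('table_data.txt', 'table_cols.txt',
...                      'table_hdr.txt')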
* Added support for case sensitive values of the EXTNAME card in
an extension header.
o By default, pyfits converts the value of EXTNAME cards
to upper case when reading from a file. A new convenience
function (setExtensionNameCaseSensitive) was implemented
to allow a user to circumvent this behavior so that the
EXTNAME value remains in the same case as it is in the file.
o With the following function call, pyfits will maintain
the case of all characters in the EXTNAME card values
of all extension HDU's during the entire python session,
or until another call to the function is made:
>>> import pyfits
>>> pyfits.setExtensionNameCaseSensitive()
o The following function call will return pyfits to its default
(all upper case) behavior:
>>> pyfits.setExtensionNameCaseSensitive(False)
* Added support for reading and writing FITS files in which the
first card in the header is SIMPLE=F. In this case, the
pyfits open function returns an HDUList object that contains
a single HDU of the new type _NonstandardHDU. The header for this
HDU is like a normal header (with the exception that the first
card contains SIMPLE=F instead of SIMPLE=T). As with normal HDU's,
the reading of the data is delayed until actually requested. The
data is read from the file into a string, starting from the first
byte after the header END card and continuing to the end of
the file. When written, the header is written, followed by the
data string. No attempt is made to pad the data string so that
it fills a standard 2880-byte FITS block.
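o A brief illustration (the file name is hypothetical):
>>> import pyfits
>>> hdul = pyfits.open('nonconforming.fits')  # first card is SIMPLE=F
>>> data = hdul[0].data   # read on demand: raw bytes after the END card
>>> type(data)
<type 'str'>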
* Added support for FITS files containing extensions with
unknown XTENSION card values. Standard FITS files support
extension HDU's of types TABLE, IMAGE, BINTABLE, and
A3DTABLE. Accessing a nonstandard extension from a FITS file
will now create a _NonstandardExtHDU object. Accessing the
data of this object will cause the data to be read from the
file into a string. If the HDU is written back to a file, the
string data is written after the header and padded to fill a
standard 2880-byte FITS block.
* Provided initial support for an image compression convention
known as the "Tiled Image Compression Convention".
o The principle used in this convention is to first
divide the n-dimensional image into a rectangular grid
of subimages or "tiles". Each tile is then compressed as
a continuous block of data, and the resulting compressed
byte stream is stored in a row of a variable length column
in a FITS binary table. Several commonly used algorithms
for compressing image tiles are supported, including
GZIP, RICE, HCOMPRESS, and IRAF pixel list (PLIO).
o Support for compressed image data is provided using the
optional "pyfitsComp" module contained in a C shared library
(pyfitsCompmodule.so).
o The header of a compressed image HDU appears to the
user like any image header. The actual header stored in
the FITS file is that of a binary table HDU with a set of
special keywords, defined by the convention, to describe the
structure of the compressed image. The conversion between
binary table HDU header and image HDU header is all performed
behind the scenes. Since the HDU is actually a binary table,
it may not appear as a primary HDU in a FITS file.
o The data of a compressed image HDU appears to the user
as standard uncompressed image data. The actual data is
stored in the fits file as Binary Table data containing
at least one column (COMPRESSED_DATA). Each row of this
variable-length column contains the byte stream that was
generated as a result of compressing the corresponding
image tile. Several optional columns may also appear. These
include UNCOMPRESSED_DATA, to hold the uncompressed pixel
values for tiles that cannot be compressed; ZSCALE and ZZERO,
to hold the linear scale factor and zero point offset which
may be needed to transform the raw uncompressed values back
to the original image pixel values; and ZBLANK, to hold the
integer value used to represent undefined pixels (if any)
in the image.
o To create a compressed image HDU from scratch, simply
construct a CompImageHDU object from an uncompressed image
data array and its associated image header. From there,
the HDU can be treated just like any image HDU.
>>> hdu = pyfits.CompImageHDU(imageData, imageHeader)
>>> hdu.writeto('compressed_image.fits')
o The signature for the CompImageHDU initializer method describes
the possible options for constructing a CompImageHDU object:
def __init__(self, data=None, header=None, name=None,
             compressionType='RICE_1',
             tileSize=None,
             hcompScale=0.,
             hcompSmooth=0,
             quantizeLevel=16.):
    """
    data:            data of the image
    header:          header to be associated with the image
    name:            the EXTNAME value; if this value is None,
                     the name from the input image header will
                     be used; if there is no name in the input
                     image header then the default name
                     'COMPRESSED_IMAGE' is used
    compressionType: compression algorithm: 'RICE_1', 'PLIO_1',
                     'GZIP_1', or 'HCOMPRESS_1'
    tileSize:        compression tile sizes; the default treats
                     each row of the image as a tile
    hcompScale:      HCOMPRESS scale parameter
    hcompSmooth:     HCOMPRESS smooth parameter
    quantizeLevel:   floating point quantization level
    """
* Added two new convenience functions. The setval function
allows setting the value of a single header card in a fits
file. The delval function allows deleting a single header
card from a fits file.
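o For example (the file and keyword names are illustrative, and
the 'value' keyword argument shown is an assumption; the full
signatures are not listed in these notes):
>>> import pyfits
>>> pyfits.setval('image.fits', 'OBSERVER', value='E. Hubble')
>>> pyfits.delval('image.fits', 'OBSERVER')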
* A modification was made to allow the reading of data from a fits
file containing a Table HDU that has duplicate field names. It
is normally a requirement that the field names in a Table HDU be
unique. Prior to this change, a ValueError was raised when the
data was accessed, to indicate that the HDU contained duplicate
field names. Now, a warning is issued and the field names are
made unique in the internal record array. This will not change
the TTYPEn header card values. The data for all fields remains
accessible by field name, including the first of the fields
whose name is duplicated; the data for the other fields with
duplicated names must be accessed by field number instead of
field name. (CNSHD737193)
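o For example, if a table's fourth and sixth columns were both
named 'FLUX' (names and positions here are hypothetical):
>>> data = hdul[1].data          # table with a duplicated field name
>>> flux1 = data.field('FLUX')   # the first column named 'FLUX'
>>> flux2 = data.field(5)        # the duplicate, by zero-based field number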
* An enhancement was made to allow the reading of unsigned
integer 16 values from an ImageHDU when the data is signed
integer 16, BZERO is equal to 32768, and BSCALE is equal to
1 (the standard way of scaling unsigned integer 16 data). A
new optional keyword argument (uint16) was added to the open
convenience function. Supplying a value of True for this argument
will cause data of this type to be read in and scaled into an
unsigned integer 16 array instead of a float 32 array. If an HDU
associated with a file that was opened with the uint16 option
and containing unsigned integer 16 data is written to a file,
the data will be reverse scaled into an integer 16 array and
written out to the file, and the BSCALE/BZERO header cards will
be written with the values 1 and 32768, respectively. (CHSHD736064)
See the following example:
>>> import pyfits
>>> hdul=pyfits.open('o4sp040b0_raw.fits',uint16=1)
>>> hdul[1].data
array([[1507, 1509, 1505, ..., 1498, 1500, 1487],
[1508, 1507, 1509, ..., 1498, 1505, 1490],
[1505, 1507, 1505, ..., 1499, 1504, 1491],
...,
[1505, 1506, 1507, ..., 1497, 1502, 1487],
[1507, 1507, 1504, ..., 1495, 1499, 1486],
[1515, 1507, 1504, ..., 1492, 1498, 1487]], dtype=uint16)
>>> hdul.writeto('tmp.fits')
>>> hdul1=pyfits.open('tmp.fits',uint16=1)
>>> hdul1[1].data
array([[1507, 1509, 1505, ..., 1498, 1500, 1487],
[1508, 1507, 1509, ..., 1498, 1505, 1490],
[1505, 1507, 1505, ..., 1499, 1504, 1491],
...,
[1505, 1506, 1507, ..., 1497, 1502, 1487],
[1507, 1507, 1504, ..., 1495, 1499, 1486],
[1515, 1507, 1504, ..., 1492, 1498, 1487]], dtype=uint16)
>>> hdul1=pyfits.open('tmp.fits')
>>> hdul1[1].data
array([[ 1507., 1509., 1505., ..., 1498., 1500., 1487.],
[ 1508., 1507., 1509., ..., 1498., 1505., 1490.],
[ 1505., 1507., 1505., ..., 1499., 1504., 1491.],
...,
[ 1505., 1506., 1507., ..., 1497., 1502., 1487.],
[ 1507., 1507., 1504., ..., 1495., 1499., 1486.],
[ 1515., 1507., 1504., ..., 1492., 1498., 1487.]],
dtype=float32)
* Enhanced the message generated when a ValueError exception is
raised when attempting to access a header card with an unparsable
value. The message now includes the Card name.
numdisplay Version 1.5.4
------------------------
Version 1.5.4 changes the default behavior of the 'bufname'
parameter so that the buffer size is selected automatically based
on the image shape, while setting bufname='iraf' uses the
'stdimage' environment/IRAF variable value for the buffer definition.
In addition, several smaller changes have been made to allow this
package to be installed independently of the rest of the STScI_Python
release, if desired.
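For example (assuming an image display server such as DS9 or ximtool
is already running, and 'data' is a numpy image array):
>>> import numdisplay
>>> numdisplay.display(data)                  # buffer sized from the image shape
>>> numdisplay.display(data, bufname='iraf')  # buffer from the 'stdimage' variable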
imagestats 1.2
--------------
No changes in the imagestats package since the last release.
Pytools
-------
Significant additions and revisions have been made to this package in
support of new software which will be delivered with the next release.
Changes to existing modules include:
- modifying the 'getExtn()' function in the 'fileutil' module to
accept the results from 'parseExtn()' as input.
- changing 'parseExtn()' in the 'fileutil' module to return
a tuple of '(extname, extver)' as output. This allows the output
to be passed directly to PyFITS as the 'ext' keyword (see the
sketch after this list).
- changing the default behavior of 'makewcs' to always perform the
time-dependent (TDD) correction for ACS data unless the TDDCORR
keyword is set to OMIT.
- revisions to the 'nimageiter' module to catch cases where input
images are small enough to fit in a single buffer, while also
working properly on larger images where the last buffer needs to
be large enough for the scrolling buffer.
- correcting a problem in 'nmpfit' when the covariance matrix is
not fully populated.
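As a rough sketch of the revised 'fileutil' behavior (the file name
and the extension string format are illustrative assumptions):
>>> from pytools import fileutil
>>> extn = fileutil.parseExtn('sci,2')   # returns ('sci', 2)
>>> import pyfits
>>> data = pyfits.getdata('input_flt.fits', ext=extn)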
A new GUI parameter handling interface based on ConfigObj has been
developed: the Task Editor And Launcher (TEAL). This GUI provides
a way to edit parameters for a Python task through the use of ConfigObj
classes and parameter files (with type-aware value checking as appropriate)
and, if desired, to run the task from the GUI. This interface can easily
be used with any Python task employing parameters. The user has the option
to save a copy of the revised parameter settings for the next run. This
GUI works very much like the EPAR GUI from PyRAF, with several additional
features, such as:
- graying out sections of large parameter sets based on boolean parameters
- loading or saving custom parameter sets through a dialog
- implementing dependency logic for controlling parameter settings based on
the value of other parameters
- the ability to load parameters, check them, and execute the task
all outside of the GUI
This version of TEAL, in the 'teal' module, represents the first
implementation and has been made available as a beta version due to
the limited amount of testing done so far. Pytools also includes the
ConfigObj software and an associated validator module (vtor_checks)
to support TEAL.
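As a rough illustration of invoking TEAL on a task (the task name is
a placeholder, and it is assumed that the module's main entry point
is the teal() function and that the task supplies a ConfigObj
parameter file for TEAL to edit):
>>> from pytools import teal
>>> teal.teal('mytask')   # open the parameter editor GUI for 'mytask'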
Finally, the new module 'check_files' contains the code used for
verifying the input files for MultiDrizzle, as well as separate
functions for translating files from IRAF GEIS or waiver-FITS to
multi-extension FITS files.
opuscoords
----------
The new package opuscoords has been added to stsci_python for
general distribution and use in the pipeline. This package currently
implements only two functions, used by OPUS during Generic Conversion
to convert RA and Dec (from the CRVAL keywords) into galactic and
ecliptic coordinates.
============
Applications
============
MultiDrizzle Version 3.3.1
--------------------------
Version 3.3.1 implements only the following changes:
- added a new parameter, 'proc_unit', which allows the user to
specify the units to be used when running MultiDrizzle:
native units (DN, counts, or electrons, depending on the
instrument) or electrons. The default has been set to
'electrons' (see the sketch after this list).
- fixed a bug in the use of the image iterator (also required a
fix in nimageiter in pytools)
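A sketch of selecting the new parameter when running MultiDrizzle
from Python (the input pattern is illustrative, and the build/run
driver calls are an assumption based on typical usage rather than on
these notes):
>>> import multidrizzle
>>> md = multidrizzle.Multidrizzle(input='*_flt.fits',
...                                proc_unit='native')
>>> md.build()   # set up the association and output products
>>> md.run()     # run all MultiDrizzle steps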
PyDrizzle Version 6.3.1
-----------------------
Only two updates have been made to PyDrizzle since the last release.
The first fixes a bug in the C extension so that it compiles
successfully in full 64-bit mode on all supported platforms
(Linux, Mac OS X, Solaris). The second allows PyDrizzle to work with
the correct DQ extension when the 'group' parameter is specified.
CALCOS 2.8a
-----------
CALCOS Version 2.8a will support COS data analysis during the
Servicing Mission Observatory Verification (SMOV) program after the
installation of COS on HST during Servicing Mission 4 (SM4). The
changes from the previously released version include:
- Improvements to wavecal processing
- Proper handling of images with no data, including background
regions of subarrays
- Support of imaging wavecals
- TDSCORR now warns the user when using a dummy reference file
- Keyword SP_SLOPE (specifying the slope of the spectrum) was replaced
by SP_SLP_A and SP_SLP_B for FUV data and SP_SLP_A, SP_SLP_B,
and SP_SLP_C for NUV data when OBSTYPE=SPECTROSCOPIC.
- Shifting of the flt and counts images for ACCUM data to include the
offsets determined by wavecal processing.
- Revisions to widen the regions read from the BPIXTAB by the offset
determined by wavecal processing when applying the bad pixel flags
to the DQ extension
- Changes to add a DQ column to the X1D products
- Changes to write a trailer file for each exposure, containing only
the information relevant to that exposure
- Updates to the header to include photometric keywords for imaging data
- Various revisions to more transparently handle zero exposure time
data and out-of-bounds pixels, such as those from subarray data at
the edge of the chip
- Flagging of out-of-bounds pixels was improved, including (for FUV
data) geometric correction of the border
- NUV dispersion axis is now 250 pixels longer in calibrated
images to allow for fpoffset
- Deadtime correction was revised
- For ACCUM data, a pseudo-corrtag file is now created, and most
processing of ACCUM data is done using the same functions as
for TIME-TAG data
- Keyword SDQFLAGS is now used when setting the DQ_WGT column
- The FUV and NUV corrtag tables now have the same columns
A couple of modes will not be fully calibrated in the pipeline; namely:
- ACQ image data will not be calibrated yet,
- wavelength calibration for NUV data needs improvement, and
- imaging wavecals for TIME-TAG TAGFLASH data in direct imaging mode
cannot yet be used to compensate for mirror/mechanism drift.
NICMOS Data Analysis
--------------------
The pipeline task 'runcalsaa' was completely redesigned so that the
CAL file is always the final product and includes keywords that show
whether the bright-earth persistence (BEP) and SAA removal have been
applied. Changes were also made to ensure that pedestal removal was
not applied to the final product itself, but only used in the BEP
and SAA correction steps.
In addition, 'runtempfrombias', 'nic_rem_persist' and 'persutil' were
all modified to separate IRAF CL parameter checking (and dependence
on PyRAF/IRAF) from the Python code. This allows them to be run from
Python directly without installing PyRAF/IRAF.
STIS Data Analysis
------------------
No changes since the last release.
WFPC2 Data Analysis
-------------------
No changes since the last release.