STScI_Python Version 2.7 Release Notes
19 November 2008
This release includes new versions of PyRAF, PyFITS, pytools,
Multidrizzle and related tasks. This release also includes more
instrument-specific packages to support WFPC2, NICMOS, STIS and COS.
The convolve, image, and ndimage packages have been included with
this release to ensure compatibility with the rest of our code.
No more support for Numarray or Numeric
=======================================
This is the first release that eliminates all support for numarray
and Numeric; numarray is no longer used within STScI_Python.
Information on making the switch from numarray to numpy
may be found at:
http://www.stsci.edu/resources/software_hardware/numarray/numarray2numpy.pdf
Platform Support
================
Normally, platform support issues do not apply to Python tasks,
as most Python code will run on all platforms on which Python has
been installed. This distribution was tested to correctly support
installation on Linux, Mac OS X, and Solaris, while also being
provided for installation under Windows. The single exception is
that the text-based epar functionality now available in PyRAF
(in addition to the already existing GUI-based epar) is not
available under Solaris, and likely will never be. In addition,
PyRAF currently requires the installation of IRAF, yet the only
IRAF distribution for Windows runs under Cygwin and no testing of
PyRAF has been conducted under Cygwin.
Documentation
=============
Documentation for these tasks has been consolidated into a
single location complete with an index viewable as a local web
page. The documentation includes any available user guides and
API documentation generated using 'EPYDOC' for the modules or
packages. This index can be accessed using:
--> import stscidocs
--> stscidocs.viewdocs()
--> stscidocs.help()
This will automatically bring up the default web browser application
to display the index page. All the available documentation for
software packaged with this distribution can be accessed through this
interface, as this documentation resides locally upon installation.
Python Environment
==================
This release has been tested using Python 2.5.1 and 2.5.2 and
numpy 1.1.0. This release may work on versions of Python as old as
Python 2.4, depending on the module being used. Some modules that
rely heavily on numpy may require features available only in newer
versions of numpy, which are not compatible with, or available for,
older versions of Python.
PyRAF Version 1.6.1
===================
PyRAF 1.6.1 has been released (bundled with stsci_python 2.7 only),
while this release provides some initial indications as to what will
be included in PyRAF Version 1.7.
Additional details about the changes implemented for the ticket
numbers listed in these notes can be found at the PyRAF Trac site:
http://projects.scipy.org/astropy/pyraf/report/6
In addition to a few changes to regression test scripts and other
minor edits, the following enhancements have been made since the
1.5 release:
- Preparations for Native-OSX (Aqua) Support in 1.7: Changes
were made to allow PyRAF to be run natively on OSX (i.e. without
the need for an X server). This capability should currently be
considered an "alpha" implementation. The default is still to
run under X on OSX, but for those interested parties, the PyRAF
Trac site describes how to enable the Aqua version at:
http://projects.scipy.org/astropy/pyraf/ticket/86
OSX 10.5 users will notice a deprecation warning until PyRAF 1.7. The
full "beta" capability will appear in PyRAF 1.7.
- EPAR and TPAR were both enhanced to allow "Save As". The
user may now save a particular version of a parameter file to
a user-specified directory (set the CL variable 'uparm_aux'
for convenience). EPAR can also "Load" such a saved .par file,
if an appropriate one is found in the 'uparm_aux' directory. TPAR
will have the "Load" capability in PyRAF 1.7; a short sketch of
setting 'uparm_aux' appears after this list. (ticket #87)
- PyRAF now includes an optional Matplotlib graphics kernel (beta
version), employing the TkAgg backend. This option is enabled by
setting the environment variable PYRAFGRAPHICS to "matplotlib"
(see the sketch after this list). The TkAgg backend uses
anti-grain geometry (AGG) to render smoother-looking plots and
cleaner, more scalable fonts. The default kernel is still the
Tkinter-based graphics kernel. (ticket #80)
- PyRAF graphics via the matplotlib graphics kernel now supports
matplotlib 0.98 and higher (rev 884).
- A collection of built-in IRAF functions (for error handling,
string manipulation, trig, etc.) had been added to the beta ECL
release, and they are now also available within PyRAF. The new
functions are listed under "New Builtin Functions".
(ticket #8)
- PyRAF's terminal parameter editor (tpar) was enhanced to
accommodate API changes made to urwid between versions 0.9.7 and
0.9.8. Tpar is now compatible with both urwid versions.
(ticket #69)
- The FAQ was moved from the old stsdas.stsci.edu site out to
the main PyRAF site. It was reformatted slightly for appearance,
and some minor corrections/updates were made. The FAQ can now
be found at:
http://www.stsci.edu/resources/software_hardware/pyraf/pyraf_faq
- In an effort to help debug a vague error during startup, the
amount of information in the 'Unknown parameter requested' error
was expanded to include the task name and package name.
(ticket #15)
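For example, here is a minimal sketch of setting 'uparm_aux' from a
PyRAF session before using the EPAR "Save As" feature described
above; the directory path and task name below are purely
illustrative, not part of this release:
>>> from pyraf import iraf
>>> iraf.set(uparm_aux='/home/user/mypars/')  # hypothetical directory for saved .par files
>>> iraf.epar('imstat')                       # then use "Save As" in the EPAR window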
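As a sketch of enabling the matplotlib graphics kernel mentioned
above: the environment variable is normally set in the shell before
starting PyRAF; the Python form below assumes PYRAFGRAPHICS is read
when PyRAF first initializes its graphics, so it must be set before
PyRAF is imported:
>>> import os
>>> os.environ['PYRAFGRAPHICS'] = 'matplotlib'  # set before PyRAF initializes graphics
>>> from pyraf import iraf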
The following minor API changes were made:
- The "filename" parameter was removed from the makeIrafPar()
function. Internal to the function, it was being passed to
multiple places where it was never used. It is not expected that
many external PyRAF users will have scripts which directly call
this function. (ticket #24)
- The 'deg' argument was removed from the clDms() function. It
had been deprecated since April 2004 and it has now been
removed. (ticket #25)
The following bugs were fixed:
- To be more IRAF-compliant, 'epar' is now run when the user
executes a pset. (ticket #17)
- The substr(string, start_index, end_index) function was made
more compatible with the IRAF cl version by ensuring that it
returns an empty string if the start index is given as 0 (the
start index is 1-based, as in IRAF/cl strings). (ticket #82)
For example:
substr("xyz",0,3) = "", not "z" as before
- In IPython mode ("% pyraf --ipython"), PyRAF was in some
cases not correctly handling the asterisk for wildcarding file
names. (ticket #81)
- A stack-trace producing bug was fixed in the code which
redirects graphics output to a file (via '>G'). (ticket #77)
- The "%r" format in iraf.printf() was causing a SyntaxError for
large long integers, and it was not correctly handling negative
numbers. (ticket #79)
- The 'stty' command now correctly reports the number of lines
and columns in the terminal window when requested. If the window
is resized, 'stty resize' is still required. (ticket #20)
PyFITS Version 1.4.1 (November 4 2008)
======================================
Updates for this release are only supported in the NUMPY version of pyfits.
Enhancements implemented in this release include:
- Added support for file objects and file-like objects (a short
example sketch follows this list of enhancements).
* All convenience functions and class methods that
take a file name will now also accept a file
object or file-like object. The file-like objects
supported are StringIO and GzipFile objects. Other
file-like objects will work only if they implement
all of the standard file object methods.
* For the most part, file or file-like objects may be
either opened or closed at function call. An opened
object must be opened with the proper mode depending
on the function or method called. Whenever possible,
if the object is opened before the method is called,
it will remain open after the call. This will not be
possible when writing an HDUList that has been resized
or when writing to a GzipFile object, regardless of
whether it is resized. If the object is closed at
the time of the function call, only the name from the
object is used, not the object itself. The pyfits
code will extract the file name used by the object
and use that to create an underlying file object on
which the function will be performed.
- Added support for record-valued keyword cards as introduced
in the FITS WCS Paper IV proposal for representing a more
general distortion model.
* Record-valued keyword cards are string-valued cards
where the string is interpreted as a definition
giving a record field name and its floating point
value. In a FITS header they have the following
syntax:
keyword= 'field-specifier: float'
where keyword is a standard eight-character FITS
keyword name, float is the standard FITS ASCII
representation of a floating point number, and these
are separated by a colon followed by a single blank.
The grammar for field-specifier is:
field-specifier:
field
field-specifier.field
field:
identifier
identifier.index
where identifier is a sequence of letters (upper or
lower case), underscores, and digits of which the
first character must not be a digit, and index is a
sequence of digits. No blank characters may occur in
the field-specifier. The index is provided primarily
for defining array elements though it need not be
used for that purpose.
Multiple record-valued keywords of the same name but
differing values may be present in a FITS header.
The field-specifier may be viewed as part of the
keyword name.
Some examples follow:
DP1 = 'NAXIS: 2'
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
DP1 = 'NAUX: 2'
DP1 = 'AUX.1.COEFF.0: 0'
DP1 = 'AUX.1.POWER.0: 1'
DP1 = 'AUX.1.COEFF.1: 0.00048828125'
DP1 = 'AUX.1.POWER.1: 1'
* A FITS header consists of card images. In pyfits
each card image is manifested by a Card object.
A pyfits Header object contains a list of Card
objects in the form of a CardList object. A
record-valued keyword card image is represented in
pyfits by a RecordValuedKeywordCard object. This
object inherits from a Card object and has all of the
methods and attributes of a Card object.
* RecordValuedKeywordCards have attributes .key,
.field_specifier, .value, and .comment. Both .value
and .comment can be changed but not .key or
.field_specifier. The constructor will extract the
field-specifier from the input key or value,
whichever is appropriate. The .key attribute is the
8 character keyword.
- Enhanced the way import errors are reported to provide more
information.
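As an example sketch of the file-object support described in the
first item above, a gzipped FITS file might be opened through a
GzipFile object (the file name here is hypothetical):
>>> import gzip
>>> import pyfits
>>> f = gzip.GzipFile('image.fits.gz', 'rb')  # any object with standard file methods will do
>>> hdulist = pyfits.open(f)
>>> hdulist.info()
>>> hdulist.close()
>>> f.close()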
No changes directed at improving the performance of PyFITS were made in this release.
The following bug fixes were implemented in this release:
- Corrected a bug that occurs when writing an HDU out to a
file. During the write, any keyboard interrupts are trapped
so that the write completes before the interrupt is handled.
Unfortunately, the keyboard interrupt handler was not properly
reinstated after the write completed. This has been fixed.
- Corrected a bug when using ipython, where temporary files
created with the tempfile.NamedTemporaryFile method were not
automatically removed. This can happen, for instance, when
opening a gzipped FITS file or when opening a FITS file over
the internet. The files are now removed.
- Corrected a bug that occurs when retrieving variable length
character arrays from binary table HDUs (PA() format) and
using slicing to obtain rows of data containing variable
length arrays. The code issued a TypeError exception. The
data can now be accessed with no exceptions.
- Corrected a bug that occurs when retrieving data from a
fits file opened in memory map mode when the file contains
multiple image extensions or ASCII table or binary table
HDUs. The code issued a TypeError exception. The data can
now be accessed with no exceptions.
- Corrected a bug that occurs when attempting to get a subset
of data from a Binary Table HDU and then use the data to
create a new Binary Table HDU object. A TypeError exception
was raised. The data can now be subsetted and used to
create a new HDU.
- Corrected a bug that occurs when attempting to scale an
Image HDU back to its original data type using the
_ImageBaseHDU.scale method. The code was not resetting the
BITPIX header card back to the original data type. This
has been corrected.
- Changed the code to issue a KeyError exception instead of a
NameError exception when accessing a non-existent field in
a table.
- Corrected a bug that occurs when a card value is a string
and contains a colon but is not a record-valued keyword card.
- Corrected a bug where pyfits fails to properly handle a
record-valued keyword card with values using exponential
notation and trailing blanks.
numdisplay version 1.5
======================
This version addresses a number of issues, namely:
- socket management was serialized to allow the overlay to work
properly on Mac OS X 10.5 where the sockets provided by OS X are threaded
- automatic reset of connection to ds9
- origin of the image in the display buffer now matches IRAF's origin
in all cases
Enhancements implemented in this version include:
- The package was revised to not require PyRAF. The numdisplay
task now uses the fileutil module from the pytools package,
instead of relying on PyRAF, to interpret the environment variable
'stdimage' to define the default image buffer to be used.
- The package structure of 'numdisplay' was also revised to put
all the source code in a 'lib' subdirectory to be consistent
with the rest of the 'stsci_python' packages.
- New modules have been added to provide for overlay capabilities
similar to 'tvmark', although a bit more limited. The overlay
module was updated to include a new method for drawing markers
(characters) based on the 'ichar' module and bit-mapped font
derived from IRAF's 5x7 font.
- This version also includes support for a 'zscale' algorithm
based on the algorithm used by IRAF's display task.
imagestats 1.2
==============
The computation of the standard deviation has been modified to
properly track the values after clipping. The accumulation of the
standard deviation in computeMean.c was updated, and the results
now match the IRAF imstat results much more closely.
Pytools
=======
The following changes were made to the code in the package:
- the nimageiter module now works with large data (~2 GB image)
input; it was tested using a 15600x8400 pixel image as well as
a 1137x1256 pixel image.
- The irafglob function now returns an empty list if not given
valid input, instead of raising an exception.
- Several issues were addressed in 'makewcs', namely:
* improved support for WFPC2 and WFC3.
* added time-dependent distortion support for ACS/WFC data,
including updating the coefficients with the TDD corrections
and saving the original CD matrix elsewhere in the header.
* now writes out additional header keywords to support the
full interpretation of the SIP keywords as needed by PyDrizzle.
* no longer raises an Error if no IDCTAB is found. Instead,
'makewcs' warns the user, does not modify the image, and
continues on to the next image.
- Use of 'testutil' now includes the traceback for non-passing
tests in the log file.
- The new version of 'readgeis' no longer relies on 'numpy.recarray'.
- Interpretation of association tables now performed by the new
module 'asnutil'.
- The module 'fileutil' no longer tries to import PyRAF, to avoid
circular imports by other Python tasks. It now picks up the PyRAF
environment automatically, if PyRAF has been or gets imported, for
interpretation of IRAF environment variables.
============
Applications
============
MultiDrizzle Version 3.2.1
==========================
The following changes were made:
- Added new parameter 'combine_maskpt' to set the percentage of
the weight image below which a pixel is considered a bad pixel
when creating a new mask.
- Added support for UVIS and IR WFC3 data.
- Changed algorithms to use the instrument's native units, rather
than always converting the data to units of electrons.
- Improved the automatic IVM creation for NICMOS data.
- Updated the bunit keyword appropriately in the science extension
of the DRZ product.
- Automatic IVM generation is now run if the user specifies it and
no IVM files are available with the input images.
The following bugs were fixed:
- the name of the 'runfile' automatically generated by Multidrizzle
is now based on the rootname of the output product if no name is
given in the 'runfile' parameter
- blank string inputs from the MDRIZTAB reference file are now
handled correctly
PyDrizzle Version 6.3.0
=======================
- ACS Time-Dependent Skew term added and other changes.
The ACS distortion is time dependent, with the skew term changing
noticeably since deployment. At present, ACS image headers contain
distortion coefficients that were correct at the epoch of launch,
but which deteriorate in accuracy with time from launch. Errors of
a substantial fraction of a pixel can be seen in data taken in 2005.
This time dependent effect is described in Jay Anderson's ISR ACS
2007-08 found at:
http://www.stsci.edu/hst/acs/documents/isrs/isr0708.pdf
In this new version, 'makewcs' creates, and PyDrizzle uses,
coefficient terms that are derived for the epoch of observation
using Jay Anderson's estimate of the time-dependent skew change.
The implementation in PyDrizzle and makewcs agrees with Jay
Anderson's algorithm to significantly better than 0.01 pixels
(typical agreement is at the millipixel level). In turn, Jay
Anderson's solution appears to be good over the course of ACS
operation to between 0.01 and 0.05 pixels, with results varying
between image sets that are compared. This, however, is about
an order of magnitude improvement over the errors that can exist
without the inclusion of the time-dependent distortion.
The time-dependent correction will only be applied, though, when
the keyword 'TDDCORR' has been added to the image header and set
to a value of 'PERFORM'. Currently, the HST pipeline does not add
this keyword to the header, so the user needs to add it to the
Primary header to turn on this correction.
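For example, a minimal sketch of turning the correction on with
pyfits (the file name below is hypothetical):
>>> import pyfits
>>> hdulist = pyfits.open('j8c0d1011_flt.fits', mode='update')  # hypothetical ACS file name
>>> hdulist[0].header.update('TDDCORR', 'PERFORM')              # add or update keyword in the Primary header
>>> hdulist.close()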
While the correction provided in this version of PyDrizzle/makewcs
is far more accurate than that provided by the standard header,
it is possible that the solution or code used to implement the
solution will change slightly in the future.
Users will also note another small change to image headers.
The suffix -SIP, standing for Simple Imaging Polynomial, has been
added to the coordinate type header keywords, which are now
'RA---TAN-SIP' and 'DEC--TAN-SIP'. The suffix informs the user or
code that the
header coefficients are in the SIP format. Users may already
be familiar with this format from Spitzer headers. In practice,
SIP header information has already appeared in the HST headers.
The addition of this suffix merely informs software that this
convention is being followed. This will become important in the
future for ACS software, which will soon take its distortion
information from the image header rather than from separate
coefficient files. If this change in keywords is a problem for
any of your own private software, simply removing the '-SIP' from
these values will return the header to its previous format.
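If that is the case, a minimal sketch of stripping the suffix with
pyfits (the file name and extension number are hypothetical) might
be:
>>> import pyfits
>>> hdulist = pyfits.open('j8c0d1011_flt.fits', mode='update')  # hypothetical file name
>>> hdr = hdulist[1].header
>>> hdr['CTYPE1'] = hdr['CTYPE1'].replace('-SIP', '')           # e.g. 'RA---TAN-SIP' -> 'RA---TAN'
>>> hdr['CTYPE2'] = hdr['CTYPE2'].replace('-SIP', '')
>>> hdulist.close()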
The following additional changes were made:
- Support added for WFC3
- Input for PyDrizzle and Multidrizzle now interpreted using the
same code, and can accept the same range of inputs.
- Input files processed using a stand-alone module to create an
association table suitable for use with Multidrizzle and PyDrizzle
that can be passed as an in-memory object rather than a file.
- Input files in waiver FITS or GEIS format automatically converted
to multi-extension FITS (MEF) format. Only the MEF files get
updated and used by PyDrizzle, leaving the original waiver
FITS or GEIS files untouched. This primarily affects WFPC2 data.
- Updated use of DQ arrays for WFPC2 to support subarray images
where only a subset of the chips were read out
- The buildasn and updateasn modules have been removed from the
pydrizzle package. This functionality was replaced by the
asnutil module in the pytools package.
The following bugs were fixed:
- reference chip of a subarray WFPC2 observation now correctly
determined, especially if only the PC and one of the WF chips
were read out.
- Determination of the chip-to-chip offsets now uses the correct
plate scale, even if the chips have different plate scales. This
affects WFPC2 subarray data with PC and WF chip readouts.
- shifts now applied in the output frame to allow accurate
application of all shift file values, especially rotations and
shifts, without the need for iteration of the alignment
- BUNIT keyword now updated correctly in the output image based
on units specified by the user
- First image in an association now used as the template for
the output image to ensure that TIME-OBS and DATE-OBS keywords,
among others, are correct
- problems interpreting @-files were fixed.
- the output image header keyword VAFACTOR now always reports 1.0,
since the velocity aberration correction has been fully applied by
the distortion model in the output image
- problems interpreting the default NICMOS coefficient tables
distributed with IRAF's dither package were corrected to use
the right plate scale for camera 3.
- methods for performing coordinate transformations within
PyDrizzle were fixed so that they can be called by the user
directly after initializing a PyDrizzle object.
CALCOS 2.5
==========
The COS pipeline calibration software, calcos, continues to be
developed in preparation for the installation of COS on HST during
Servicing Mission 4, currently scheduled for sometime in 2009. This
code performs all the standard calibrations of COS data for use in
pipeline processing or reprocessing by an observer.
This code continues to be developed heavily to ensure that the most
appropriate algorithms are implemented in time for launch. Additional
algorithm changes have yet to be implemented.
The coordinates for COS raw data have been changed. For both
detectors, the dispersion direction is now in the first axis and
increases to the right.
When calibrating an association, a file with the product name
and "_jnk.fits" will be written. This is a copy of an input
"_spt.fits" file, created for use by the archive, and the user
should feel free to delete it.
NICMOS Data Analysis
====================
This package still contains modules for supporting data analysis tasks developed for pre- and post-pipeline processing of NICMOS data. They can each be imported independently and used on NICMOS data directly from Python.
Enhancements made to the code in the package include:
- added a new task to perform bright-earth persistence removal
- added support for all cameras when computing the temperature
from bias using CalTempFromBias
- revised the default algorithm for CalTempFromBias to use the
blind correction
- changed keyword names for CalTempFromBias to be consistent
with other calibration processing switches
- added an IRAF EPAR interface to CalTempFromBias so it can be
run directly from the STSDAS NICMOS package
STIS Data Analysis
==================
The STIS data analysis tasks are organized into a single package, stistools.
This package contains modules for supporting data analysis tasks developed for post-pipeline processing of STIS data. They can each be imported independently and used on STIS data directly from Python.
The only bug fixed was:
- interpretation of IRAF environment variables was fixed to properly
handle multiple '/' characters in the full path.
WFPC2 Data analysis
===================
This package contains modules for supporting data analysis tasks developed for pre- and post-pipeline processing of WFPC2 data. They can each be imported independently and used on WFPC2 data directly from Python.
Enhancements implemented in these versions include:
- CTE keyword computation added to WFPC2 pipeline script
- additional keywords written out to WFPC2 image header when
running destreak task
- the wfpc2destreak routine now works directly on calibrated
WFPC2 images
- user can now supply their own cosmic-ray mask when running
wfpc2destreak
Bugs fixed in this code include:
- the interpretation of GEIS input was fixed to automatically
create multi-extension FITS (MEF) files and use those MEF files
for processing
- CTE keywords are now added to MEF file headers if they do not
already exist, while the results of the computation are also
printed for the user
- bugs in the computation of the background used for deriving
the CTE keywords were corrected
Examples of how to access and use Record-Valued Keywords
=========================================================
The use of record-valued keywords with PyFITS requires some
enhancements to the syntax recognized by PyFITS. This section
provides additional details on the use of this new keyword format
with PyFITS, with both descriptions and examples.
- As with standard header cards, the value of a
record-valued keyword card can be accessed using
either the index of the card in a HDU's header or via
the keyword name. When accessing using the keyword
name, the user may specify just the card keyword or
the card keyword followed by a period followed by the
field-specifier. Note that while the card keyword is
case insensitive, the field-specifier is not. Thus,
hdu['abc.def'], hdu['ABC.def'], or hdu['aBc.def'] are
all equivalent but hdu['ABC.DEF'] is not.
- When accessed using the card index of the HDU's
header the value returned will be the entire string
value of the card. For example:
>>> print hdr[10]
NAXIS: 2
>>> print hdr[11]
AXIS.1: 1
- When accessed using the keyword name exclusive of the
field-specifier, the entire string value of the
header card with the lowest index having that keyword
name will be returned. For example:
>>> print hdr['DP1']
NAXIS: 2
- When accessing using the keyword name and the
field-specifier, the value returned will be the
floating point value associated with the
record-valued keyword card. For example:
>>> print hdr['DP1.NAXIS']
2.0
- Any attempt to access a non-existent record-valued
keyword card value will cause an exception to be
raised (IndexError exception for index access or
KeyError for keyword name access).
- Updating the value of a record-valued keyword card
can also be accomplished using either index or
keyword name. For example:
>>> print hdr['DP1.NAXIS']
2.0
>>> hdr['DP1.NAXIS'] = 3.0
>>> print hdr['DP1.NAXIS']
3.0
- Adding a new record-valued keyword card to an
existing header is accomplished using the
Header.update() method just like any other card.
For example:
>>> hdr.update('DP1', 'AXIS.3: 1', 'a comment',
after='DP1.AXIS.2')
- Deleting a record-valued keyword card from an
existing header is accomplished using the standard
list deletion syntax just like any other card. For
example:
>>> del hdr['DP1.AXIS.1']
- In addition to accessing record-valued keyword cards
individually using a card index or keyword name,
cards can be accessed in groups using a set of
special pattern matching keys. This access is made
available via the standard list indexing operator
providing a keyword name string that contains one or
more of the special pattern matching keys. Instead
of returning a value, a CardList object will be
returned containing shared instances of the Cards in
the header that match the given keyword specification.
- There are three special pattern matching keys. The
first key '*' will match any string of zero or more
characters within the current level of the
field-specifier. The second key '?' will match a
single character. The third key '...' must appear at
the end of the keyword name string and will match all
keywords that match the preceding pattern down all
levels of the field-specifier. All combinations of
?, *, and ... are permitted (though ... is only
permitted at the end). Some examples follow:
>>> cl=hdr['DP1.AXIS.*']
>>> print cl
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
>>> cl=hdr['DP1.*']
>>> print cl
DP1 = 'NAXIS: 2'
DP1 = 'NAUX: 2'
>>> cl=hdr['DP1.AUX...']
>>> print cl
DP1 = 'AUX.1.COEFF.1: 0.000488'
DP1 = 'AUX.2.COEFF.2: 0.00097656'
>>> cl=hdr['DP?.NAXIS']
>>> print cl
DP1 = 'NAXIS: 2'
DP2 = 'NAXIS: 2'
DP3 = 'NAXIS: 2'
>>> cl=hdr['DP1.A*S.*']
>>> print cl
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
- The use of the special pattern matching keys for
adding or updating header cards in an existing header
is not allowed. However, the deletion of cards from
the header using the special keys is allowed. For
example:
>>> del hdr['DP3.A*...']
- As noted above, accessing a pyfits Header object using
the special pattern matching keys will return a
CardList object. This CardList object can itself be
searched in order to further refine the list of
Cards. For example:
>>> cl=hdr['DP1...']
>>> print cl
DP1 = 'NAXIS: 2'
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
DP1 = 'NAUX: 2'
DP1 = 'AUX.1.COEFF.1: 0.000488'
DP1 = 'AUX.2.COEFF.2: 0.00097656'
>>> cl1=cl['*.*AUX...']
>>> print cl1
DP1 = 'NAUX: 2'
DP1 = 'AUX.1.COEFF.1: 0.000488'
DP1 = 'AUX.2.COEFF.2: 0.00097656'
- The CardList keys() method will allow the retrieval
of all of the key values in the CardList. For
example:
>>> cl=hdr['DP1.AXIS.*']
>>> print cl
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
>>> cl.keys()
['DP1.AXIS.1', 'DP1.AXIS.2']
- The CardList values() method will allow the retrieval
of all of the values in the CardList. For example:
>>> cl=hdr['DP1.AXIS.*']
>>> print cl
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
>>> cl.values()
[1.0, 2.0]
- Individual cards can be retrieved from the list using
standard list indexing. For example:
>>> cl=hdr['DP1.AXIS.*']
>>> c=cl[0]
>>> print c
DP1 = 'AXIS.1: 1'
>>> c=cl['DP1.AXIS.2']
>>> print c
DP1 = 'AXIS.2: 2'
- Individual card values can be retrieved from the list
using the value attribute of the card. For example:
>>> cl=hdr['DP1.AXIS.*']
>>> cl[0].value
1.0
- The cards in the CardList are shared instances of the
cards in the source header. Therefore, modifying a
card in the CardList also modifies it in the source
header. However, making an addition or a deletion to
the CardList will not affect the source header. For
example:
>>> hdr['DP1.AXIS.1']
1.0
>>> cl=hdr['DP1.AXIS.*']
>>> cl[0].value = 4.0
>>> hdr['DP1.AXIS.1']
4.0
>>> del cl[0]
>>> print cl['DP1.AXIS.1']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "NP_pyfits.py", line 977, in __getitem__
return self.ascard[key].value
File "NP_pyfits.py", line 1258, in __getitem__
_key = self.index_of(key)
File "NP_pyfits.py", line 1403, in index_of
raise KeyError, 'Keyword %s not found.' % `key`
KeyError: "Keyword 'DP1.AXIS.1' not found."
>>> hdr['DP1.AXIS.1']
4.0
- A new RecordValuedKeywordCard object is created with
the RecordValuedKeywordCard constructor:
RecordValuedKeywordCard(key, value, comment). The
key and value arguments may be specified in two ways.
The key value may be given as the 8 character keyword
only, in which case the value must be a character
string containing the field-specifier, a colon
followed by a space, followed by the actual value.
The second option is to provide the key as a string
containing the keyword and field-specifier, in which
case the value must be the actual floating point
value. For example:
>>> c1 = pyfits.RecordValuedKeywordCard('DP1',
'NAXIS: 2', 'Number of variables')
>>> c2 = pyfits.RecordValuedKeywordCard('DP1.AXIS.1',
1.0, 'Axis number')
- Just like standard Cards, a RecordValuedKeywordCard
may be constructed from a string using the
fromstring() method or verified using the verify()
method. For example:
>>> c1 = pyfits.RecordValuedKeywordCard().fromstring(
"DP1 = 'NAXIS: 2' / Number of independent variables")
>>> c2 = pyfits.RecordValuedKeywordCard().fromstring(
"DP1 = 'AXIS.1: X' / Axis number")
>>> print c1; print c2
DP1 = 'NAXIS: 2' / Number of independent variables
DP1 = 'AXIS.1: X' / Axis number
>>> c2.verify()
Output verification result:
Card image is not FITS standard (unparsable value
string).
- A standard card that meets the criteria of a
RecordValuedKeywordCard may be turned into a
RecordValuedKeywordCard using the class method coerce.
If the card object does not meet the required
criteria then the original card object is just
returned.
>>> c1 = pyfits.Card('DP1','AUX: 1','comment')
>>> c2 = pyfits.RecordValuedKeywordCard.coerce(c1)
>>> print type(c2)
<class 'pyfits.NP_pyfits.RecordValuedKeywordCard'>
- Two other card creation methods are also available as
RecordValuedKeywordCard class methods. These are
createCard() which will create the appropriate card
object (Card or RecordValuedKeywordCard) given input
key, value, and comment, and createCardFromString
which will create the appropriate card object given
an input string. These two methods are also available
as convenience functions.
>>> c1 = pyfits.RecordValuedKeywordCard.createCard(
'DP1', 'AUX: 1', 'comment')
or
>>> c1 = pyfits.createCard('DP1', 'AUX: 1', 'comment')
>>> print type(c1)
<class 'pyfits.NP_pyfits.RecordValuedKeywordCard'>
>>> c1 = pyfits.RecordValuedKeywordCard.createCard(
'DP1', 'AUX 1', 'comment')
or
>>> c1 = pyfits.createCard('DP1', 'AUX 1', 'comment')
>>> print type(c1)
<class 'pyfits.NP_pyfits.Card'>
>>> c1 = pyfits.RecordValuedKeywordCard.createCardFromString(
"DP1 = 'AUX: 1.0' / comment")
or
>>> c1 = pyfits.createCardFromString(
"DP1 = 'AUX: 1.0' / comment")
>>> print type(c1)
<class 'pyfits.NP_pyfits.RecordValuedKeywordCard'>