The MultiDrizzle Handbook
So, you have a bunch of data sitting around on disk and you would like to drizzle it all in a common and consistent manner. If the images have similar content and this approach is appropriate for your data, recording your method in a reusable script is a great way to go. A script not only records exactly how you processed a particular dataset, but also lets you re-process the data later if you decide to make changes, and it gives you a nice framework for processing other datasets.
This is a simple example showing how to write a script that will keep track of your data, run it through MultiDrizzle, and produce some useful information about the results. It is designed for more advanced users who are comfortable with Python/PyRAF programming and are looking for a start on automating their data-reduction pipeline. If you are not familiar with the Python environment, you should still be able to use this script as a starting point and edit it to suit your needs. More advanced users might want to add extra features.
The actual data are unimportant for this example; you can substitute your own image names, or leave the script general so that it applies to many datasets.
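As a concrete starting point, a script can first gather the input exposures and record what is about to be done before any drizzling happens. The sketch below uses only the Python standard library; the file pattern, log name, and parameter values are placeholders to edit for your own data, and the actual MultiDrizzle call is left as a comment:

```python
#!/usr/bin/env python
"""Minimal bookkeeping skeleton for a reusable reduction script."""
import glob
import time

def collect_inputs(pattern):
    # Gather the calibrated exposures matching the pattern, sorted so
    # that repeated runs always process the files in the same order.
    return sorted(glob.glob(pattern))

def log_run(logname, inputs, params):
    # Append a timestamped record of the files and parameters used,
    # so the run can be reproduced or audited later.
    with open(logname, "a") as log:
        log.write("# run at %s\n" % time.asctime())
        for name in inputs:
            log.write("input: %s\n" % name)
        for key in sorted(params):
            log.write("%s = %s\n" % (key, params[key]))

if __name__ == "__main__":
    inputs = collect_inputs("*flt.fits")   # placeholder pattern
    params = {"output": "final_drz.fits", "skysub": "no"}
    log_run("reduction.log", inputs, params)
    # ...here you would call MultiDrizzle on 'inputs' with 'params'...
```

Keeping the bookkeeping separate from the drizzle call makes it easy to reuse the same skeleton for other datasets by changing only the pattern and parameters.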
6.7.2 The Quick and Easy Way
If you hate scripting, or are just uncomfortable with it, there is still an easy way to record what you have done so that you can repeat the process. MultiDrizzle already has an option to save its output commands to the file specified through the "runfile" parameter. This file records the commands necessary to create the final drizzled file. You can also simply copy the command and parameters you used to run MultiDrizzle into a file.
These command files can then be edited down to the important commands which need to be run together. When the PyRAF system is started using the "pyraf" command as described previously, the user's commands are actually being passed to an enhanced interpreter environment that allows use of IRAF CL emulation and provides other capabilities beyond those provided by the standard Python interpreter. In fact, when "pyraf" is typed, a special interpreter is run which is a front end to the Python interpreter. This front-end interpreter handles the translation of CL syntax to Python, command logging, filename completion, shell escapes and the like which are not available in the default Python interpreter.
It is also possible to use PyRAF from a standard Python session, typically started by simply typing "python" at the Unix shell prompt. In that case the simple CL syntax for calling tasks is not available, and tab completion, logging, and so on are not active (unless you are using IPython). For interactive use, the conveniences that come with the PyRAF interpreter are valuable, and we expect that most users will use PyRAF in this mode.
One important thing to understand is that the alternate syntax supported by the PyRAF front end interpreter is provided purely for interactive convenience. When such input is logged, it is logged in its translated, Python form. Scripts should always use the normal Python form of the syntax. The advantage of this requirement is that such scripts need no preprocessing to be executed by Python, and so they can be freely mixed with any other Python programs. In summary, if one runs PyRAF in its default mode, the short-cut syntax can be used; but when PyRAF is being used from scripts or from the standard Python interpreter, one must use standard Python syntax (not CL-like syntax) to run IRAF tasks.
Even in Python mode, task and parameter names can be abbreviated and, for the most part, the minimum matching used by IRAF still applies. As described above, when an IRAF task name is identical to a reserved keyword in Python, it is necessary to prepend a `PY' to the IRAF task name (i.e., use iraf.PYlambda, not iraf.lambda). In Python mode, when task parameters conflict with keywords, they must be similarly modified. The statement iraf.imcalc(in="filename") will generate a syntax error and must be changed either to iraf.imcalc(PYin="filename") or to iraf.imcalc(input="filename"). This keyword/parameter conflict is handled automatically in CL emulation mode.

To record a set of parameters, start PyRAF, load the packages, and open the MultiDrizzle parameter editor:

my_computer> pyraf
--> stsdas
--> dither
--> epar multidrizzle
- Now, save the parameters to a local parameter file by using the "SaveAs" button in the EPAR GUI. Try to pick a name that describes the data configuration this set of parameters applies to when running MultiDrizzle, such as `multidrizzle_4ptdither.par'.
Now, open a new command file in your favorite editor and add a few lines to it so that it can be executed from the shell. Your newly edited file should look something like this:

#!/usr/bin/env python
#Load up the necessary software modules
import pyraf
from pyraf import iraf
from iraf import stsdas,dither
#copy the saved version of your multidrizzle parameter
#file to the uparm directory
#the full path might look something like
#/Users/you/iraf/uparm/ on a Mac
#or /home/you/iraf/ on a Linux or Solaris machine.
dither.multidrizzle(ParList='multidrizzle_4ptdither.par')
#at this point, multidrizzle will run and verify
#the input filenames and the output filename;
#these should default to the values you
#saved as part of your parameter editing session.
#you can also edit the command above to include
#the input and output names
- If you would like to save everything in one file, you can also copy the default MultiDrizzle parameters into your command script. If you are only changing a few things, the easiest way to do this is to add the lines:

iraf.unlearn("multidrizzle")
dither.multidrizzle(input="j08*flt.fits", output="j08_driz.fits", skysub="no")

Then make the file executable and run it from the shell:

my_computer> chmod u+x name_of_file.py
my_computer> name_of_file.py
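To make the same command file serve several datasets, you can take the input pattern and output name from the command line instead of hard-coding them. Here is a small sketch using only the standard library; the script name and usage message are illustrative, and the PyRAF calls are left as comments so the argument handling can be seen on its own:

```python
#!/usr/bin/env python
import sys

def parse_args(argv):
    """Return (input_pattern, output_name) from a command line like:
       name_of_file.py "j08*flt.fits" j08_driz.fits
    """
    if len(argv) != 3:
        raise SystemExit("usage: %s input_pattern output_name" % argv[0])
    return argv[1], argv[2]

# In the real script you would then do, for example:
#   inputs, output = parse_args(sys.argv)
#   from pyraf import iraf
#   from iraf import stsdas, dither
#   iraf.unlearn("multidrizzle")
#   dither.multidrizzle(input=inputs, output=output, skysub="no")
```

Quoting the wildcard pattern on the command line keeps the shell from expanding it, so MultiDrizzle receives the pattern itself.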
These are some additional ways in which a simple file like this can be very useful:
- If you have a science program that takes observations over several months, and the data are automatically delivered to a local directory whenever they become available, you can set up a crontab job that executes your MultiDrizzle script periodically, giving you the most recent drizzled version of the full dataset.
- You can distribute copies of your batch file to colleagues and students for their use. This means fewer files need to be transferred around; only the original calibrated science files need to be archived, saving everyone disk space.
- You can make multiple script files covering the different science cases or field morphologies that you commonly encounter.
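For the crontab suggestion above, an entry along the following lines would re-run the script nightly; the paths are hypothetical and should be replaced with your own (edit your crontab with `crontab -e'):

```
# minute hour day-of-month month day-of-week command
# Re-drizzle the accumulated dataset every night at 2:00 AM,
# appending any output and errors to a log file.
0 2 * * * /home/you/scripts/name_of_file.py >> /home/you/drizzle.log 2>&1
```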
Space Telescope Science Institute