The gpi_pipeline_primitives.xml file in the pipeline/config directory contains an index of all available data processing primitives. It provides the translation between human-friendly primitive descriptions (“Assemble Spectral Datacube”) and IDL function names (extractcube), and lists the available parameters for each primitive. To add a new primitive to the pipeline, follow the steps described below.
The best course of action for creating a new module is to use an existing one as a template.
There is a file __template.pro in the primitives directory for this purpose. You should make a copy of that and edit it to create any new primitive.
This file includes example code and documentation comments demonstrating how to do typical tasks such as accessing data and header arrays, retrieving arguments passed to primitives, and logging output.
Pay particular attention to the way data is passed in and out with pointers in the dataset structure, and to the common blocks of code __start_primitive.pro and __end_primitive.pro that get invoked at the start and end of each primitive to provide common functionality.
Also note the structured comments in the file header (e.g. “PIPELINE ORDER: 2.0” and other comment lines starting with “PIPELINE”). These are what get parsed to create the index file for the primitives, gpi_pipeline_primitives.xml. You will need to edit these comments to define what your new primitive is called, what arguments it takes, where in the sequence the GUIs will suggest placing it, etc. For instance, to define parameters for your primitive, add “PIPELINE ARGUMENT” lines here, most easily by copying the lines from the template and editing as needed.
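As an illustration, the structured header comments look roughly like the following. The primitive name, argument attributes, and values shown here are placeholders; copy the actual lines from __template.pro rather than from this sketch:

```idl
;+
; NAME: gpi_example_primitive
; PIPELINE PRIMITIVE DESCRIPTION: Example Primitive
;
; Illustrative argument definitions (attribute names follow the template):
; PIPELINE ARGUMENT: Name="Save" Type="int" Range="[0,1]" Default="1" Desc="1: save output to disk, 0: don't save"
; PIPELINE ARGUMENT: Name="gpitv" Type="int" Range="[0,500]" Default="2" Desc="1-500: display output in gpitv session, 0: no display"
;
; PIPELINE ORDER: 2.0
;-
```

Each PIPELINE ARGUMENT line becomes one parameter entry for the primitive in gpi_pipeline_primitives.xml, and PIPELINE ORDER controls where the GUIs suggest placing the primitive in a recipe.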
All primitives receive three inputs when called: a dataset array containing pointers to the current dataset (loaded FITS files being processed), a modules list containing the recipe’s primitives and their arguments in order, and a handle to the pipeline backbone object.
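The overall shape of a primitive therefore looks something like the sketch below. This assumes the calling convention and @include pattern used by __template.pro; the function name and the placeholder operation are illustrative only:

```idl
function gpi_example_primitive, DataSet, Modules, Backbone
  ; common startup code: parses arguments, loads common blocks, etc.
  @__start_primitive

  ; operate on the current image via the pointers in the dataset structure
  ; (placeholder operation: scale the current frame by 2)
  *(dataset.currframe) = *(dataset.currframe) * 2

  backbone->Log, "Example primitive ran."

  ; common cleanup code: handles 'save' and 'gpitv' arguments, return status
  @__end_primitive
end
```

The @__start_primitive and @__end_primitive includes provide the shared functionality described above, so the body of the primitive only needs to implement the actual data processing step.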
Since __start_primitive loads two common blocks (PIP and APP_CONSTANTS) that are defined in the backbone, you will get an error when trying to compile your primitive unless the backbone has already been compiled (.comp gpipipelinebackbone__define.pro).
Common arguments

There are a few generically-handled primitive arguments, such as ‘gpitv’ and ‘save’, that are available in every primitive. These are handled either by the __end_primitive code block, or by the pipeline backbone itself in the case of the ‘skip’ keyword.
- gpitv: Send the output image or cube to gpitv
- save: If set to true (1), save the image or cube to disk
- stopidl: If present and set to true, after the primitive is complete IDL will stop at the command line debugger for testing purposes. (This only works if you are running from source, not the compiled code)
- skip: If present and set to true, the primitive will be skipped entirely. This is useful if you want to temporarily disable some step without deleting the line entirely from a recipe.
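In a recipe file, disabling a step this way might look like the following fragment. The element and attribute spellings here are assumptions for illustration; check an existing recipe XML file for the exact format:

```xml
<!-- temporarily disabled step; remove Skip="1" to re-enable -->
<module name="Subtract Dark Background" Skip="1" />
```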
File naming convention

By convention, all primitive filenames should start with “gpi_” and be exactly consistent with the descriptive name chosen for that primitive, with spaces converted to underscores. That is, if you have a primitive named “Write My Paper”, its file should be named gpi_write_my_paper.pro.
Adding to svn

If/when you are adding your new primitive into the GPI DRP subversion repository, after you have done svn add gpi_myfilename.pro, please also do svn propset svn:keywords Id gpi_myfilename.pro. This sets a subversion metadata property so that the Id comment line in the primitive will be updated automatically with every new revision. This is used by the pipeline to write primitive revision numbers into the HISTORY headers.
The XML index file is generated by running the make_primitives_config.pro procedure while in the pipeline directory. This will automatically update gpi_pipeline_primitives.xml based on the header information supplied in all available modules. You may then restart the DRP and other IFS software to take advantage of your new modules.
There is a button in the pipeline Status Console to “Rescan DRP Configuration”. If you press this, it will automatically run make_primitives_config, load the new configuration into the DRP, and recompile all IDL code (assuming you’re not running in IDL Runtime mode from compiled code!). So, if you’re editing a pipeline routine or adding a new one, this is the best way to make that updated code immediately available for execution in a currently-active pipeline session.
The pipeline backbone provides some useful function calls for use in primitives. See the __template.pro file for more example code.
Add a message to the pipeline log. The message will be displayed on screen and written to the log file:
backbone->Log, " Combining datacube using method="+method
Add a keyword value to a header. This will automatically be added to either the Primary or Extension HDU as appropriate, in accordance with the keyword lists defined in the GPI-to-Gemini software ICD.
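For example, a call to set a keyword might look like the following. The keyword name, value, and comment here are illustrative; the backbone decides which HDU receives the keyword:

```idl
; write a keyword; the backbone routes it to the primary or extension HDU
backbone->set_keyword, 'MYKEYWD', 3.14, 'An example keyword comment'
```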
The same function call works to write HISTORY keywords as well:
backbone->set_keyword,'HISTORY',functionname+": loaded file "+c_File
As discussed elsewhere, the primitive “Accumulate Images” is used to gather multiple files together and then combine them. These are stored using either disk files or arrays in memory. To access the saved files, use the accumulate_getimage function call.
Here is some example code, basically a simplified version derived from the Combine 2D Images primitive:
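The following is a hedged sketch of that pattern. The dataset field validframecount and the accumulate_getimage usage follow the conventions described above, but the actual Combine 2D Images primitive differs in detail:

```idl
; number of images gathered by Accumulate Images
nfiles = dataset.validframecount

; size the stack from the first accumulated image
im0 = accumulate_getimage(dataset, 0)
sz = size(im0)
imtab = dblarr(sz[1], sz[2], nfiles)

; load each accumulated image into the stack
for i = 0, nfiles - 1 do imtab[*, *, i] = accumulate_getimage(dataset, i)

; combine, e.g. by taking the median across the stack
combined = median(imtab, dimension=3)
```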
When running in GUI mode, the pipeline provides progress indications via two status bars in the status console: ‘Current Recipe Completion Status’ and ‘Current FITS File Completion Status’. These are updated directly by the function gpistatusconsole::set_percent, which is usually called by gpistatusconsole::update. gpistatusconsole::set_percent takes two arguments (percent values for the two status bars, between 0 and 100), either of which is ignored when set to -1, and an optional /append flag, which causes the input values to be added to the current percentages rather than overwriting them. gpistatusconsole::update sets the two percentages as:
Recipe completion: 100d*(double(filenum) + double(indexModules)/double(N_ELEMENTS(Modules)))/double(nbtotfile)
FITS file completion: 100d*double(indexModules)/double(N_ELEMENTS(Modules))
where filenum is the index of the FITS file currently being processed, nbtotfile is the total number of files in the current recipe, and indexModules is the index of the module currently being run. gpistatusconsole::update is called from gpiPipelineBackbone::Reduce before every call to gpiPipelineBackbone::RunModule, so that the status bars get updated between executions of separate primitives.
Occasionally, a primitive will have a sufficiently long execution time that it makes sense to update the status bars while it is executing to indicate to the user that the pipeline has not stalled or silently crashed. This works particularly well in primitives that execute long-running, fixed-size loops.
The status console object can be retrieved from within the primitive scope as:
statuswindow = backbone->getstatusconsole()
You can then update the FITS File Completion status bar from within the primitive’s main loop as:
FOR loopvar = 0, numIterations-1 DO BEGIN
    ...
    statuswindow->set_percent, -1, 1d/numIterations*100d/double(N_ELEMENTS(Modules)), /append
ENDFOR
This has the effect of incrementally filling in the FITS File Completion status bar while the primitive is executing, while leaving the Recipe Completion status bar static.
If your primitive contains multiple separate loops, you can add multiple calls to update the status bars, but it is up to you to figure out the proper values for incrementing. If you increment the progress on either bar to greater than 100%, no error will occur, but the status bars will be completely filled and will appear static to the user. For an example of this kind of multiple loop implementation, see the Wavelength Solution primitive.