This is a quick tutorial to help new users get familiar with the GPI pipeline. More detailed documentation of the tools shown here can be found on subsequent pages.
As an introduction to reducing data with the GPI Pipeline, a simple set of data is available at the Canadian Astronomy Data Centre (CADC). If this is your first visit to CADC, you will need to register. Once registered, go into the gpi directory and download the GettingStarted_tutorial_dataset folder. This contains a small set of data giving an overview of the different types of images and how to process them. All the files are raw data from the detector, and we will reduce them one at a time.
It is important that the user follow the tutorial in the displayed order. Skipping steps will result in reduction errors (missing calibrations, etc.).
It is assumed you have successfully launched the pipeline following the previous sections. (If not, see the GPI Data Pipeline Installation & Configuration manual.) You should therefore have the two IDL sessions open, with the GPI Launcher GUI and the GPI DRP Status Console below; see Starting the GPI Data Pipeline. The GPI pipeline console should indicate something like:
| Now polling for DRF files in /Users/yourusername/somedirectory/GPI/queue/
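Conceptually, the queue directory works like a drop box: any recipe file written there is noticed on the next poll and executed. The stdlib-only sketch below illustrates one polling pass; it is not pipeline code, and the `.waiting.xml` suffix (used by the pipeline to mark recipes awaiting execution) should be treated as illustrative here.

```python
# Illustrative sketch of a directory-polling queue, not the GPI DRP itself.
# The DRP marks recipe status in the filename; ".waiting.xml" is used here
# to stand for "recipe awaiting execution".
import glob
import os

def poll_queue_once(queue_dir):
    """Return recipe files waiting in the queue, oldest first."""
    waiting = glob.glob(os.path.join(queue_dir, "*.waiting.xml"))
    return sorted(waiting, key=os.path.getmtime)
```

A real poller would run this in a loop and rename each file as it is picked up, so the same recipe is not executed twice.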
The Launcher is a little menu window that acts as the starting point for launching other tools. In this getting started tutorial, we will mostly use GPItv and the Recipe Editor tools.
Before reducing any files, it is best to take a look at the raw data using GPItv. Click on the GPItv button on the GPI Launcher to open it (see the GPItv Viewer User's Guide for more details):
- Then File->Browse Files...,
- In the new window, press the Change... button and select a folder in the GettingStarted_tutorial_dataset.
- The list of .fits files should appear.
- As you select different files, the GPItv window will refresh and display the new image.
- Use the GPItv menu File->View FITS headers... to get detailed information for each image.
- Click on the image to center the view on a pixel. Adjust the zoom with the buttons.
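For the curious, the headers GPItv displays are plain ASCII: a FITS header is a sequence of 80-character "cards" packed into 2880-byte blocks, ending with an END card. The sketch below reads one with only the standard library, purely to show the format; for real work use a proper FITS library such as astropy.

```python
# Minimal FITS primary-header reader, for illustration only.
# FITS headers are 80-character ASCII cards stored in 2880-byte blocks,
# terminated by a card whose 8-character keyword field is "END".

def read_fits_header(path):
    """Return the primary-header cards of a FITS file as a list of strings."""
    cards = []
    with open(path, "rb") as f:
        while True:
            block = f.read(2880)
            if not block:
                return cards  # malformed file: no END card found
            for i in range(0, len(block), 80):
                card = block[i:i + 80].decode("ascii", "replace")
                cards.append(card.rstrip())
                if card[:8].strip() == "END":
                    return cards
```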
Note: The reason for these odd exposure times is that GPI IFS exposures are quantized in units of the readout time for the detector, 1.45479 seconds. Because of this quantization, in practice one typically just rounds the durations, so these would be, e.g., "60" and "120" second exposures; there is no need to carry around all the significant figures.
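The quantization is easy to check with a little arithmetic. The sketch below (assuming rounding to the nearest whole number of reads; it is not pipeline code) shows that a requested 60 s exposure becomes 41 reads, about 59.65 s, and 120 s becomes 82 reads, about 119.29 s.

```python
# Sketch: GPI IFS exposure times come in whole multiples of the
# 1.45479 s detector readout time.
READ_TIME = 1.45479  # seconds per detector readout

def quantized_exptime(requested_seconds):
    """Nearest achievable exposure time, as a whole number of reads."""
    n_reads = max(1, round(requested_seconds / READ_TIME))
    return n_reads * READ_TIME
```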
Let’s first give the general method to reduce any file. This will then be applied in the next sections for different particular cases. Only the selected items in the different option lists will change.
Press the Recipe Editor button in the GPI Launcher window and the window below will open.
The pipeline is based on recipes for reducing files. A recipe includes a list of input files (the ingredients) and a list of primitives to apply to those files (the actions). Each primitive is an elementary algorithm applied to the files listed in the recipe; an action can be anything, for instance subtracting a dark frame or building a data cube. There are two kinds of primitives: those applied to each file individually and those applied to all files together. For instance, Subtract Dark acts on one file at a time, while Combine 2D images merges all the files in the list into a single output file. The special primitive Accumulate Images divides the two categories: all the primitives before it are applied to each file, then Accumulate Images gathers up the results, and any primitives after it are applied to the entire set.
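The control flow around Accumulate Images can be pictured schematically. This is Python pseudocode, not the pipeline's IDL; the primitive names in the comments are the examples from the text.

```python
# Schematic of recipe execution order around "Accumulate Images".
def run_recipe(files, per_file_primitives, combined_primitives):
    accumulated = []
    for f in files:
        result = f
        for primitive in per_file_primitives:   # e.g. Subtract Dark
            result = primitive(result)
        accumulated.append(result)              # Accumulate Images
    for primitive in combined_primitives:       # e.g. Combine 2D images
        accumulated = primitive(accumulated)
    return accumulated
```

For example, with a toy "subtract dark" of `lambda x: x - 1` and a toy "combine" of `lambda imgs: [sum(imgs)]`, three inputs are dark-subtracted individually and then merged into one output.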
The numbers of the following steps match the screenshot above.
In the following, these steps will be repeated several times with specific indications.
For every reduction, a GPItv window will open with the result, and the file will be saved in the reduction folder defined when installing the pipeline. If you don't want to display or save the results, you can change the Save and gpitv parameters of the primitives. To change parameters, select the primitive in the upper-right table; its parameters will then appear in the lower-right table. Select the value of the parameter, type the desired value, and press Enter to validate the input.
The recipe templates often only work in a particular context: if you try to apply one of them to an arbitrary file, it probably won't work and the pipeline may crash. This is because the primitives are not yet very robust and need more or less exactly the inputs they were designed for.
The dark calibration files for a given integration time can be combined using these amendments to the Recipe Editor usage steps above:
The 60s darks correspond to the science data and will be used in the following section.
The selected primitives are then:
The GPI DRP Status Console will display a progress bar and log messages while reducing the files.
When reducing calibration files, the result is automatically saved in the calibrations folder. The path to this folder was defined when installing the pipeline and should normally be $GPI_REDUCED_DATA/calibrations (see Configuring the Pipeline).
The pipeline will look for calibration files automatically by reading the text file GPI_Calibs_DB.txt in the calibration folder (see The Calibration Database). There is a button at the bottom of the GPI DRP Status Console called Rescan Calib. DB to create or refresh this text file.
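Conceptually, the rescan just walks the calibrations folder and rebuilds an index of the FITS files it finds. The real GPI_Calibs_DB.txt also records metadata (such as the calibration type and date) read from each file's header; the stdlib sketch below only lists filenames and uses a toy index name, so treat it as illustrative.

```python
# Toy version of a calibration-database rescan: index the .fits files
# found in the calibrations folder. Illustrative only; the real index
# also stores metadata from each file's header.
import os

def rescan_calib_db(calib_dir, index_name="GPI_Calibs_DB_toy.txt"):
    fits_files = sorted(f for f in os.listdir(calib_dir)
                        if f.lower().endswith(".fits"))
    with open(os.path.join(calib_dir, index_name), "w") as index:
        index.write("\n".join(fits_files))
    return fits_files
```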
Use the Remove All button to remove all the selected files, then repeat the reduction with the 120s integration times, which correspond to S20131208S0021(-22).fits. This newly created dark frame will be used to reduce the wavelength calibrations in the next section.
Like the dark frames, the wavelength solution calibration files can be created using the Recipe Editor reduction steps with the following additions:
If you did not correctly copy in the files from files_to_go_into_calibrations_directory, you will get warnings, but it should work anyway. How to create such files is described in the Step-by-Step Reduction documentation.
A sample of the 2D image with the computed calibration is given below. The green lines mark the locations of the individual lenslet spectra. The coordinates of the lenslets are stored in a .fits cube in the calibrations folder. Use GPItv to take a look at the result.
The following is an example of how to reduce science data.

- For step 1) Select your science data S20131210S0025.fits.
- For step 3) Select the SpectralScience reduction type.
- For step 4) Choose the Quicklook Automatic Datacube Extraction Recipe template.
All the calibration files are found automatically, and the result is a final data cube, which should be displayed in GPItv at the end of the reduction. Feel free to look at the different wavelengths by changing the selected slice. Note that this example does not account for the flexure offsets between the wavelength calibration derived above and the current spectral positions, so the reduced cube will be rather ugly and have a large moiré pattern in the data.
In order to correct for this, we must account for the offsets. If one opens the raw image in GPItv and then overplots the wavelength solution (Labels -> Get Wavecal from DB, then Labels -> Plot Wavecal Grid -> Draw Grid), one will see the large offsets (shown below).
As a rough approximation, one can input offsets in GPItv (in the Plot Wavecal Grid dialog) until the overlap looks correct (note that old drawings of the wavecal can be erased via Labels -> Erase All). An (X,Y) shift of (-2,1) is a reasonably good guess. The user can then input these offsets into the 'Update Spot Shifts for Flexure' primitive. To do this:
Because a snapshot of the Argon arclamp was taken at the same telescope position, we can use this to determine the needed offsets in a much more robust fashion.
This primitive will use every 20th lenslet in the frame to calculate the net shift from the desired wavelength calibration. One must be careful to ensure the proper wavelength calibration is grabbed from the database (check the output in the pipeline xterm); if the wrong one is selected, you can manually choose the correct one (S20131210S0055_H_wavecal.fits) using the Choose Calibration File button. A new wavecal (S20131210S0055_H_wavecal.fits) will then be added to the database, which is merely the old wavecal with new x-y spectral positions.
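Whichever way the offsets are obtained, applying them amounts to a constant translation of the stored spectrum positions. The hedged sketch below shows the idea only; the wavecal's actual on-disk layout is a FITS cube, while here the positions are plain (x, y) pairs.

```python
# Sketch: apply a constant flexure offset (dx, dy), in pixels, to the
# wavecal's lenslet spectrum positions. The defaults (-2, 1) are the
# eyeballed guess from the GPItv grid overlay described above.
def shift_wavecal_positions(positions, dx=-2.0, dy=1.0):
    """positions: iterable of (x, y) spectrum anchor points, in pixels."""
    return [(x + dx, y + dy) for x, y in positions]
```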
If you now repeat the reduction of the science data from above, the new wavecal will be picked up and the datacube will appear as follows. Remember to set the 'Update Spot Shifts for Flexure' correction to 'none'.
Enjoy the first of many data cubes!