6.5. Recipe for Photometer Large Map and Parallel Mode


6.5.1. Large Map and Parallel Mode User Pipeline Script Prerequisites

The Large Map mode is essentially the same as the SPIRE component of the Parallel Mode, so this processing guide allows you to reprocess data from either mode. In this reprocessing example we assume that you wish to start from the Level 0.5 products, using the Large Map observation (obsID: 1342183475) of NGC 5315. We also assume that you have received the engineering-pipeline-processed Level 0.5 data products from the HSC, and have stored them in a storage pool "OD117-ScanNGC5315-1342183475", either by a direct download or through HIPE.

You can access the Photometer Large Map User Pipeline processing script by clicking on 'Pipeline' on the top bar within HIPE, selecting 'SPIRE' and then clicking on 'Photometer Large Map User Pipeline' - the script will open up in the Editor window within HIPE.

Selecting the Photometer Large Map User Pipeline pipeline script

Figure 6.33. Selecting the Photometer Large Map User Pipeline pipeline script

User Pipeline Inputs

The User Pipeline Script allows us to process our data from Level 0.5 to Level 1, and requires some editing before it can be executed. The pipeline requires an Observation ID (in either hexadecimal or decimal), a Data Pool, and an output directory to which plots and FITS files from the pipeline will be written, similar to the examples provided in the script below;

myObsid    =  1342183475   #  0x50001833 in hexadecimal
myDataPool = "OD117-ScanNGC5315-1342183475"
outDir     = "/Users/cpearson/jython/localstore/plots/"

In addition, there are up to six optional parameters that may be set. These are described in the pipeline script (see code excerpt below) and comprise: whether to include turnaround data at the ends of the scan lines for increased map coverage (the includeTurnaround Boolean parameter), whether to create maps absolutely calibrated for extended emission in MJy/sr (the makeExtendMaps Boolean parameter), the choice of baseline subtraction (the default is the destriper, or alternatively a median baseline subtraction), and flags to turn on the optional "Bolometer Jump" and "Cooler Burp" corrections (See Section

# Additional Options
# (D) includeTurnaround: Include the scan line turnarounds in the processing and mapmaking
# (E) makeExtendMaps: Create absolute calibrated maps in MJy/sr
# (F) baselineSubtraction: Subtract a baseline from each scan to avoid stripes
# (G) destriper: Determine and remove baselines to achieve an optimum fit between all timelines
# (H) jumpDetection: Detect bolometer jumps with the destriper; flagged scans are not used in the map-making process
#  **   At least one of the options F or G must be True.
# (I) coolerBurpCorrection: Search for and correct cooler burps.
includeTurnaround          = False
makeExtendMaps             = False
useBaselineSubtraction     = False
useDestriper               = True
bolometerJumpDetection     = False
coolerBurpCorrection       = False


Note that the default output from the User Pipeline is a set of maps saved as FITS files in the user-defined outDir path. It is also possible to write the entire observation back to the original or a new Pool. To do so, the following line at the end of the script must be uncommented and an appropriate Pool name inserted.

#saveObservation(obs,poolName="enter-a-poolname",saveCalTree=True)

Level 0.5 to Level 1 Processing

The Large Map (and Parallel Mode) pipeline is shown in the flowchart in Figure 6.34. The flowchart covers the processing steps from Level 0.5 to the creation of the Level 2 maps. Note that, as explained previously in Table 6.1, the pipeline produces Point Source Calibrated maps in Jy/beam. In addition, by setting makeExtendMaps=True, maps absolutely calibrated for extended emission (using the Planck data for calibration) can also be produced. The standard (SPG) archive processing produces by default up to three additional maps and diagnostic products. Maps for extended emission, with absolute calibration derived from the Planck all-sky maps (in MJy/sr), appear in the Level 2 context in the form of extdPxW. Maps specifically for Solar System 'moving' objects (SSOs), calibrated in Jy/beam, appear in the Level 2 context in the form of ssoPxW (where PxW corresponds to PSW, PMW or PLW respectively). These SSO maps differ from the normal point source (psrcPxW) maps in that the SSO maps are motion corrected and centred on the moving-object frame (see Section 6.12.1). In certain cases super-resolution maps are also created for appropriate observations (see Section 6.11.5).

Note that destriping using 2nd Level Deglitching should not be performed on observations of SSOs before the SSO motion correction has been applied, as this will result in source samples being misidentified as glitches. This is checked by default in the SPG pipeline processing at ESAC.

The actual number of explicit pipeline steps required to process a Large Map (and Parallel) mode observation is relatively small, as outlined in the schematic overview of the pipeline script shown in Figure 6.35 (the essence of these steps is exactly the same for the Small Map pipeline described in Section 6.6.1). The pipeline from Level 0.5 to Level 1 processes data by looping over the scan lines to start building up the map. Each step in the pipeline applies some correction to the data, to produce the final Level 1 pipeline product: flux-calibrated timelines with associated positions. The pipeline works on a Photometer Detector Timeline (PDT) but also requires additional data such as the Nominal Housekeeping Timeline (NHKT) and other auxiliary products for the telescope pointing information.

The SPIRE Photometer Large Map pipeline.

Figure 6.34. The SPIRE Photometer Large Map pipeline.

The Essence of the SPIRE Photometer mapping Pipeline (applicable to Large Map, Parallel and Small Map) Script.

Figure 6.35. The Essence of the SPIRE Photometer mapping Pipeline (applicable to Large Map, Parallel and Small Map) Script.

In the following section the processing steps are broken up into their constituent parts. The first step in the pipeline is to join all the scan legs and turnarounds together. The timelines are joined together to avoid ringing effects caused by pipeline modules involving corrections using Fourier domain filtering.

    # (1) join all scan legs and turnarounds together
    bbCount = bbid & 0xFFFF
    if bbCount > 1:
        # discard a leading turnaround separated by more than 3 seconds
        if pdtLead != None and pdtLead.sampleTime[-1] < pdt.sampleTime[0]-3.0:
            pdtLead = None
    if bbid < MAX(Long1d(bbids)):
        # discard a trailing turnaround separated by more than 3 seconds
        if pdtTrail != None and pdtTrail.sampleTime[0] > pdt.sampleTime[-1]+3.0:
            pdtTrail = None

The next two steps in the pipeline produce the pointing information for the observation. The SPIRE Beam Steering Mechanism (BSM) is a movable mirror used for jiggling observations; although it is not used for mapping observations, a constant offset must still be taken into account. In order to reconstruct the SPIRE Pointing Product, the BSM Angles Timeline (BAT) has to be added to the Detector Angular Offsets on the array, and to the SIAM, which relates the SPIRE instrument to the Herschel Pointing Product (HPP) itself.

    # -----------------------------------------------------------
    # (2) Convert BSM timeline to angles on sky (constant for scan map)
    # -----------------------------------------------------------
    # (3) Create the Spire Pointing Product for this observation

The next step is the Electrical Crosstalk Correction, which corrects for crosstalk between the Thermistor-bolometer channels only (bolometer-bolometer crosstalk is negligible).

     # -----------------------------------------------------------
     # (4) Run the Electrical Crosstalk Correction on the timeline data

The next step is the Signal Jump Detector, which detects jumps in the Thermistor timelines that occasionally occur, leading to map artifacts introduced in the temperature drift correction stage later in the pipeline.

    # -----------------------------------------------------------
    # (5) Detect jumps in the Thermistor timelines that occasionally occur,
    # leading to map artefacts introduced in the temperature drift correction
    # Also requires the Temperature Drift Correct Calibration File.
    if pdt.meta["biasMode"].value == "nominal":
        pdt=signalJumpDetector(pdt,tempDriftCorr=tempDriftCorr, kappa=2.0,\
            gamma=6.0, gapWidth=1.0,windowWidth=40.0, \

This module subtracts baselines and smoothed medians from the Thermistor timelines to identify any jumps. Any jumps are flagged using the maskJumpThermistorsDarksSignal mask bit. The currently recommended values for all parameters are included in the call to the module (specific details can be found in the SPIRE Pipeline Specification Manual (PSM)). The module requires the Temperature Drift Correction calibration file. Note that the appropriate version of this calibration file is only available from HIPE version 8 onwards; if an earlier version of HIPE is used, an older version of the Signal Jump Detector is used automatically. For observations of bright sources the use of this module is not encouraged, because of the very low probability of signal jumps in the timelines of the Dark Pixel channels used by the Temperature Drift Correction module for bright-mode observations; at present this step of the pipeline is therefore skipped for observations in bright source mode.

Cosmic ray rejection (deglitching) is one of the most challenging data analysis problems for SPIRE, as artifacts, caused by undetected glitches, limit the calibration accuracy and sensitivity, and directly influence the quality of the final data products. The SPIRE pipeline incorporates a two stage deglitching process described below. The first step is to run the Concurrent Deglitcher on the timeline data - this is to detect and remove glitches that occur simultaneously in groups of connected bolometer detectors due to a cosmic ray hitting the substrate of a photometer array. An extreme example of the performance of the Concurrent Deglitcher is shown in Figure 6.36, where the right panel shows the effect of not running the Concurrent Deglitcher on the timeline data. The imprint of the array is clearly seen on the final map.

    # (6) Run the concurrent deglitcher on the timeline data
    pdt=concurrentGlitchDeglitcher(pdt,chanNum=chanNum,kappa=2.0, size = 15, \
     correctGlitches = True)

For each detector, a running median of the signal is computed. The size parameter is the half-size of the window on which the median is computed, and the kappa parameter sets the glitch detection threshold. The number of samples flagged and replaced depends on the strength of the glitch, and the corresponding GLITCH_FIRST_LEVEL flag is raised. Note that by default the detected glitches are reconstructed; if only detection and flagging is required, the reconstruction can be switched off by setting correctGlitches = False. For more information on the concurrent deglitcher, see the respective section of the Pipeline Specification Manual.
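The running-median principle behind the Concurrent Deglitcher can be illustrated with a standalone sketch. Everything here (the function `running_median_deglitch`, the simulated timeline, and the noise level) is an illustrative assumption, not the HIPE `concurrentGlitchDeglitcher` implementation:

```python
import numpy as np

def running_median_deglitch(signal, half_size=15, kappa=2.0):
    # Running median over a window of 2*half_size+1 samples
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    med = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_size), min(n, i + half_size + 1)
        med[i] = np.median(signal[lo:hi])
    resid = signal - med
    # Robust scatter estimate from the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(resid))
    flags = np.abs(resid) > kappa * sigma
    # Analogue of correctGlitches=True: replace flagged samples by the median
    corrected = np.where(flags, med, signal)
    return corrected, flags

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 200)
timeline = 0.5 * t + rng.normal(0.0, 0.01, 200)   # drifting baseline + noise
timeline[100] += 10.0                             # inject a strong glitch
corrected, flags = running_median_deglitch(timeline)
```

The strong glitch is flagged and replaced by the local median, while the underlying slow drift passes through untouched.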

Performance of the Concurrent Deglitcher.

Figure 6.36. Performance of the Concurrent Deglitcher.

The second step in the deglitching (referred to as First Level Deglitching) is to run the Wavelet Deglitcher on the timeline data. This module employs a complex algorithm using wavelet transformation in Fourier space. For an in-depth explanation of the parameters that can be passed to the Wavelet Deglitcher, please refer to the relevant entry in the SPIRE Pipeline Specification Manual:

  # (7) Run the wavelet deglitcher on the timeline data         
  pdt=waveletDeglitcher(pdt, scaleMin=1.0, scaleMax=8.0, \
      scaleInterval=7, holderMin=-3.0, holderMax=-0.3, \
      correlationThreshold=0.3, optionReconstruction='linearAdaptive20',\
      reconPointsBefore=1, reconPointsAfter=3)

Note that the call is slightly different for Parallel Mode observations due to fine tuning to obtain the optimal set of parameters;

  # (7) Run the wavelet deglitcher on the timeline data         
  pdt=waveletDeglitcher(pdt, scaleMin=1.0, scaleMax=8.0, \
        scaleInterval=5, holderMin=-1.9,holderMax=-0.3, \
        correlationThreshold=0.69, optionReconstruction='linearAdaptive20',\
        reconPointsBefore=1, reconPointsAfter=3)


There is one further optional deglitching module that can be included in the pipeline processing, replacing the Wavelet Deglitcher. The alternative Sigma-Kappa Deglitcher can be used if problems are found using the standard pipeline deglitching. This deglitcher is present in the pipeline, but commented out by default. Details of this module are found in the SPIRE Pipeline Specification Manual.

Next, the Low Pass Filter Response Correction is applied to the detector timelines. The electronics impose a delay on the data with respect to the telescope position along the scan and this effect must be taken into account to ensure that the astrometric pointing timeline is properly matched to the detector data timeline. The call to the module is;

    # (8) Apply the Low Pass Filter response correction

Note that, to reduce ringing effects in the timeline data (caused by the pipeline modules using Fourier domain filtering), all the individual timelines are concatenated together with the turnarounds between scan lines at the beginning of the pipeline. However, the last scan line of an observation is processed without turnaround data, and is therefore prone to edge ringing effects from the two Fourier pipeline modules (Low Pass Filter Response and Bolometer Time Response corrections) run in the Level 0.5 to Level 1 processing. This effect is most prominent in timelines with a large slope. Although the overall effect of this 'feature' on the map is almost negligible, it will affect pixels in the map hit by the last samples (although these pixels are located at the border, well outside the user-requested area). The edge ringing can be overcome by a method that mirrors the signal in the timelines, effectively removing any slope from the data. Figure 6.37 shows the effect of mirroring the timelines to remove edge ringing: large maps processed with and without the mirroring option are shown along with the corresponding difference map. Several artefacts can be seen in the difference map. The speckled noise pattern at the edges of the scans (single-pixel white and black points) is due to one of the maps having high-frequency ringing at the edges. The other effect is the array imprints, visible as smudges, which are undetected common glitches. These imprints are more common at the edges of the scan, where the lack of ringing in the second map affects the final (post temperature-drift-correction) slope of the timelines. Also shown in Figure 6.37 are example plots of the final scan line timeline data, showing the ringing in the first panel, a comparison of the scan line with and without the mirroring, and the difference between the unmirrored and mirrored timelines respectively.

Edge ringing in the final scan line.

Figure 6.37. Edge ringing in the final scan line.

Timeline mirroring can be turned on in the pipeline by supplying an additional option to the lpfResponseCorrection task;


The ringing is more severe for larger differences between the points at the edges of the timeline. Such large differences are more likely with longer scan lengths (i.e. more likely to be seen in Parallel Mode observations), and are also more likely to be observed at times when the bath temperature changes rapidly (for example during a cooler burp). For most Scan Map observations the default method should suffice. The cases where the mirror method would be advantageous include: (1) slopes in the timeline due to real structure in the field, for example a Small Scan Map with cirrus emission on one side of the map; the timelines of such a field would have a slope, and correcting them with the default method would create ringing; (2) slopes in the timeline due to bath temperature drift; in this case, the longer the scan length, the larger the difference between the start and the end of the timeline will be; such bath temperature drifts are common during the first few hours after a cooler recycle. Note that in general the mirror option will always produce slightly better results, but the time to run this module will be doubled.
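The benefit of mirroring can be demonstrated with a toy Fourier low-pass filter. This is a stand-in for the real Fourier-domain pipeline modules, not the actual lpfResponseCorrection; the cutoff fraction and the test signal are illustrative assumptions:

```python
import numpy as np

def lowpass_fft(signal, cutoff_frac=0.1):
    # Zero out high-frequency bins: a crude Fourier low-pass filter
    spec = np.fft.rfft(signal)
    spec[int(len(spec) * cutoff_frac):] = 0.0
    return np.fft.irfft(spec, n=len(signal))

def lowpass_mirrored(signal):
    # Mirror the timeline so its periodic extension has no step
    # discontinuity, filter, then keep only the original half
    ext = np.concatenate([signal, signal[::-1]])
    return lowpass_fft(ext)[:len(signal)]

slope = np.linspace(0.0, 1.0, 256)   # sloped timeline: the worst case
direct = lowpass_fft(slope)
mirrored = lowpass_mirrored(slope)
edge_err_direct = abs(direct[-1] - slope[-1])
edge_err_mirror = abs(mirrored[-1] - slope[-1])
```

Filtering the sloped timeline directly treats it as periodic, so the implied step at the wrap-around produces strong ringing at the edges; the mirrored extension removes the step and the edge error drops sharply.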

The next step is to apply the Flux Conversion to the detector timelines, converting the voltage timelines into timelines in units of Jy/beam. A calibration file containing the appropriate Astronomical Unit Conversion Table for either normal or bright source mode is used;

     # (9) Apply the flux conversion 

The next step is optional and is enabled by setting the coolerBurpCorrection = True flag at the beginning of the User Pipeline Script, to deal with artefacts in maps caused when a "cooler burp" occurs. During a cooler burp the drifts of the bolometers are unusual, and for this reason the temperature drift correction leaves residuals. Cooler burps are detected by checking for large variations in the subK Temperature parameter in the SPIRE Housekeeping data. If this variation is above some threshold then a flag is set in the metadata. This flag is checked at the Temperature Drift Correction stage of the pipeline (see Section for details);

      # (10) If coolerBurpCorrection flag = True then update timeline meta data
      pdt.meta["coolBurpDetect"]=BooleanParameter(coolerBurpCorrection, \
              "Indicates a cooler burp") 	     

The next step, the Temperature Drift Correction, removes low-frequency noise, caused by variations of the detector array bath temperature, from the timelines. A correction timeline is generated for each detector using data and calibration information for that detector; this is then subtracted from that detector's signal timeline. The module requires the Temperature Drift Correction calibration file used earlier in the Signal Jump Detector module. Note that if the coolerBurpFound flag is set then the Temperature Drift Correction task will apply additional multiplicative factors to its coefficients to correct for the cooler burp artefacts in the map;

    # (11) Make the temperature drift correction
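As a rough illustration of the idea (assuming, for this sketch only, a simple linear coupling between the bath-temperature timeline and the detector signal; the real task derives its per-detector coefficients from the calibration file):

```python
import numpy as np

def temperature_drift_correct(signal, bath_temp, coeff):
    # Correction timeline = coeff * bath-temperature timeline,
    # subtracted from the detector signal (linear model assumed
    # for this sketch only)
    return np.asarray(signal, dtype=float) - coeff * np.asarray(bath_temp, dtype=float)

t = np.linspace(0.0, 1.0, 100)
drift = 0.3 * t                        # slow bath-temperature drift
source = np.zeros(100)
source[50] = 1.0                       # a point source crossing
signal = source + 2.0 * drift          # detector couples to the drift
cleaned = temperature_drift_correct(signal, drift, 2.0)
```

Subtracting the scaled drift timeline recovers the astronomical signal while leaving the point source untouched.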

The next step is the Bolometer Time Response Correction, which corrects for any additional low-level slow response in the SPIRE bolometers. This is accomplished by multiplying the signal in Fourier space by an appropriate transfer function, requiring a calibration file containing the corresponding detector time constants;

    # (12) Apply the bolometer time response correction

The next step in the Level 0.5 to Level 1 processing is to cut the timelines back into individual scan lines. If you want to include turnaround data in the map making, the includeTurnaround parameter should be set to True at the beginning of the pipeline script.

    # (13) Cut the timeline back into individual scan lines.

The penultimate step in the pipeline is to attach positional information (RA, Dec) to the data timelines, to create the Photometer Scan Product (PSP). The Associate Sky Position module attaches the sky position timeline onto the detector timeline by querying the SPIRE Pointing Product that contains within it the necessary products to determine the positions of the SPIRE detectors on the sky, the Herschel Pointing Product, SIAM Product, the BSM Angles Timeline and the Detector Angular Offset table (all described above at the beginning of this pipeline section).

    # (14) Add pointing timelines to the data

Finally, we apply the Time Correlation to the PSP which corrects for any drifting of the clock that is on board the spacecraft:

    # (15) Apply the time correlation 

and we shall store our Photometer Scan Product in the Level 1 context, which brings to a close the processing of the individual scan lines (but not yet the end of the Level 0.5 to Level 1 processing);

    # Add Photometer Scan Product to Level 1 context

At this stage, the data is at the intermediate Level 0.7 stage. To complete the processing to Level 1, baseline removal (destriping) must be carried out on all the processed calibrated positional timelines.

Due to the large telescope background, fluxes measured by the SPIRE bolometers are effectively very small differences on top of a dominating offset that is usually several orders of magnitude larger. Due to variations in the thermal and electronic stability of the system, residual offsets in the flux calibration from one detector to another are observed, resulting in striping in the final maps. The default algorithm used by the User Pipeline to remove such striping is an Iterative Destriping Algorithm that iteratively updates offsets in the timelines until an optimal solution is found. An alternative option is the Median Baseline Removal algorithm, which subtracts the median from each detector in the Level 1 timelines and then creates a map. Users are encouraged to read Section 6.8.1 for details on the Destriper and the various other baseline removal methods. The two methods in the pipeline can be interchanged by use of the Boolean flags at the beginning of the pipeline;

        useBaselineSubtraction = False
        useDestriper = True
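The Median Baseline Removal alternative is simple enough to sketch directly. The detector names and data below are illustrative, and the dictionary-of-timelines representation is an assumption of this sketch, not the HIPE Level 1 context:

```python
import numpy as np

def median_baseline_removal(timelines):
    # Subtract each detector's own median signal level
    return {det: np.asarray(sig, dtype=float) - np.median(sig)
            for det, sig in timelines.items()}

# Two detectors see the same sky through different electronic offsets
sky = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
timelines = {"det1": sky + 100.0, "det2": sky - 40.0}
flat = median_baseline_removal(timelines)
```

After subtraction the two detectors agree, so co-adding them into a map no longer produces stripes from their differing offsets.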

The Destriper module takes as input a SPIRE Level 1 context (scans) and outputs a list of destriped scan lines as input to the mapping algorithm. Note that the Destriper also produces maps itself using the naive mapper, and these can be accessed via the maps variable below. Please see Section 6.8.1 for details of the Destriper parameters.

The Destriper also supports 2nd Level deglitching via outlier rejection, however, this feature is currently switched off by default by the parameter l2DeglitchRepeat=0. To perform the level 2 deglitching, the destriper creates an additional temporary map of Medians, i.e. each pixel contains the median value of all unmasked (valid) readouts that fall within the borders of that skybin, and a map of Median Absolute Deviations (MAD), i.e. each pixel contains the MAD of all unmasked (valid) readouts that fall within the borders of that skybin. Glitches are identified as outliers between the median map and the MAD map times some threshold (kappa). Glitches are then masked and the new map created by the naive mapping process. In the case of small number statistics, i.e. when few readouts are in a skybin, the threshold provided by the MAD map may become unrealistically low. To alleviate this problem, a second parameter (kappa2) is allowed, as a lower threshold for 2nd level deglitching. To activate the 2nd Level Deglitching the following parameters in the destriper call should be set (to the recommended values); l2DeglitchRepeat = 100, l2IterMax = 1, kappa=7, kappa2=7 (see Section 6.8.1 for more details).
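The median/MAD outlier test for a single sky bin can be sketched as follows. This is a simplified stand-alone version of the idea, not the destriper's implementation; the noise_floor parameter plays the role of the lower threshold controlled by kappa2:

```python
import numpy as np

def second_level_deglitch(samples, kappa=7.0, kappa2=7.0, noise_floor=0.0):
    # Median and MAD of the readouts falling in one sky bin
    samples = np.asarray(samples, dtype=float)
    med = np.median(samples)
    mad = np.median(np.abs(samples - med))
    # kappa2*noise_floor acts as a lower bound on the threshold,
    # guarding against an unrealistically low MAD in sparse bins
    thresh = max(kappa * mad, kappa2 * noise_floor)
    return np.abs(samples - med) > thresh

# Ten readouts land in the same map pixel; one carries a glitch
readouts = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 50.0]
flags = second_level_deglitch(readouts)
```

Only the discrepant readout is flagged; the remaining samples stay within kappa times the MAD of the bin and enter the naive map normally.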

Note that for very large maps the destriper requires more than 7 GB of memory, which may cause problems on smaller machines. Memory problems can be overcome by setting the useSink=True parameter in the Destriper call below (the default is useSink=False). Note also that the destriper removes the relative offsets between individual scan lines, which means that negative fluxes can still exist in the final maps. For absolute flux calibrated maps, the User should use the Extended Emission maps processed using the Planck all-sky maps to provide an absolute zero point (see Section 6.1).

###                      Destriping                                     ###
if useDestriper:
    print "Destriper Run for OBSID= %i, (0x%x)"%(myObsid,myObsid)
    arrays = ["PSW","PMW","PLW"]
    pixelSize = [6,10,14]  # Map pixel size in arcsec for PSW, PMW, PLW respectively
    maps = []
    if bolometerJumpDetection:
        jumpThresh = 3.5   # enable jump detection (value illustrative; see the script for the recommended setting)
    else:
        jumpThresh = -1.0  # a negative threshold disables jump detection
    # Using Level 1 context. Run destriper as an input to the map making
    for iArray in range(len(arrays)):
        scans,map,diag,p4,p5 = destriper(level1=scans,\
            pixelSize=pixelSize[iArray], offsetFunction='perScan',\
            array=arrays[iArray], polyDegree=0, kappa=20.0, iterThresh=1.0E-10,\
            l2DeglitchRepeat=0, iterMax=100, l2IterMax=5, \
            nThreads=2, jumpThresh=jumpThresh, jumpIter=15,\
            withMedianCorrected=True, brightSource=True, useSink=False, storeTod=False)
        # Save diagnostic product
        if obs.level2.refs['pdd'+arrays[iArray]]!=None:
            obs.level2.setProduct('psrc'+arrays[iArray]+'diag', diag)
        # Keep destriper maps for inspection
        maps.append(map)
    print "Finished the Destriper Run for OBSID= %i, (0x%x)"%(myObsid,myObsid)
###                   Finished Destriping                               ###
###########################################################################

Level 1 to Level 2 Processing (Mapmaking)

The standard mapping algorithm in the User Pipeline is the Naive Mapper, which simply projects the full power seen by a detector onto the nearest sky map pixel. For each bolometer timeline at each time step, the signal measurement is added to the total signal map, the square of the signal is added to the total signal squared map, and 1 is added to the coverage map. After all bolometer signals have been mapped, the total signal map is divided by the coverage map to produce a flux density map, and the standard deviations are calculated using the total signal, total signal squared and coverage maps.

print 'Starting Naive Map maker'
mapPlw=naiveScanMapper(scans, array="PLW", method=UnweightedVariance)
mapPmw=naiveScanMapper(scans, array="PMW", method=UnweightedVariance)
mapPsw=naiveScanMapper(scans, array="PSW", method=UnweightedVariance)
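The accumulation described above can be sketched in a few lines. This is a stand-alone illustration of the naive mapping algorithm, not the HIPE naiveScanMapper task; the pixel/signal lists stand in for the projected bolometer samples:

```python
import numpy as np

def naive_map(pixels, signals, shape):
    # Accumulate per-pixel total signal, signal squared and coverage
    total = np.zeros(shape)
    total2 = np.zeros(shape)
    cover = np.zeros(shape)
    for (iy, ix), s in zip(pixels, signals):
        total[iy, ix] += s
        total2[iy, ix] += s * s
        cover[iy, ix] += 1.0
    with np.errstate(invalid="ignore", divide="ignore"):
        flux = total / cover                      # flux density map
        # variance of the mean: population variance / (n - 1)
        var = (total2 / cover - flux ** 2) / np.maximum(cover - 1.0, 1.0)
        error = np.sqrt(np.maximum(var, 0.0))
    return flux, error, cover

# Three samples fall in pixel (0,0), one in pixel (1,1)
pixels = [(0, 0), (0, 0), (0, 0), (1, 1)]
signals = [1.0, 2.0, 3.0, 5.0]
flux, error, cover = naive_map(pixels, signals, (2, 2))
```

Pixels with no coverage come out as NaN, matching the intuition that no flux estimate exists there.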

Although the current recommendation is the Naive Mapper, an alternative is the Madmap algorithm, a maximum-likelihood based method of estimating the final sky map from the input data. The Madmap algorithm can be run with the following code, but requires the Channel Noise Table Calibration Product;

# Mad Map map maker (requires Channel Noise Table Calibration Product)
print 'Starting Mad Mapper'
mapPlw=madScanMapper(scans, array="PLW",chanNoise=chanNoise)
mapPmw=madScanMapper(scans, array="PMW",chanNoise=chanNoise)
mapPsw=madScanMapper(scans, array="PSW",chanNoise=chanNoise)        
Error Maps

Several options for the error map associated with the naive mapmaker are available. The mathematical formulae for all the error maps below are explained in the SPIRE Pipeline Specification Manual (PSM). The default error map, UnweightedVariance, treats all unflagged detectors equally in averages (no weighting), and the error map gives the standard deviation of the mean of the samples falling in a map pixel. The unweighted averages therefore fall short of achieving the maximum possible S/N. An additional error map is provided by the following alternative call; it requires the Channel Noise Table Calibration Product, since the errors are weighted by the individual bolometer noise measurements.

map = naiveScanMapper(scans, array=array, method=WeightedVariance, chanNoise=chanNoise)

For areas of maps with very low coverage the errors may not be accurate using the weighted method. Therefore a third, Hybrid, error map is available, which uses the above weighted error map except at very low sample counts. For these low-sample pixels the observed signal values are not used in estimating the flux error; instead it is calculated from the white noise only;

      n = 5   # Hybrid error map low sample threshold
      map = naiveScanMapper(scans, array=array, method=WeightedVariance, \
           chanNoise=chanNoise, hybridMap = True, hybridThreshold = n)
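The weighted and hybrid error estimates can be illustrated for a single map pixel. This is a sketch under stated assumptions (inverse-variance weights from per-bolometer noise, white-noise fallback below the sample threshold); consult the PSM for the exact formulae used by the pipeline:

```python
import math

def weighted_mean_error(samples, sigmas, hybrid_threshold=5):
    # Inverse-variance weights from per-bolometer noise estimates
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * x for w, x in zip(weights, samples)) / wsum
    if len(samples) < hybrid_threshold:
        # Hybrid fallback: too few samples, use the white noise only
        err = math.sqrt(1.0 / wsum)
    else:
        # Weighted scatter of the samples about the weighted mean
        var = sum(w * (x - mean) ** 2 for w, x in zip(weights, samples)) / wsum
        err = math.sqrt(var / (len(samples) - 1))
    return mean, err

# Two samples only, so the hybrid (white-noise) path applies
mean, err = weighted_mean_error([1.0, 3.0], [1.0, 1.0])
```

With only two noisy samples, an error derived from their observed scatter would be unreliable, which is exactly why the hybrid map falls back to the propagated detector noise.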

Median Maps

There is also the option to produce a Median Map with the Naive Mapmaker using the MedianMap keyword;

     mapPswMedian = naiveScanMapper(scans, array=array, method=MedianMap)

With this keyword set, the Naive Mapmaker will produce the following:

  • A map of medians, i.e. each pixel contains the median value of all unmasked (valid) readouts that fall within the borders of that skybin.
  • A map of median absolute deviations (MAD), i.e. each pixel contains the MAD of all unmasked (valid) readouts that fall within the borders of that skybin.
  • A map of coverage, i.e. each pixel contains the number of all unmasked (valid) readouts that fall within the borders of that skybin.

The output is equivalent to the default Naive Mapper output, except that the signal map is replaced by the median map and the error map is replaced by the median absolute deviation map (such median maps are required (and produced internally) during the 2nd level deglitching process for outlier rejection).

Turnaround Data

If the includeTurnaround = True flag has been set at the beginning of the pipeline then the final map will include the turnarounds at the end of the scan line legs. In addition, it is possible to further control the amount of turnaround data included by setting optional keywords in the naive mapmaker. The User may specify a maximum and minimum velocity (in arcsec/s) to be included as shown below. The recommended minimum velocity is 5 arcsec/s and this is the default.

map = naiveScanMapper(scans, array="PSW", minVel=10, maxVel=100)
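The velocity selection can be sketched as follows; the positions, sampling interval and function name are illustrative assumptions, not the mapper's internals:

```python
def velocity_mask(positions, dt, min_vel=5.0, max_vel=1000.0):
    # Instantaneous scan speed (arcsec/s) from consecutive positions
    keep = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        keep.append(min_vel <= speed <= max_vel)
    keep.append(keep[-1] if keep else False)  # reuse the last estimate
    return keep

# Nominal 30 arcsec/s scanning, then near-zero speed at a turnaround
positions = [(0.0, 0.0), (30.0, 0.0), (60.0, 0.0), (61.0, 0.0)]
keep = velocity_mask(positions, dt=1.0)
```

Samples taken while the telescope decelerates below min_vel at the turnaround are excluded from the map.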

The mapper excludes samples from entering the map according to their mask. Depending on the mask there are two different approaches implemented.

  • The DEAD, SLOW and NOISY mask bits are checked only once, at the start of the scan line, for each bolometer. If any is set, the mapper will ignore all the samples in the scan line for this bolometer, i.e. they will not end up in the map.

  • For the MASTER, GLITCH_FIRST_LEVEL_UNCORR, GLITCH_SECOND_LEVEL_UNCORR, SERENDIPITY and TRUNCATED_UNCORR masks, the mapper checks each sample individually. If the mask is set, the sample will be ignored. For more details on SPIRE mask handling, see Section 8.4 of this manual.
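The two-level mask logic can be sketched as follows. The bit values here are arbitrary illustrations (the real assignments are listed in Section 8.4), and the function is a simplified stand-in for the mapper's sample selection:

```python
# Hypothetical bit assignments for illustration only; the real
# SPIRE mask bit values are defined in the mask documentation
DEAD, NOISY, GLITCH_FIRST_LEVEL_UNCORR = 1 << 0, 1 << 1, 1 << 2

def usable_samples(scanline_mask, sample_masks):
    # Per-scan-line bits veto the bolometer's whole scan line
    if scanline_mask & (DEAD | NOISY):
        return [False] * len(sample_masks)
    # Per-sample bits veto individual readouts only
    return [(m & GLITCH_FIRST_LEVEL_UNCORR) == 0 for m in sample_masks]

ok = usable_samples(0, [0, GLITCH_FIRST_LEVEL_UNCORR, 0])
dead = usable_samples(DEAD, [0, 0, 0])
```

A healthy bolometer loses only its flagged readouts, whereas a dead or noisy bolometer contributes nothing to the map for that scan line.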

Saving the Final Maps

The final maps are written out as FITS files to the directory specified at the beginning of the pipeline script;

# -----------------------------------------------------------
# Save Maps to output directory
simpleFitsWriter(mapPsw, "%smapPSW_%i.fits"%(outDir, myObsid))
simpleFitsWriter(mapPmw, "%smapPMW_%i.fits"%(outDir, myObsid))
simpleFitsWriter(mapPlw, "%smapPLW_%i.fits"%(outDir, myObsid))

Finally, the Level 2 Context for the observation is updated with the maps (For post HIPE 10 processing the Point Source Calibrated maps are referred to using the psrc prefix);

# Update the Level 2 (map) Context in the Observation Context
obs.level2.setProduct("psrcPSW", mapPsw)
obs.level2.setProduct("psrcPMW", mapPmw)
obs.level2.setProduct("psrcPLW", mapPlw)

Note that it is also possible to save the entire Observation Context to either the same Pool or a new Pool by uncommenting the saveObservation command in the User Script and supplying a Pool name.


Three point source calibrated maps are produced, one each for PSW, PMW and PLW, and are visible through the Product Viewer by right-clicking on the required variable in the Variables pane and selecting 'Open With':

Selecting the Product Viewer

Figure 6.38. Selecting the Product Viewer

the actual map with fluxes (denoted as 'image', see Figure 6.39);

The NGC 5315 PSW Level 2 image map

Figure 6.39. The NGC 5315 PSW Level 2 image map

The statistical flux error map, calculated as the standard deviation using the total signal, total signal squared, and coverage maps (denoted as 'error', see Figure 6.40):

The NGC 5315 PSW Level 2 error map

Figure 6.40. The NGC 5315 PSW Level 2 error map

and an image which shows the coverage map for our scans (denoted as 'coverage', see Figure 6.41):

The NGC 5315 PSW Level 2 coverage map

Figure 6.41. The NGC 5315 PSW Level 2 coverage map

We can export our images to FITS files by right clicking on the respective variable in the Variable pane (e.g. mapPsw), and selecting 'Send To -> FITS file'.

Congratulations! You have now re-processed your Large Map data all the way to the final Level 2 maps!

Creating Absolute Calibrated Maps for Extended Emission

For observations where producing maps of extended emission is the most important objective, additional processing can be carried out to produce maps absolutely calibrated for extended emission in MJy/sr. The absolute calibration is made using the zero-point correction obtained from the Planck maps. The creation of absolute calibrated maps using the Planck zero-point is described in detail in Section 6.10.1. In order to carry out the processing for extended emission maps, it is assumed that the Planck maps have already been downloaded and the correct properties have been set for the gains for the Planck HFI 545GHz and 857GHz bands respectively (see Section for details). Before the maps for extended emission are created, the pipeline sets up the following environment for the Planck map zero-point correction:

#  Optionally create absolute calibrated SPIRE Maps for extended emission  #
#  Assumes Planck Maps have been downloaded and HIPE configured            #
#  See the Useful Script "Photometer Map Zero Point Correction" for details#
if makeExtendMaps:
    # Planck Map Setup
    hfi545Map = Configuration.getProperty("spire.spg.hfi.545map")
    hfi857Map = Configuration.getProperty("spire.spg.hfi.857map")
    hfi545Gain = zeroPointCorrection["hfi545Gain"]  #  Planck HFI 545GHz channel gain
    hfi857Gain = zeroPointCorrection["hfi857Gain"]  #  Planck HFI 857GHz channel gain
    hfiFwhm    = zeroPointCorrection["hfiFwhm"]     #  Planck HFI FWHM
    # Prepare zero-point correction task
    level1 = obs.level1
    level2 = obs.level2
    level2ZeroPoint = MapContext()
    for key in level2.meta.keySet():
        level2ZeroPoint.meta[key] = level2.meta[key].copy()
    # Create new Level1Context
    scansZeroPoint = Level1Context()
    scansZeroPoint.meta = level1.meta

In order to produce maps calibrated for extended emission, an additional processing step, the extended emission gain correction, is required for individual bolometers, since the SPIRE calibration assumes uniform beams across the array and does not take variations among bolometers into account. For the creation of extended emission maps, inclusion of both the scan turnaround data and the application of the relative gains is strongly recommended. These are implemented in the pipeline by setting the following parameters at the beginning of the pipeline:

        includeTurnaround = True
        makeExtendMaps    = True

Then the relative gains are applied (using the chanRelGains calibration product) as follows:

    ### Optionally apply Relative Bolometer Gains for extended emission ###
    if applyExtendedEmissionGains:
        print "Apply relative gains for bolometers for better extended maps"
        for i in range(level1.getCount()):
            psp = level1.getProduct(i)
            psp = applyRelativeGains(psp, chanRelGains)

Note that for observations processed earlier than HIPE version 8, an additional correction must be made to the data before the relative gains can be applied, due to a bug in earlier versions of the data products.

    if psp.type=='PPT':  psp.setType('PSP')

The simpler, and strongly recommended, option is to re-process the observation with HIPE version 8 or later, running the User Pipeline with the correction for relative gains switched on.

Note that maps with the extended emission gain correction are optimized for extended emission and should not be used for point source photometry, since the relative gain correction increases the scatter (error) in the photometry if the peak amplitude is used for the measurement. The normal gains are best for point source photometry, where it is the amplitude (peak height) of the source that is used for the measurement. However, since the gain correction is better for mapping extended emission, aperture photometry (capturing most of the point source emission) should be improved with it.

After the relative gains have been applied, the timelines must again be destriped. However, the pipeline tries to make this process more efficient by using the previous destriper diagnostic products as a starting point.

    # Try to load the de-striper diagnostic products to speed-up re-processing
    arrays = ["PSW","PMW","PLW"]
    for array in arrays:
        diagref = level2.refs['psrc'+array.upper()+'diag']
        if diagref != None:
            diag = diagref.product
        else:
            diag = None
        # (Re-)run destriper on new Level1Context
        newscans,mapZero,diagZero,p4,p5 = destriper(level1=scansZeroPoint, array=array, nThreads=2, \
            withMedianCorrected=True, useSink=True, startParameters=diag)
        # Save diagnostic product, this time with prefix extd, into the "level2" variable
        level2.refs.put('extd'+array.upper()+'diag', ProductRef(diagZero))

The maps output from the Destriper are then passed to the zeroPointCorrection task in the pipeline (see Section 6.10.1) to produce the final absolute calibrated maps in MJy/sr.

    # Run the zeroPointCorrection tasks on extdPxW maps
    print "Running the zero point correction task"
    zeroPointMaps, zeroPointParam=zeroPointCorrection(level2=level2ZeroPoint, \
       hfiFwhm=hfiFwhm, hfi545Gain=hfi545Gain, hfi857Gain=hfi857Gain, \
       colorCorrHfi=colorCorrHfi, fluxConv=fluxConv, \
       hfi545Map=hfi545Map, hfi857Map=hfi857Map)

Finally, the pipeline script writes the extended maps to FITS files as described earlier in Section and updates the Level 2 context to produce the final fully processed observation.

Saving Products from the User Scripts to Local Pools

The SPIRE User Reprocessing Scripts are written to save the final products of the pipeline to FITS files. However, it may be preferable to save the products to local pools instead. This section describes how to do that for both the photometer and spectrometer scripts. The options for saving the reprocessed products from the scripts are:

  • Save to FITS file (as described above).

  • Save final products only into a new local pool.

  • Save back into existing Observation Context.

  • Change the existing Observation Context and save to a new local pool.

When saving to an existing pool, there is always the choice whether to overwrite the contents, or to create new versions of them without removing the originals.

Saving final products only

The final Level-2 products can easily be saved to a new pool instead of a FITS file by replacing the simpleFitsWriter task with the following (e.g. for the Photometer scan map PSW band):

saveProduct(mapPsw, pool="myPoolName", tag="Map made with user script")
Saving products back into the Observation Context

The SPIRE User pipelines automatically update the Observation Context and the User does not need to do anything.

We include a discussion on updating the Observation Context for information purposes only. In order to save the products back into the Observation Context, its structure needs to be taken into account. A reminder of the Photometer Observation Context layout for Level 1 and Level 2, and how this relates to variables in the User Scripts, is given below in Table 6.4.

Table 6.4. Summary of Photometer Observation Context

Level     Sub-context name   Product Type   Variable in User Script
Level 1   ..                 PSP            scans
Level 2   "psrcPSW"          PMP            mapPsw
Level 2   "psrcPMW"          PMP            mapPmw
Level 2   "psrcPLW"          PMP            mapPlw

When saving back into the observation context, it is important to get the sub-context names correct so that the products end up in the right place.

Level 1 Products: Saving the Level-1 products into the Observation Context is easy for the Photometer scripts – for example, the following statement can be added at the end of the script:

obs.level1 = level1 

Level 2 Products: For the Level-2 products, the following code lines are an example of what should be added at the end of the script where the final FITS files are written:

obs.level2.setProduct("psrcPSW", mapPsw)
Saving the Observation Context to disk

By far the simplest method, and the recommended one, to save the entire Observation Context to either the same Pool or a new Pool is to use the saveObservation command in the User Script (supplying a Pool name).


In the rare case where the User wants more control over the saving process, a more detailed method is explained below. Instead of using the simple saveObservation command, the following line can be added at the end of the script:

localStoreWriter(obs, "myPoolName")

If the Observation Context is saved back into the same pool it was read from, the default behaviour is to create new versions of the products that have changed (rather than overwriting). Only products that have changed will actually be saved to disk as new versions. It is possible to overwrite rather than creating new versions by modifying the HIPE property hcss.ia.pal.pool.lstore.version. The possible values, and the effect they have on the files that are saved to disk, are:

  • new: create a new version by appending a timestamp to the name (default)

  • overwrite: overwrite the existing file with the same name; no timestamp is added

  • error: produce an error if a file with the same name already exists; no timestamp is added

The property can be set initially in the hipe.props file (located in your .hcss directory). Once it is set, it can be modified in the properties panel inside HIPE (go to Edit > Preferences > Advanced). It should also be possible to modify the behaviour at the pool level of granularity. This can be specified with property names of the form hcss.ia.pal.pool.lstore.<pool-name>.version.
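As an illustrative sketch (property names are taken from the text above; the pool name myPool is a placeholder), the relevant lines in hipe.props might look like:

```properties
# Default behaviour for all pools: create new timestamped versions
hcss.ia.pal.pool.lstore.version = new
# Override for one specific pool only (pool name is a placeholder)
hcss.ia.pal.pool.lstore.myPool.version = overwrite
```

The per-pool property takes precedence for that pool, while all other pools keep the default versioning behaviour.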

Note that it can be dangerous to change this property away from the default value because it affects all operations that write data to pools. For example, the photometer SPG pipeline writes out temporary files and will go wrong if the property is set to overwrite. Changing to overwrite files should be done with great care!


6.5.2. Large / Parallel Mode Map Troubleshooting and Tips

Introduction

Most of the SPIRE maps will look very good as they come out of the pipeline. However, problems can occur, and this chapter gives guidelines on interpreting SPIRE photometer maps, possible issues, and how users can overcome them with interactive analysis. Future versions of HCSS will most likely solve most of the issues presented here automatically. In most of the following examples we consider Large Map mode (POF5); however, the discussion applies equally to Parallel (POF9) and Small Map (POF10) modes.

If maps look strange or not what was expected, obvious initial checks are that Level 2 processing was reached for all bands (PSW/PMW/PLW), that the astrometry of the maps matches that in the meta data / original HSPOT input, etc. The quality flags can also be examined. In Table 6.5 a thumbnail summary of common map artefacts is shown with links to detailed explanations.

Overview of example map problems

Thumbnail Summary of Map Artefacts.

Figure 6.42. Thumbnail Summary of Map Artefacts.

Table 6.5. Summary of Map Artefacts

(1) Single Scan Observation (Described in Section

(2) The Effect of a Bright Source (Described in Section

(3) Stray Light in the Maps (Described in Section

(4) Failure of the Baseline Removal (Described in Section

(5) The Effect of Thermistor Jumps (Described in Section

(6) The Effect of a Saturated Thermistor (Described in Section

(7) The Effect of Jumps in a Bolometer Signal (Described in Section

(8) The Effect on Maps due to Cooler Temperature Variations (Described in Section

(9) The Effect on Maps due to Residual Glitches (Described in Section

(10) Holes (NaN's) in the final maps (Described in Section

(11) Edge Effects in the Maps (Described in Section

(12) Large values in SPIRE error maps (Described in Section

(13) Problems with OD1304-1305 Maps (Described in Section

(14) Ringing due to large undetected glitches. (Described in Section

(15) Problems with very early SPIRE Observations. (Described in Section

Default pixel sizes

For scan map observations, the default pixel sizes for the 250, 350 and 500 micron maps are 6, 10 and 14 arcsec respectively. These pixel sizes were adopted using the following criteria, in descending order of priority: 1) the pixel size should not lead to too many NaNs in the final map; 2) the median value of the coverage map should be above 10; and 3) the median error should be below a certain factor times the median error achieved at a pixel size of FWHM/2.

The coverage uniformity was assessed after convolving the hit map with an idealized (Gaussian) PSF. For each pixel size and array, the median coverage is computed, as well as the minimum and maximum coverage after excluding the 5% lowest and 5% highest values, and the minimum coverage after excluding the 0.5% lowest values.

The same treatment was applied to the error map to quantify sensitivity fluctuations within the field of view, but with the target pixels masked out.

Missing Meta Data for Scan Lines

For Parallel Mode observations in the archive, the meta data parameters numScanLinesNom and numScanLinesOrth may both be set to zero. These parameters appear in the header of the observation context. This is a problem with parallel mode observations only; large and small maps are unaffected. The number of scan lines can be estimated by counting the number of individual scan line building blocks, as shown in Figure 6.11.

Single Scan Observation

If a very small patch of sky is observed in Large Map mode, the result will be a single scan observation. Although this may appear somewhat strange, it is actually what is expected for this kind of observing strategy. It is a questionable approach (see Figure 6.43 for the example obsid 1342187525) because the presence of bad bolometers creates exposed stripes on the map, especially close to the border. In the PLW case, a stripe is also visible to the right of the central scan.

PLW, PMW and PSW maps for a single scan observation.

Figure 6.43. PLW, PMW and PSW maps for a single scan observation.

The Effect of a Bright Source

Sources brighter than 200 Jy at 250 microns should be observed in bright source mode. Sources below this limit can still appear very bright on SPIRE maps and can create artifacts due to diffraction from the telescope support structure, with the Airy rings also visible, as shown in Figure 6.44 (left and right respectively) for the example obsid 1342188168.

Artifacts in the map due to bright source effects.

Figure 6.44. Artifacts in the map due to bright source effects.

Stray Light in the Maps

Stray light effects can be seen in the photometer maps if, for example, a planet crosses one of the critical zones of the telescope. In such cases the planet's light can be refracted by the structure supporting the secondary mirror into the instrument focal plane (e.g. Figure 6.45).

The effect of stray light from a planet on a map.

Figure 6.45. The effect of stray light from a planet on a map.

Failure of the Baseline Removal

For all SPIRE map data it is necessary to perform signal baseline removal before the map making stage of the SPIRE photometer pipeline in order to avoid striping in the final images. The standard way of doing this is to use the median of the signal in each scan line as the baseline. Although this works in most cases, there are examples where the correction fails. This is the most frequent suspect for any tartan-looking Level-2 SPIRE products (another cause of striping can be a bad temperature drift correction, see Section ). In Figure 6.46, a strong extended source introduces a bias into the baseline signal and the median removal is unsuitable. More robust estimators of the true baseline signal may be included in later versions of the pipeline.
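The median baseline removal described above can be sketched in plain Python (this is an illustration, not the HIPE task; the function name `subtract_median_baseline` is hypothetical):

```python
def subtract_median_baseline(scan_lines):
    """Remove a per-scan-line baseline using the median signal of each
    line. scan_lines is a list of lists of samples, one inner list per
    scan line. Returns new scan lines with each median subtracted."""
    corrected = []
    for line in scan_lines:
        s = sorted(line)
        n = len(s)
        # median of the line: middle value, or mean of the two middle values
        median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
        corrected.append([v - median for v in line])
    return corrected

# A flat scan line offset by 5.0 is brought back to zero:
# subtract_median_baseline([[5.0, 5.0, 5.0]]) -> [[0.0, 0.0, 0.0]]
```

If a strong extended source fills most of a scan line, the median itself is raised by the source, so subtracting it removes real sky signal and leaves the characteristic striping shown in Figure 6.46.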

Failure of baseline removal example.

Figure 6.46. Failure of baseline removal example.

The Effect of Thermistor Jumps

Sudden jumps (from higher to lower voltage) in the signal timelines have been observed in SPIRE data. Currently the cause is unknown, but the phenomenon often (though not always) coincides with a large positive or negative spike in the timelines. When such a signal jump occurs in a thermistor timeline it can cause problems with the temperature drift correction (see the thermistor timeline data in Figure 6.47). When those thermistor data are used to correct the bolometer baselines, the same jump is then applied to all the bolometers (originally not affected by the jump). This is apparent in the maps as dark or light patches (see Figure 6.47). In future versions of HIPE the SPIRE pipeline may address this issue, but at present users may try using only the thermistor not affected by the jump for the correction (note that this is not possible for PMW because only one thermistor is available), or manually correcting the thermistor signal to remove the jump. The simplest option is to just reject the affected area. Note that there are also jumps in the bolometers themselves (see Section ), but in this case only single lines are affected.
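A manual correction of a single thermistor jump amounts to finding the step and subtracting its offset from all subsequent samples. A plain-Python sketch (not a HIPE task; `correct_jump` and the threshold are illustrative) of that idea:

```python
def correct_jump(signal, threshold):
    """Detect a single sudden jump in a timeline and remove it.
    Finds the largest sample-to-sample step; if its size exceeds the
    threshold, that offset is subtracted from all later samples."""
    steps = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    i_max = max(range(len(steps)), key=lambda i: abs(steps[i]))
    if abs(steps[i_max]) < threshold:
        return list(signal)  # no jump above the threshold: leave unchanged
    offset = steps[i_max]
    return signal[:i_max + 1] + [v - offset for v in signal[i_max + 1:]]

# A timeline that drops by 4 units mid-stream is levelled:
# correct_jump([1.0, 1.0, 5.0, 5.0], 2.0) -> [1.0, 1.0, 1.0, 1.0]
```

Real thermistor timelines are noisy, so in practice a robust step estimate (e.g. medians on either side of the candidate jump) would be preferable to a single difference.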

The effect of jumps in thermistor channel signal.

Figure 6.47. The effect of jumps in thermistor channel signal.

The Effect of a Saturated Thermistor

If a thermistor signal hits the saturation limit, the temperature drift correction for all subsequent scan lines will fail. This in turn leads to a signal baseline such that the median removal increases the problem. In Figure 6.48, all scan lines after scan number 50 are affected. The problem is identified by plotting the median signal as a function of scan line, as seen in the inset in Figure 6.48. Note that this is a very rare effect; in most cases, striping on the map is due to jumps in the thermistor (see Section ) or to variations in the cooler temperature (see Section ). Whereas the stripes due to thermistor jumps present an almost instantaneous change between dark and bright regions, a saturated thermistor causes a smoother, gradual variation, since the temperature varies more slowly than a sudden jump, though with a stronger intensity than stripes due to uncorrected temperature variations. The SPIRE team is working on improving the robustness of the temperature drift correction module within the pipeline, which should improve the final maps. The current solution for the user is to use only the non-saturated thermistor (not possible for PMW since there is only a single thermistor).

The effect of a saturated thermistor.

Figure 6.48. The effect of a saturated thermistor.

The Effect of Jumps in a Bolometer Signal

A voltage jump in a single bolometer timeline results in a single stripe on the final map, due to the consequent bad baseline subtraction, as shown by the white arrows in the left panel of Figure 6.49. The destriper (see Section 6.8.1) contains a method based on wavelet analysis to detect these jumps. If a jump is detected, the affected bolometer timeline is not used to produce the final Level 2 map (see Figure 6.50). In very few cases, multiple jumps on many detectors may be present, causing the more serious map artifact shown in the right panel of Figure 6.49. The cause of such jumps is unknown and still under investigation. At present, possible solutions the user may want to try are either to correct the detector signal manually, thus removing the jump, or to reject the affected timeline by masking the bolometer.

Bolometer signal jumps in the map.

Figure 6.49. Bolometer signal jumps in the map.

Correction of bolometer signal jumps.

Figure 6.50. Correction of bolometer signal jumps.

The Effect on Maps due to Cooler Temperature Variations

The SPIRE cooler must be recycled periodically to maintain a constant low temperature. At the end of a SPIRE cooler recycle, the cooler temperature is 2 mK lower than the nominal plateau (which lasts approximately 40 h, see the middle panel of Figure 6.51), and the cooler requires 8 h to reach the stable plateau temperature. Observations taken during this period can be affected by a temperature drift too strong to be corrected by the pipeline. As an example, in Figure 6.51 observations taken during one Observation Day are compared. The map in the left panel of Figure 6.51 was processed from an observation taken immediately after the cooler recycle (i.e. during the strong gradient seen in the middle panel of the figure), while the map in the right panel was processed from an observation taken some 8 hours after the cooler recycle ended. Large scale striping is clearly obvious in the left panel, whilst the map in the right panel appears fine. The difference from effects caused by thermistor jumps is that there is a long but constant drift along the scan direction, present during the entire observation. The sudden discrete jumps in the cooler temperature are referred to as Cooler Burps and happen 6 to 7 h after the cooler recycle ends, on every first SPIRE day. The resolution in this case is to attempt to detect and correct for the variations by checking for large variations in the subK temperature parameter in the SPIRE Housekeeping data. If this variation is above some threshold then a flag is set in the meta data. If the coolerBurpFound flag is set, the Temperature Drift Correction task will apply additional multiplicative factors to its coefficients to correct for the cooler burp artefacts in the map, as shown in Figure 6.52. The correction for cooler burps is optional and is enabled by setting the coolerBurpDetection = True flag at the beginning of the User Pipeline Scripts.
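The detection step described above reduces to a simple thresholded variation check on the housekeeping timeline. A plain-Python sketch (illustrative only; the function name, temperature values and threshold are assumptions, not the pipeline's actual values):

```python
def cooler_burp_found(sub_k_temp, threshold):
    """Flag a cooler burp if the peak-to-peak variation of the sub-K
    temperature housekeeping timeline exceeds a given threshold (K)."""
    return (max(sub_k_temp) - min(sub_k_temp)) > threshold

# A 3 mK excursion against a 2 mK threshold raises the flag:
# cooler_burp_found([0.285, 0.287, 0.284], 0.002) -> True
```

When the flag is raised, the pipeline's Temperature Drift Correction task adjusts its coefficients accordingly, as described in the text.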

There are some cases of extremely long duration maps made in a single scan direction which may show severe effects of temperature variation across the scans, in all bands and to varying severity, as shown in Figure 6.53. In these cases it is improbable that all artefacts can be removed from the map, although experimenting with the Cooler Burp detection switched on and off may help by removing some of the effects. Such large single scan direction maps may be included as Level 2.5 products if accompanying orthogonal scan observations are available, and users are recommended to check these Level 2.5 products where available.

Artefacts due to Cooler Burp effect.

Figure 6.51. Artefacts due to Cooler Burp effect.

Detection and correction of Cooler Burp effects.

Figure 6.52. Detection and correction of Cooler Burp effects.

Temperature variation in long observations.

Figure 6.53. Temperature variation in long observations.

The Effect on Maps due to Residual Glitches

Although deglitching (glitch detection, removal and reconstruction) is carried out during the standard processing of SPIRE observations, occasionally glitches get through and consequently show up on the final maps as bright pixels. Glitches are caused by cosmic rays hitting the detector (affecting single detectors only) or the detector base plate (affecting multiple detectors at once). In the example in Figure 6.54, a cosmic ray hits the PLW base plate, generating a glitch on all detectors which escapes detection by the deglitcher module. The result is a regular pattern of bright pixels on the final PLW map, which is simply the footprint of the PLW array on the sky for a given instant of time (the duration of the glitch).

An attempt to remove residual glitches can be made by turning on the 2nd Level Deglitching in the SPIRE Destriper Pipeline Task. This procedure is detailed in Section 6.8.1. Alternatively, for data reduced with versions earlier than HIPE 13, Users may want to reprocess their observation using the new SPIRE 2-Pass Pipeline (Section 6.4.1) for better residual glitch removal.

Bright pixels in the final map caused by multiple glitches.

Figure 6.54. Bright pixels in the final map caused by multiple glitches.

Holes (NaN's) in the final maps

For maps without cross-linking (orthogonal scans) or repetitions, holes (represented as NaN's) may be visible in the final maps. NaN's in the signal map correspond to 0's in the coverage map. This is not an error in the processing but rather an obvious side-effect for maps with low coverage (see Figure 6.55). The most frequent source of NaN's in the map is when the pipeline detects a glitch but is not able either to remove it or to reconstruct the detector timeline. In such a case, a mask flag is raised and the affected data are excluded from the final Level 2 product. For areas of low coverage the outcome is a NaN in the signal map. This is particularly true for parallel mode (or for single scans), where no repetition or cross-scan is allowed in a single AOR. This is a cosmetic problem and can optionally be overcome by changing the pixel size, or by using alternative, external map-making algorithms (e.g. those that use a Bayesian approach).
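The link between masked samples, zero coverage and NaN holes can be seen in a minimal sketch of naive binning (plain Python, not the HIPE mapmaker; the data layout is invented for illustration):

```python
def bin_to_map(samples, n_pixels):
    """Naive mapmaking: average the unmasked samples falling in each
    pixel. Pixels that receive no usable samples (zero coverage) come
    out as NaN in the signal map.
    samples: list of (pixel_index, value, masked) tuples."""
    total = [0.0] * n_pixels
    coverage = [0] * n_pixels
    for pix, value, masked in samples:
        if not masked:          # masked samples (e.g. glitches) are dropped
            total[pix] += value
            coverage[pix] += 1
    signal = [total[i] / coverage[i] if coverage[i] > 0 else float('nan')
              for i in range(n_pixels)]
    return signal, coverage
```

With low coverage, a single masked glitch can empty a pixel entirely, which is exactly the NaN-hole effect described above.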

Holes represented as NaNs in the final signal map.

Figure 6.55. Holes represented as NaNs in the final signal map.

Edge Effects in the Maps

Edge effects are often seen on SPIRE photometer maps (e.g. Figure 6.55). Since these lie outside the guaranteed coverage area, they can be ignored.

Edge effects in maps.

Figure 6.56. Edge effects in maps.

Large values in SPIRE error maps

There are cases where the ratio of error map pixels to signal map pixels is anomalously high, and donut shaped artefacts appear at the positions of sources in the error maps (e.g. Figure 6.57). This is because error maps produced with the naive mapmaker contain increased errors associated with binning data from Gaussian sources. This is an artifact of the mapmaking process and is also apparent in other map making algorithms. The problem arises because the mapmaker currently calculates the value of an error map pixel using the standard deviation of all data falling into that map pixel. If the data lie along the sloped side of a PSF, the dispersion looks high when the data are rebinned. The data at the peak of the PSF look flatter than on the sides, and so the error map value at the peak of a PSF looks lower than on the sides of the PSF (see Figure 6.58). This effect has been recreated by injecting an artificial source into a photometer map containing an additional real single point source (the upper artificial source in Figure 6.57). The torus (donut shape) in the error map for both sources is clearly seen, confirming that the effect is due to the binning rather than any pointing jitter or shot noise. This effect is an artefact of the map making algorithm and cannot easily be overcome; however, the problem with the error maps can be circumvented by using alternative, non-map based methods of source extraction such as the timeline fitter method.
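The donut effect can be demonstrated numerically: samples binned on the sloped side of a Gaussian PSF have a larger spread than samples binned at its flat peak. A plain-Python sketch (the sigma, pixel placement and sampling are invented for illustration):

```python
import math

def pixel_std(values):
    """Population standard deviation of the samples in one pixel."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

sigma = 3.0  # PSF width in arbitrary units
psf = lambda x: math.exp(-x * x / (2 * sigma * sigma))

# Samples falling in a pixel centred on the peak (x in [-1, 1])
peak_samples = [psf(x / 10.0) for x in range(-10, 11)]
# Samples falling in a pixel on the slope (x in [2, 4])
slope_samples = [psf(x / 10.0) for x in range(20, 41)]

# The spread (error-map value) is larger on the slope than at the peak,
# producing the torus around the source in the error map.
```

The ring of high error values around a source simply traces the region of maximum PSF slope.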

Donut shaped artefacts in the error maps at the positions of sources.

Figure 6.57. Donut shaped artefacts in the error maps at the positions of sources.

Anomalous errors from binning data from Gaussian sources.

Figure 6.58. Anomalous errors from binning data from Gaussian sources.

Problems with OD1304-1305 Maps

For some reason as yet not understood, a small number of bolometers (PSW B5, F8, E9) are unusually noisy during ODs 1304 and 1305. As a result, noisy artefacts can be seen in the final photometer maps (Figure 6.59). This effect was present in HIPE versions 11 and earlier but has since been fixed in later versions by masking the offending bolometers. For reprocessing of older maps, the following script may be used to mask the bolometers in question:


# List of detectors to be masked
bolos = ['PSWB5', 'PSWE9', 'PSWF8']

# Level 1 of your observation, assuming the observation context variable is named 'obs'
level1 = obs.level1

# Create new level 1
new_l1 = Level1Context()

for scan in range(0, level1.getCount()):
    # Load level 1 product, scan by scan
    data = level1.refs[scan].product
    # Change mask for selected detectors in all scans, setting it to MASTER
    for bolo in bolos:
        data['mask'][bolo].data[:] = 1
    # Add the masked product to the new Level 1 context
    new_l1.addProduct(data)

OD1304-OD1305 maps before and after masking noisy bolometers.

Figure 6.59. OD1304-OD1305 maps before and after masking noisy bolometers.

Ringing due to large undetected glitches

There are some cases of large glitches going undetected by the deglitching algorithms, resulting in ringing in the final maps. These often occur in maps with strong residual temperature drift (e.g. obsid 1342191158). In most cases, the glitches are at the very edge of the turnaround data, with a typical hit rate of 3-4 samples per pixel (see Figure 6.60). In such cases, it appears that the only samples in the pixel are the glitch samples. The ringing is expected to be confined to the turnaround data region of the map.

Large undetected glitch in the turnaround data.

Figure 6.60. Large undetected glitch in the turnaround data.

Problems with very early SPIRE Observations

Although most observations made with SPIRE generally produce very good maps, there remain some examples, especially from the early stages of the mission, where artefacts remain uncorrected in the final maps. Observations carried out in the very early phases (the commissioning phase, the first 70 days of the Herschel mission) or during the Performance Verification (PV) phase, operational days (OD) 70-168, may be subject to various caveats, such as different bias amplitudes (and hence flux calibration) or failed temperature drift corrections, since the pipeline is not optimized to deal with early commissioning and verification observations. An example is shown below in Figure 6.61 for an early observation on OD42 of Uranus (1342179049), where the scratch on the map is the result of a failed temperature drift correction due to a cooler burp. Users are advised to use such early observations at their own risk and discretion.

Example of failed temperature drift correction in an early SPIRE observation.

Figure 6.61. Example of failed temperature drift correction in an early SPIRE observation.

Flux discrepancies in low coverage saturated map regions

Careful consideration is required for observations where there exists a sharp gradient between faint and bright regions, especially where mapmakers external to HIPE are being used. In such scenarios, individual bolometers go outside their electronic limits (saturate) as they cross into a high surface brightness region. However, the bolometers do not all saturate at the same surface brightness level. The naive mapmaker in HIPE identifies all of the samples falling within an individual map pixel, and the signal for the pixel is then calculated as the mean of the sample signals in the pixel. Pixels near bright compact sources cover regions where the surface brightness varies greatly. If some detectors travel up or down the gradient, they will produce usable data only at the faint end of the pixel. In this case, the pixel signal value is weighted towards the faint end of the surface brightness in the region, so the signals are no longer reliable. In a typical affected observation, the coverage values over most of the central area of the map are around 30, but near sources that saturate the coverage drops below 20, and a few of the pixels have coverage values of 1-3. This indicates that most of the detectors saturated inside the regions covered by these map pixels. In summary, for bright sources and strong gradients, saturation does not occur sharply but gradually across the arrays. This can be considered equivalent, in terms of coverage and signal to noise, to looking at the edge of an image.
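The faint-end bias described above can be shown with a toy calculation (plain Python, not the HIPE mapmaker; the sample values and saturation limit are invented for illustration):

```python
def pixel_signal(samples, saturation_limit):
    """Mean of the usable (non-saturated) samples in one map pixel.
    Saturated samples are dropped, so near a bright edge the mean is
    biased towards the faint samples that remain usable."""
    usable = [s for s in samples if s < saturation_limit]
    if not usable:
        return float('nan')  # every detector saturated in this pixel
    return sum(usable) / len(usable)

# Four detectors cross the pixel; two saturate on the bright side:
# pixel_signal([10.0, 12.0, 500.0, 600.0], 100.0) -> 11.0 (biased faint)
```

The reported pixel value reflects only the faint samples, which is why such pixels also show anomalously low coverage.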

The use of some external mappers will exacerbate this effect, since they do not assign the flux density of a sample to an individual map pixel but instead spread it over several map pixels using, for example, a drizzle technique. At the edges of bright regions where all detectors are saturated, signal may be distributed from nearby low surface brightness regions into pixels where no detectors produced usable signal as they crossed through. This gives the illusion that signal is measured in these regions.

The recommendation to users is that when using data around saturated regions, they should check the coverage map. If the values in the coverage map are abnormally low compared to the surrounding area, the values in the signal map should not be used.

Flux Discrepancies in Fast Parallel Mode Mosaiced Maps

Some examples of the Level 3 products for Fast Parallel mode have been found to have photometry discrepancies, where the measured fluxes are around 10 percent lower due to a lack of astrometry correction between the Level 2 and Level 3 creation. All maps in this comparison are in fast parallel mode; the more maps that are combined, the more the beam broadens. This may be fixed in later versions of HIPE by automatic astrometry correction.

Uncorrected SSO Maps

Note that moving object (SSO) maps are automatically corrected by the standard processing pipelines at the HSC (see Section 6.1.1). However, if the SSO observations are re-processed using the User Pipelines, then care must be taken to ensure that 2nd Level Deglitching is turned OFF in the Destriper, or else deglitching will be attempted on the smeared frame map, resulting in most pixels being flagged as glitches. To turn the 2nd Level Deglitching off in the Destriper, set the parameter l2DeglitchRepeat=0.

Note additionally that, for the purposes of photometry, except for fast moving sources (sources visibly smeared on the maps), fluxes measured from the non-motion-corrected map are more consistent than fluxes measured on the motion-corrected maps.