Maps can be produced in several flavors
(with different processing options), and compared to select
the best reduction strategy. It is
recommended to experiment with the following options:
* for small fields of view: /minimap, /flat
* for bright and complex fields: /galactic
* for PACS data, to remove artefacts caused by discontinuities affecting whole array rows: /jumps_pacs
* for wide diffuse fields observed with PACS: /nothermal
Please always pay attention to the weight map (the fourth plane of the output cube). It is useful for excluding areas of low coverage or areas affected by saturation.
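For instance, low-coverage areas can be blanked with a simple weight threshold. Here is a minimal Python sketch (the file name and the 20% threshold are placeholders to adapt to your own data; the plane ordering follows question d below):

    from astropy.io import fits
    import numpy as np

    # Scanamorphos output cube: plane 0 = signal, plane 3 = weight (0-based).
    cube = fits.getdata('field_scanam.fits')   # hypothetical file name
    signal, weight = cube[0], cube[3]

    # Blank pixels whose weight falls below a fraction of the median weight.
    low_coverage = weight < 0.2 * np.median(weight[weight > 0])
    signal_clean = np.where(low_coverage, np.nan, signal)
    fits.writeto('field_signal_masked.fits', signal_clean, overwrite=True)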
For large fields, there may be large astrometric offsets between different obsids, which of course degrade the data quality. One way to detect this problem is to make a separate map for each obsid, using the same reference header for all (and the /nocross option), and to compare the positions of compact sources in these maps. Offsets can be removed with the offset_ra_as and offset_dec_as parameters.
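As an illustration, the offset between two such per-obsid maps can be roughly estimated from the flux-weighted centroid of a compact source. In this Python sketch, the file names, the source position and the pixel size are placeholders; for an accurate conversion to sky offsets, use the WCS of your header instead:

    from astropy.io import fits
    import numpy as np

    def centroid(filename, x0, y0, box=10):
        """Flux-weighted centroid of a compact source near pixel (x0, y0)."""
        signal = fits.getdata(filename)[0]            # plane 0 = signal map
        cut = signal[y0 - box:y0 + box, x0 - box:x0 + box]
        cut = np.nan_to_num(cut - np.nanmedian(cut))  # crude background removal
        y, x = np.indices(cut.shape)
        return (x0 - box + (cut * x).sum() / cut.sum(),
                y0 - box + (cut * y).sum() / cut.sum())

    # Same source measured in two per-obsid maps sharing the same grid:
    x1, y1 = centroid('map_obsid1.fits', 245, 310)    # hypothetical inputs
    x2, y2 = centroid('map_obsid2.fits', 245, 310)
    pix_as = 3.                                       # pixel size in arcsec
    print('offset (arcsec):', (x2 - x1) * pix_as, (y2 - y1) * pix_as)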
a) In which circumstances do I have to use the /galactic option?
If the field of view contains bright extended emission on scales
larger than the map,
and if you want to preserve brightness gradients within the field as
well as possible,
then it is recommended to use the /galactic
option. A typical example is a survey of
a Galactic star formation region, hence the name of the option.
Cases when you do NOT need the option are:
- You prefer to subtract the general gradient from the map.
- Any very extended emission is faint and will not perturb the
determination
of robust baselines (example: a nearby galaxy with foreground
cirrus clouds).
b) How can I check whether the deglitching flagged some real sources or not?
Since samples affected by glitches are never interpolated but masked, it is relatively easy to detect overly aggressive deglitching by inspecting the weight map (the fourth plane of the cube). If you see significantly lower weight values specifically at the location of some compact sources, then either the deglitching was too aggressive (in this case, please send a bug report!), or the deglitching done in HIPE went wrong, or the receivers were saturated.
To know whether Scanamorphos
is the culprit, run it again with the /noglitch
option (disabling the deglitching) and inspect the weight map again.
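A quick way to compare the two runs is to ratio their weight planes. In this Python sketch the file names are hypothetical:

    from astropy.io import fits
    import numpy as np

    w_std = fits.getdata('map_standard.fits')[3]   # weight plane, standard run
    w_ngl = fits.getdata('map_noglitch.fits')[3]   # weight plane, /noglitch run

    # Where the ratio drops well below 1, samples were flagged by the
    # Scanamorphos deglitching; dips present in both maps point instead
    # to HIPE flags or to saturation.
    ratio = np.where(w_ngl > 0, w_std / w_ngl, np.nan)
    fits.writeto('weight_ratio.fits', ratio, overwrite=True)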
c) How can I efficiently mask residual glitches, hot pixels or transient features in the final map that may have escaped detection?
A mask can be built from the combination of the error and signal maps. Pixels with elevated error values and error-to-brightness ratios greater than one are most probably affected by glitches, and can be detected easily.
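In practice, such a mask can be computed directly from the cube planes, as in this Python sketch (the file name and the factor defining "elevated" errors are placeholders to tune on your own data):

    from astropy.io import fits
    import numpy as np

    cube = fits.getdata('field_scanam.fits')   # hypothetical file name
    signal, error = cube[0], cube[1]           # signal and error planes

    # Flag pixels with elevated errors AND error-to-brightness ratios > 1;
    # the threshold (5 times the median error) is only an example.
    elevated = error > 5. * np.nanmedian(error)
    mask = elevated & (error > np.abs(signal))

    signal_masked = np.where(mask, np.nan, signal)
    fits.writeto('field_signal_cleaned.fits', signal_masked, overwrite=True)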
Starting with version 5, a "cleaned" map is also provided for observations comprising fewer than four or six scans (depending on the instrument and map size). This map is built by
projecting each scan
separately, and then weighting each scan map by its inverse variance
map.
This map is provided to enable easy identification of artefacts, but
should
not be used in place of the map projected in the standard way
without great care.
d) What is the meaning of the different planes of the output?
If the observation contains at least four or six scans (depending on
the instrument
and map size), there are four planes. If it contains fewer scans,
there is an additional
fifth plane. Here is their definition:
1st plane: signal map (where the signal of each bolometer
is weighted by its inverse
square white noise in each scan)
2nd plane: error map (the error in each pixel is defined
as the statistical error
on the mean, using the unbiased variance estimator for weighted
data)
3rd plane: map of the drifts that have been subtracted
from the data
(weighted in the same way as the signal)
4th plane: weight map
5th plane (if present): signal map weighted to exclude
noisy scans
The 5th plane may be cosmetically superior to the 1st
plane, but it is not
advised to use it for scientific analysis, unless you know what you
are doing.
It is designed to filter out high-frequency artefacts that have not been detected during the processing, because with three scans or fewer, the redundancy is low.
It allows you to easily spot these artefacts, just by comparing the
1st and 5th planes.
See also question c) above.
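For reference, here is how the planes can be unpacked and the 1st and 5th planes compared in Python (0-based indexing; the file names are hypothetical):

    from astropy.io import fits

    cube = fits.getdata('field_scanam.fits')   # hypothetical file name
    signal, error, drifts, weight = cube[0], cube[1], cube[2], cube[3]

    if cube.shape[0] == 5:                     # observation with few scans
        clean = cube[4]
        # Artefacts filtered out of the 5th plane show up in the difference:
        fits.writeto('artefacts.fits', signal - clean, overwrite=True)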
e) The processing aborts during the destriping step, but only for a subset of my observations. What could be the cause?
One possible cause is that the field of view is too small to apply
the destriping.
If there are not enough resolution elements across the map, then the
destriping
is no longer necessary, and is also no longer possible. To assess
whether this is the
case for your data, you can check the value of the field size (in
degrees) that is printed
to the screen during the baseline subtraction: if it is on the order
of 0.1 degree,
then the map is indeed too small (otherwise, please send a bug
report).
To deactivate the destriping for very small maps, use the /minimap option.
f) What is the best way to process very small maps in which the source fills the region with nominal coverage?
If the area covered by data taken in only one scan direction is
comparable to the area
of the region with nominal coverage, there may not be enough
redundancy for the
destriping to work properly (see also question
e) above). This is not a concern if the
source is compact, but it is if the extent of the source is
comparable to the size of the
map. In this case, it is advised to process the data with and
without the /minimap
option, and to select the best map.
If this is not enough, you can also try the /flat
option. It was designed to force the
sky background to be flat, in case robust baselines cannot be
derived from the data
because the observation strategy was not well suited to the target.
g) How should the relative gain corrections be applied to SPIRE data?
These corrections account for the fact that the beam area is not
uniform for each
SPIRE array, but varies from bolometer to bolometer. They are thus
useful to better
calibrate extended emission. The corrections included in
Scanamorphos have also
been available in HIPE since its version 8. The gains should be
applied only once:
- Either apply them in HIPE and then select the /nogains option in Scanamorphos.
- Or leave your HIPE script unchanged, and they will be applied by default in Scanamorphos.
Make sure to use the branch that matches the HIPE version with which
you
processed the data up to level 1.
h) How are the average SPIRE beams affected by the projection method?
The projection method affects only the point response functions
(PRFs), which are the
result of projecting data from a detector with a fixed beam or point
spread function (PSF)
onto a given pixel grid, which can be changed. The projection may
distribute the flux
of a source slightly differently (each flux sample can be mapped to
a single pixel or to
several adjacent pixels), but it does not change the total flux of
the source.
- Thus, to convert map units from Jy/beam to Jy/pixel, you have to use fixed beam areas, whatever software was used. These beam areas are those determined by the ICC (a worked example is given after this list).
- If you are doing aperture photometry or computing the radial
profile of a compact
source, or if you want to subtract model sources from a map,
you will need to take
into account the fact that the FWHM of the SPIRE PRFs in
Scanamorphos maps is
1.5% larger than in maps built with the nearest-neighbor
projection.
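As a worked example, the conversion from Jy/beam to Jy/pixel amounts to multiplying the map by the ratio of the pixel solid angle to the beam solid angle. In this Python sketch, the file name is hypothetical and the beam area is a placeholder to replace with the ICC value for your band:

    from astropy.io import fits

    filename = 'spire_map.fits'                # hypothetical file name
    hdr = fits.getheader(filename)
    pix_as = abs(hdr['CDELT1']) * 3600.        # pixel size in arcsec
    # (use the CD matrix instead if the header has no CDELT keywords)

    beam_area_as2 = 450.                       # PLACEHOLDER: ICC beam area
                                               # in arcsec^2 for your band
    factor = pix_as ** 2 / beam_area_as2       # (Jy/pixel) per (Jy/beam)
    map_jy_pix = fits.getdata(filename)[0] * factor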
i) Why are there some streaks with slightly lower coverage in some weight maps?
Such features can appear beginning with version 21. They are due to
the fact that
large pointing errors are now detected and masked (via scan speed
anomalies).
An example can be seen in this weight map.
j) What are the PACS distortion flatfields?
PACS bolometers have fiducial sizes of 3.2" and 6.4" in the blue and
red arrays, respectively,
but their physical sizes are different. Taking the actual shapes of
the bolometers into account
during the projection is a negligible improvement, but taking the
actual bolometer areas
into account is important for the flux calibration, because the mean
bolometer area does not
coincide with the fiducial area. The necessary calibration files
were not issued and distributed
for use in external map-makers until recently (during the
preparation for HIPE 12).
The distortion flatfields (ratios of the actual areas to the fiducial areas) are shown here for the blue array and here for the red array. They were made by Javier Gracia Carpio in HIPE 12, using version 63 of the calibration tree, with the convertToFixedPixelSize task.
1) My field of view contains a lot of extended emission
that I want to recover as well as possible, and
the observation is long. How can the slicing into several sub-fields be avoided?
Try increasing the parameter called "max_n_samples", on line 161 (in version 17) of the main program scanamorphos.pro. It has to be greater than the value of "nt * nb" that is
printed to the terminal at the beginning of the run. To avoid
crashes during array allocation,
you will need to have access to a machine with enough memory.
2) I would like to obtain the best possible time
resolution to compute the average brightness drift,
but the minimum timescale set by the code for the average drift is
much longer than
that set for the individual drifts, because the observation is
long. I am
also not interested in recovering large-scale gradients in the map, and the field does not contain any bright extended emission. What can I do?
One solution is to let the code slice the field of view into several blocks, thus artificially reducing the length of the observation for each block.
The timescale
chosen for the average drift will then be the shortest afforded by
the level of
redundancy in the data, and will not be limited by array size
considerations.
To achieve this, the "nblocks" keyword can be
used. Try several values until you find
one that produces a timescale for the average drift that is close
enough to the minimum
timescale for the individual drifts (this information is printed to
the screen).
Slicing the field of view also artificially decreases the length of
a scan leg,
and the linear baselines (the long-timescale, dominant component of
the drifts)
are then better determined, because the fits are made on reduced
time intervals.
The "nblocks" keyword
should be used only in exceptional circumstances, and never
on fields containing bright emission on spatial scales that are a
significant fraction
of the map size.
3) How can several observations of the same field taken
at different epochs best be combined?
Such datasets differ from nominal ones in two respects:
- Between two epochs, the map is rotated by an angle that is not necessarily a multiple of 90 degrees.
- If the parallel mode is used, the center of the map is also
shifted.
It is however possible to process them in the same way, as long as the area of overlap between the different epochs is much greater than the area covered by non-overlapping regions.
If you have measured some astrometric offsets between the different
observations,
you can now supply corrections as input by using these parameters: "offset_ra_as"
and "offset_dec_as". They are angular
distances.
4) I have observations of adjacent fields that overlap by
an area much smaller than the area of
each field. Yet, I would like to take advantage of the increased
redundancy in this small
overlap area to improve the processing. How can I manage this?
The best solution (the most economical in terms of computer
resources and the most efficient
for the map quality) is this:
- Mosaic the maps produced independently from each observation.
- From the mosaic, extract a map covering the whole overlap area and define the associated FITS header, with the pixel size and map orientation that you want on output.
- Run Scanamorphos on all
the scans by supplying this header with the "hdr_ref" parameter.
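If you work in Python, such a reference header can be sketched as follows before being written to disk (all numerical values are placeholders to adapt to your mosaic, and the way the header is then fed to Scanamorphos depends on your own wrapper):

    from astropy.io import fits

    hdr = fits.Header()
    hdr['NAXIS'] = 2
    hdr['NAXIS1'], hdr['NAXIS2'] = 400, 400        # map size in pixels
    hdr['CTYPE1'], hdr['CTYPE2'] = 'RA---TAN', 'DEC--TAN'
    hdr['CRVAL1'], hdr['CRVAL2'] = 180.0, -30.0    # center of the overlap (deg)
    hdr['CRPIX1'], hdr['CRPIX2'] = 200.5, 200.5
    hdr['CDELT1'], hdr['CDELT2'] = -3.2 / 3600., 3.2 / 3600.  # pixel size (deg)
    hdr['CROTA2'] = 0.0                            # map orientation (deg)
    hdr.totextfile('hdr_ref.txt', overwrite=True)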
5) Is it possible to combine data taken at different scan speeds or sampling rates?
It is impossible to process such data together, because the
stability length of the drifts depends
on both scan speed and sampling rate. Adopting the same length for
dissimilar observations
would entail degrading the drift correction for at least one of the
observations.
However, the data can be combined after the map-making stage: make separate maps using the "hdr_ref" keyword (take the header from
the first created map and use it for
the other maps to enforce the same astrometry), and then combine the
maps with
your favorite data analysis tool (you can also use the module named
stitch_blocks.pro).
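As a quick alternative to stitch_blocks.pro, two maps sharing the same astrometry can be averaged with inverse-variance weights derived from their error planes; in this Python sketch the file names are hypothetical:

    from astropy.io import fits
    import numpy as np

    def load(filename):
        cube = fits.getdata(filename)
        return cube[0], cube[1]                # signal and error planes

    s1, e1 = load('map_speed20.fits')          # hypothetical file names
    s2, e2 = load('map_speed60.fits')

    # Inverse-variance weighted mean; valid because both maps were
    # projected onto the same grid with the "hdr_ref" keyword.
    with np.errstate(divide='ignore', invalid='ignore'):
        w1, w2 = 1. / e1 ** 2, 1. / e2 ** 2
        combined = (w1 * s1 + w2 * s2) / (w1 + w2)
    fits.writeto('combined.fits', combined, overwrite=True)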