
Scanning Interferometer

The co-planar approximation of Eq. 14.1.1 for a pointing direction given by $(l_o,m_o)$ can be written as

\begin{displaymath}
V(u,v,l_o,m_o)=\int\int I(l,m)B(l-l_o,m-m_o)e^{2\pi\iota (ul+vm)}\,dl\,dm.
\end{displaymath} (14.3.9)

Here we also assume that $B$ is independent of the pointing direction, and we label $V$ not just with the $(u,v)$ co-ordinates but also with the pointing direction, since visibilities for different pointing directions will be used in the analysis that follows. The advantage of writing the visibility as in Eq. 14.3.9 is that the pointing center (given by $(l_o,m_o)$) and the phase center (given by $(l,m)=(0,0)$) are separated.

$V(0,0,l_o,m_o)$ represents a single-dish observation in the direction $(l_o,m_o)$ and is just the convolution of the primary beam with the source brightness distribution, exactly as expected intuitively. Extending the intuition further, as is done in mapping with a single dish, we can scan the source around $(l_o,m_o)$ with the interferometer; this is equivalent to scanning with a single dish whose beam is the size of the synthesized beam of the interferometer. Fourier transforming $V(u,v,l_o,m_o)$ with respect to $(l_o,m_o)$, and assuming that $B$ is symmetric, one gets from Eq. 14.3.9
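That $V(0,0,l_o,m_o)$ is a convolution can be checked numerically: setting $u=v=0$ in Eq. 14.3.9 reduces the visibility to a beam-weighted sum over the sky. A minimal 1-D sketch on a periodic grid (the grid size, beam width, and random sky below are purely illustrative):

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)
n = np.arange(N)

I = rng.random(N)                                    # sky brightness on a periodic 1-D grid
B = np.exp(-0.5 * (np.minimum(n, N - n) / 4.0)**2)   # symmetric primary beam

# V(0,0,l_o): set u=v=0 in Eq. 14.3.9 -> a plain sum of I weighted by the shifted beam
V0 = np.array([np.sum(I * B[(n - lo) % N]) for lo in range(N)])

# For a symmetric B this equals the circular convolution (B * I)(l_o),
# computed here with FFTs
conv = np.real(np.fft.ifft(np.fft.fft(B) * np.fft.fft(I)))

print(np.allclose(V0, conv))                         # True
```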

\begin{displaymath}
\int\int V(u,v,l_o,m_o)e^{2\pi\iota (u_o l_o + v_o
m_o)}\,dl_o\,dm_o=b(u_o,v_o)\,i(u+u_o,v+v_o),
\end{displaymath} (14.3.10)

where $(u_o,v_o)$ corresponds to the direction $(l_o,m_o)$, and $b \rightleftharpoons B$ and $i \rightleftharpoons I$ are Fourier-transform pairs. This equation essentially tells us the following: the Fourier transform, with respect to the pointing direction, of the visibilities from a scanning interferometer is equal to the visibility of the entire source modulated by the Fourier transform of the primary beam. For a given direction $(l_o,m_o)$ we can therefore recover spatial frequency information spread around a nominal point $(u,v)$ by an amount $D/\lambda$, where $D$ is the size of the dish. In terms of information content, this is exactly the same as recovering spatial detail finer than the resolution of a single dish by scanning the source with that dish. As in the single-dish case, continuous scanning is not necessary: pointings separated by half the primary beam width are sufficient (Nyquist sampling of the pointing direction). In principle, then, by scanning the interferometer one can improve the short-spacing measurements of $V$, which is crucial for mapping large fields of view.

An image of the sky can now be made using the full visibility data set (constructed using Eq. 14.3.10). However, this requires knowledge of the Fourier transform of the sky brightness distribution, which in turn is only approximated after deconvolution. Hence, in practice one uses MEM-based image recovery, where one maximizes the entropy given by

\begin{displaymath}
H=-\sum\limits_k I_k \ln {I_k \over M_k},
\end{displaymath} (14.3.11)

with $\chi^2$ evaluated as
\begin{displaymath}
\chi^2=\sum\limits_k
{{\vert V(u_k,v_k,l_{ok},m_{ok})-V^M(u_k,v_k,l_{ok},m_{ok})\vert^2} \over
{\sigma^2_{V(u_k,v_k,l_{ok},m_{ok})}}},
\end{displaymath} (14.3.12)

where $V^M(u_k,v_k,l_{ok},m_{ok})$ is the model visibility evaluated using Eq. 14.3.9. $\Delta\chi^2$ in each iteration is estimated by the following steps:
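The two quantities entering the MEM iteration, the entropy of Eq. 14.3.11 and the $\chi^2$ of Eq. 14.3.12, can be sketched as follows. All array sizes, noise levels, and the Lagrange multiplier are hypothetical placeholders:

```python
import numpy as np

# Hypothetical values for illustration: image pixels I_k, default model M_k,
# measured and model visibilities, and per-visibility noise sigma
rng = np.random.default_rng(2)
I = rng.random(100) + 0.1           # current image estimate (positive pixels)
M = np.full(100, I.mean())          # default (flat) model image
V_obs = rng.normal(size=50) + 1j * rng.normal(size=50)
V_mod = V_obs + 0.1 * (rng.normal(size=50) + 1j * rng.normal(size=50))
sigma = np.full(50, 0.1)

H = -np.sum(I * np.log(I / M))                      # entropy, Eq. 14.3.11
chi2 = np.sum(np.abs(V_obs - V_mod)**2 / sigma**2)  # chi-squared, Eq. 14.3.12

# MEM maximizes H subject to chi2 being close to the number of data points,
# typically via the objective J = H - alpha * chi2 for a multiplier alpha
alpha = 0.05
J = H - alpha * chi2
```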

The operation of primary beam correction on the residual image can be understood by the following argument: for any given pointing, an interferometer gathers radiation only from within the primary beam. In the image plane, therefore, any feature outside the extent of the primary beam must be due to the side lobes of the synthesized beam, and must be suppressed before the computation of $\Delta\chi^2$. This is achieved by weighting the image with a Gaussian which represents the main lobe of the antenna radiation pattern, so that emission outside the beam is smoothly suppressed.
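The suppression step described above can be sketched as follows; the image size, beam FWHM, and the location of the spurious feature are all illustrative:

```python
import numpy as np

# Sketch: suppress features outside the primary beam by weighting the
# residual image with a Gaussian main-lobe model
N = 128
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)

fwhm = 30.0                                   # assumed primary-beam FWHM in pixels
sig = fwhm / (2 * np.sqrt(2 * np.log(2)))
beam = np.exp(-(X**2 + Y**2) / (2 * sig**2))  # Gaussian main lobe, peak = 1

residual = np.random.default_rng(3).normal(size=(N, N))
residual[10, 10] += 50.0                      # a spurious feature far outside the beam

weighted = residual * beam                    # features outside the beam are suppressed
print(abs(weighted[10, 10]) < abs(residual[10, 10]))  # True
```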

This approach (rather than joint deconvolution) has several advantages.

  1. Data from potentially different interferometers for different pointings can be used
  2. Weights on each visibility from each pointing are used in the entire image reconstruction procedure
  3. Single-dish imaging emerges as a special case
  4. It is fast for extended images

The most important advantage of MEM reconstruction is that the deconvolution is done simultaneously on all pointings. That this is an advantage over joint deconvolution can be seen as follows: if a point source at the edge of the primary beam is sampled by 4 different pointings of the telescope, this procedure can use 4 times the data on the same source, as against data from only one pointing in joint deconvolution (where deconvolution is done separately for each pointing). Apart from the improvement in the signal-to-noise ratio, this also benefits from the better $uv$-coverage available.
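The signal-to-noise part of this argument can be checked with a quick Monte Carlo sketch; the flux, noise level, and trial count below are arbitrary. Averaging 4 independent pointings reduces the noise by $\sqrt{4}=2$:

```python
import numpy as np

# Monte Carlo check: noise on a source measured in 4 independent pointings
# averages down as 1/sqrt(4)
rng = np.random.default_rng(4)
flux, sigma, trials = 1.0, 0.5, 20000

one = flux + sigma * rng.normal(size=trials)                # single pointing
four = flux + sigma * rng.normal(size=(trials, 4)).mean(1)  # average of 4 pointings

ratio = one.std() / four.std()
print(round(ratio))  # ~2: twice the signal-to-noise from 4x the data
```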

Flexible software for performing mosaiced observations is one of the primary motivations driving the AIPS++ project, in which algorithms to handle mosaiced observations would be available in full glory.


NCRA-TIFR