Array processing

Not to be confused with Array processor or Array data structure.

Signal processing is a wide area of research that extends from the simplest form of 1-D signal processing to the complex form of M-D and array signal processing. This article presents a short survey of the concepts, principles and applications of array processing. An array structure can be defined as a set of sensors that are spatially separated, e.g. antennas. The basic problem that we attempt to solve by using array processing technique(s) is to estimate the parameters of the signal sources impinging on the array, such as their number, their directions of arrival and their signal waveforms.

Precisely, we are interested in solving these problems in noisy environments (in the presence of noise and interfering signals). Estimation theory is an important and basic part of the signal processing field; it deals with estimation problems in which the values of several parameters of a system must be estimated from measured/empirical data that has a random component. As the number of applications increases, estimating temporal and spatial parameters becomes more important. Array processing emerged in the last few decades as an active research area centered on the ability to use and combine data from different sensors (antennas) in order to deal with a specific estimation task (spatial and temporal processing). In addition to the information that can be extracted from the collected data, the framework takes advantage of prior knowledge about the geometry of the sensor array to perform the estimation task. Array processing is used in radar, sonar, seismic exploration, anti-jamming and wireless communications. One of the main advantages of using array processing along with an array of sensors is a smaller footprint. The problems addressed by array processing include estimating the number of sources, their directions of arrival, and their signal waveforms.[1][2][3][4]

Sensor array

There are four assumptions in array processing. The first assumption is uniform propagation in all directions through an isotropic and non-dispersive medium. The second assumption, for far-field array processing, is that the radius of propagation is much greater than the size of the array, so that plane wave propagation can be assumed. The third assumption is that the noise and signal are zero-mean, white, and uncorrelated. Finally, the last assumption is that there is no coupling between sensors and that the calibration is perfect.[1]

Applications

The ultimate goal of sensor array signal processing is to estimate the values of parameters by using available temporal and spatial information, collected through sampling a wavefield with a set of antennas that have a precise geometry description. The processing of the captured data and information is done under the assumption that the wavefield is generated by a finite number of signal sources (emitters), and contains information about signal parameters characterizing and describing the sources. There are many applications related to the above problem formulation, where the number of sources, their directions and locations should be specified. To motivate the reader, some of the most important applications related to array processing will be discussed.

The array processing concept was closely linked to radar and sonar systems, which represent its classical applications. The antenna array is used in these systems to determine the location(s) of source(s), cancel interference, and suppress ground clutter. Radar systems are used basically to detect objects by using radio waves; the range, altitude, speed and direction of objects can be determined. Radar systems started as military equipment and then entered the civilian world. In radar applications, different modes can be used; one of these modes is the active mode. In this mode the antenna-array-based system radiates pulses and listens for the returns. By using the returns, the estimation of parameters such as velocity, range and DOAs (directions of arrival) of targets of interest becomes possible. Using passive far-field listening arrays, only the DOAs can be estimated. Sonar systems (Sound Navigation and Ranging) use sound waves that propagate under the water to detect objects on or under the water surface. Two types of sonar systems can be defined: active and passive. In active sonar, the system emits pulses of sound and listens to the returns, which are used to estimate parameters. In passive sonar, the system is essentially listening for the sounds made by the target objects. It is important to note the difference between radar, which uses radio waves, and sonar, which uses sound waves; the reason sonar uses sound waves is that they travel farther in water than radio and light waves do. In passive sonar, the receiving array has the capability of detecting distant objects and their locations. Deformable arrays are usually used in sonar systems, where the antenna is typically towed under the water. In active sonar, the system emits sound waves (acoustic energy) and then listens for and monitors any existing echo (the reflected waves). The reflected sound waves can be used to estimate parameters such as velocity, position and direction. Difficulties and limitations of sonar systems compared to radar systems stem from the fact that the propagation speed of sound waves under water is slower than that of radio waves. Another source of limitation is the high propagation losses and scattering. Despite all these limitations and difficulties, sonar remains a reliable technique for range, distance, position and other parameter estimation in underwater applications.[3][5]

Radar System

NORSAR is an independent geo-scientific research facility that was founded in Norway in 1968. NORSAR has been working with array processing ever since to measure seismic activity around the globe.[6] They are currently working on an International Monitoring System which will comprise 50 primary and 120 auxiliary seismic stations around the world. NORSAR has ongoing work to improve array processing to improve monitoring of seismic activity not only in Norway but around the globe.[7]

Communication can be defined as the process of exchanging information between two or more parties. The last two decades witnessed a rapid growth of wireless communication systems. This success is a result of advances in communication theory and low-power-dissipation design processes. In general, communication (telecommunication) can be carried out by technological means through either electrical signals (wired communication) or electromagnetic waves (wireless communication). Antenna arrays have emerged as a support technology to increase spectral usage efficiency and enhance the accuracy of wireless communication systems by utilizing the spatial dimension in addition to the classical time and frequency dimensions. Array processing and estimation techniques have been used in wireless communication, and during the last decade these techniques were re-explored as ideal candidates to solve numerous problems in wireless communication. In wireless communication, problems that affect the quality and performance of the system may come from different sources. The multiuser (medium multiple access) and multipath (signal propagation over multiple scattering paths in wireless channels) communication model is one of the most widespread communication models in wireless (mobile) communication.

Multipath communication problem in wireless communication systems

In a multiuser communication environment, the existence of multiple users increases the possibility of inter-user interference, which can adversely affect the quality and performance of the system. In mobile communication systems the multipath problem is one of the basic problems that base stations have to deal with. Base stations have been using spatial diversity for combating fading due to severe multipath. Base stations use an antenna array of several elements to achieve higher selectivity: the receiving array can be directed toward one user at a time, while avoiding interference from other users.

Array processing techniques have received much attention in medical and industrial applications. In medical applications, medical image processing was one of the basic fields that use array processing. Other medical applications that use array processing include disease treatment, tracking waveforms that carry information about the condition of internal organs (e.g. the heart), and localizing and analyzing brain activity by using bio-magnetic sensor arrays.[8]

Speech enhancement and processing represents another field that has been affected by the new era of array processing. Most acoustic front-end systems have become fully automatic (e.g. telephones). However, the operational environment of these systems contains a mix of other acoustic sources; external noises as well as acoustic couplings of loudspeaker signals overwhelm and attenuate the desired speech signal. In addition to these external sources, the strength of the desired signal is reduced due to the relatively large distance between speaker and microphones. Array processing techniques have opened new opportunities in speech processing to attenuate noise and echo without degrading the quality of, or adversely affecting, the speech signal. In general, array processing techniques can be used in speech processing to reduce the computing power (number of computations) and enhance the quality of the system (the performance). Representing the signal as a sum of sub-bands and adapting cancellation filters for the sub-band signals can reduce the demanded computation power and lead to a higher-performance system. Relying on multiple input channels allows designing systems of higher quality compared to systems that use a single channel, and solving problems such as source localization, tracking and separation, which cannot be achieved with a single channel.[9]

The astronomical environment contains a mix of external signals and noise that affect the quality of the desired signals. Most array processing applications in astronomy are related to image processing. The array is used to achieve a higher quality that is not achievable with a single channel. The high image quality facilitates quantitative analysis and comparison with images at other wavelengths. In general, astronomy arrays can be divided into two classes: the beamforming class and the correlation class. Beamforming is a signal processing technique that produces summed array beams from a direction of interest (used basically in directional signal transmission or reception); the basic idea is to combine the elements in a phased array such that some signals experience destructive interference while others experience constructive interference. Correlation arrays provide images over the entire single-element primary beam pattern, computed off-line from records of all the possible pairwise correlations between the antennas.

One antenna of the Allen Telescope Array

In addition to these applications, many other applications have been developed based on array processing techniques: acoustic beamforming for hearing aid applications, under-determined blind source separation using acoustic arrays, digital 3D/4D ultrasound imaging arrays, smart antennas, synthetic aperture radar, underwater acoustic imaging, chemical sensor arrays, etc.[3][4][5]

General Model and Problem Formulation

Consider a system that consists of an array of r arbitrary sensors that have arbitrary locations and arbitrary directions (directional characteristics) and that receive signals generated by q narrowband sources with known center frequency ω and locations θ1, θ2, θ3, …, θq. Since the signals are narrowband, the propagation delay across the array is much smaller than the reciprocal of the signal bandwidth, and it follows that, by using a complex envelope representation, the array output can be expressed (by the sense of superposition) as:[3][5][8]
\textstyle x(t)=\sum_{k=1}^{q} a(\theta_k)s_k(t)+n(t)

Where x(t) is the r × 1 vector of array outputs (a snapshot) at time t, a(θk) is the steering vector of the array toward the direction θk, sk(t) is the signal emitted by the kth source, and n(t) is the r × 1 additive noise vector.

The same equation can also be expressed in the form of vectors, with the steering matrix A(θ) = [a(θ1), …, a(θq)] and the signal vector s(t) = [s1(t), …, sq(t)]^T:
\textstyle \bold x(t) = A(\theta)s(t) + n(t)

If we assume now that M snapshots are taken at time instants t1, t2 … tM, the data can be expressed as:
\bold X = \bold A(\theta)\bold S + \bold N

Where X and N are the r × M matrices and S is q × M:
\bold X = [x(t_{1}), \ldots, x(t_{M})]
\bold N = [n(t_{1}), \ldots, n(t_{M})]
\bold S = [s(t_{1}), \ldots, s(t_{M})]
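To make the data model concrete, the following minimal Python sketch (not part of the original survey) simulates X = A(θ)S + N. The uniform linear array with half-wavelength spacing, the number of sensors r = 8, the snapshot count M = 200 and the source directions are all illustrative assumptions; the survey itself allows arbitrary array geometries.

```python
import numpy as np

def ula_steering(theta_deg, r):
    """Steering vector a(theta) for an r-element ULA with half-wavelength spacing."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(r) * np.sin(theta))

rng = np.random.default_rng(0)
r, M = 8, 200                        # r sensors, M snapshots (assumed values)
doas = [-20.0, 0.0, 35.0]            # q = 3 source directions in degrees (assumed)

A = np.column_stack([ula_steering(th, r) for th in doas])      # r x q steering matrix A(theta)
S = (rng.standard_normal((len(doas), M))
     + 1j * rng.standard_normal((len(doas), M))) / np.sqrt(2)  # q x M source waveforms
N = 0.1 * (rng.standard_normal((r, M))
           + 1j * rng.standard_normal((r, M))) / np.sqrt(2)    # r x M white noise
X = A @ S + N                        # snapshot matrix, X = A(theta) S + N
R_x = (X @ X.conj().T) / M           # sample covariance used by the estimators below
```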

Problem definition
“The target is to estimate the DOA’s θ1, θ2, θ3, θ4 …θq of the sources from the M snapshot of the array x(t1)… x(tM). In other words what we are interested in is estimating the DOA’s of emitter signals impinging on receiving array, when given a finite data set {x(t)} observed over t=1, 2 … M. This will be done basically by using the second-order statistics of data”[5][8]

In order to solve this problem (to guarantee that there is a valid solution), conditions or assumptions have to be added about the operational environment and/or the model used. Since there are many parameters used to specify the system, such as the number of sources and the number of array elements, there are conditions that should be met first. Toward this goal we make the following assumptions:[1][3][5]
1. The number of signals is known and is smaller than the number of sensors, q<r.
2. The set of any q steering vectors is linearly independent.
3. Isotropic and non-dispersive medium – Uniform propagation in all directions.
4. Zero mean white noise and signal, uncorrelated.
5. Far-Field.
a. Radius of propagation >> size of array.
b. Plane wave propagation.

Throughout this survey, it will be assumed that the number of underlying signals, q, in the observed process is known. There are, however, good and consistent techniques for estimating this value even if it is not known.

Estimation Techniques

In general, parameter estimation techniques can be classified into spectral-based and parametric-based methods. In the former, one forms some spectrum-like function of the parameter(s) of interest, and the locations of the highest (separated) peaks of the function in question are recorded as the DOA estimates. Parametric techniques, on the other hand, require a simultaneous search for all parameters of interest. The basic advantage of the parametric approach compared to the spectral-based approach is accuracy, albeit at the expense of increased computational complexity.[1][3][5]

Spectral–Based Solutions

Spectral based algorithmic solutions can be further classified into beamforming techniques and subspace-based techniques.

Beamforming technique

The first method used to specify and automatically localize the signal sources using antenna arrays was the beamforming technique. The idea behind beamforming is very simple: steer the array in one direction at a time and measure the output power. The steering locations where we have the maximum power yield the DOA estimates. The array response is steered by forming a linear combination of the sensor outputs.[3][5][8]
Approach overview
1. Compute the sample covariance matrix: \textstyle R_{x}= \frac{1}{M}\sum_{t=1}^{M} x(t) x^{*}(t)
2. Calculate \textstyle B(w_{i})=F^{*}(w_{i})R_{x}F(w_{i}) for each candidate direction.
3. Find the peaks of \textstyle B(w_{i}) over all possible w_{i}.
4. Calculate the corresponding \textstyle \theta_{k},\ k=1,\ldots,q.

Where Rx is the sample covariance matrix. Different beamforming approaches correspond to different choices of the weighting vector F. The advantages of the beamforming technique are its simplicity and its ease of use and understanding, while its disadvantage is low resolution.
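As an illustration (an assumption-laden sketch rather than a reference implementation), the following scans a conventional (Bartlett) beamformer, i.e. the choice F(θ) = a(θ), over a grid of angles and picks the largest separated peaks as DOA estimates. It reuses ula_steering, R_x and r from the simulation sketch above.

```python
import numpy as np

grid = np.arange(-90.0, 90.5, 0.5)               # scan grid in degrees (assumed resolution)
B = np.array([np.real(ula_steering(th, r).conj() @ R_x @ ula_steering(th, r))
              for th in grid])                   # B(theta) = a*(theta) R_x a(theta)

# DOA estimates: the q largest separated (local-maximum) peaks of B.
peaks = [i for i in range(1, len(B) - 1) if B[i - 1] < B[i] > B[i + 1]]
peaks = sorted(peaks, key=lambda i: B[i], reverse=True)[:3]
print(sorted(grid[i] for i in peaks))            # should lie roughly near -20, 0 and 35 degrees
```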

Subspace-based technique

Many spectral methods in the past have called upon the spectral decomposition of a covariance matrix to carry out the analysis. A very important breakthrough came about when the eigen-structure of the covariance matrix was explicitly invoked, and its intrinsic properties were directly used to provide a solution to an underlying estimation problem for a given observed process. A class of spatial spectral estimation techniques is based on the eigenvalue decomposition of the spatial covariance matrix. The rationale behind this approach is to emphasize the choices of the steering vector a(θ) that correspond to signal directions. The method exploits the property that the directions of arrival determine the eigen-structure of the matrix.
The tremendous interest in subspace-based methods is mainly due to the introduction of the MUSIC (Multiple Signal Classification) algorithm. MUSIC was originally presented as a DOA estimator, and it was then successfully brought back to the spectral analysis/system identification problem with its later development.[3][5][8]

Approach overview
1. Subspace decomposition by performing an eigenvalue decomposition of the covariance matrix:
\textstyle R_{x}=\bold A \bold R_{s} \bold A^{*} + \sigma^{2}I=\sum_{k=1}^{M} \lambda_{k}e_{k}e_{k}^{*}
2. \textstyle span\{\bold A\}=span\{e_{1},\ldots,e_{d}\}=span\{\bold E_{s}\}.
3. Check which \textstyle a(\theta) \in span\{\bold E_{s}\}, equivalently examine \textstyle \bold P_{\bold A}a(\theta) or \textstyle \bold P_{\bold A}^{\perp}a(\theta), where \textstyle \bold P_{\bold A} is a projection matrix.
4. Search over all possible \textstyle \theta such that \textstyle \left \| P_{\bold A}^{\perp}a(\theta) \right \|^{2} = 0, or equivalently \textstyle M(\theta)=\frac{1}{\left \| P_{\bold A}^{\perp}a(\theta) \right \|^{2}} =\infty.
5. After the EVD of \textstyle R_{x}: \textstyle P_{\bold A}^{\perp}=I-E_{s}E_{s}^{*}=E_{n}E_{n}^{*}, where the noise eigenvector matrix is \textstyle E_{n}=[e_{d+1}, \ldots , e_{M}].

The MUSIC spectrum approach uses a single realization of the stochastic process represented by the snapshots x(t), t = 1, 2, …, M. MUSIC estimates are consistent and converge to the true source bearings as the number of snapshots grows to infinity. A basic drawback of the MUSIC approach is its sensitivity to model errors. A costly calibration procedure is required, and MUSIC is very sensitive to errors in that calibration procedure. The cost of calibration increases as the number of parameters that define the array manifold increases.
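A minimal MUSIC sketch, under the same illustrative assumptions and reusing ula_steering, R_x and r from the simulation above, could look as follows: the noise subspace E_n is taken from the eigenvectors with the r − q smallest eigenvalues and the pseudospectrum 1/||E_n^* a(θ)||² is scanned over a grid.

```python
import numpy as np

q = 3                                             # number of sources, assumed known
eigvals, eigvecs = np.linalg.eigh(R_x)            # eigenvalues in ascending order
E_n = eigvecs[:, : r - q]                         # noise eigenvectors (smallest eigenvalues)

grid = np.arange(-90.0, 90.5, 0.5)
music = np.array([1.0 / (np.linalg.norm(E_n.conj().T @ ula_steering(th, r)) ** 2)
                  for th in grid])                # M(theta) = 1 / ||E_n^* a(theta)||^2

peaks = [i for i in range(1, len(music) - 1) if music[i - 1] < music[i] > music[i + 1]]
peaks = sorted(peaks, key=lambda i: music[i], reverse=True)[:q]
print(sorted(grid[i] for i in peaks))             # sharper peaks near the true DOAs
```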

Parametric–Based Solutions

While the spectral-based methods presented in the previous section are computationally attractive, they do not always yield sufficient accuracy. In particular, in cases where we have highly correlated signals, the performance of spectral-based methods may be insufficient. An alternative is to more fully exploit the underlying data model, leading to so-called parametric array processing methods. The cost of the increased accuracy of such methods is that the algorithms typically require a multidimensional search to find the estimates. The most commonly used model-based approach in signal processing is the maximum likelihood (ML) technique. This method requires a statistical framework for the data generation process. When applying the ML technique to the array processing problem, two main methods have been considered, depending on the signal data model assumption. In the stochastic ML, the signals are modeled as Gaussian random processes. On the other hand, in the deterministic ML the signals are considered as unknown, deterministic quantities that need to be estimated in conjunction with the direction of arrival.[3][5][8]

Stochastic ML approach

The stochastic maximum likelihood method is obtained by modeling the signal waveforms as a Gaussian random process under the assumption that the process x(t) is a stationary, zero-mean, Gaussian process that is completely described by its second-order covariance matrix. This model is a reasonable one if the measurements are obtained by filtering wide-band signals using a narrow band-pass filter.
Approach overview
1. Find the weight vector \textstyle w_{k} that minimizes the output power subject to the constraint \textstyle a^{*}(\theta_{k})w_{k}=1:
\textstyle \min_{a^{*}(\theta_{k})w_{k}=1}\ E\{\left |w_{k}^{*}x(t) \right |^{2}\}=\min_{a^{*}(\theta_{k})w_{k}=1}\ w_{k}^{*}R_{x}w_{k}
2. Use the Lagrange multiplier method:
\textstyle w_{k}^{*}R_{x}w_{k}+ 2\mu(a^{*}(\theta_{k})w_{k}-1)
3. Differentiating, we obtain
\textstyle R_{x}w_{k}=\mu a(\theta_{k}),\ or\ w_{k} = \mu R_{x}^{-1}a(\theta_{k})
4. Since
\textstyle a^{*}(\theta_{k})w_{k}=\mu a^{*}(\theta_{k})R_{x}^{-1}a(\theta_{k})=1,
it follows that
\textstyle \mu=\frac{1}{a^{*}(\theta_{k})R_{x}^{-1}a(\theta_{k})}
5. Capon's beamformer:
\textstyle w_{k}=\frac{R_{x}^{-1}a(\theta_{k})}{a^{*}(\theta_{k})R_{x}^{-1}a(\theta_{k})}
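Following the steps above, a hedged sketch of Capon's beamformer (again reusing ula_steering, R_x and r from the earlier simulation, with an illustrative angle grid) forms the weight vector w = R_x^{-1} a(θ)/(a^*(θ) R_x^{-1} a(θ)) for each look direction and records the corresponding spatial spectrum 1/(a^*(θ) R_x^{-1} a(θ)).

```python
import numpy as np

R_inv = np.linalg.inv(R_x)
grid = np.arange(-90.0, 90.5, 0.5)
capon = []
for th in grid:
    a = ula_steering(th, r)
    denom = np.real(a.conj() @ R_inv @ a)         # a^*(theta) R_x^{-1} a(theta)
    w = (R_inv @ a) / denom                       # Capon weight vector for this look direction
    capon.append(1.0 / denom)                     # Capon spatial spectrum P(theta)
capon = np.array(capon)                           # its peaks indicate the DOAs
```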

Deterministic ML approach

While the background and receiver noise in the assumed data model can be thought of as emanating from a large number of independent noise sources, the same is usually not the case for the emitter signals. It therefore appears natural to model the noise as a stationary Gaussian white random process whereas the signal waveforms are deterministic (arbitrary) and unknown. According to the Deterministic ML the signals are considered as unknown, deterministic quantities that need to be estimated in conjunction with the direction of arrival. This is a natural model for digital communication applications where the signals are far from being normal random variables, and where estimation of the signal is of equal interest.[3][4]

Correlation spectrometer

The problem of computing pairwise correlation as a function of frequency can be solved in two mathematically equivalent but distinct ways. By using the Discrete Fourier Transform (DFT) it is possible to analyze signals in the time domain as well as in the spectral domain. The first approach is "XF" correlation because it first cross-correlates antennas (the "X" operation) using a time-domain "lag" convolution, and then computes the spectrum (the "F" operation) for each resulting baseline. The second approach, "FX", takes advantage of the fact that convolution is equivalent to multiplication in the Fourier domain. It first computes the spectrum for each individual antenna (the F operation), and then multiplies pairwise all antennas for each spectral channel (the X operation). An FX correlator has an advantage over an XF correlator in that the computational complexity is O(N²). Therefore, FX correlators are more efficient for larger arrays.[10]

Correlation spectrometers like the Michelson interferometer vary the time lag between signals to obtain the power spectrum of the input signals. The power spectrum S_{\text{XX}}(f) of a signal is related to its autocorrelation function by a Fourier transform:[11]

S_{\text{XX}}(f) = \int_{-\infty}^{\infty} R_{\text{XX}}(\tau) \cos(2 \pi f \tau)\,\mathrm{d}\tau \qquad (\text{I})

where the autocorrelation function R_{\text{XX}}(\tau) for signal X as a function of time delay \tau is

R_{\text{XX}}(\tau) = \left\langle V_X(t) V_X(t + \tau)\right\rangle \qquad (\text{II})

Cross-correlation spectroscopy with spatial interferometry is possible by simply substituting a signal with voltage V_Y(t) into Eq. II to produce the cross-correlation R_{\text{XY}}(\tau) and the cross-spectrum S_{\text{XY}}(f).
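As an illustration of the FX approach and of Eqs. I and II, the following sketch estimates the auto- and cross-spectra of two simulated antenna voltage streams by transforming each stream first (the F step) and then multiplying per frequency channel (the X step). The sample rate, tone frequency and noise levels are illustrative assumptions, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1024.0, 4096                          # sample rate (Hz) and record length (assumed)
t = np.arange(n) / fs
common = np.sin(2 * np.pi * 100.0 * t)        # component received by both antennas
v_x = common + 0.5 * rng.standard_normal(n)   # antenna X voltage stream V_X(t)
v_y = common + 0.5 * rng.standard_normal(n)   # antenna Y voltage stream V_Y(t)

# F step: spectrum of each antenna;  X step: per-channel conjugate multiplication.
V_x = np.fft.rfft(v_x)
V_y = np.fft.rfft(v_y)
S_xx = (np.abs(V_x) ** 2) / n                 # auto power spectrum of antenna X (Eq. I analogue)
S_xy = V_x * np.conj(V_y) / n                 # cross-spectrum S_XY(f) (Eq. II analogue)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
print(freqs[np.argmax(np.abs(S_xy))])         # strongest correlated channel, near 100 Hz
```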

Example: Spatial Filtering

In radio astronomy, RF interference must be mitigated to detect and observe any meaningful objects and events in the night sky.

An array of radio telescopes with an incoming radio wave and RF interference

Projecting Out The Interferer

For an array of radio telescopes with an interfering source whose spatial signature \mathbf{a} is not a known function of the direction of interference and its time variance, the signal covariance matrix takes the form:

\mathbf{R} = \mathbf{R}_v + \sigma_s^2 \mathbf{a} \mathbf{a}^{\dagger} + \sigma_n^2 \mathbf{I}

where \mathbf{R}_v is the visibilities covariance matrix (sources), \sigma_s^2 is the power of the interferer, \sigma_n^2 is the noise power, and \dagger denotes the Hermitian transpose. One can construct a projection matrix \mathbf{P}_a^{\perp} which, when left and right multiplied by the signal covariance matrix, will reduce the interference term to zero.

\mathbf{P}_a^{\perp} = \mathbf{I} - \mathbf{a}(\mathbf{a}^{\dagger} \mathbf{a})^{-1} \mathbf{a}^{\dagger}

So the modified signal covariance matrix becomes:

\tilde{\mathbf{R}} = \mathbf{P}_a^{\perp} \mathbf{R} \mathbf{P}_a^{\perp} = \mathbf{P}_a^{\perp} \mathbf{R}_v \mathbf{P}_a^{\perp} + \sigma_n^2 \mathbf{P}_a^{\perp}

Since \mathbf{a} is generally not known, \mathbf{P}_a^{\perp} can be constructed using the eigen-decomposition of \mathbf{R}, in particular the matrix containing an orthonormal basis of the noise subspace, which is the orthogonal complement of \mathbf{a}. The disadvantages to this approach include altering the visibilities covariance matrix and coloring the white noise term.[12]
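A minimal sketch of this idea follows, with an illustrative 8-element array, a single simulated interferer at an assumed direction and unit noise power: the unknown signature is approximated by the dominant eigenvector of R, and the resulting projection is applied on both sides.

```python
import numpy as np

rng = np.random.default_rng(2)
p, M = 8, 1000                                     # p telescopes, M samples (assumed values)
a_true = np.exp(-1j * np.pi * np.arange(p)
                * np.sin(np.deg2rad(30.0)))        # interferer signature (unknown in practice)

s = 3.0 * (rng.standard_normal(M)
           + 1j * rng.standard_normal(M)) / np.sqrt(2)          # strong interfering signal
noise = (rng.standard_normal((p, M))
         + 1j * rng.standard_normal((p, M))) / np.sqrt(2)       # unit-power white noise
X = np.outer(a_true, s) + noise                    # p x M data: interference plus noise
R = (X @ X.conj().T) / M                           # sample covariance matrix

# a is not known, so approximate it by the dominant eigenvector u1 of R.
eigvals, eigvecs = np.linalg.eigh(R)               # ascending eigenvalues
u1 = eigvecs[:, -1]
P_perp = np.eye(p) - np.outer(u1, u1.conj())       # projection onto the complement of u1
R_filtered = P_perp @ R @ P_perp                   # interference term projected out
```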

Spatial Whitening

This scheme attempts to make the interference-plus-noise term spectrally white. To do this, left and right multiply \mathbf{R} with inverse square root factors of the interference-plus-noise terms.

\tilde{\mathbf{R}} = (\sigma_s^2 \mathbf{a} \mathbf{a}^{\dagger} + \sigma_n^2 \mathbf{I})^{-{\frac{1}{2}}} \mathbf{R}(\sigma_s^2 \mathbf{a} \mathbf{a}^{\dagger} + \sigma_n^2 \mathbf{I})^{-{\frac{1}{2}}}

The calculation requires rigorous matrix manipulations, but results in an expression of the form:

\tilde{\mathbf{R}} = (\cdot)^{-{\frac{1}{2}}} \mathbf{R}_v(\cdot)^{-{\frac{1}{2}}} + \mathbf{I}

This approach requires much more computationally intensive matrix manipulations, and again the visibilities covariance matrix is altered.[13]
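Continuing the same illustrative example, a sketch of the whitening step builds the interference-plus-noise matrix from the estimated signature and powers (the unit noise power is an assumption of the simulation, not a given of the method) and applies its inverse square root on both sides of R.

```python
import numpy as np

sigma_n2 = 1.0                                     # noise power (known by assumption here)
sigma_s2 = eigvals[-1] - sigma_n2                  # interferer power estimated from lambda_1
C = sigma_s2 * np.outer(u1, u1.conj()) + sigma_n2 * np.eye(p)    # interference-plus-noise model
w_vals, w_vecs = np.linalg.eigh(C)
C_inv_sqrt = w_vecs @ np.diag(w_vals ** -0.5) @ w_vecs.conj().T  # C^{-1/2}
R_white = C_inv_sqrt @ R @ C_inv_sqrt              # whitened covariance: that term becomes ~I
```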

Subtraction of Interference Estimate

Since \mathbf{a} is unknown, the best estimate is the dominant eigenvector \mathbf{u}_1 of the eigen-decomposition of \mathbf{R} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^{\dagger}, and likewise the best estimate of the interference power is \sigma_s^2 \approx \lambda_1 - \sigma_n^2, where \lambda_1 is the dominant eigenvalue of \mathbf{R}. One can subtract the interference term from the signal covariance matrix:

\tilde{\mathbf{R}} = \mathbf{R} - \sigma_s^2 \mathbf{a} \mathbf{a}^{\dagger}

By right and left multiplying \mathbf{R}:

\tilde{\mathbf{R}} \approx (\mathbf{I} - \alpha \mathbf{u}_1 \mathbf{u}_1^{\dagger})\mathbf{R}(\mathbf{I} - \alpha \mathbf{u}_1 \mathbf{u}_1^{\dagger}) = \mathbf{R} - \mathbf{u}_1 \mathbf{u}_1^{\dagger} \lambda_1(2 \alpha - \alpha^2)

where \lambda_1(2 \alpha - \alpha^2) \approx \sigma_s^2 by selecting the appropriate \alpha. This scheme requires an accurate estimation of the interference term, but does not alter the noise or sources term.[14]
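Continuing the same example, a short sketch of the subtraction scheme picks the smaller root α of λ₁(2α − α²) = σ_s² and forms the rank-one correction; the noise power is again the assumed simulation value from the sketches above.

```python
import numpy as np

lam1 = eigvals[-1]                                 # dominant eigenvalue of R (from the sketch above)
sigma_s2 = lam1 - sigma_n2                         # estimated interference power
# Choose the smaller root alpha of lam1 * (2*alpha - alpha**2) = sigma_s2.
alpha = 1.0 - np.sqrt(max(1.0 - sigma_s2 / lam1, 0.0))
T = np.eye(p) - alpha * np.outer(u1, u1.conj())
R_sub = T @ R @ T                                  # approximately R - sigma_s2 * u1 u1^H
```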

Summary

Array processing techniques represent a breakthrough in signal processing. Many applications and problems which are solvable using array processing techniques have been introduced. In addition to these applications, within the next few years the number of applications that include a form of array signal processing will increase. It is highly expected that the importance of array processing will grow as automation becomes more common in industrial environments and applications; further advances in digital signal processing and digital signal processing systems will also support the high computation requirements demanded by some of the estimation techniques.

In this article we emphasized the importance of array processing by listing the most important applications that include a form of array processing techniques. We briefly described the different classifications of array processing, namely the spectral-based and the parametric-based approaches. Some of the most important algorithms were covered, and the advantages and disadvantages of these algorithms were also explained and discussed.

References

  1. Torlak, M. Spatial Array Processing. Signal and Image Processing Seminar. University of Texas at Austin.
  2. J. Li, P. Stoica (Eds.) (2009). MIMO Radar Signal Processing. USA: J. Wiley & Sons.
  3. P. Stoica, R. Moses (2005). Spectral Analysis of Signals (PDF). NJ: Prentice Hall.
  4. J. Li, P. Stoica (Eds.) (2006). Robust Adaptive Beamforming. USA: J. Wiley & Sons.
  5. Singh, Hema; Jha, Rakesh Mohan (2012). Trends in Adaptive Array Processing.
  6. "About Us". NORSAR. Retrieved 6 June 2013.
  7. "Improving IMS array processing". Norsar.no. Retrieved 2012-08-06.
  8. Krim, Hamid; Viberg, Mats (1995). Sensor Array Signal Processing: Two Decades Later.
  9. Zelinski, Rainer. "A microphone array with adaptive post-filtering for noise reduction in reverberant rooms." Acoustics, Speech, and Signal Processing, 1988. ICASSP-88., 1988 International Conference on. IEEE, 1988.
  10. Parsons, Aaron; Backer, Donald; Siemion, Andrew (September 12, 2008). "A Scalable Correlator Architecture Based on Modular FPGA Hardware, Reuseable Gateware, and Data Packetization". arXiv:0809.2266. doi:10.1086/593053.
  11. Harris, Andrew. Spectrometers for Heterodyne Detection.
  12. Jamil Raza, Albert-Jan Boonstra, Alle-Jan van der Veen (February 2002). "Spatial Filtering of RF Interference in Radio Astronomy". IEEE Signal Processing Letters 9 (12): 64–67. doi:10.1109/97.991140.
  13. Amir Leshem, Alle-Jan van der Veen (August 16, 2000). "Radio astronomical imaging in the presence of strong radio interference". IEEE Transactions on Information Theory 46 (5): 1730–1747. arXiv:astro-ph/0008239. doi:10.1109/18.857787.
  14. Amir Leshem, Albert-Jan Boonstra, Alle-Jan van der Veen (November 2000). "Multichannel Interference Mitigation Techniques in Radio Astronomy". Astrophysical Journal Supplement Series 131 (1): 355–373. arXiv:astro-ph/0005359. doi:10.1086/317360.
