74 | Introduction to Direction-of-Arrival Estimation
Similarly, the one-dimensional selection matrices for the subarrays in the y direction are given by
$$\mathbf{J}_{l_y}^{(M_y)} = \begin{bmatrix} \mathbf{0} & \mathbf{I}_{M_{\mathrm{sub}_y}} & \mathbf{0} \end{bmatrix}, \quad 1 \le l_y \le L_y \tag{4.16}$$
Suppose that the data matrix χ(tn) is obtained as derived in Section 3.3.2. One-dimensional spatial smoothing is then applied along the rows of χ(tn) in the x direction and along the columns of χ(tn) in the y direction, using the one-dimensional selection matrices defined in (4.15) and (4.16), respectively. For the L = Lx × Ly rectangular subarrays, the two-dimensional selection matrix for each subarray can be obtained from the (Lx + Ly) one-dimensional selection matrices of the linear arrays as
$$\mathbf{J}_{l_x,l_y} = \mathbf{J}_{l_y}^{(M_y)} \otimes \mathbf{J}_{l_x}^{(M_x)}, \quad 1 \le l_x \le L_x,\ 1 \le l_y \le L_y \tag{4.17}$$
where ⊗ denotes the Kronecker product (see Appendix, Section A.1). The spatially smoothed data matrix is then given, similar to (4.13), as
$$\mathbf{X}_{ss} = \begin{bmatrix} \mathbf{J}_{1,1}\mathbf{X} & \mathbf{J}_{1,2}\mathbf{X} & \cdots & \mathbf{J}_{1,L_y}\mathbf{X} & \mathbf{J}_{2,1}\mathbf{X} & \cdots & \mathbf{J}_{L_x,L_y}\mathbf{X} \end{bmatrix} \in \mathbb{C}^{M_{\mathrm{sub}} \times NL} \tag{4.18}$$
This two-dimensional spatial smoothing scheme shall be used as the preprocessing step for two-dimensional ESPRIT algorithms.
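As a concrete illustration, the selection-matrix construction of (4.16)–(4.18) can be sketched in NumPy. The array sizes, function names, and the use of random data below are illustrative assumptions, not from the text:

```python
import numpy as np

def selection_matrix(M, M_sub, l):
    """One-dimensional selection matrix J_l = [0 | I_{M_sub} | 0] that
    picks elements l-1 .. l-1+M_sub-1 of an M-element ULA (l is 1-based)."""
    J = np.zeros((M_sub, M))
    J[:, l - 1:l - 1 + M_sub] = np.eye(M_sub)
    return J

def smooth_2d(X, Mx, My, Mx_sub, My_sub):
    """Two-dimensional spatial smoothing of an (Mx*My) x N data matrix X.
    Subarray selection matrices follow (4.17): J_{lx,ly} = J_ly ⊗ J_lx,
    and the smoothed matrix stacks the blocks J_{lx,ly} X as in (4.18)."""
    Lx, Ly = Mx - Mx_sub + 1, My - My_sub + 1
    blocks = []
    for lx in range(1, Lx + 1):
        for ly in range(1, Ly + 1):
            Jx = selection_matrix(Mx, Mx_sub, lx)
            Jy = selection_matrix(My, My_sub, ly)
            blocks.append(np.kron(Jy, Jx) @ X)   # (Mx_sub*My_sub) x N block
    return np.hstack(blocks)                     # (Mx_sub*My_sub) x (N*Lx*Ly)

# Example: 4x4 URA, 3x3 subarrays (L = 2x2 = 4), 10 snapshots
X = np.random.randn(16, 10) + 1j * np.random.randn(16, 10)
X_ss = smooth_2d(X, Mx=4, My=4, Mx_sub=3, My_sub=3)
print(X_ss.shape)  # → (9, 40)
```

Note that `np.kron(Jy, Jx)` mirrors the ordering in (4.17), which assumes the URA elements are stacked x-first in χ(tn).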
4.3 Model Order Estimators
As discussed before, in the real world the number of signals impinging on the array is not known and has to be estimated from the data obtained from the array. Since all DOA estimation algorithms require a priori knowledge of this number, techniques that estimate the number of signals need to be run before the DOA estimation algorithms. Such techniques are usually called model order estimation techniques. A few of them are described in the following sections.
Preprocessing Schemes and Model Order Estimation | 75
4.3.1 Classical Technique
This is the simplest and most direct way of estimating the number of signals, where the estimate is determined from the data covariance matrix [8]. It is based on the fact that the number of signals impinging on an array equals the number of “large” eigenvalues of the data covariance matrix. Hence, an estimate of the number of sources can be obtained from an estimate of the number of repeated small eigenvalues other than the large ones. As defined before, the spatial data covariance matrix is given as
$$\mathbf{R}_{xx} = E\left\{\mathbf{x}(t)\mathbf{x}^H(t)\right\} = \mathbf{A}\mathbf{R}_{ss}\mathbf{A}^H + \sigma_N^2 \mathbf{I}_M \tag{4.19}$$
Theoretically, the M − d smallest eigenvalues of Rxx are all equal to the noise variance σ²N. In other words, if the multiplicity k of this smallest eigenvalue is found, an estimate of the number of signals, d, can be obtained directly as

$$d = M - k \tag{4.20}$$
In practice, however, when the covariance matrix Rxx is estimated from a finite set of data samples, the smallest eigenvalues corresponding to the noise power will not be identical. Instead, they appear as a closely spaced cluster, with the variance of their spread decreasing as the number of data samples increases. Therefore, various statistical methods have been proposed to test for the equality or closeness of eigenvalues in order to determine the multiplicity k or, directly, the number d. Two such criteria are discussed in the following sections.
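A minimal sketch of this classical test follows, assuming a simple heuristic threshold (`tol`) to separate the “large” eigenvalues from the small noise cluster; the threshold choice and the simulation parameters are illustrative assumptions, not from the text:

```python
import numpy as np

def classical_order_estimate(X, tol=5.0):
    """Estimate the number of signals d from data X (M x N) by counting
    eigenvalues of the sample covariance that are small relative to the
    smallest (noise-level) eigenvalue, then applying d = M - k as in (4.20).
    `tol` is a heuristic threshold, not part of the original method."""
    M, N = X.shape
    Rxx = X @ X.conj().T / N                       # sample covariance estimate
    lam = np.sort(np.linalg.eigvalsh(Rxx))[::-1]   # eigenvalues, descending
    noise_level = lam[-1]                          # smallest ~ sigma^2
    k = np.sum(lam <= tol * noise_level)           # multiplicity of small cluster
    return M - k

# Example: two plane waves on an 8-element ULA plus weak noise
M, N, d_true = 8, 500, 2
rng = np.random.default_rng(0)
a = lambda mu: np.exp(1j * mu * np.arange(M))      # ULA steering vector
A = np.column_stack([a(0.5), a(-1.2)])
S = (rng.standard_normal((d_true, N)) + 1j * rng.standard_normal((d_true, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
print(classical_order_estimate(X))  # → 2
```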
4.3.2 Minimum Descriptive Length Criterion
The minimum descriptive length (MDL) criterion is a standard technique for estimating the number of signals [9]. In the MDL-based approach, the number of signals is obtained as the value of d ∈ {0, 1, …, M − 1} that minimizes the following criterion:
$$\mathrm{MDL}(d) = -\log\left(\frac{\left(\prod_{i=d+1}^{M}\lambda_i\right)^{\frac{1}{M-d}}}{\frac{1}{M-d}\sum_{i=d+1}^{M}\lambda_i}\right)^{(M-d)N} + \frac{1}{2}\,p(k)\log N \tag{4.21}$$
where M denotes the number of elements, N is the number of snapshots, λi are the eigenvalues of the estimated data covariance matrix, and p(k) is a penalty function that depends on the number of independent parameters in the model. Depending on the properties of the covariance matrix and the preprocessing scheme applied, the penalty function is given by
$$p(k) = \begin{cases} d(2M - d + 1) & \text{for real matrices} \\ d(M + d + 1) & \text{for real FB averaging} \\ d(2M - d) & \text{for complex matrices} \\ 0.5\,d(2M - d + 1) & \text{for complex FB averaging} \end{cases} \tag{4.22}$$
The MDL criterion comprises two parts: the first is the log-likelihood term in (4.21), and the second is a penalty factor that punishes an increase in the model order. The model order is obtained by varying d and finding the value that minimizes the MDL criterion. When d is increased beyond the actual number of signals, an additional eigenvalue and its corresponding eigenvector are included in the signal part of the data model; as a result, the first part of the MDL criterion decreases while the second part increases. The model order d that minimizes (4.21) is therefore a compromise between fit to the data model and the penalty factor.
If the impinging signals are mutually coherent, spatial smoothing or another preprocessing technique has to be applied to decorrelate the signals before applying (4.21). In this case, the number of elements M refers to the size of the subarrays, and the number of snapshots in (4.21) is the number obtained after applying smoothing and/or forward-backward averaging.
For a ULA with M elements and N snapshots, the maximum number of signals that can be estimated by this criterion is given by
$$d_{\max} = \min\{M, N\} \tag{4.23}$$
For a URA with Mx × My elements and N snapshots, the maximum possible model order obtained by this criterion is given by
$$d_{\max} = \min\{M_x M_y, N\} \tag{4.24}$$
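Following (4.21) and (4.22), the MDL criterion can be sketched as below. The eigenvalues are assumed to be already computed and sorted in descending order, and the default penalty d(2M − d) corresponds to the complex-matrix case of (4.22); these defaults and the example eigenvalues are assumptions for illustration:

```python
import numpy as np

def mdl(eigs, N, penalty=None):
    """MDL(d) for d = 0..M-1, following (4.21).

    eigs:    eigenvalues of the estimated covariance matrix, descending.
    N:       number of snapshots.
    penalty: p(k) as a function of d; defaults to d*(2M - d),
             the complex-matrix case of (4.22)."""
    M = len(eigs)
    lam = np.asarray(eigs, dtype=float)
    if penalty is None:
        penalty = lambda d: d * (2 * M - d)
    vals = []
    for d in range(M):
        noise = lam[d:]                          # the M - d smallest eigenvalues
        geo = np.exp(np.mean(np.log(noise)))     # geometric mean
        arith = np.mean(noise)                   # arithmetic mean
        loglik = -(M - d) * N * np.log(geo / arith)
        vals.append(loglik + 0.5 * penalty(d) * np.log(N))
    return int(np.argmin(vals))

# Example: two dominant eigenvalues over a flat noise floor
eigs = [8.0, 6.0, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]
print(mdl(eigs, N=250))  # → 2
```

The log-likelihood term is zero whenever the assumed noise eigenvalues are exactly equal (geometric mean equals arithmetic mean), so the penalty term alone then selects the smallest consistent d.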
The MDL criterion is simulated in two experiments with a 10-element ULA at an SNR of 0 dB. Results are averaged over 50 trials (datasets) with 250 snapshots per trial. In the first experiment, two uncorrelated signals impinge on the array; in the second, four uncorrelated signals impinge on the array. As seen in Figure 4.4, the MDL function reaches its minimum at the true numbers of signals, 2 and 4, respectively. In other words, the MDL method successfully estimates the number of signals impinging on the array.
4.3.3 Akaike Information Theoretic Criterion
Similar to MDL, the Akaike information theoretic criterion (AIC) was proposed for estimating the number of signals, or model order [10, 11]. The number of signals d is determined as the argument that minimizes the following criterion:
$$\mathrm{AIC}(d) = -\log\left(\frac{\left(\prod_{i=d+1}^{M}\lambda_i\right)^{\frac{1}{M-d}}}{\frac{1}{M-d}\sum_{i=d+1}^{M}\lambda_i}\right)^{(M-d)N} + p(k) \tag{4.25}$$
Here again, the first term is derived directly from a log-likelihood function, and the second term is the penalty factor added by the AIC.
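Under the same assumptions as for the MDL sketch (eigenvalues in descending order, complex-matrix penalty p(k) = d(2M − d)), the AIC of (4.25) differs only in how the penalty enters: without the (1/2) log N weighting. A hedged sketch:

```python
import numpy as np

def aic(eigs, N):
    """AIC(d) for d = 0..M-1, following (4.25); the penalty
    p(k) = d*(2M - d) (complex-matrix case) is an assumption here."""
    M = len(eigs)
    lam = np.asarray(eigs, dtype=float)
    vals = []
    for d in range(M):
        noise = lam[d:]                          # the M - d smallest eigenvalues
        geo = np.exp(np.mean(np.log(noise)))     # geometric mean
        arith = np.mean(noise)                   # arithmetic mean
        vals.append(-(M - d) * N * np.log(geo / arith) + d * (2 * M - d))
    return int(np.argmin(vals))

# Same example eigenvalues as for MDL: the AIC also picks d = 2 here
eigs = [8.0, 6.0, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]
print(aic(eigs, N=250))  # → 2
```

Because the AIC penalty does not grow with N, it penalizes overfitting less than MDL at large sample sizes, which is why AIC is known to overestimate the model order more often.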
[Figure 4.4 here: MDL function (dB) versus number of signals, 0 through 9, for panels (a) and (b).]
Figure 4.4 Histograms of the MDL for estimating the number of signals. (a) Two signals impinging and (b) four signals impinging.