- Contents
- Preface
- 1.1 Smart Antenna Architecture
- 1.2 Overview of This Book
- 1.3 Notations
- 2.1 Single Transmit Antenna
- 2.1.1 Directivity and Gain
- 2.1.2 Radiation Pattern
- 2.1.3 Equivalent Resonant Circuits and Bandwidth
- 2.2 Single Receive Antenna
- 2.3 Antenna Array
- 2.4 Conclusion
- Reference
- 3.1 Introduction
- 3.2 Data Model
- 3.2.1 Uniform Linear Array (ULA)
- 3.3 Centro-Symmetric Sensor Arrays
- 3.3.1 Uniform Linear Array
- 3.3.2 Uniform Rectangular Array (URA)
- 3.3.3 Covariance Matrices
- 3.4 Beamforming Techniques
- 3.4.1 Conventional Beamformer
- 3.4.2 Capon's Beamformer
- 3.4.3 Linear Prediction
- 3.5 Maximum Likelihood Techniques
- 3.6 Subspace-Based Techniques
- 3.6.1 Concept of Subspaces
- 3.6.2 MUSIC
- 3.6.3 Minimum Norm
- 3.6.4 ESPRIT
- 3.7 Conclusion
- References
- 4.1 Introduction
- 4.2 Preprocessing Schemes
- 4.2.2 Spatial Smoothing
- 4.3 Model Order Estimators
- 4.3.1 Classical Technique
- 4.3.2 Minimum Descriptive Length Criterion
- 4.3.3 Akaike Information Theoretic Criterion
- 4.4 Conclusion
- References
- 5.1 Introduction
- 5.2 Basic Principle
- 5.2.1 Signal and Data Model
- 5.2.2 Signal Subspace Estimation
- 5.2.3 Estimation of the Subspace Rotating Operator
- 5.3 Standard ESPRIT
- 5.3.1 Signal Subspace Estimation
- 5.3.2 Solution of Invariance Equation
- 5.3.3 Spatial Frequency and DOA Estimation
- 5.4 Real-Valued Transformation
- 5.5 Unitary ESPRIT in Element Space
- 5.6 Beamspace Transformation
- 5.6.1 DFT Beamspace Invariance Structure
- 5.6.2 DFT Beamspace in a Reduced Dimension
- 5.7 Unitary ESPRIT in DFT Beamspace
- 5.8 Conclusion
- References
- 6.1 Introduction
- 6.2 Performance Analysis
- 6.2.1 Standard ESPRIT
- 6.3 Comparative Analysis
- 6.4 Discussions
- 6.5 Conclusion
- References
- 7.1 Summary
- 7.2 Advanced Topics on DOA Estimations
- References
- Appendix
- A.1 Kronecker Product
- A.2 Special Vectors and Matrix Notations
- A.3 FLOPS
- List of Abbreviations
- About the Authors
- Index
Appendix
A.1 Kronecker Product
Kronecker products, also known as direct products or tensor products, are used frequently in this book, especially when working with 2D arrays.
Given two matrices $A = [a_{ij}] \in \mathbb{C}^{m \times n}$ and $B = [b_{ij}] \in \mathbb{C}^{p \times q}$, the Kronecker product of $A$ and $B$ is defined as the $mp \times nq$ partitioned matrix

$$
A \otimes B =
\begin{bmatrix}
a_{11}B & a_{12}B & \cdots & a_{1n}B \\
a_{21}B & a_{22}B & \cdots & a_{2n}B \\
\vdots  & \vdots  & \ddots & \vdots  \\
a_{m1}B & a_{m2}B & \cdots & a_{mn}B
\end{bmatrix}
\tag{A.1}
$$
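The block structure of (A.1) is easy to check numerically. The following sketch (not from the book; it assumes NumPy is available) builds a small Kronecker product with `np.kron` and verifies that each block equals $a_{ij}B$:

```python
import numpy as np

# Small example of (A.1): each entry a_ij of A is replaced by the block a_ij * B.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

K = np.kron(A, B)  # 4x4 result: [[1*B, 2*B], [3*B, 4*B]]

# The top-left block is a_11 * B, the top-right block is a_12 * B.
assert np.array_equal(K[:2, :2], 1 * B)
assert np.array_equal(K[:2, 2:], 2 * B)
```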
Very often, Kronecker products are used with the $\mathrm{vec}\{\cdot\}$ operator. Here, $\mathrm{vec}\{A\}$ denotes a vector-valued function that maps an $m \times n$ matrix $A$ into an $mn$-dimensional column vector by stacking the columns of the matrix $A$. Given $\Upsilon_1 = [y_{1,ij}] \in \mathbb{C}^{y_1 \times y_2}$, $\Upsilon_2 = [y_{2,ij}] \in \mathbb{C}^{y_2 \times y_3}$, and $\Upsilon_3 = [y_{3,ij}] \in \mathbb{C}^{y_3 \times y_4}$, the following important identity relates the $\mathrm{vec}\{\cdot\}$ operator with the Kronecker product:

$$
\mathrm{vec}\{\Upsilon_1 \Upsilon_2 \Upsilon_3\} = \left(\Upsilon_3^{T} \otimes \Upsilon_1\right) \mathrm{vec}\{\Upsilon_2\}
\tag{A.2}
$$
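Identity (A.2) can be verified numerically on random matrices of conformable sizes. The sketch below (an illustration assuming NumPy; `vec` is a helper defined here, not a library routine) uses Fortran-order reshaping, which stacks columns exactly as the $\mathrm{vec}\{\cdot\}$ operator does:

```python
import numpy as np

rng = np.random.default_rng(0)

# Conformable shapes y1 x y2, y2 x y3, y3 x y4, as in the text.
Y1 = rng.standard_normal((2, 3))
Y2 = rng.standard_normal((3, 4))
Y3 = rng.standard_normal((4, 5))

def vec(M):
    """Stack the columns of M into one column vector (column-major order)."""
    return M.reshape(-1, 1, order="F")

lhs = vec(Y1 @ Y2 @ Y3)                  # vec{Y1 Y2 Y3}
rhs = np.kron(Y3.T, Y1) @ vec(Y2)        # (Y3^T kron Y1) vec{Y2}
assert np.allclose(lhs, rhs)             # identity (A.2) holds numerically
```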
Introduction to Direction-of-Arrival Estimation
A.2 Special Vectors and Matrix Notations
This section gives a brief summary of the notations used in this book. For any positive integer $p$, $I_p$ denotes the $p \times p$ identity matrix and $\Pi_p$ denotes the $p \times p$ exchange matrix with ones on its antidiagonal and zeros elsewhere:
$$
\Pi_p =
\begin{bmatrix}
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 1 & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 1 & \cdots & 0 & 0 \\
1 & 0 & \cdots & 0 & 0
\end{bmatrix}
\in \mathbb{R}^{p \times p}
\tag{A.3}
$$
$\Pi_p$ is a symmetric matrix and has the property that $\Pi_p^2 = I_p$. It is not difficult to show that premultiplication of a matrix by $\Pi_p$ reverses the order of its rows, whereas postmultiplication of a matrix by $\Pi_p$ reverses the order of its columns.
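Both properties are quick to confirm in code. This sketch (illustrative, assuming NumPy) builds $\Pi_p$ with `np.fliplr` applied to an identity matrix and checks the row- and column-reversal behavior:

```python
import numpy as np

p = 4
Pi = np.fliplr(np.eye(p, dtype=int))  # exchange matrix: ones on the antidiagonal

# Pi is symmetric and involutory: Pi @ Pi = I.
assert np.array_equal(Pi, Pi.T)
assert np.array_equal(Pi @ Pi, np.eye(p, dtype=int))

A = np.arange(12).reshape(4, 3)
Pi3 = np.fliplr(np.eye(3, dtype=int))
assert np.array_equal(Pi @ A, A[::-1, :])   # premultiplication reverses rows
assert np.array_equal(A @ Pi3, A[:, ::-1])  # postmultiplication reverses columns
```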
A diagonal matrix Φd with the diagonal elements φ1, φ2, …, φd is denoted as
$$
\Phi_d =
\begin{bmatrix}
\varphi_1 & 0 & \cdots & 0 & 0 \\
0 & \varphi_2 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & \varphi_{d-1} & 0 \\
0 & 0 & \cdots & 0 & \varphi_d
\end{bmatrix}
\in \mathbb{C}^{d \times d} \text{ or } \mathbb{R}^{d \times d}
\tag{A.4}
$$
A.3 FLOPS
The easiest way to measure computational effort is to count the number of floating-point operations (FLOPS) an algorithm performs. A FLOP is a single addition, subtraction, multiplication, or division. As a rough model of computational expenditure, each FLOP is assumed to take the same amount of computational time. Thus, algorithms with higher FLOP counts usually take longer to run than algorithms with lower FLOP counts.
For example, consider the inner product of two column vectors u and v with five elements each:
$$
\sigma = u^{T} v = u_1 v_1 + u_2 v_2 + u_3 v_3 + u_4 v_4 + u_5 v_5
\tag{A.5}
$$
The computation of $u^T v$ appears to take five multiplications and four additions, or nine FLOPS in total. One more FLOP is hidden when the inner product is written in the preceding mathematical form. Hence, if $u$ and $v$ are of length $n$, then an inner product of $u$ and $v$ takes $2n$ FLOPS. The product of an $m \times r$ matrix and an $r \times n$ matrix involves $mn$ inner products of length $r$, for a total of $2mnr$ FLOPS. If the two matrices are square (i.e., $m = n = r$), then the matrix-matrix product takes $2n^3$, or $O(n^3)$, FLOPS. In discussing the number of FLOPS required by matrix operations, one is usually concerned with only a single significant digit in the work estimate. Greater precision is not useful, because the actual execution time can be greatly influenced by implementation details in the software and the design of the computer hardware.
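The $2n$ and $2mnr$ estimates above can be captured in a pair of small helper functions (an illustrative sketch; the names `inner_product_flops` and `matmul_flops` are hypothetical, not from the book):

```python
def inner_product_flops(n):
    """FLOP estimate for an inner product of two length-n vectors:
    n multiplications plus n additions (including the 'hidden' one),
    giving the usual 2n count."""
    return 2 * n

def matmul_flops(m, r, n):
    """FLOP estimate for an (m x r) times (r x n) matrix product:
    m*n inner products, each of length r."""
    return m * n * inner_product_flops(r)

# The 5-element inner product in (A.5): 2n = 10 FLOPS.
assert inner_product_flops(5) == 10
# Square case m = n = r gives the familiar 2n^3 estimate.
assert matmul_flops(100, 100, 100) == 2 * 100**3
```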