
5. Systolic Adaptive Beamforming

241

Here δ' has been chosen such that the iteration of δ remains unchanged in the square-root-free algorithm. Making the simple scale transformation

(5.C.35)

it becomes clear that the factor g can be absorbed completely into the scale parameters δ and δ', and therefore need never appear explicitly in cell computations: it appears only as an initialization of δ at the first boundary cell, where it is entered with the associated input vector. Note again that only g, and not g^{1/2}, need ever be used in computations.
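To make the bookkeeping concrete, the following sketch is an illustrative Python reconstruction of a Gentleman-type square-root-free rotation, not the chapter's systolic cell code; the function and variable names are ours. It shows how a least-squares row weight g enters only as the initial value of the scale parameter δ, and that no square root of g (or of anything else) is ever taken:

```python
import numpy as np

def sqrt_free_givens_update(d, R, x, delta):
    """One row update of a square-root-free (Gentleman-type) Givens QR.

    R is unit upper triangular and d holds the squared row scales, so the
    conventional triangular factor would be diag(d)**0.5 @ R.  delta is
    the squared scale of the incoming row x; a least-squares row weight
    g enters the computation only here, never as g**0.5.
    """
    p = len(d)
    x = np.asarray(x, dtype=float).copy()
    for i in range(p):
        if delta == 0.0:          # incoming row fully absorbed
            break
        if x[i] == 0.0:
            continue
        d_new = d[i] + delta * x[i] ** 2
        c = d[i] / d_new          # cosine-related rotation parameter
        s = delta * x[i] / d_new  # sine-related rotation parameter
        xi = x[i]
        for j in range(i + 1, p):
            # rotate R row i and the remainder of x together
            x[j], R[i, j] = x[j] - xi * R[i, j], c * R[i, j] + s * x[j]
        d[i] = d_new
        delta = c * delta         # remaining weight of the rotated row
        x[i] = 0.0
    return d, R, delta
```

Accumulating rows x_k with weights g_k in this way maintains R.T @ diag(d) @ R equal to the weighted Gram matrix Σ g_k x_k x_kᵀ, which is the sense in which g is "absorbed" into δ and δ'.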

Acknowledgements. The authors are indebted to their colleagues at STL (Dr. P. Hargrave, Dr. C. Ward, Dr. J. Hudson, and Dr. P. Gosling) for their essential contribution to the collaboration which has resulted in much of the material presented in this chapter. The authors would also like to thank STL for providing Figs. 5.6 and 5.19-21.

Appendix 5.D Principal Symbols Used in this Chapter

A                          Blocking matrix for constrained least squares
A(n)                       Submatrix of unitary update matrix Q(n)
a(l)(n), ā(l)(n)           Auxiliary vectors employed in MVDR processing
B                          Exponential deweighting matrix
C                          Matrix of constraint vectors
C(J)                       Condition number of given matrix J
c(l)                       Constraint vector
c_i                        ith element of constraint vector c
c, c_i, c̄, ĉ               Cosine (or cosine-related) rotation parameter
D(n), D_0                  Diagonal matrix containing squares of R-matrix diagonal elements
d_i(n)                     Element of matrix D(n)
E(n)                       Cost function (otherwise known as "signal power" or "metric")
e(n), e(l)(n); e(t)        Residual error vector at time t_n; residual error at time t
e(n, n); e(n, n - 1)       a posteriori residual; a priori residual
e_j(n, n)                  jth a posteriori sub-residual
F                          Submatrix of weighting matrix G'
F(n)                       Update matrix for the matrix R(n)
(f(x)), f̂(x)               (Estimated) vector function of vector x
G(n), G'                   Weighting matrix for weighted least-squares
g(q), g                    Real-valued scalar weighting coefficient in weighted least-squares

242

T.J. Shepherd and J.G. McWhirter

g_ij                       Coefficient for Gram-Schmidt orthogonalization
H, H(n), Ĥ(n)              General metric-preserving transformation matrix
H (superscript)            Matrix Hermitian conjugation
I_n                        n × n identity matrix
L                          Submatrix of weighting matrix G'
L                          Number of independent constraints in MVDR beamformer
M(n)                       Estimated data covariance matrix at time t_n
M                          MVDR control bit
M_0                        Number of taps on broad-band beamformer
m, m'                      Vector of linear constraint gains
N                          Number of simultaneous constraints in constraint pre-processor
N_c                        Number of radial basis function centre vectors
n                          Index of t_n, time epoch of most recently received data
P(a, b)                    Vector projection operator, orthogonalizing given vectors a and b
p                          Total number of input data channels
Q_0, Q'(n), Q(n), Q̄(n), Q̂(n)   Unitary matrix
q_j(n), (q̂_j(n))           Orthogonalized (orthonormalized) vector
(R̄(n)), R(n)               (Unit) upper triangular matrix
r_j, r_ij, (r̄_j)           Element of matrix R, (R̄)
S(n)                       Submatrix of unitary matrix Q(n)
S                          Time-invariant upper trapezoidal matrix
s, s_i, s̄, ŝ               Sine (or sine-related) rotation parameter
(T̄), T                     Time-invariant (unit) upper triangular matrix
T (superscript)            Matrix transposition
t_n                        nth time epoch
U                          Time-independent rectangular matrix
u(n), u_j(n), u_G(n), ū(n) Part of data vector y(n) after rotation
V, V̄                       Time-independent rectangular matrix
v(n), v_j(n), v_G(n)       Part of data vector y(n) after rotation
w, w(n)                    Weight vector
X(n)                       Data matrix containing n data vector "snapshots"
x(t), (x(t_n), x(n))       Vector "snapshot" of data at time t (t_n)
x_i(t_n), x_i(n)           Vector of input data, from time t_1 to t_n, in channel i
x_i^c, x_i^t               Centre and training data vectors, respectively, in radial basis function algorithm
x_i, x_j, x_ni, x_i(t_n)   ith channel input datum
y(t_n), y(n)               Vector of primary channel input data, up to time t_n
y(t)                       Primary channel data value at time t
z, ẑ, z̄, ẑ(l)(n)           General vector output from time-independent or "frozen" network

α(n), ᾱ(n), α_j(n)         Rationalized residual
β                          Exponential data deweighting factor
Γ(n)                       Matrix of Gram-Schmidt orthogonalization coefficients
γ, γ_i                     Specific element of Q matrix (multiplier in square-root algorithm)
δ, δ'                      Multiplier in square-root-free algorithm
ε_m                        Residual vector in radial basis function algorithm
η_i                        Unit basis vector
Λ(g)                       Metric tensor
λ_1, (λ_N)                 Maximum (minimum) singular value of matrix
λ_i                        Accumulation parameter in MVDR processor
μ, μ_i, μ(l)               Linear constraint gain
ξ(n)                       Estimated primary/auxiliary channel cross-correlation vector at time t_n
Φ(n)                       Error criterion for constrained system
Φ_p(n)                     Extended data matrix in Gram-Schmidt orthogonalization
φ(n)                       Specific vector from column of Q matrix
φ(r)                       Radial basis function
φ_i(n)                     General data column of matrix Φ_p(n)
Ψ(g)                       Generalized plane rotation matrix
ψ(n)                       Specific vector from row of Q matrix


6. Two-Dimensional Adaptive Beamforming: Algorithms and Their Implementation

T.V. Ho and J. Litva

With 19 Figures

Adaptive beamforming technology has been actively discussed in the literature for at least two decades, and is now, increasingly, finding applications in radar, sonar, and communications systems [6.1,2]. The reason for all of this interest lies in the ability of adaptive arrays to automatically steer nulls in the direction of interfering sources. Recently, with the rapid growth of VLSI technology, and particularly with the advent of systolic arrays, the use of VLSI array processors in adaptive beamforming has become a subject of considerable interest [6.3-5]. However, most of the work that has been carried out in the past has been concentrated in the area of adaptive beamforming with linear array antennas, i.e., the one-dimensional (1D) case. Since most antenna arrays are, in practice, planar arrays, the focus of future developments in adaptive beamforming must start to shift to the two-dimensional (2D) case.

Two-dimensional adaptive beamforming is rarely discussed in the literature. It is thought that there are two reasons for the lack of 2D results. First, there is the general impression among workers in the field that the principles underlying the 2D case are a simple extension of the 1D case [6.6,7]. Secondly, it is felt that the only way around the complexity that is inherent to the 2D case is by means of subarraying, i.e., by reducing the degrees-of-freedom [6.8]. As a result, very little work has been carried out to optimize 2D adaptive beamforming techniques.

In the case of conventional adaptive beamforming, the computational overhead is proportional to the number of degrees-of-freedom. For a 2D array with L rows and M columns, the number of degrees-of-freedom is given by (LM - 1). A fully adaptive array is one in which every element of the array is individually controlled adaptively [6.9]. A partially adaptive array is one in which elements are controlled in groups (the subarray approach), or in which only certain elements, called auxiliary elements, are made controllable.

The reason for reducing an array's Degrees-Of-Freedom (DOF) usually revolves around cost. If one uses conventional processors, it is not possible to achieve full dimensionality with arrays consisting of thousands of elements. First, one has the problem of designing a processor that has sufficient speed and accuracy to meet the computational requirements of a fully functioning 2D array. Next, one has to ensure that the processor is not prohibitively expensive. If conventional techniques are used, these two requirements are usually mutually exclusive.

Springer Series in Information Sciences, Vol. 25

Radar Array Processing

Eds.: S. Haykin J. Litva T. J. Shepherd

© Springer-Verlag Berlin, Heidelberg 1993


A number of configurations have been considered for reducing the dimensionality of the array while maintaining as much control as possible over the size of a given aperture. Two that come to mind immediately are: (1) grouping physically contiguous elements to form what are termed "simple arrays", and (2) grouping larger numbers of elements together to form "super arrays". In the latter case, the elements in each group may not necessarily be consistent with the array's natural lattice geometry. An example of a simple array is one consisting of the rows and columns of a 2D array. If 1D processing is first carried out on the L rows, the number of DOF is (L - 1), and if it is carried out on the M columns as well, there is an additional (M - 1) DOF. In the case of a super array, the subarrays are formed after phase-shifting at the element level takes place. When the outputs of the subarrays are digitized, they constitute a super array consisting of elements with patterns corresponding to the subarray patterns, which are all steered to the same direction. Both of these arrangements preserve the homogeneous antenna front-end hardware, which is very desirable for the application of highly integrated modules.

One of the more attractive techniques or configurations studied is that of beam-space adaptive beamforming. The technique involves transforming a large array of N elements into an equivalent small array of J + 1 elements, where J is the number of jammers present [6.10]. In this technique, J auxiliary beams are formed using the whole array. The auxiliary beams are pointed at the unwanted signals, one beam for each signal. The outputs of the auxiliary beams, together with the main beam signal, form the adaptive transformed array. This can result in a considerable reduction in the computational overhead.
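As a rough illustration of the bookkeeping behind beam-space processing (this is not the notation of [6.10]; the narrowband line-array model, half-wavelength spacing, and function names here are our own assumptions), the transformation from N element channels to J + 1 beam channels might be sketched as:

```python
import numpy as np

def steering_vector(N, theta, d=0.5):
    """Plane-wave steering vector for an N-element uniform line array,
    element spacing d wavelengths (an illustrative narrowband model)."""
    n = np.arange(N)
    return np.exp(2j * np.pi * d * n * np.sin(theta))

def beam_space_transform(N, theta_main, theta_jammers):
    """Columns: the main beam plus one auxiliary beam per jammer
    direction, giving an N x (J + 1) transformation matrix."""
    cols = [steering_vector(N, theta_main)] + \
           [steering_vector(N, th) for th in theta_jammers]
    return np.stack(cols, axis=1) / N

# A 64-element array facing 2 jammers adapts in only 3 channels:
T = beam_space_transform(64, 0.0, [0.3, -0.5])
x = np.random.default_rng(1).standard_normal(64) + 0j  # one snapshot
z = T.conj().T @ x   # transformed (J + 1)-channel snapshot
```

The adaptive algorithm then operates on the short vectors z rather than on the full N-element snapshots, which is the source of the computational saving described above.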

One of the major problems with the super array approach for reducing an array's dimensionality is that of grating lobes. Grating lobes are generated due to the spacing of the super arrays, thereby creating spurious notches which result in blind directions for antennas. For multiple jammers the number of blind directions very soon becomes intolerably high. Grating lobes can be avoided by irregular spacing of the super array, which results in irregular subarrays. This reduces the homogeneity of the antenna front-end hardware.

One of the disadvantages of applying 1D beamforming to a 2D array is that the auxiliary beams (eigenbeams) that are used to cancel interferers are fan beams. If the interferers are located on either side of the main beam, 1D adaptive beamforming can be carried out successfully. On the other hand, when the interferers are located above or below the main beam in the plane of the fan-shaped eigenbeams, main beam cancellation can take place. This may lead to a degradation in the Signal-to-Interference Ratio (SIR) rather than to its enhancement, thereby defeating the purpose of applying adaptive beamforming to an array antenna in the first place. One way of overcoming this problem is by carrying out 1D adaptive beamforming in both planes, and then choosing the result that gives the best SIR. In this latter case, the processing overhead is proportional to (L + M - 2).
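The degree-of-freedom counts quoted in this introduction can be collected in one place (a trivial illustrative helper; the sample values of L, M, and J are our own):

```python
# Degrees-of-freedom (DOF) bookkeeping for the schemes discussed above.

def dof_full(L, M):
    """Fully adaptive L x M planar array: every element controlled."""
    return L * M - 1

def dof_row_column(L, M):
    """1D processing applied to the L rows and then the M columns."""
    return (L - 1) + (M - 1)

def dof_beam_space(J):
    """Beam-space: J auxiliary beams plus the main beam channel."""
    return J + 1

L, M, J = 16, 16, 3
print(dof_full(L, M))        # 255
print(dof_row_column(L, M))  # 30
print(dof_beam_space(J))     # 4
```

Even for a modest 16 x 16 array, the fully adaptive processor must handle roughly an order of magnitude more adaptive channels than either reduced scheme, which is the cost trade-off motivating the subarray and beam-space approaches.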

The problem of main beam cancellation can be totally circumvented, in the majority of instances, by employing 2D adaptive beamforming. One of the advantages of 2D beamforming lies in the fact that the eigenbeams are now pencil beams, which have higher gain than the fan beams that are used for 1D adaptive beamforming. Therefore, adaptive nulling of interferers in the antenna sidelobe region can take place without any cancellation of the main beam. Also, 2D adaptive nulling will result in the formation of deeper nulls for cancellation of interferers than in the case of 1D adaptive beamforming. The deeper nulls in the 2D adaptive beamforming case come about as a result of lowering the noise floor due to the higher gain of the pencil beams. Ultimately, it is the noise floor that sets the limit on the null depth that can be achieved.

It should be pointed out that the null depth that is achieved during beamforming does not depend, per se, on the algorithm used, i.e., whether it is an extended 1D algorithm or a 2D algorithm. What is important is that the full dimensionality of the array is preserved. In the case of the extended 1D algorithm, one does not have the advantage of being able to visualize the eigenbeams, as in the case of the 2D algorithm which will be described in this chapter. As well, it is expected that the computational overhead for the extended 1D processor will be considerably greater than that for the 2D processor.

The optimum configuration, then, for adaptive nulling is the fully adaptive array, which, by definition, suppresses the interference by applying some matrix operation to all array element outputs. In theory, this provides the necessary DOF to lower all of the deterministic sidelobes to any arbitrary level, as well as nulling out unwanted signals. This is the approach that is being followed here. An adaptive beamformer based on a three-dimensional systolic array will be introduced, which has the potential for processing data from a fully adaptive 2D array.

6.1 Arrangement of the Chapter

The chapter is presented in four sections. The first section, which is now almost concluded, gave a short introduction to adaptive beamforming and has indicated a persistent need to derive a solution to the 2D problem. Section 6.2 reviews some of the key contributions in adaptive beamforming, such as the Howells-Applebaum algorithm, the LMS (least mean square error) algorithm, the SMI (sample matrix inversion) algorithm, and others which are referred to as classical adaptive beamforming techniques. The QRD-LS (QR decomposition least-squares) algorithm, a modern adaptive beamforming approach, and its systolic array implementation are introduced. Two-dimensional adaptive beamforming techniques are derived in Sect. 6.3. The 2D versions of the LMS algorithm and the Howells-Applebaum algorithm are developed in this section. The QRD-LS algorithm and its systolic array implementation for 2D adaptive beamforming are then presented. The concept of 2D eigenbeams is used to interpret the performance of the 2D adaptive nulling algorithm. Finally, simulation studies are given in Sect. 6.4 to demonstrate the performance of the adaptive beamforming algorithms developed in Sect. 6.3.