S. Haykin, J. Litva, T.J. Shepherd (eds.), Radar Array Processing, Springer Series in Information Sciences 25, Springer-Verlag.

262

T.V. Ho and J. Litva

 

not alter the least-squares minimization process, because

||Q(n)e(n)||² = ||e(n)||² .   (6.34)

It follows from (6.32b) that the condition for a minimum in the residual error is

u(n) − R(n)w(n) = 0   (6.35)

and hence

||e(n)||² = ||v(n)||² .   (6.36)

Equation (6.35) therefore defines the least-squares solution for the weight vector, and can be easily solved by back substitution, which is much easier than solving the Wiener-Hopf equation described in Sect. 6.2.2. Equation (6.35) is also much better conditioned, since the condition number of R(n) is given by

C(R(n)) = C(Q(n)X(n)) = C(X(n)) .

This property follows directly from the fact that Q(n) is unitary.
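This invariance of the condition number under the unitary transformation Q(n) is easy to verify numerically. The following sketch (our illustration, not from the text; NumPy, with an arbitrary random data matrix standing in for X(n)) checks that C(R(n)) = C(X(n)) after a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8 x 3 complex data matrix X(n).
X = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))

# QR decomposition: Q has orthonormal columns, R is upper triangular.
Q, R = np.linalg.qr(X)

# The 2-norm condition number is the ratio of extreme singular values;
# multiplication by a unitary matrix leaves the singular values unchanged.
assert np.allclose(np.linalg.cond(X), np.linalg.cond(R))
```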

The triangularization process of (6.32) can be carried out using either Householder transformations [6.34] or Givens rotations [6.35]. The Givens rotation method, however, has been found to be particularly suitable for adaptive antenna applications, since the triangularization process is recursively updated as each new row of data enters the computation. Details of triangularization using Givens rotations are widely discussed in [6.3, 4, 29].
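The recursive character of the Givens approach can be illustrated with a small sketch (our own real-valued illustration, not code from the references): given the current triangular factor R, a single new data row is absorbed by a sequence of plane rotations, one per diagonal element.

```python
import numpy as np

def givens_update(R, x):
    """Fold one new data row x into upper-triangular R by Givens rotations."""
    R = R.copy()
    x = x.astype(float).copy()
    for i in range(R.shape[0]):
        a, b = R[i, i], x[i]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        # Rotate row i of R together with the new row to annihilate x[i].
        Ri, xi = R[i, i:].copy(), x[i:].copy()
        R[i, i:] = c * Ri + s * xi
        x[i:] = -s * Ri + c * xi
    return R

# Example: build R from 4 rows, then fold in a 5th row recursively.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
_, R4 = np.linalg.qr(X[:4])
R5 = givens_update(R4, X[4])

# Up to row signs, the updated factor matches a full QR of all 5 rows.
_, Rfull = np.linalg.qr(X)
assert np.allclose(np.abs(R5), np.abs(Rfull), atol=1e-8)
```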

b) Systolic Array Implementation

The systolic array configuration that is well suited for solving the least-squares problem of (6.1), based on the use of the QR decomposition and Givens rotations, is shown in Fig. 6.5. This configuration was proposed by McWhirter in [6.30] and is a modification of an earlier processor [6.29]. The systolic array's structure takes the form of a triangle and consists of three types of processing elements (PEs): (1) boundary cells, (2) internal cells, and (3) a final processing cell. The first two types of cells perform Givens rotations and store processed output data, while the final processing cell is simply a multiplier. It is to be noted that the vectors u(n) and v(n) are stored in the internal cells located on the right-hand side of the array; the boundary cells and the other internal cells store the upper triangular matrix R(n), and the final processing cell computes the residual error vector e(n). In addition, it can be seen that the flow of input data is arranged in time-skew format to compensate for propagation delays. These are a consequence of the manner in which the data are processed as they pass through the PEs. The array is controlled by means of a global, synchronous clock. In addition, delays are added to the signals which move from one boundary cell to another; these delays are indicated by the black dots in Fig. 6.5.

6. Two-Dimensional Adaptive Beamforming

263

[Fig. 6.5 shows the triangular systolic array: input data flow from an array of antenna elements into boundary cells and internal cells, whose Givens-rotation update equations are given in the figure.]

Fig. 6.5. Triangular systolic implementation for adaptive beamforming

If we make use of the square-root-free algorithm [6.36], the upper triangular matrix R(n) can be factorized as follows:

R(n) = D^{1/2}(n)K(n) ,   (6.37)

where

D^{1/2}(n) = diag{R(n)}   (6.38)

and K(n) is an upper triangular matrix with its diagonal elements equal to one. In this version of the algorithm, the elements of the matrix K(n) are stored in both the boundary cells and the internal cells of Fig. 6.5, and those of the diagonal matrix D^{1/2}(n) are stored in the boundary cells alone. The computations that are performed by the processing cells, when using the square-root-free algorithm, are given in Fig. 6.5.
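As an illustration of (6.37, 38) (our sketch, not from the text), the factorization of a triangular factor R into a diagonal D^{1/2} and a unit-diagonal K can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 3))
_, R = np.linalg.qr(X)

# Factor R = D^{1/2} K as in (6.37): D^{1/2} holds the diagonal of R,
# and K is upper triangular with unit diagonal, as in (6.38).
D_half = np.diag(np.diag(R))
K = np.linalg.solve(D_half, R)   # K = D^{-1/2} R

assert np.allclose(np.diag(K), 1.0)
assert np.allclose(D_half @ K, R)
```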


It is worth noting that the systolic structure of Fig. 6.5 is used primarily to compute the residual error e(n). However, it can also be used to extract the weight vector w(n). This can be achieved at any time after initialization of the upper triangular matrix R(n) takes place (i.e., n ≥ 2M), with no additional control of the array required. If, for example, at time t the update of R(n) is suppressed by setting α = 0, with the input vectors becoming y(n) = 0, x_i(n) = 1, and x_j(n) = 0 if j ≠ i, then (6.1) becomes e(n) = −w_i(n), from which the tap weights follow directly. This is the so-called weight flushing method for extracting the weight vector [6.3]. It does not require the use of a linear systolic section as proposed in [6.29]. In fact, weight flushing can easily be carried out using the QRD-LS systolic structure implemented with the square-root-free form of the Givens rotation algorithm. It should be noted in Fig. 6.5 that when α is set equal to 0, all elements of the matrix R(n) are frozen, i.e., unadapted. At the moment that α is reset to 1, the adaptation process resumes, and is not affected thereafter by the weight flushing procedure that had previously been carried out.
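Numerically, the weight vector obtained in this way is simply the solution of the triangular system (6.35). The following sketch (our illustration; back substitution on R(n) and u(n) = Q^H(n)y(n), rather than a simulation of the systolic array) shows the idea:

```python
import numpy as np

def back_substitute(R, u):
    """Solve the triangular system R w = u of (6.35) by back substitution."""
    p = len(u)
    w = np.zeros(p, dtype=complex)
    for i in range(p - 1, -1, -1):
        w[i] = (u[i] - R[i, i + 1:] @ w[i + 1:]) / R[i, i]
    return w

# Simulated least-squares problem X w ~ y (arbitrary data, our assumption).
rng = np.random.default_rng(3)
X = rng.standard_normal((50, 4)) + 1j * rng.standard_normal((50, 4))
w_true = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = X @ w_true + 0.01 * (rng.standard_normal(50) + 1j * rng.standard_normal(50))

# QR decomposition gives R(n) and the rotated reference u(n) = Q^H y.
Q, R = np.linalg.qr(X)
u = Q.conj().T @ y

w = back_substitute(R, u)
assert np.allclose(w, np.linalg.lstsq(X, y, rcond=None)[0])
```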

6.3 2D Adaptive Beamforming Algorithm and Implementation

6.3.1 Introduction

The techniques described here for implementing 2D adaptive beamforming are considered to be new. Two-dimensional adaptive beamforming is rarely discussed in the literature; when it is, it is usually treated as a 1D problem. A 2D adaptive beamforming technique based on the QRD-LS algorithm was recently introduced by Ho and Litva [6.37]. Also only recently, a discrete form of a 2D adaptive LMS algorithm was proposed in [6.38] for use in image processing.

Virtually all of the past work has concentrated on the 1D adaptive beamforming problem. One of the earliest discussions in the literature of 2D adaptive beamforming was provided by Chapman [6.9]; there was little discussion beyond this early paper. Recently, though, the 2D problem was taken up again, albeit in a cursory manner [6.39, 40]. The workers in [6.9] proposed a solution that was simply an extension of the 1D method, based on lexicographic ordering of the adaptive weight matrix. In [6.9], the 2D adaptive beamforming problem was further analyzed by using the subarray transformation method to reduce the complexity of computation and implementation. Also, the simple case was treated where the adaptation process takes place on the rows and columns of the array, i.e., the contiguous row elements are combined to form subarrays, and the column elements are combined in a similar manner.

In this section, we will first develop stand-alone 2D adaptive beamforming algorithms, and then show their relationship to the 1D algorithms. Both the classical and the modern approaches, namely (a) the 2D LMS and 2D Applebaum algorithms, and (b) the 2D QRD-LS algorithm, will be developed and presented. In the latter case, a design for a systolic array implementation will also be presented.

[Fig. 6.6a, b shows the configuration of a 2D array: a planar arrangement of elements indexed by row and column.]

Fig. 6.6a, b. Configuration of a 2D array

The configuration for a 2D antenna array with dimensions L × M is given in Fig. 6.6a. The angles of arrival of signals impinging onto the array are described by the polar angle θ and the azimuthal angle φ. The far-field signal received at array element lm is

x_lm(n) = Σ_{k=1}^{K} A_k e^{j(2π/λ)[(l−1)d_x cos α_k + (m−1)d_y cos β_k + ψ_k]} + v_lm(n)   (6.39)

for l = 1, 2, ..., L, and m = 1, 2, ..., M, where d_x, d_y are the element spacings along the rows and columns, respectively, and cos α_k = sin θ_k cos φ_k and cos β_k = sin θ_k sin φ_k, in which θ_k and φ_k are the elevation and azimuthal angles of arrival. Also in (6.39), v_lm(n) is the receiver noise component, assumed to be Gaussian with zero mean and variance σ².

At a glance, (6.39) indicates that each snapshot for the 2D array is a 2D array of numbers. Therefore, the data matrix X(n) is a 3D matrix. It follows from the 1D case that the error signal for a 2D array can be expressed as

e(n) = y(n) − Σ_{l=1}^{L} Σ_{m=1}^{M} w_lm(n) x_lm(n) .   (6.40)

A statement of the optimization procedure required for implementing 2D adaptive beamforming is as follows. Given a primary vector y(n) = {y(n)} and a 3D data matrix X(n) consisting of data matrices X(n) = {x_lm(n)}, estimate the adaptive weight matrix W(n), consisting of weight elements w_lm(n), which minimizes the residual power ||e(n)||² at the output of the beamformer. Note that the primary signal y(n) described in (6.40) is obtained by using either a high-gain antenna (feed horn) or a primary planar array.
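A minimal sketch of the 2D residual computation of (6.40), with arbitrary random data standing in for a real snapshot (our illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(4)
L, M = 4, 6   # hypothetical 4 x 6 planar array

# One snapshot: primary signal y(n) and the L x M element outputs x_lm(n),
# together with an L x M weight matrix w_lm(n).
y = rng.standard_normal() + 1j * rng.standard_normal()
X = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))
W = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))

# Residual of (6.40): e(n) = y(n) - sum_l sum_m w_lm(n) x_lm(n),
# written once as an explicit double sum and once vectorized.
e_loop = y
for l in range(L):
    for m in range(M):
        e_loop -= W[l, m] * X[l, m]

assert np.allclose(e_loop, y - np.sum(W * X))
```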

6.3.2 Classical Approaches

a) 2D LMS Algorithm

Using the 1D LMS algorithm as a guide, the 2D LMS algorithm can be defined in the following way. Estimate the weight matrix W(n) of an L × M array in such a manner as to minimize the least-mean-square of the output signal e(n) defined by (6.40). The LMS estimate for (6.40) is given by

E[|e(n)|²] = E[|y(n) − Σ_{l=1}^{L} Σ_{m=1}^{M} w_lm(n) x_lm(n)|²]

= E[{y(n) − Σ_{l=1}^{L} Σ_{m=1}^{M} w_lm(n) x_lm(n)} × {y*(n) − Σ_{l=1}^{L} Σ_{m=1}^{M} w*_lm(n) x*_lm(n)}]

= E[|y(n)|²] − Σ_{l=1}^{L} Σ_{m=1}^{M} w_lm(n) E[y*(n) x_lm(n)] − Σ_{l=1}^{L} Σ_{m=1}^{M} w*_lm(n) E[y(n) x*_lm(n)]

+ Σ_{l=1}^{L} Σ_{m=1}^{M} Σ_{p=1}^{L} Σ_{q=1}^{M} w_lm(n) w*_pq(n) E[x_lm(n) x*_pq(n)] .   (6.41)


In discrete form, the weight matrix can be updated as follows:

W(n + 1) = W(n) − (μ/2) ∇(n)   (6.42a)

and the adaptive weight element w_lm(n) is of the form

w_lm(n + 1) = w_lm(n) − (μ/2) ∇_lm(n) ,   (6.42b)

where ∇(n) is a 2D instantaneous gradient matrix defined by

∇(n) = {∇_lm(n)} = {∂E[|e(n)|²]/∂w_lm} .   (6.43)

Moreover, it follows from (6.40, 41) that

∂E[|e(n)|²]/∂w_lm = −2E[e(n) x*_lm(n)] .   (6.44a)

Thus,

∇_lm(n) = −2E[e(n) x*_lm(n)] .   (6.44b)

Substitution of (6.44b) and (6.40) into (6.42b) yields

w_lm(n + 1) = w_lm(n) + μ(p_lm − Σ_{p=1}^{L} Σ_{q=1}^{M} w_pq(n) r_{p−l,q−m}) ,   (6.45)

in which

p_lm = E[y(n) x*_lm(n)]   (6.46a)

and

r_{p−l,q−m} = E[x*_lm(n) x_pq(n)] .   (6.46b)
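The stochastic form of the recursion (6.42), with the expectation in (6.44b) replaced by its instantaneous estimate (as in the usual LMS algorithm), can be sketched as follows. The scenario (a known weight matrix generating the primary signal, white complex data, and the step size) is our own illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
L, M, N = 3, 4, 2000
mu = 0.01   # step size, assumed to satisfy the stability bound

# Hypothetical training data: the primary signal is a weighted array output
# plus a small amount of receiver noise.
W_true = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))
W = np.zeros((L, M), dtype=complex)

for n in range(N):
    X = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))
    y = np.sum(W_true * X) + 0.01 * (rng.standard_normal() + 1j * rng.standard_normal())
    e = y - np.sum(W * X)        # residual of (6.40)
    W += mu * e * X.conj()       # (6.42) with the instantaneous gradient -2 e(n) X*(n)

assert np.allclose(W, W_true, atol=0.1)
```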

It follows from (6.45) that the optimal solution for the weight elements satisfies

Σ_{p=1}^{L} Σ_{q=1}^{M} w°_pq r_{p−l,q−m} = p_lm ,   (6.47)

which is the 2D Wiener-Hopf equation of the first kind [6.41].

It should be noted that p_lm and r_{p−l,q−m} described in (6.46, 47) are the elements of the cross-correlation matrix P and the correlation matrix Φ, respectively, i.e.,

P = {p_lm}   (6.48)


and

Φ = [ [Φ_0]       [Φ_1]       ...  [Φ_{L−1}]
      [Φ_{−1}]    [Φ_0]       ...  [Φ_{L−2}]
        ...         ...       ...     ...
      [Φ_{−L+1}]  [Φ_{−L+2}]  ...  [Φ_0]     ] .   (6.49)

It can be seen that Φ is of block Toeplitz structure [6.41] with dimensions LM × LM, and the partitions Φ_i are Toeplitz matrices of dimensions M × M, where the index i is computed as i = p − l, i.e.,

Φ_i = Φ_{p−l} = {r_{i,q−m}} = {E[x*_lm(n) x_pq(n)]} .   (6.50)

In matrix form, (6.47) becomes

[ [Φ_0]       ...  [Φ_{L−1}] ] [ w°_1 ]     [ p_1 ]
[ [Φ_{−1}]    ...  [Φ_{L−2}] ] [ w°_2 ]     [ p_2 ]
[    ...      ...     ...    ] [  ...  ]  =  [ ...  ]   (6.51)
[ [Φ_{−L+1}]  ...  [Φ_0]     ] [ w°_L ]     [ p_L ]

where the w°_l's denote the row vectors of the optimum weight matrix W°, and the p_l's are the row vectors of the cross-correlation matrix P, i.e.,

W°^T = [w°_1, w°_2, ..., w°_L]   (6.52a)

and

P^T = [p_1, p_2, ..., p_L]   (6.52b)

with

w°_l = [w°_l1, w°_l2, ..., w°_lM]^T   (6.52c)

and

p_l = [p_l1, p_l2, ..., p_lM]^T .   (6.52d)

As can be seen in (6.42), the adaptive weight elements w_lm(n) are computed by operating on all elements of the 2D antenna array, as denoted by the gradient ∇_lm(n) in (6.44). The discrete form of the 2D LMS algorithm, as given by (6.42), was recently proposed in [6.38]. This algorithm has been found to be cost-effective and useful in image processing, especially in data compression and image enhancement applications.


As in the case of the 1D LMS algorithm, the step size parameter μ in (6.42) must be chosen between 0 and 2/Tr(Φ). However, from (6.49), it is observed that

Tr(Φ) = L Tr(Φ_0) .   (6.53)

Hence,

0 < μ < 2/[L Tr(Φ_0)] .   (6.54)
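The trace identity (6.53) can be checked on a sample covariance matrix of stationary snapshots (our sketch; the array size and number of snapshots are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(9)
L, M, N = 3, 4, 5000

# Sample covariance of the flattened (lexicographically ordered) snapshots.
snaps = rng.standard_normal((N, L * M)) + 1j * rng.standard_normal((N, L * M))
Phi = snaps.conj().T @ snaps / N

# Tr(Phi) = L * Tr(Phi_0): for stationary snapshots each of the L diagonal
# blocks of Phi estimates the same M x M matrix Phi_0.
Phi0 = Phi[:M, :M]
assert np.isclose(np.trace(Phi).real, L * np.trace(Phi0).real, rtol=0.1)

# Resulting step-size bound of the text.
mu_max = 2 / np.trace(Phi).real
assert mu_max > 0
```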

b) Relationship with 1D LMS Algorithm

Equation (6.42), which gives the weight updates for the 2D case, can be derived directly from the corresponding expression for the 1D case. We proceed by converting the weight matrix W(n) and the data matrix X(n) to LM × 1 column vectors by lexicographic ordering. These are denoted by w_v(n) and x_v(n), respectively, and given by

w_v(n) = [w_11(n), w_12(n), ..., w_1,M(n), w_2,1(n), ..., w_L,M(n)]^T   (6.55)

and

x_v(n) = [x_11(n), x_12(n), ..., x_1,M(n), x_2,1(n), ..., x_L,M(n)]^T .   (6.56)

It follows from (6.9) that (6.40) becomes

e(n) = y(n) − x_v^T(n) w_v(n)   (6.57)

and that

w_v(n + 1) = w_v(n) − (μ/2) ∇_v(n) ,   (6.58)

in which

∇_v(n) = −2E[x*_v(n) e(n)]   (6.59)

is the instantaneous gradient vector. Thus, the optimum weight vector w°_v satisfies the equation

Φ_v w°_v = p_v ,   (6.60)

where Φ_v and p_v are, respectively, the covariance matrix of dimensions LM × LM and the correlation vector of dimensions LM × 1, given by

Φ_v = E[x*_v(n) x_v^T(n)]   (6.61)

and

p_v = E[y(n) x*_v(n)] .   (6.62)


It is interesting to note that the matrix Φ_v in (6.61) is mathematically equivalent to the matrix Φ in (6.49); the weight vector of (6.60) is likewise mathematically equivalent to the weight matrix of (6.51). It follows, then, that the analysis procedures and results for the 1D LMS algorithm can be applied to the 2D LMS algorithm.
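This equivalence between the 2D quantities and their lexicographically ordered 1D counterparts can be verified directly (our sketch, with arbitrary random data; NumPy's row-major `ravel` matches the ordering of (6.55, 56)):

```python
import numpy as np

rng = np.random.default_rng(6)
L, M = 3, 5

X = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))
W = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))
y = rng.standard_normal() + 1j * rng.standard_normal()

# Lexicographic (row-major) ordering as in (6.55, 56).
w_v = W.ravel()
x_v = X.ravel()

# The 2D residual of (6.40) equals the 1D residual of (6.57).
e_2d = y - np.sum(W * X)
e_1d = y - x_v @ w_v
assert np.allclose(e_2d, e_1d)

# The instantaneous 2D gradient of (6.44b) reshapes to the 1D gradient of (6.59).
grad_2d = -2 * e_2d * X.conj()
grad_1d = -2 * e_1d * x_v.conj()
assert np.allclose(grad_2d.ravel(), grad_1d)
```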

c) 2D Howells-Applebaum Algorithm

It follows from the derivation of the 2D LMS algorithm that the adaptive weights of the 2D Howells-Applebaum algorithm can be expressed in recursive form as

W(n + 1) = W(n) − (μ/2) ∇_s(n) ,   (6.63)

where

∇_s(n) = −2αS*(θ_d, φ_d) + 2e(n)X*(n) ,   (6.64)

in which S(θ_d, φ_d) is the steering matrix in the direction of the desired signal, denoted by the spherical angles θ_d and φ_d,

e(n) = Σ_{l=1}^{L} Σ_{m=1}^{M} w_lm(n) x_lm(n)   (6.65a)

and

X(n) = {x_lm(n)}   (6.65b)

are the combined output signal and the received data signals at the 2D array, respectively.

Adaptive weight elements w_lm(n) are then updated in the form

w_lm(n + 1) = w_lm(n) + μ[α s*_lm(θ_d, φ_d) − e(n) x*_lm(n)] ,   (6.66)

where

E[e(n) x*_lm(n)] = Σ_{p=1}^{L} Σ_{q=1}^{M} w_pq(n) r_{p−l,q−m} .   (6.67)

Hence, the optimum weights w°_lm can be found by solving the equation

Σ_{p=1}^{L} Σ_{q=1}^{M} w°_pq r_{p−l,q−m} = α s*_lm(θ_d, φ_d) ,   (6.68)


where r_{p−l,q−m} denote elements of the covariance matrix Φ of the receiver signals, which has the form of (6.49).

In matrix form, the optimum adaptive weight matrix W° is found by solving

[ [Φ_0]       ...  [Φ_{L−1}] ] [ w°_1 ]        [ s*_1(θ_d, φ_d) ]
[ [Φ_{−1}]    ...  [Φ_{L−2}] ] [ w°_2 ]        [ s*_2(θ_d, φ_d) ]
[    ...      ...     ...    ] [  ...  ]  =  α  [      ...        ]   (6.69)
[ [Φ_{−L+1}]  ...  [Φ_0]     ] [ w°_L ]        [ s*_L(θ_d, φ_d) ]

 

where the w°_l and s*_l(θ_d, φ_d) are row vectors of the adaptive weight matrix W° and of the steering matrix S*(θ_d, φ_d), respectively, i.e.,

W°^T = [w°_1, w°_2, ..., w°_L]   (6.70)

and

S*^T(θ_d, φ_d) = [s*_1(θ_d, φ_d), s*_2(θ_d, φ_d), ..., s*_L(θ_d, φ_d)] .   (6.71)

The relationship between the 2D and 1D Applebaum algorithms can be derived in the same manner as for the case of the LMS algorithm. Using the result derived in the last section, we substitute an LM × 1 lexicographically ordered form of the steering vector, αs*_v(θ_d, φ_d), for the cross-correlation vector p_v in (6.60).
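A stochastic sketch of the 2D Applebaum recursion (6.63) in a white-noise environment (our illustration; the steering matrix, step size, and noise model are arbitrary choices) shows the weights settling toward the quiescent beam αS*, which is the solution of (6.68) when the covariance is white:

```python
import numpy as np

rng = np.random.default_rng(7)
L, M = 4, 4
mu, alpha = 0.001, 1.0

# Hypothetical steering matrix toward the desired direction (broadside here,
# so all steering elements are unity).
S = np.ones((L, M), dtype=complex)

W = np.zeros((L, M), dtype=complex)
for n in range(8000):
    # Noise-only environment: unit-variance receiver noise per element.
    X = (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))) / np.sqrt(2)
    e = np.sum(W * X)                              # combined output, (6.65a)
    W += mu * (alpha * S.conj() - e * X.conj())    # (6.63): W <- W - (mu/2) grad_s

# In white noise the optimum weights are proportional to S* (quiescent beam).
err = np.linalg.norm(W - alpha * S.conj())
assert err < 0.25 * np.linalg.norm(alpha * S.conj())
```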

d) 2D Eigenvector Beam

The SVD of the covariance matrix Φ of (6.2.38b) is given by

Φ = U Σ U^H ,   (6.72)

where U is a unitary matrix of dimensions LM × LM, whose columns are eigenvectors of the covariance Φ, i.e.,

U = [u_1, u_2, ..., u_LM] ,   (6.73)

and Σ is a diagonal matrix of dimensions LM × LM,

Σ = diag{λ_1, λ_2, ..., λ_LM} .   (6.74)

Equation (6.72) can be written in the form

Φ = Σ_{i=1}^{LM} λ_i u_i u_i^H .   (6.75)
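The decomposition (6.72) and its rank-one expansion can be illustrated on a sample covariance matrix (our sketch; for a Hermitian covariance the eigendecomposition coincides with the SVD):

```python
import numpy as np

rng = np.random.default_rng(8)
LM = 6   # flattened array size L*M

# Sample covariance of lexicographically ordered snapshots x_v(n), as in (6.61).
snapshots = rng.standard_normal((200, LM)) + 1j * rng.standard_normal((200, LM))
Phi = snapshots.conj().T @ snapshots / len(snapshots)

# Eigendecomposition of the Hermitian covariance: Phi = U Sigma U^H.
lam, U = np.linalg.eigh(Phi)
assert np.allclose(U @ np.diag(lam) @ U.conj().T, Phi)

# Equivalent rank-one expansion: Phi = sum_i lambda_i u_i u_i^H.
Phi_sum = sum(l * np.outer(u, u.conj()) for l, u in zip(lam, U.T))
assert np.allclose(Phi_sum, Phi)
```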