
5. Systolic Adaptive Beamforming

and

X(n) = [x_1(n), \tilde{X}(n)] .        (5.131)

The residual vector in (5.125) may now be written as

e(n) = x_1(n) w_1 + \tilde{X}(n) \tilde{w}        (5.132)

and the constraint as

c_1 w_1 + \tilde{c}^T \tilde{w} = \mu .        (5.133)

Thus, assuming c_1 ≠ 0, w_1 may be eliminated to express e(n) in the canonical form

 

e(n) = X'(n) \tilde{w} + y'(n) ,        (5.134)

where

X'(n) = \tilde{X}(n) - \frac{1}{c_1} x_1(n) \tilde{c}^T        (5.135)

and

y'(n) = \frac{\mu}{c_1} x_1(n) .        (5.136)

Since the reduced weight vector \tilde{w} is not subject to any further constraint, it follows from (5.134) that the norm of e(n) can now be minimized using a canonical processor of order p. The auxiliary channel inputs take the form of successive rows x'^T(t_i) of the transformed data matrix X'(n), while the primary channel input comprises the corresponding elements y'(t_i) of the vector y'(n). These inputs may be expressed in the form

 

x'^T(t_i) = \tilde{x}^T(t_i) - \frac{x_1(t_i)}{c_1} \tilde{c}^T        (5.137)

and

 

y'(t_i) = \frac{\mu}{c_1} x_1(t_i) ,        (5.138)

 

where, by analogy with (5.131), we have defined

 

x^T(t_i) = [x_1(t_i), \tilde{x}^T(t_i)] .        (5.139)

The a posteriori constrained residual may thus be written in the (unconstrained) canonical form

e(t_n) = x'^T(t_n) \tilde{w}(n) + y'(t_n) ,        (5.140)


where \tilde{w}(n) denotes the optimum (least-squares) value of the weight vector \tilde{w} in (5.134). The transformed data vector [x'^T(t_i), y'(t_i)] may be obtained from the corresponding input vector x^T(t_i) using the type of processor network represented by the signal flow graph in Fig. 5.12. This serves as a simple pre-processor which converts the linearly constrained problem into an equivalent canonical problem as defined by (5.134). However, it is clearly seen that the function of the pre-processor network in Fig. 5.12 is identical to that of a single row of the "frozen" triangular network specified in Fig. 5.7. Hence, the entire linearly constrained least-squares minimization may be carried out in terms of conventional Givens rotations using a single canonical processor of order p + 1, the constraint vector being stored in the top row of cells which operate in their non-adaptive, systolic mode. Alternatively, by comparing the constraint pre-processor cells in Fig. 5.12 with those in Fig. 5.10, it is clear that the constrained minimization may also be carried out using a canonical processor of order p + 1 based on square-root-free Givens rotations. In this case, however, the top row of non-adaptive cells must store the "normalized" vector [1, \tilde{c}^T/c_1].
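The effect of this single-constraint pre-processor is easily checked numerically. The sketch below is an illustration only: NumPy's dense least-squares routine stands in for the systolic QR array, the data are arbitrary, and all variable names are our own.

import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 5
X = rng.standard_normal((n, p))        # data matrix X(n)
c = rng.standard_normal(p)             # constraint vector with c[0] != 0
mu = 1.0                               # linear gain: c^T w = mu

# Partition as in (5.131) and form the transformed data (5.135), (5.136)
x1, Xt = X[:, 0], X[:, 1:]
c1, ct = c[0], c[1:]
X_prime = Xt - np.outer(x1, ct) / c1
y_prime = (mu / c1) * x1

# Unconstrained canonical problem (5.134): minimise ||X' w~ + y'||
w_red, *_ = np.linalg.lstsq(X_prime, -y_prime, rcond=None)

# Eliminated weight recovered from the constraint: w1 = (mu - c~^T w~)/c1
w1 = (mu - ct @ w_red) / c1
w = np.concatenate(([w1], w_red))

print(np.isclose(c @ w, mu))           # constraint satisfied
print(np.linalg.norm(X @ w))           # minimised residual norm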

The constraint pre-processor concept described in this subsection will obviously break down if the leading element c_1 of the constraint vector c is zero. However, the problem may be alleviated by generalizing the method in a very simple manner, and this technique is discussed in Appendix 5.B.

Fig. 5.12. System with single-constraint pre-processor


5.7.2 Multiple Constraint Pre-Processor

Now consider the case of multiple (simultaneous) linear constraints which may be expressed in matrix form as

C w = m ,        (5.141)

where

C^T = [c_1, c_2, \ldots, c_N] ,        (5.142)

C is an N x p matrix of constraint vectors, and

m^T = [\mu_1, \mu_2, \ldots, \mu_N]        (5.143)

is a vector of linear gains. It is assumed throughout the following that the principal N × N submatrix of C is nonsingular. The constraint matrix may then be rendered trapezoidal by the process of orthogonal decomposition of its first N columns. In particular, there exists an N × N unitary matrix Q_0 such that

Q_0 [C, m] = [C', m'] ,        (5.144)

where

C' = [T, V]        (5.145)

and T is an N × N upper triangular matrix. In effect, multiplication of C by Q_0 performs a QR decomposition of the principal N × N submatrix, and the constraint equation (5.141) thus becomes

C' w = m' .        (5.146)

Let the weight vector be partitioned as follows:

w^T = [w_a^T, w_b^T] ,        (5.147)

where w_a and w_b are N-element and (p − N)-element vectors, respectively. It is then clear from (5.145) that the constraint equation (5.146) may be written in the form

 

T w_a + V w_b = m'        (5.148)

and, since T is nonsingular, we obtain the expression

 

w_a = -T^{-1} V w_b + T^{-1} m' .        (5.149)

When the data matrix X(n) is also conformably partitioned into the n × N matrix X_a(n) and the n × (p − N) matrix X_b(n) as defined by

X(n) = [X_a(n), X_b(n)] ,        (5.150)


the constrained residual vector e(n) in (5.125) may be written in the form

e(n) = X_a(n) w_a + X_b(n) w_b .        (5.151)

Substituting for w_a from (5.149) into (5.151) then yields the expression

e(n) = X'(n) w_b + y'(n) ,        (5.152)

where

X'(n) = X_b(n) - X_a(n) T^{-1} V        (5.153)

and

 

y'(n) = X_a(n) T^{-1} m' .        (5.154)

Since the reduced weight vector w_b is not subject to any further constraints, it follows from (5.152) that minimization of the norm of e(n) may be achieved using a canonical processor of order (p − N + 1). The auxiliary channel inputs are taken as successive rows x'^T(t_i) of the transformed matrix X'(n) and the primary channel inputs are successive elements y'(t_i) of the vector y'(n). These input vectors may be expressed in the form

x'^T(t_i) = x_b^T(t_i) - x_a^T(t_i) T^{-1} V        (5.155)

and

y'(t_i) = x_a^T(t_i) T^{-1} m' ,        (5.156)

where x_a^T(t_i) and x_b^T(t_i) denote the ith rows of the matrices X_a(n) and X_b(n), respectively, i.e.,

x^T(t_i) = [x_a^T(t_i), x_b^T(t_i)] .        (5.157)

The a posteriori constrained residual may thus be written in the (unconstrained) canonical form

e(t_n) = x'^T(t_n) w_b(n) + y'(t_n) ,        (5.158)

where w_b(n) is the optimum (least-squares) weight vector derived from (5.152). The data transformation defined in (5.155) and (5.156) may be implemented very efficiently using the type of pre-processor array illustrated by means of the signal flow graph in Fig. 5.13. It constitutes a fixed trapezoidal network of the type defined in Fig. 5.9 and is organized to store the transformed constraint matrix C' = [T, V] and the corresponding transformed gain vector m' as indicated. From the discussion in Sect. 5.4.2, it can easily be shown that if a data vector [x^T(t_i), 0] is input to this array from the top as indicated in Fig. 5.13, then

the appropriate transformed data vector [x'^T(t_i), y'(t_i)] emerges as the corresponding output vector from below.

Fig. 5.13. System with multiple-constraint pre-processor

Consider the combination of a fixed trapezoidal network as illustrated in Fig. 5.13 and a canonical adaptive least-squares processor of order p − N + 1 based on conventional Givens rotations. This simply constitutes a canonical least-squares processor of order p + 1, the top N rows of which store the trapezoidal matrix [C', m'] and operate systolically in their frozen mode. It remains to point out that the trapezoidal matrix [C', m'] may be obtained from the original rectangular constraint matrix [C, m] by initially using the canonical processor of order p + 1 in its fully adaptive mode. Successive rows of [C, m] are input as data to the adaptive network, which performs the appropriate QR decomposition quite naturally and stores the resulting matrix [C', m'] within the top N rows. These pre-processor rows are then assigned to their frozen mode of operation while the lower (p − N) remain adaptive. The sequence of data vectors [x^T(t_i), 0] is input to the order p + 1 triangular processor in the usual manner, but the parameter \gamma_{in} is initialized to unity on the (N + 1)th row. Clearly, then, the triangular systolic array in Fig. 5.4 constitutes a very powerful processing structure capable of implementing a general linearly constrained least-squares minimization.
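As a numerical cross-check on this multiple-constraint reduction, the following sketch performs the QR decomposition (5.144-145) of the principal submatrix and the transformation (5.153-154) with ordinary NumPy routines standing in for the trapezoidal array; the data and names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n, p, N = 30, 6, 2
X = rng.standard_normal((n, p))
C = rng.standard_normal((N, p))      # N x p constraint matrix
m = rng.standard_normal(N)           # gain vector: C w = m

# QR decomposition of the principal N x N submatrix (5.144-145)
Q, T = np.linalg.qr(C[:, :N])        # principal submatrix = Q T
Cp = Q.T @ C                         # C' = [T, V]
mp = Q.T @ m                         # m'
V = Cp[:, N:]

TinvV = np.linalg.solve(T, V)
Tinvm = np.linalg.solve(T, mp)

Xa, Xb = X[:, :N], X[:, N:]
X_prime = Xb - Xa @ TinvV            # (5.153)
y_prime = Xa @ Tinvm                 # (5.154)

wb, *_ = np.linalg.lstsq(X_prime, -y_prime, rcond=None)
wa = -TinvV @ wb + Tinvm             # (5.149)
w = np.concatenate([wa, wb])

print(np.allclose(C @ w, m))         # all N constraints satisfied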

Similarly, it can be shown that the square-root-free systolic array in Fig. 5.5 may be applied to this more general computation. In this case, the top N rows of


a (p + 1) x (p + 1) triangular array are used adaptively to perform an orthogonal transformation of the form

 

Q_0 [C, m] = D_0^{1/2} [\bar{C}', \bar{m}'] ,        (5.159)

where D_0 is a diagonal matrix,

 

\bar{C}' = [\bar{T}, \bar{V}] ,        (5.160)

and \bar{T} is an N × N unit upper triangular matrix. The matrices D_0 and \bar{C}' and the vector \bar{m}' are then stored within the top N rows of the array in the usual manner. The constraint equation may now be written in the partitioned form

\bar{T} w_a + \bar{V} w_b = \bar{m}'        (5.161)

and so the discussion above applies equally well to the square-root-free case if \bar{T}, \bar{V}, and \bar{m}' are substituted for T, V, and m' in all equations. As described in Sect. 5.6.2, the top N rows of a square-root-free processor array may be used (in frozen mode) to apply the corresponding transformation to each input data vector [x^T(t_i), 0]. In effect, they serve as a constraint pre-processor for the remaining p − N + 1 rows which operate adaptively and constitute a canonical least-squares processor. The parameter \delta must, of course, be initialized to unity on input to the top row of this adaptive sub-array.
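For illustration, the diagonal-times-unit-triangular factorization maintained by the square-root-free cells can be extracted offline from an ordinary QR decomposition; the sketch below does this in NumPy and is not the recursive cell-level algorithm itself. All names are ours.

import numpy as np

rng = np.random.default_rng(2)
N, p = 2, 5
C = rng.standard_normal((N, p))
m = rng.standard_normal(N)

Q, _ = np.linalg.qr(C[:, :N])          # Q_0^H = Q^T for real data
Cpm = Q.T @ np.c_[C, m]                # [C', m'] with leading N x N block T
d = np.diag(Cpm[:, :N])                # diag(T), playing the role of D_0^{1/2}
Cbar_mbar = Cpm / d[:, None]           # [C-bar', m-bar'] as in (5.159)

print(np.allclose(np.diag(Cbar_mbar[:, :N]), 1.0))  # T-bar is unit upper triangular
print(np.allclose(d[:, None] * Cbar_mbar, Cpm))     # D_0^{1/2} [C-bar', m-bar'] = [C', m']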

The constraint pre-processor technique described in this subsection requires the principal N × N submatrix of C to be nonsingular; otherwise at least one of the leading diagonal elements of the transformed constraint matrix C' will be zero. We note that for a linear antenna array of uniformly spaced elements [5.11], the principal N × N submatrix C_{[N]} of C is proportional to a matrix of the Vandermonde type,

C_{[N]} \propto \begin{bmatrix} 1 & \sigma_1 & \sigma_1^2 & \cdots & \sigma_1^{N-1} \\ 1 & \sigma_2 & \sigma_2^2 & \cdots & \sigma_2^{N-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \sigma_N & \sigma_N^2 & \cdots & \sigma_N^{N-1} \end{bmatrix} ,        (5.162)

where

\sigma_j = \exp(i \omega_j)        (5.163)

for N phases ω_1, ω_2, ..., ω_N and, provided that no two ω_j are equal (i.e., they correspond to different look directions), these matrices are known to be nonsingular. However, the singularity problem could arise in more general circumstances, and Appendix 5.B is devoted to showing how it may be avoided in practice.
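As a concrete illustration (with any element-spacing factor absorbed into the phases ω_j, and with assumed numerical values), the following sketch builds the steering-vector constraint matrix of (5.162-163) for a uniform linear array and confirms that its principal submatrix is nonsingular for distinct look directions.

import numpy as np

p, N = 8, 3
omega = np.array([0.3, 0.9, 1.7])        # distinct phases: different look directions
sigma = np.exp(1j * omega)               # (5.163)
C = sigma[:, None] ** np.arange(p)       # rows are [1, sigma_j, sigma_j^2, ...]

C_N = C[:, :N]                           # principal N x N (Vandermonde) submatrix
print(abs(np.linalg.det(C_N)))           # nonzero, hence nonsingular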

5.7.3 Generalized Sidelobe Canceller

The constraint pre-processor technique described in this section is a special case of the Generalized Sidelobe Canceller (GSLC) proposed by Jim and Griffiths


[5.29, 30]. It has the advantage of leading to a particularly simple processor architecture and can be implemented using the top row or rows of a triangular QR decomposition array. However, it is not the only way of mapping the linearly constrained least-squares problem onto a canonical least-squares processor (for an alternative, though less efficient, systolic constraint pre-processor, see Kalson and Yao [5.31]), and some other useful techniques will now be discussed. (A detailed review of recent work in this field has been presented by Tseng and Griffiths [5.32].)

The Generalized Sidelobe Canceller concept is illustrated schematically in Fig. 5.14. The objective, as before, is to minimize the norm of the residual vector e(n) in (5.125) subject to N linear equality constraints as defined by (5.141). The weight vector w is treated as the sum of two components

 

 

w = w_c + w_h ,        (5.164)

where w_c is a vector which satisfies the constraint

 

C w_c = m        (5.165)

and w_h is a homogeneous component for which

 

 

C w_h = 0 .        (5.166)

This ensures that the weight vector w also satisfies the constraint equation. Now, since w_h lies in the null space of C, it can be expressed as a linear combination of the form

w_h = A \hat{w} ,        (5.167)

where A is a p × (p − N) matrix (referred to as the "blocking" matrix) whose columns span the null space of C, and \hat{w} is an arbitrary (p − N)-element vector. From the definition, it is clear that

C A = 0 .        (5.168)

Fig. 5.14. Generalized Sidelobe Canceller (GSLC)


The residual vector may thus be expressed in the form

e(n) = X(n) [w_c + A \hat{w}] = y'(n) + X'(n) \hat{w} ,        (5.169)

where

 

y'(n) = X(n) w_c        (5.170)

and

X'(n) = X(n) A ,        (5.171)

and so the constrained least-squares problem corresponds to minimizing ||y'(n) + X'(n) \hat{w}|| with respect to the vector \hat{w}. Since this vector is entirely arbitrary, the minimization may be carried out using a canonical least-squares processor of order p − N + 1 together with a suitable pre-processor to perform the input data transformation in (5.170) and (5.171).
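A minimal GSLC sketch follows, assuming real-valued data and an SVD-based blocking matrix (any matrix whose columns span the null space of C will do); NumPy least squares again stands in for the canonical adaptive combiner, and all names are ours.

import numpy as np

rng = np.random.default_rng(3)
n, p, N = 40, 6, 2
X = rng.standard_normal((n, p))
C = rng.standard_normal((N, p))
m = rng.standard_normal(N)

# Any w_c with C w_c = m will do (here the minimum-norm choice) ...
w_c = C.T @ np.linalg.solve(C @ C.T, m)       # satisfies (5.165)

# ... and a blocking matrix whose columns span the null space of C (5.168)
_, _, Vt = np.linalg.svd(C)
A = Vt[N:].T                                  # p x (p - N), so that C @ A = 0

y_prime = X @ w_c                             # (5.170)
X_prime = X @ A                               # (5.171)
w_hat, *_ = np.linalg.lstsq(X_prime, -y_prime, rcond=None)

w = w_c + A @ w_hat                           # (5.164) with (5.167)
print(np.allclose(C @ w, m))                  # constraints hold for any w_hat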

In Sect. 5.7.1, where we considered the case of a single constraint, the vector w_c was effectively chosen to be

w_c^T = [\mu/c_1, 0, 0, \ldots, 0] ,        (5.172)

which clearly satisfies (5.124). Assuming an array of omni-directional antennae, it follows that the quiescent beam shape (corresponding to \hat{w} = 0) is simply that of the first omni-directional element. While this is ideal for many communications applications, it would not be suitable for a radar antenna. However, we could have chosen the vector

w_c = \frac{\mu}{c^H c} c^* ,        (5.173)

which also satisfies (5.124) and leads to a quiescent beam with maximum gain in the look direction as specified by c. This is a very simple but important generalization which only requires the input to the right-hand column of cells in Fig. 5.12 to be modified.
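Both quiescent choices are one-liners to verify. The sketch below checks that (5.172) and the steered vector of (5.173) satisfy c^T w_c = μ; the numerical values are arbitrary assumptions for illustration.

import numpy as np

c = np.array([1.0 + 0.5j, 0.8 - 0.2j, 0.6 + 0.1j])   # example constraint vector
mu = 2.0
w_first = np.zeros(3, complex)
w_first[0] = mu / c[0]                               # (5.172): first element only
w_steered = mu * c.conj() / (c @ c.conj())           # (5.173): maximum look-direction gain
print(np.isclose(c @ w_first, mu), np.isclose(c @ w_steered, mu))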

In Sect. 5.7.1 the blocking matrix was effectively chosen to be of the form

A = \begin{bmatrix} -c_2/c_1 & -c_3/c_1 & \cdots & -c_p/c_1 \\ 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} ,        (5.174)

where the {c_i} are the elements of the constraint vector c. This clearly satisfies the (single-constraint) equation

c^T A = 0        (5.175)

and corresponds to the simple pre-processor defined in Fig. 5.12. However, the


choice of blocking matrix is not unique and many other forms exist. For example, assuming that c_i ≠ 0 (i = 1, 2, ..., p − 1), we could have chosen the matrix

 

A = \begin{bmatrix} -c_2/c_1 & 0 & 0 & \cdots & 0 \\ 1 & -c_3/c_2 & 0 & \cdots & 0 \\ 0 & 1 & -c_4/c_3 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & -c_p/c_{p-1} \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix} ,        (5.176)

which also satisfies (5.175) and leads to the type of pre-processor shown in Fig. 5.15. The simplicity of this pre-processor, like that in Fig. 5.12, is due to the choice of a very sparse blocking matrix. In general, of course, a p x (p - N) array of processing cells may be required to perform the appropriate data transformation.
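Both sparse blocking matrices are simple to construct explicitly. The following sketch (with helper names of our own devising) builds (5.174) and (5.176) and checks the null-space property (5.175).

import numpy as np

def blocking_first_row(c):
    """Blocking matrix of (5.174): first row -c_i/c_1, identity below."""
    p = len(c)
    return np.vstack([-c[1:] / c[0], np.eye(p - 1)])

def blocking_bidiagonal(c):
    """Blocking matrix of (5.176): A[i,i] = -c[i+1]/c[i], A[i+1,i] = 1."""
    p = len(c)
    A = np.zeros((p, p - 1))
    for i in range(p - 1):
        A[i, i] = -c[i + 1] / c[i]
        A[i + 1, i] = 1.0
    return A

c = np.array([1.0, 2.0, -0.5, 3.0])
for A in (blocking_first_row(c), blocking_bidiagonal(c)):
    print(np.allclose(c @ A, 0.0))       # (5.175): c^T A = 0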

In Sect. 5.7.2, where we considered the case of multiple constraints, the vector w_c was effectively chosen to be

w_c = \begin{bmatrix} T^{-1} m' \\ 0 \end{bmatrix} ,        (5.177)

Fig. 5.15. Alternative single-constraint pre-processor


and since

C w_c = Q_0^H [T, V] \begin{bmatrix} T^{-1} m' \\ 0 \end{bmatrix} = m        (5.178)

it is clear that (5.165) has been satisfied. An alternative choice would be the vector

w_c = C^H (C C^H)^{-1} m ,        (5.179)

which not only satisfies (5.165) but also leads to a quiescent beam profile more suitable for radar applications.

The blocking matrix adopted in Sect. 5.7.2 is given by

 

A = \begin{bmatrix} 0 \\ I_{p-N} \end{bmatrix} - \begin{bmatrix} I_N \\ 0 \end{bmatrix} T^{-1} V ,        (5.180)

where I_N denotes the N × N unit matrix and I_{p−N} is similarly defined. It is easy to show that the matrix A satisfies (5.168), and so the type of least-squares processor depicted in Fig. 5.13 clearly constitutes a particular form of Generalized Sidelobe Canceller.
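The null-space property of (5.180) can again be verified directly; the sketch below is a NumPy illustration with arbitrary assumed data.

import numpy as np

rng = np.random.default_rng(4)
N, p = 2, 6
C = rng.standard_normal((N, p))
Q, T = np.linalg.qr(C[:, :N])
V = (Q.T @ C)[:, N:]

# A = [0; I_{p-N}] - [I_N; 0] T^{-1} V, as in (5.180)
A = (np.vstack([np.zeros((N, p - N)), np.eye(p - N)])
     - np.vstack([np.eye(N), np.zeros((p - N, N))]) @ np.linalg.solve(T, V))
print(np.allclose(C @ A, 0.0))           # A satisfies (5.168)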

5.8 Minimum Variance Distortionless Response Beamforming

In this section we describe a systolic array which can efficiently compute the Minimum Variance Distortionless Response (MVDR) from an array of p antenna receiver elements (see, for example, the review by Owsley [5.33]). The MVDR beamforming problem amounts to minimizing, in a least-squares sense, the combined output from an antenna array subject to L independent linear equality constraints, each of which corresponds to a chosen look direction. The constraints are independent in the sense that, for each new vector of received data samples, it is necessary to compute the minimum array output subject to each constraint in turn. This involves the solution of L independent, but closely related, least-squares minimization problems.
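For reference, the classical covariance-domain form of each such solution is w_k = R^{-1} c_k / (c_k^H R^{-1} c_k) for look direction c_k. The sketch below evaluates this closed form from a sample covariance matrix; it is purely an illustration of the problem being solved, not the systolic QR algorithm developed in this section, and the data and look directions are assumed values.

import numpy as np

rng = np.random.default_rng(5)
n, p = 50, 4
X = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
R = X.conj().T @ X / n                         # sample covariance matrix

# One distortionless-response constraint per look direction, solved in turn
looks = np.exp(1j * np.outer([0.2, 0.8, 1.5], np.arange(p)))   # L = 3 steering vectors
for c in looks:
    Ric = np.linalg.solve(R, c)
    w = Ric / (c.conj() @ Ric)                 # MVDR weight for this look direction
    print(np.isclose(c.conj() @ w, 1.0))       # unit gain in the look direction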

In Sect. 5.7 we showed how a canonical least-squares processor of order p + 1 could be used to perform a p-channel recursive least-squares minimization subject to one or more simultaneous linear equality constraints. Assuming that the leading N × N submatrix of the constraint matrix C is non-singular, the top row or rows (one for each simultaneous constraint) of the triangular array are used to perform a constraint pre-processing operation. The remainder of the triangular array is used to perform a QR decomposition on the transformed data matrix produced by the constraint pre-processor section. The number of arithmetic operations performed by this array is O[(p + 1)^2] per sample time. Unfortunately, this type of systolic array is inefficient for computing the MVDR