S. Haykin, J. Litva, T. J. Shepherd (eds.), Radar Array Processing, Springer Series in Information Sciences 25, Springer-Verlag.

252 T.V. Ho and J. Litva

6.2 Adaptive Beamforming Techniques

6.2.1 Introduction

A typical 1D adaptive antenna is illustrated in Fig. 6.1. Essentially, it is an array consisting of M + 1 antenna elements. The first M elements are called auxiliary elements and the signals at their ports are denoted by x_1(n), x_2(n), ..., x_M(n). Prior to summation, the outputs of these elements are multiplied or, in other words, weighted by the parameters w_1(n), w_2(n), ..., w_M(n). The last element of the array is used as a reference antenna. For the purpose of formulating the problem, it is called the primary element and its signal is denoted by y(n). The statement of the problem is as follows: Given a data matrix X(n) and a primary vector y(n), find the tap weight vector w(n) which minimizes the combined output signal e(n), where

e(n) = y(n) - X(n)w(n)   (6.1)

with

 

 

 

 

 

e(n) = [e(1), e(2), ..., e(n)]^T ,   (6.2a)

y(n) = [y(1), y(2), ..., y(n)]^T ,   (6.2b)

 

       [ x_1(1)  x_2(1)  ...  x_m(1)  ...  x_M(1) ]
       [ x_1(2)  x_2(2)  ...  x_m(2)  ...  x_M(2) ]
X(n) = [   .       .            .            .    ]   (6.2c)
       [   .       .            .            .    ]
       [ x_1(n)  x_2(n)  ...  x_m(n)  ...  x_M(n) ]

Fig. 6.1. Narrow band adaptive antenna

6. Two-Dimensional Adaptive Beamforming

253

and

w(n) = [w_1(n), w_2(n), ..., w_M(n)]^T .   (6.2d)

Note that in (6.2c) each row of the data matrix X(n) constitutes a single snapshot x^T(n), i.e.,

x^T(n) = [x_1(n), x_2(n), ..., x_M(n)] .   (6.3)

Hence, the data matrix X(n) can be written as

X^T(n) = [x(1), x(2), ..., x(n)] .   (6.4)

It is assumed that x_m(n) is the narrow band signal which is received at array element m at time t_n. Within the receiver pass band the signal takes the form

x_m(n) = A_d e^{j[(m-1)φ(θ_d) + ψ_d]} + Σ_{k=1}^{K} A_k e^{j[(m-1)φ(θ_k) + ψ_k]} + v_m(n) ,   (6.5)

where φ(θ_i) is the phase difference between adjacent elements, given by

φ(θ_i) = (2πd/λ) sin θ_i ,

with d being the inter-element spacing and λ the antenna wavelength. Also in (6.5), the other variables are defined as follows:

θ_d - angle-of-arrival (AOA) of the desired signal,
θ_k - direction of arrival of the kth interfering source,
A_d - amplitude of the desired signal,
A_k - amplitude of the kth interfering source,
v_m(n) - receiver noise, assumed to be Gaussian with zero mean and variance σ², and
ψ_d, ψ_k - uniformly distributed random variables with probability density

p(ψ_i) = 1/2π for 0 ≤ ψ_i ≤ 2π, and 0 elsewhere.   (6.6)

 

The objective of adaptive beamforming, as expressed by (6.1), is to minimize the combined output signal, which includes interference and thermal noise, while maintaining a constant gain in the direction of the desired signal. In general, this leads to the maximization of the Signal-to-Interference-and-Noise Ratio (SINR). The desired signal is usually assumed to be both weaker than, and uncorrelated with, the unwanted signal. The first assumption is usually valid in radar applications. This is especially true for those interferers that are generated by hostile or intelligent jammers. This follows from the fact that, in terms of range, radar returns follow the inverse fourth power law, whereas interfering signals follow the inverse second power law.


Although the assumption is generally made that the desired and unwanted signals are uncorrelated, this is not always the case. If the desired signal and the interferers are correlated, the adaptive beamformer not only fails to form deep nulls in the directions of the coherent interferers, but also partially or completely cancels the desired signal. This problem is particularly relevant in the case where the interferer is a multipath signal, and therefore correlated with the wanted signal. There are a number of methods to deal with fully or partially correlated sources [6.11-13]. The spatial smoothing method [6.11] has acquired some prominence among these techniques. It has been shown that this technique can reduce the degree of correlation between the signals, thereby improving the performance of the adaptive beamformer.

Since the pioneering work of Howells [6.14], Applebaum [6.15], and Widrow et al. [6.16], several adaptive algorithms have been proposed to improve and enhance the performance of adaptive beamformers [6.6-8, 17-21]. In general, they are referred to as classical adaptive beamforming techniques. Furthermore, they can be divided into two distinct categories, known as closed-loop and open-loop techniques. The algorithms that are included in the closed-loop category are: (1) the control-law adaptive algorithm [6.15], (2) the LMS algorithm [6.16], and (3) the modified LMS algorithm [6.8, 17-19]. Open-loop adaptive beamforming is usually implemented by using either the sample matrix inversion (SMI) algorithm [6.20], or some modification of the SMI algorithm [6.6, 21]. Adaptive beamforming algorithms that incorporate spectral estimation and, in particular, superresolution [6.7] also fall into this category. A number of good reviews of the adaptive algorithms and related issues can be found in [6.1, 2, 22-27].

There has been a considerable amount of discussion of adaptive beamforming techniques that are carried out in beamspace [6.9, 10]. These expositions are typified by the work on partial adaptivity of large arrays by Chapman [6.9], and the adaptive-adaptive beamforming method by Brookner et al. [6.10]. The retrodirective eigenvector beam concept introduced by Gabriel [6.28] is a useful tool and gives valuable insight into the fundamental principles of beamspace processing. One of the advantages of the beamspace method is that the number of degrees of freedom required by the array to achieve adaptivity is proportional to the number of unwanted signals, rather than to the number of antenna elements.

Finally, we are currently watching the emergence of algorithms which are classified as modern adaptive beamforming techniques. In the main, they use the QRD-LS algorithm and digital processors. The digital processors usually take the form of systolic arrays [6.29, 30]. These are further classified as belonging to the group which are referred to as open-loop techniques. The use of systolic array processors results in increased computational throughput and less memory than would be the case with conventional digital processors. It has been shown that the algorithms converge rapidly, and that such processors are capable of real-time performance for bandwidths of 10 MHz, if the number of elements is six or less [6.3]. The advantages of digital adaptive beamformers are


the same as those of digital filters, i.e., implementation in software rather than hardware, stability with time and temperature, increased dynamic range, improved precision, etc. As well, they permit the incorporation of superresolution techniques for nulling out interferers in those cases where the desired and unwanted signals are separated by less than the antenna beamwidth.

6.2.2 Classical Adaptive Beamforming

a) LMS Algorithm

Typical of this class of algorithms are the LMS algorithm and the Howells-Applebaum algorithm. In general, the key components of a closed-loop adaptive system are illustrated in Fig. 6.2. The weight vector w(n) is updated at each snapshot by an adaptive processor that responds to the output signal e(n). The update is based on the steepest descent method; that is, changes in the weight vector are made along the direction of the estimated gradient vector. Accordingly, we may write

w(n + 1) = w(n) - (μ/2) ∇(n) ,   (6.7)

where ∇(n) is the gradient vector of the array output with respect to the weight vector w(n). It is defined by

∇(n) = ∂/∂w* {E[|e(n)|²]} = -2E[x*(n)e(n)] ,   (6.8)

Fig. 6.2. Closed-loop adaptive system


where

e(n) = y(n) - x^T(n)w(n) ,   (6.9)

and μ is the update gain factor or step size parameter. In general, μ must be chosen to lie in the range

0 ≤ μ ≤ 2/Tr[Φ]   (6.10)

to ensure the stability of the LMS algorithm. In (6.10), Φ is the covariance matrix of the data vector; it is given by

Φ = E[x*(n)x^T(n)] .   (6.11)

Substituting (6.8, 9, 11) into (6.7) yields

w(n + 1) + (μΦ - I)w(n) = μp ,   (6.12)

where p is the correlation vector between the primary signal y(n) and the snapshot vector x^T(n), and is defined by

p = E[y(n)x*(n)] .   (6.13)

Thus, the optimum weight vector of (6.12), denoted by w_o, which gives the least-mean-squares error (∇(n) → 0, w(n + 1) → w(n)), satisfies the Wiener-Hopf equation

w_o = Φ^{-1} p ,   (6.14)

where Φ^{-1} is the inverse of the covariance matrix Φ.
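The update (6.7)-(6.9) can be sketched as a minimal LMS loop. The scenario below (array size, number of snapshots, synthetic reference signal) is an illustrative assumption; with the instantaneous gradient estimate -2x*(n)e(n) in place of (6.8), the weights approach the Wiener-Hopf solution of (6.14).

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 20000  # taps and snapshots (assumed)
# Synthetic complex data and a reference generated by an assumed true weight vector.
X = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
w_true = rng.standard_normal(M) + 1j * rng.standard_normal(M)
y = X @ w_true + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

Phi = (X.conj().T @ X) / N     # covariance estimate, cf. (6.11)
mu = 1.0 / np.trace(Phi).real  # step size inside the stability range (6.10)

w = np.zeros(M, dtype=complex)
for n in range(N):
    e = y[n] - X[n] @ w             # output signal, (6.9)
    w = w + mu * np.conj(X[n]) * e  # (6.7) with grad estimate -2*conj(x)*e

# After enough snapshots, w is close to the Wiener solution of (6.14).
print(np.allclose(w, w_true, atol=0.05))
```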

b) Howells-Applebaum Algorithm

In the Howells-Applebaum algorithm, the SINR is maximized by means of cross-correlation filters (Howells-Applebaum loops). A typical arrangement based on the Howells-Applebaum loop is given in Fig. 6.3. Correlation loops can be implemented using mixers, high gain amplifiers, and narrow band filters, or, alternatively, they can be implemented using digital processors. In the latter case, the algorithm can be formulated in the same manner as the LMS algorithm. The tap weight update is given by

w(n + 1) = w(n) - (μ/2) ∇_s(n) ,   (6.15)

where it follows from Fig. 6.3 that

∇_s(n) = -2αS*(θ_d) + 2x*(n)e_s(n) .   (6.16)

Central to the technique is the vector

S(θ_d) = [1, e^{jφ(θ_d)}, ..., e^{j(M-1)φ(θ_d)}]^T .   (6.17)


Fig. 6.3. Correlation loop of Howells-Applebaum algorithm

This is called the steering vector and is used to set up a shaped receive beam which is pointed in the desired signal direction θ_d. Furthermore, in (6.16), the quantity α is chosen so as to keep the gain of the receive beam constant. The error signal e_s(n) is given by

e_s(n) = x^T(n)w(n) .   (6.18)

With the introduction of the covariance matrix Φ, (6.15) becomes

w(n + 1) + (μΦ - I)w(n) = μαS*(θ_d) .   (6.19)

The optimum weight vector follows directly from (6.19) and is given by

w_o = αΦ^{-1}S*(θ_d) .   (6.20)

Clearly, the Howells-Applebaum algorithm has the same form as the LMS algorithm, except that the steering vector αS*(θ_d) is used in place of the correlation vector p. This difference is due to the fact that in the Howells-Applebaum beamformer the direction of the desired signal is known in advance, which is not the case for the LMS beamformer. With the Howells-Applebaum algorithm, it is assumed that the desired signal is not contained in the data processed by the beamformer; therefore, one is not able to use a reference signal y(n) in formulating (6.18), as in the case of the LMS algorithm. It should be noted that the reference signal used in the LMS algorithm, which is often referred to as the desired signal, need not be a perfect replica of the desired signal; it is only necessary that it correlate with the desired signal. Several techniques have been proposed in the literature for generating the reference signals that are used with the LMS algorithm. However, this subject is


beyond the scope of this chapter. Interested readers are referred to papers by

Compton [6.31] and Winters [6.32].
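The Howells-Applebaum solution (6.20) can be sketched directly: only the known look direction θ_d enters, not a reference signal. The single-jammer scenario, array geometry, and α = 1 below are illustrative assumptions.

```python
import numpy as np

M = 8                # array elements (assumed)
d_over_lambda = 0.5  # element spacing d/lambda (assumed)

def steering(theta_deg):
    """Steering vector S(theta) with element phase (m-1)*phi(theta), cf. (6.17)."""
    m = np.arange(M)
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.sin(np.deg2rad(theta_deg)) * m)

theta_d, theta_j = 0.0, 30.0  # look direction and one jammer (assumed)
sigma2, Pj = 1.0, 100.0       # noise power and jammer power (assumed)

# Covariance Phi = E[x* x^T] for jammer-plus-noise, cf. (6.11).
s_j = steering(theta_j)
Phi = Pj * np.outer(np.conj(s_j), s_j) + sigma2 * np.eye(M)

# Optimum weights by (6.20) with alpha = 1, via a linear solve.
w_o = np.linalg.solve(Phi, np.conj(steering(theta_d)))

def gain(theta_deg):
    """Magnitude of the array response w^T S(theta)."""
    return abs(w_o @ steering(theta_deg))

print(gain(theta_j) / gain(theta_d))  # deep null toward the jammer
```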

c) Eigenbeam Concept

Eigenvector beams are useful for providing insight into the interference nulling performance of adaptive beamforming techniques. They are formed by applying an orthonormal transformation to the covariance matrix of data signals [6.28]. The orthonormal transformation can be carried out using the singular value decomposition (SVD) as follows:

The SVD of the covariance matrix Φ is given by

Φ = U Σ U^H ,   (6.21)

where

U = [u_1, u_2, ..., u_M]   (6.22)

is a unitary matrix of M columns which are eigenvectors of the covariance matrix Φ. The first K eigenvectors (u_1, u_2, ..., u_K), called the principal eigenvectors, constitute the signal subspace, and the last M - K eigenvectors (u_{K+1}, u_{K+2}, ..., u_M) constitute the noise subspace. Σ is a diagonal matrix of dimensions M x M, which has the form

Σ = diag(λ_1, λ_2, ..., λ_M) ,   (6.23)

with

λ_1 ≥ λ_2 ≥ ... ≥ λ_K > λ_{K+1} = ... = λ_M = σ² ,

where λ_k is the eigenvalue corresponding to the kth source. The matrix Φ may then be written as

Φ = Σ_{k=1}^{K} λ_k u_k u_k^H + σ² Σ_{k=K+1}^{M} u_k u_k^H .   (6.24)

It follows that

Φ^{-1} = (1/σ²) [ I - Σ_{k=1}^{K} ((λ_k - σ²)/λ_k) u_k u_k^H ] .   (6.25)

Equation (6.20) now becomes

w_o = (α/σ²) [ S*(θ_d) - Σ_{k=1}^{K} ((λ_k - σ²)/λ_k) γ_k u_k ] ,   (6.26)


with

γ_k = u_k^H S*(θ_d) .

The first term on the right-hand side of (6.26) is the quiescent (unadapted) weight vector of the antenna array, and the second term represents the sum of the eigenbeam vectors, which are subtracted from the first term to obtain the adapted weight vector w_o. The magnitudes of the eigenbeams are such that, when summed together and subtracted from the quiescent beam, the K interferers are nulled. Also, the gain factor α must be adjusted to keep the main antenna beam constant at a certain value above the noise level (given by σ²). As the noise power increases, the eigenbeams will be distorted, resulting in shallow nulls at the jammer locations, which subsequently lead to a decrease in the SINR. Equation (6.26) can be adapted for use in conjunction with the LMS algorithm by substituting the correlation vector p in place of αS*(θ_d).
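The eigenbeam decomposition can be checked numerically: for an assumed two-jammer covariance matrix, the weight vector assembled from the eigenvectors via (6.25) and (6.26) matches the direct solution of (6.20). The scenario parameters below are assumptions for illustration.

```python
import numpy as np

M, sigma2 = 8, 1.0  # elements and noise power (assumed)

def steering(theta_deg, d_over_lambda=0.5):
    m = np.arange(M)
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.sin(np.deg2rad(theta_deg)) * m)

# Two jammers (angle in degrees, power) plus white noise, cf. (6.24).
jammers = [(30.0, 200.0), (-45.0, 50.0)]
Phi = sigma2 * np.eye(M, dtype=complex)
for theta, power in jammers:
    s = steering(theta)
    Phi += power * np.outer(np.conj(s), s)

lam, U = np.linalg.eigh(Phi)    # Hermitian eigendecomposition, ascending
lam, U = lam[::-1], U[:, ::-1]  # principal eigenvectors first
K = len(jammers)

s_d = np.conj(steering(0.0))        # S*(theta_d), alpha = 1 assumed
direct = np.linalg.solve(Phi, s_d)  # w_o by (6.20)

# (6.26): quiescent term minus K weighted eigenbeams, gamma_k = u_k^H S*.
eigenbeam = s_d / sigma2
for k in range(K):
    gamma_k = np.conj(U[:, k]) @ s_d
    eigenbeam -= ((lam[k] - sigma2) / lam[k]) * gamma_k * U[:, k] / sigma2

print(np.allclose(direct, eigenbeam))
```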

d) Sample Matrix Inversion Algorithm

The major problem associated with an adaptive beamformer based on the gradient descent method, which is true of the LMS algorithm and the control-loop method of the Howells-Applebaum algorithm, is that under certain external signal environments the convergence is poor. This is often characterized by the eigenvalue spread (χ = λ_max/λ_min) of the covariance matrix Φ, and also by the choice of step size parameter μ. In recent years, the least-squares minimization method, a direct solution of (6.1), has been proposed to improve the rate of convergence. This is known as the sample matrix inversion (SMI) algorithm [6.20]. The minimum norm of the output signal of (6.1) is given by

‖e(n)‖² = ‖y(n) - X(n)w(n)‖² .   (6.27)

Its gradient is of the form

∇‖e(n)‖² = -2[X^H(n)y(n) - X^H(n)X(n)w(n)] ,   (6.28)

where X^H(n) denotes the Hermitian transpose of the data matrix X(n). The least-squares solution is given by

w(n) = Φ(n)^{-1} p(n) ,   (6.29)

where Φ(n)^{-1} denotes the inverse of the maximum likelihood estimate of the covariance matrix

Φ(n) = X^H(n)X(n)   (6.30a)

and

p(n) = X^H(n)y(n) .   (6.30b)


Equation (6.30b) gives the estimate of the vector of correlations between the data matrix X(n) and the reference signal vector y(n).

Note that (6.29) is similar to (6.14), except for the fact that estimates of the covariance matrix and the correlation vector are used in place of their true forms. It has been shown in [6.20] that the SMI algorithm converges to within 3 dB of the steady state in 2M iterations, i.e., with the use of approximately 2M data samples (snapshots), where M is the number of array elements. In many applications, because of the fast convergence feature of the SMI algorithm, it is much preferred over the slower closed-loop algorithms [6.33]. Usually, these applications are ones where the external signal scenario undergoes rapid changes. In these instances the closed-loop techniques are too slow to be effective.
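The block solution (6.29)-(6.30b) can be sketched in a few lines: accumulate Φ(n) and p(n) over a batch of snapshots and solve once. The dimensions and synthetic signals below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 6       # array elements (assumed)
n = 4 * M   # snapshots; roughly 2M already suffice per [6.20]
# Synthetic data matrix and reference signal from an assumed true weight vector.
X = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)
w_true = rng.standard_normal(M) + 1j * rng.standard_normal(M)
y = X @ w_true + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

Phi = X.conj().T @ X        # sample covariance, (6.30a)
p = X.conj().T @ y          # sample correlation vector, (6.30b)
w = np.linalg.solve(Phi, p)  # (6.29), via a solve rather than an explicit inverse

print(np.allclose(w, w_true, atol=0.05))
```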

A schematic diagram of an adaptive beamformer which is based on the SMI algorithm is given in Fig. 6.4. The adaptive processor has three key components:

(1) an estimate covariance processor to form and store the covariance matrix estimate, (2) a high-speed arithmetic processor to solve the matrix operation defined by (6.29), and (3) a weight-combining processor to apply the weight vector to a digital beamformer. Obviously, this type of architecture is complicated and difficult to design and, in general, not suitable for VLSI implementation. The second problem associated with the SMI algorithm is its numerical instability under certain signal space conditions, as revealed by the condition number of the matrix Φ(n) in (6.30a), which is

c{Φ(n)} = c{X^H(n)X(n)} = [ρ_max/ρ_min]² ,   (6.31)

where ρ_max and ρ_min are the largest and the smallest singular values of the data matrix X(n), respectively. Equation (6.31) indicates that the condition number of the estimated covariance matrix Φ(n) is much greater than that of the data matrix X(n). Since there is a direct linkage between the condition number and the stability of the solution to a system of linear equations, an algorithm that avoids forming

Fig. 6.4. Open-loop adaptive system


the covariance matrix explicitly, and which operates directly on the data matrix, is likely to have much better numerical stability than one based on a covariance matrix.
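The condition-number squaring of (6.31) is easy to demonstrate numerically; the random test matrix below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# An assumed 50-snapshot, 6-element complex data matrix.
X = rng.standard_normal((50, 6)) + 1j * rng.standard_normal((50, 6))

rho = np.linalg.svd(X, compute_uv=False)   # singular values of X, descending
cond_X = rho[0] / rho[-1]                  # condition number of the data matrix
cond_Phi = np.linalg.cond(X.conj().T @ X)  # condition number of Phi = X^H X

# (6.31): forming the covariance matrix squares the condition number.
print(np.isclose(cond_Phi, cond_X**2, rtol=1e-6))
```

This is why methods that operate directly on X, such as the QRD-LS algorithm of the next section, are numerically preferable.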

Another problem associated with the SMI is the computational load on the beam controller. The formation of the sample matrix and the solution for the optimum weights require about M³ and M³/6 complex multiplications, respectively, leading to a total of 7M³/6 complex operations [6.20]. With an M³ dependence, it is clear that for real-time applications there would be a strong need to explore parallel processing techniques to maintain the solution throughput rate implied by current system demands.

The above problems can be overcome by carrying out the least-squares minimization using the QR decomposition technique. The QRD-LS algorithm and its systolic array implementation are discussed in the next section.

6.2.3 Modern Adaptive Beamforming

a) QRD Least-Squares Algorithm

The major problem underlying the performance of the closed-loop algorithms, typified by the LMS algorithm, is one of poor convergence. In comparison, the SMI algorithm is cumbersome and numerically unstable. An alternative approach to the least-squares minimization problem, which is particularly good in the numerical sense, is that of orthogonal triangularization [6.34]. This is typified by the method known as QR decomposition (QRD), which involves the application of a sequence of unitary transformations to reduce the measured data matrix to an upper triangular form. Recently, with the growth of VLSI technology, workers have proposed using systolic arrays to solve the recursive least-squares problem based on the QRD algorithm [6.29, 30]. In particular, the systolic array represents a highly pipelined and parallel architecture. Thus, an adaptive beamformer configured with a systolic array is potentially capable of performing real-time processing. Systolic array processors have shown much promise in adaptive beamforming applications [6.3-5].
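The idea can be sketched as follows: triangularize X with a unitary Q and back-substitute, never forming X^H X. Here NumPy's Householder-based QR stands in for the Givens rotations a systolic array would apply; the dimensions and random data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, M = 40, 6  # snapshots and elements (assumed)
X = rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Orthogonal triangularization: Q is n x M with orthonormal columns,
# R is M x M upper triangular, cf. (6.32b).
Q, R = np.linalg.qr(X)
u = Q.conj().T @ y             # the u(n) part of the rotated reference vector
w_qrd = np.linalg.solve(R, u)  # solve the triangular system R w = u

# Same minimizer as SMI (6.29), but without squaring the condition number.
w_smi = np.linalg.solve(X.conj().T @ X, X.conj().T @ y)
print(np.allclose(w_qrd, w_smi))
```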

Equation (6.1) may be implemented by using the QRD algorithm as follows: If we premultiply both sides of (6.1) with a composite unitary transformation Q(n), we obtain

Q(n)e(n) = Q(n)y(n) - Q(n)X(n)w(n) ,   (6.32a)

 

Q(n)e(n) = [u(n); v(n)] - [R(n); O] w(n) ,   (6.32b)

where Q(n) is an n x n matrix, u(n) and v(n) are column vectors of dimensions M x 1 and (n - M) x 1, respectively, R(n) is an M x M upper triangular matrix, and O is an (n - M) x M null matrix. Note that multiplication by Q(n) in (6.32a) does