S. Haykin, J. Litva, T.J. Shepherd (eds.): Radar Array Processing, Springer Series in Information Sciences 25 (Springer-Verlag)

140 B. Ottersten, M. Viberg, P. Stoica, and A. Nehorai

[Fig. 4.5 here: six scatter plots, axes in degrees from -100° to 100°, showing the domains of attraction for the SML, DML, and WSF methods with 8 and 10 sensors; successful starting points are marked x, the global minimum •.]

Fig. 4.5. Starting values that lead to the global minimum for the scoring technique applied to the SML, DML, and WSF methods. The horizontal axis represents θ1, and the vertical axis θ2

4. Maximum Likelihood Techniques for Parameter Estimation

141

more detail. The alternating projection initialization scheme suggested in [4.68] is used for initializing the scoring methods described in Sect. 4.6. The original version is used for the SML and DML methods, whereas the same scheme with R̂ replaced by Ê_s Ŵ Ê_sᴴ is used for the WSF technique. The overall performance is assessed by displaying the probability of reaching the global minimum (Fig. 4.6a), and the average number of iterations necessary to fulfill the stopping condition (Fig. 4.6b), versus θ2. The iterations terminated successfully, with |H⁻¹V′| < 0.01°, in the vast majority of the trials (more than 96% for θ2 ≥ 3°). In a small number of trials in the threshold region, the maximum number of iterations (set to 30) was reached, or the search terminated at a point where no improvement could be found along the scoring direction (μ_k < 0.0001). From Fig. 4.6a, it is clear that the AP initialization scheme has excellent performance for the WSF method: the global minimum was reached in all trials. However, a large number of iterations is necessary in the difficult cases where the angle separation is less than 3°. The AP initialization also performs satisfactorily for the DML method, and somewhat worse, but still acceptably, for the SML technique. Recall that the RMS error for θ̂2 is more than half the angle separation when θ2 < 4.5°. Notice also that for θ2 > 4°, the WSF scoring technique converges in a smaller number of iterations than the DML and SML scoring methods. Still, the DML scoring technique performs significantly better than the modified Gauss-Newton technique suggested in [4.64]. □
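The stopping logic used above (step tolerance 0.01, at most 30 iterations, abandon the search when the step length μ_k drops below 10⁻⁴) can be sketched in code. The quadratic cost and all names below are illustrative stand-ins, not the chapter's DOA criterion:

```python
import math

def scoring_minimize(grad, hess, theta0, tol=0.01, mu_min=1e-4, max_iter=30):
    """Damped Newton ("scoring") search sketch: stop when the Newton step
    |H^-1 grad| falls below tol, when the step length mu falls below mu_min,
    or when max_iter iterations are reached (mirroring the text's settings)."""
    theta = list(theta0)
    for it in range(max_iter):
        g = grad(theta)
        H = hess(theta)
        # Solve the 2x2 system H * step = g by hand (toy two-parameter problem).
        det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
        step = [(H[1][1] * g[0] - H[0][1] * g[1]) / det,
                (H[0][0] * g[1] - H[1][0] * g[0]) / det]
        if math.hypot(step[0], step[1]) < tol:
            return theta, it, "converged"
        # Backtracking: halve mu until the cost decreases along the scoring direction.
        mu = 1.0
        while mu >= mu_min:
            cand = [theta[0] - mu * step[0], theta[1] - mu * step[1]]
            if cost(cand) < cost(theta):
                theta = cand
                break
            mu /= 2
        else:
            return theta, it, "no descent (mu < mu_min)"
    return theta, max_iter, "max iterations"

# Toy two-parameter cost standing in for the DOA criterion.
def cost(t):  return (t[0] - 1.0) ** 2 + 2 * (t[1] + 0.5) ** 2 + 0.1 * t[0] * t[1]
def grad(t):  return [2 * (t[0] - 1.0) + 0.1 * t[1], 4 * (t[1] + 0.5) + 0.1 * t[0]]
def hess(t):  return [[2.0, 0.1], [0.1, 4.0]]

theta, iters, status = scoring_minimize(grad, hess, [10.0, -10.0])
```

On a strictly convex quadratic like this one, the full Newton step lands on the minimizer in one iteration, so the loop exits via the step-norm test on the next pass.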

Example 4.4. Detection Performance. In the final example, we compare the performance of the various detection procedures outlined in Sect. 4.6. The scenario is the same as in the previous examples. The estimates are calculated as in Example 4.3. Figure 4.7 displays the probability of correct detection versus

[Fig. 4.6 here: two panels versus θ2 (2° to 10°), (a) probability of global convergence (Prob) and (b) average number of iterations (Iter), with curves for the SML, DML, and WSF methods.]

Fig. 4.6. Probability of global convergence (a), and average number of iterations (b), versus θ2 for the SML, DML, and WSF methods


[Fig. 4.7 here: two panels of detection probability versus θ2 (3° to 10°), at (a) 95% and (b) 99% confidence, with curves for the WSF, GLRT, and MDLC detection schemes.]

Fig. 4.7. Probability of correct detection using 95% (a) and 99% (b) confidence levels, versus θ2 for the GLRT and WSF detection schemes. The results for the MDLC technique are the same in both plots, since it requires no threshold setting

the location of the second source for the different detection schemes. The detection results from the MDLC method [4.89] are included for comparison. This scheme is based on the DML cost function, and relies on the DML estimates. The WSF detection scheme requires knowledge of the signal subspace dimension. The MDL test [4.79] is used for this purpose, which results in the value d' = 1 in all trials. As seen from Fig. 4.7, the WSF detection scheme has a slightly lower threshold than the GLRT-based technique, whereas the MDLC criterion clearly performs more poorly. Note that as θ2 increases, the detection probabilities for the WSF and GLR tests approach the value preset by the threshold. Another important observation is that the threshold value of the angle separation for the WSF and GLRT detection schemes (about 4° for WSF and 4.5° for GLRT) is lower than the value at which reliable estimates are obtained (e.g., when the standard deviation of both angle estimates is lower than half the angle separation), cf. Fig. 4.3b. On the other hand, the value at which the global minimum can be reached using the AP initialization scheme is still lower for the WSF algorithm (less than 2°), whereas it is similar to the detection threshold for the SML method and the GLR test, cf. Fig. 4.6a. One may suspect that the initialization procedure is indeed the limiting factor for the GLR test. This is, however, not the case, since for θ2 ≤ 3.5° essentially all errors are due to underestimation of d, so that detection performance cannot be improved by using a more efficient initialization procedure.
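The MDL order-selection step referred to above ([4.79]) admits a compact sketch. What follows is the generic Wax-Kailath form of the criterion, written as an illustration rather than the chapter's exact expression; the eigenvalues in the example are made up:

```python
import math

def mdl_order(eigvals, N):
    """Minimum-description-length estimate of the number of sources from the
    eigenvalues of the sample covariance (generic Wax-Kailath form; a sketch,
    not necessarily the chapter's exact penalty term)."""
    lam = sorted(eigvals, reverse=True)
    m = len(lam)
    best_k, best_val = 0, float("inf")
    for k in range(m):          # candidate number of sources
        tail = lam[k:]          # the m - k smallest eigenvalues
        geo = math.exp(sum(math.log(x) for x in tail) / len(tail))
        ari = sum(tail) / len(tail)
        # Data term: how far the noise eigenvalues are from being equal,
        # plus a complexity penalty growing with the model order k.
        val = (-N * len(tail) * math.log(geo / ari)
               + 0.5 * k * (2 * m - k) * math.log(N))
        if val < best_val:
            best_k, best_val = k, val
    return best_k

# Two strong sources in white noise: eigenvalues well separated from the noise floor.
d_hat = mdl_order([12.0, 8.0, 1.02, 0.99, 1.01, 0.98, 1.0, 1.0], N=100)
```

Because the geometric mean of a nearly flat eigenvalue tail almost equals its arithmetic mean, the data term collapses once k reaches the true number of sources, and the penalty then selects the smallest such k.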

In summary, this observation suggests that the estimation accuracy is indeed the limiting factor for both the WSF and the SML methods, if the proposed schemes for calculating the estimates and determining the number of sources are used. However, the estimation accuracy is the best possible in terms of estimation error variance. Of course, no general conclusions can be drawn from the


rather limited scenario presented in this study, but similar observations have been made also in other cases [4.64]. □

4.9 Conclusions

This chapter presents exact and large sample realizations of ML estimation techniques for signal parameter estimation from array data. Two different signal models (deterministic vs. stochastic) are considered, and the ML estimator is formulated for each model. The asymptotic properties of these methods are examined and related to the corresponding Cramer-Rao lower bounds. The stochastic ML (SML) method is found to provide more accurate estimates than the deterministic ML (DML) technique, independent of the actual emitter signal waveforms. Under the Gaussian signal assumption, the SML method is asymptotically efficient, i.e., the estimation error attains the CRLB.

Two classes of multidimensional subspace or eigenstructure techniques are presented: the noise and signal subspace formulations. The relation between these is established by comparing their asymptotic distributions. The subspace methods that are asymptotically identical to the ML techniques are identified. In the case of coherent signals (specular multipath), only the signal subspace version can yield optimal (SML) performance. This optimal signal subspace method is termed WSF (Weighted Subspace Fitting).

All methods considered herein require a multidimensional search to find the parameter estimates. Newton-type descent algorithms for calculating the ML and signal subspace estimates are presented, and their implementation and initialization are discussed. The Newton-type algorithm for the signal subspace formulation requires O(md²) flops per iteration, whereas both ML implementations cost O(m²d). Consequently, the subspace technique offers a computational advantage when the number of emitters (d) is less than the number of sensors (m).

A generalized likelihood ratio test (GLRT) for determining the number of emitters is derived, as well as a detection scheme based on the WSF criterion. The methods are similar in structure, and require a threshold which sets the (asymptotic) probability of detection.

Numerical examples and simulations are presented, comparing the various detection and estimation algorithms. Examples studying empirical versus theoretical estimation accuracy, global convergence properties, average number of iterations, and detection performance are included. The performance of the WSF and SML-GLRT techniques is found to be limited only by the estimation accuracy. In other words, global convergence and correct detection are obtained with high probability whenever accurate parameter estimation is possible. In these cases, the estimation error variance is found to be in good agreement with the theoretical variance, based on the asymptotic covariance matrix.


Appendix 4.A Differentiation of the Projection Matrix

In this appendix, the expressions for the first and second derivatives of the projection matrix P_A(θ) = A(AᴴA)⁻¹Aᴴ = AA†, with respect to the elements of θ, are derived. These results can also be found in [4.54]. For ease of notation we write P instead of P_A(θ).

Consider the first derivative

P_i = ∂P/∂θ_i = A_i A† + A A†_i .   (4.A.1)

After some algebraic manipulations, the following is obtained for the derivative of the pseudo-inverse:

A†_i = (AᴴA)⁻¹ A_iᴴ P^⊥ − A† A_i A† .   (4.A.2)

Combining (4.A.1) and (4.A.2) gives

P_i = P^⊥ A_i A† + (···)ᴴ ,   (4.A.3)

where the notation (···)ᴴ means that the same expression appears again, with complex conjugate and transpose. From (4.A.3) it is easily verified that Tr{P_i} = 0, as expected, since the trace of a projection matrix depends only on the dimension of the subspace onto which it projects.
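As a numerical sanity check on (4.A.1-3) and the trace property, the sketch below differentiates the projection onto a single, hypothetical real "steering" column a(θ) = [1, cos θ, cos 2θ]ᵀ (chosen only for convenience; m = 3, one parameter) and compares the formula P_i = P^⊥ A_i A† + (···)ᴴ against a central finite difference:

```python
import math

def matsub(X, Y):  return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
def matadd(X, Y):  return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
def transpose(X):  return [list(r) for r in zip(*X)]
def trace(X):      return sum(X[i][i] for i in range(len(X)))
def dot(u, v):     return sum(a * b for a, b in zip(u, v))

def a_vec(th):     return [1.0, math.cos(th), math.cos(2 * th)]
def a_dot(th):     return [0.0, -math.sin(th), -2 * math.sin(2 * th)]

def proj(th):
    """P = a a^T / (a^T a) for a single real column a(theta)."""
    a = a_vec(th)
    s = dot(a, a)
    return [[a[i] * a[j] / s for j in range(3)] for i in range(3)]

def proj_deriv(th):
    """First derivative of P via (4.A.3): P_i = P_perp A_i A_dagger + (...)^H,
    where A_dagger = a^T / (a^T a) for a single column."""
    a, ad = a_vec(th), a_dot(th)
    s = dot(a, a)
    P = proj(th)
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    Pperp = matsub(I, P)
    Pad = [dot(Pperp[i], ad) for i in range(3)]          # P_perp * a_dot
    term = [[Pad[i] * a[j] / s for j in range(3)] for i in range(3)]
    return matadd(term, transpose(term))                 # add the (...)^H part

th, h = 0.7, 1e-6
analytic = proj_deriv(th)
numeric = [[(proj(th + h)[i][j] - proj(th - h)[i][j]) / (2 * h) for j in range(3)]
           for i in range(3)]
err = max(abs(analytic[i][j] - numeric[i][j]) for i in range(3) for j in range(3))
```

The analytic and finite-difference derivatives agree to roughly the precision of the central difference, and Tr{P_i} vanishes to machine precision, as the text predicts.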

The second derivative is given by

P_ij = P^⊥_j A_i A† + P^⊥ A_ij A† + P^⊥ A_i A†_j + (···)ᴴ .   (4.A.4)

Using (4.A.2) and P^⊥_j = −P_j gives

P_ij = −P^⊥ A_j A† A_i A† − A†ᴴ A_jᴴ P^⊥ A_i A† + P^⊥ A_ij A† + P^⊥ A_i (AᴴA)⁻¹ A_jᴴ P^⊥ − P^⊥ A_i A† A_j A† + (···)ᴴ .   (4.A.5)

 

Appendix 4.B Asymptotic Distribution of the Weighted Subspace Fitting Criterion

A proof of Theorem 4.8 is provided in this appendix. The distribution of V(θ0) is based on the asymptotic distribution of the signal eigenvectors of the sample covariance. From the asymptotic theory of principal components [4.91, 92] we have the following result:

Lemma 4.2. The d' largest eigenvectors of R̂ are asymptotically (for large N) normally distributed, with means and covariances given by (4.B.1-3). □

It is convenient to introduce a set of transformed variables, (4.B.4), where Z is an m × (m − d) full-rank, square-root factor of P_A^⊥(θ0), i.e., P_A^⊥(θ0) = Z Zᴴ (4.B.5). Note that Zᴴ Z = I, since P_A^⊥ is an idempotent matrix. Clearly, u_k, k = 1, ..., d', have a limiting normal distribution. From (4.B.1-3), the first and second moments are given by (4.B.6-8). Define

ũ = [√w₁ u₁ᵀ ... √w_{d'} u_{d'}ᵀ]ᵀ ,  ū = Re{ũ} ,  ŭ = Im{ũ} ,

where w_k = (λ̂_k − σ̂²)²/λ̂_k. From (4.85), we can rewrite the normalized WSF cost function as (4.B.9). Since ũ is of order O_p(N^{−1/2}) for large N, we can replace the consistent estimates of σ² and λ_k above by the true quantities, without changing the asymptotic properties. The assertion on the asymptotic distribution of V_WSF(θ0) then follows from the fact that (4.B.9) is a sum of 2d'(m − d) squared random variables that are asymptotically independent, and N(0, 1).

To determine the distribution of the cost function evaluated at the parameter estimate, consider the Taylor series expansions

V(θ̂) = V(θ0) + V′ᵀ(θ0) θ̃ + ½ θ̃ᵀ V″(θ0) θ̃ + o(|θ̃|²) ,   (4.B.10)

V′(θ̂) = 0 = V′(θ0) + V″(θ0) θ̃ + o(|θ̃|) ,   (4.B.11)

where the subscript has been dropped from V_WSF(θ) for notational convenience. Here, θ̃ denotes the estimation error, θ̃ = θ̂ − θ0. The regularity assumptions on the array response vectors guarantee that V″(θ0) converges (w.p.1) as N → ∞. The limiting Hessian, denoted H, is assumed to be non-singular. Equations (4.B.10-11) can be written

V(θ̂) = V(θ0) + V′ᵀ(θ0) θ̃ + ½ θ̃ᵀ H θ̃ + o(|θ̃|²) ,   (4.B.12)

V′(θ̂) = 0 = V′(θ0) + H θ̃ + o(|θ̃|) .   (4.B.13)

The "in-probability" rate of convergence of the estimation error is O_p(N^{−1/2}). Thus, inserting (4.B.13) into (4.B.12) yields

V(θ̂) ≈ V(θ0) − V′ᵀ(θ0) H⁻¹ V′(θ0) + ½ V′ᵀ(θ0) H⁻¹ V′(θ0) = V(θ0) − ½ V′ᵀ(θ0) H⁻¹ V′(θ0) ,   (4.B.14)

where the approximation is of order o_p(N⁻¹). Consider the second term of (4.B.14). The ith component of the gradient is given in (4.93). Since P_A^⊥(θ0) Ê_s is of order O_p(N^{−1/2}), we can write

∂V(θ0)/∂θ_i = −2 Re{ Σ_{k=1}^{d'} w_k ê_kᴴ A†ᴴ A_iᴴ P_A^⊥ ê_k } + o_p(N^{−1/2})
            = −2 Re{ Σ_{k=1}^{d'} w_k ê_kᴴ A†ᴴ A_iᴴ Z u_k }
            = Re{ T_i ũ } + o_p(N^{−1/2}) ,   (4.B.15)

where

T_i = [ −2√w₁ ê₁ᴴ A†ᴴ A_iᴴ Z, ..., −2√w_{d'} ê_{d'}ᴴ A†ᴴ A_iᴴ Z ] .   (4.B.16)

Thus, V′(θ0) can be written as

V′(θ0) = T̄ ū − T̆ ŭ + o_p(N^{−1/2}) ,   (4.B.17)

where T = [T₁ᵀ ... T_pᵀ]ᵀ, T̄ = Re{T}, and T̆ = Im{T}. Using (4.B.9) and (4.B.17), we can rewrite (4.B.14) as

V(θ̂) = [ūᵀ ŭᵀ] ( I − ½ [T̄ −T̆]ᵀ H⁻¹ [T̄ −T̆] ) [ūᵀ ŭᵀ]ᵀ + o_p(N⁻¹) .   (4.B.18)

 


The matrices T and H are related using σ² H = Q (cf. the proof of Corollary 4.1a) together with (4.B.17):

σ² H = lim_{N→∞} N E{ V′(θ0) V′(θ0)ᵀ }
     = lim_{N→∞} N E{ (T̄ ū − T̆ ŭ)(T̄ ū − T̆ ŭ)ᵀ }
     = lim_{N→∞} N ( T̄ E{ū ūᵀ} T̄ᵀ − T̄ E{ū ŭᵀ} T̆ᵀ − T̆ E{ŭ ūᵀ} T̄ᵀ + T̆ E{ŭ ŭᵀ} T̆ᵀ )
     = (σ²/2) ( T̄ T̄ᵀ + T̆ T̆ᵀ ) ,   (4.B.19)

where we have used E{ū ŭᵀ} = o(N⁻¹) and E{ū ūᵀ} = E{ŭ ŭᵀ} = σ² I/(2N) + O(N⁻¹). Substituting (4.B.19) into (4.B.18) gives

V(θ̂) = [ūᵀ ŭᵀ] ( I − [T̄ −T̆]ᵀ (T̄ T̄ᵀ + T̆ T̆ᵀ)⁻¹ [T̄ −T̆] ) [ūᵀ ŭᵀ]ᵀ + o_p(N⁻¹)
      = [ūᵀ ŭᵀ] ( I − X ) [ūᵀ ŭᵀ]ᵀ + o_p(N⁻¹) .   (4.B.20)

 

 

We recognize the matrix X above as a pd-dimensional orthogonal projection in R^{2d'(m−d)}. Thus, I − X is a [2d'(m − d) − pd]-dimensional projection matrix. Hence, (2N/σ²) V_WSF(θ̂) can be written as a sum of the squares of 2d'(m − d) − pd random variables that are asymptotically independent, and N(0, 1). The assertion of the theorem follows.
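The degrees-of-freedom count just derived is what a WSF-type detection test thresholds against. The sketch below computes 2d'(m − d) − pd and an approximate chi-square quantile; the Wilson-Hilferty approximation and the stdlib NormalDist are conveniences of this sketch, not part of the chapter, and p (real parameters per source) is assumed to be 1:

```python
import math
from statistics import NormalDist

def wsf_df(m, d, d_prime, p=1):
    """Degrees of freedom of the asymptotic chi-square law of
    (2N/sigma^2) V_WSF at the minimizing argument: 2 d'(m - d) - p d."""
    return 2 * d_prime * (m - d) - p * d

def chi2_quantile(df, q):
    """Approximate chi-square quantile via the Wilson-Hilferty cube
    transformation (accurate to a few hundredths for moderate df),
    used here only to avoid non-stdlib dependencies."""
    z = NormalDist().inv_cdf(q)
    return df * (1 - 2.0 / (9 * df) + z * math.sqrt(2.0 / (9 * df))) ** 3

# Example: m = 8 sensors, d = d' = 2 sources, scalar DOA per source (p = 1),
# so the statistic should look chi-square with 2*2*(8-2) - 2 = 22 degrees
# of freedom; the 95%-confidence detection threshold is its 0.95 quantile.
df = wsf_df(8, 2, 2)
threshold = chi2_quantile(df, 0.95)
```

A detection scheme of the kind discussed in Sect. 4.6 would accept the hypothesized d whenever the normalized cost falls below such a threshold.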

References

4.1 S.P. Applebaum: Adaptive arrays. Technical Report SPL TR 66-1, Syracuse University Research Corporation (1966)
4.2 B. Widrow, P.E. Mantey, L.J. Griffiths, B.B. Goode: Adaptive antenna systems. Proc. IEEE 55, 2143-2159 (1967)
4.3 F.C. Schweppe: Sensor array data processing for multiple signal sources. IEEE Trans. IT-14, 294-305 (1968)
4.4 V.H. MacDonald, P.M. Schultheiss: Optimum passive bearing estimation in a spatially incoherent noise environment. J. Acoust. Soc. Am. 46(1), 37-43 (1969)
4.5 J. Capon: High resolution frequency wave number spectrum analysis. Proc. IEEE 57, 1408-1418 (1969)
4.6 R.A. Monzingo, T.W. Miller: Introduction to Adaptive Arrays (Wiley, New York 1980)
4.7 S. Haykin (ed.): Array Signal Processing (Prentice-Hall, Englewood Cliffs, NJ 1985)
4.8 B.D. Van Veen, K.M. Buckley: Beamforming: A versatile approach to spatial filtering. IEEE ASSP Mag., pp. 4-24 (April 1988)
4.9 J.P. Burg: Maximum entropy spectral analysis, in Proc. 37th Annual Int. SEG Meeting, Oklahoma City, OK (1967)
4.10 D.W. Tufts, R. Kumaresan: Estimation of frequencies of multiple sinusoids: Making linear prediction perform like maximum likelihood. Proc. IEEE 70, 975-989 (1982)
4.11 R. Kumaresan, D.W. Tufts: Estimating the angles of arrival of multiple plane waves. IEEE Trans. AES-19, 134-139 (1983)
4.12 R.O. Schmidt: Multiple emitter location and signal parameter estimation, in Proc. RADC Spectrum Estimation Workshop, Rome, NY (1979) pp. 243-258
4.13 G. Bienvenu, L. Kopp: Principe de la goniométrie passive adaptative, in Proc. 7ème Colloque GRETSI, Nice, France (1979) pp. 106/1-10
4.14 G. Bienvenu, L. Kopp: Adaptivity to background noise spatial coherence for high resolution passive methods, in Proc. IEEE ICASSP, Denver, CO (1980) pp. 307-310
4.15 N.L. Owsley: Data adaptive orthonormalization, in Proc. ICASSP 78, Tulsa, OK (1978) pp. 109-112
4.16 V.F. Pisarenko: The retrieval of harmonics from a covariance function. Geophys. J. R. Astron. Soc. 33, 347-366 (1973)
4.17 G. Bienvenu, L. Kopp: Optimality of high resolution array processing. IEEE Trans. ASSP-31, 1235-1248 (1983)
4.18 A. Paulraj, R. Roy, T. Kailath: A subspace rotation approach to signal parameter estimation. Proc. IEEE 74, 1044-1045 (1986)
4.19 S.Y. Kung, C.K. Lo, R. Foka: A Toeplitz approximation approach to coherent source direction finding, in Proc. ICASSP 86, Tokyo (1986)
4.20 S.J. Orfanidis: A reduced MUSIC algorithm, in Proc. IEEE ASSP 3rd Workshop Spectrum Est. Modeling, Boston, MA (1986) pp. 165-167
4.21 W.J. Bangs: Array processing with generalized beamformers. Ph.D. Thesis, Yale University (1971)
4.22 M. Wax: Detection and estimation of superimposed signals. Ph.D. Thesis, Stanford University (1985)
4.23 J.F. Böhme: Estimation of source parameters by maximum likelihood and non-linear regression, in Proc. ICASSP 84, San Diego, CA (1984) pp. 7.3.1-4
4.24 J.F. Böhme: Estimation of spectral parameters of correlated signals in wavefields. Signal Processing 10, 329-337 (1986)
4.25 P. Stoica, A. Nehorai: MUSIC, maximum likelihood and Cramer-Rao bound, in Proc. ICASSP 88, New York (1988) pp. 2296-2299
4.26 B. Ottersten, L. Ljung: Asymptotic results for sensor array processing, in Proc. ICASSP 89, Glasgow, Scotland (1989) pp. 2266-2269
4.27 B. Ottersten, M. Viberg: Analysis of subspace fitting based methods for sensor array processing, in Proc. ICASSP 89, Glasgow, Scotland (1989) pp. 2807-2810
4.28 R.O. Schmidt: A signal subspace approach to multiple emitter location and spectral estimation. Ph.D. Thesis, Stanford University (1981)
4.29 B. Ottersten, B. Wahlberg, M. Viberg, T. Kailath: Stochastic maximum likelihood estimation in sensor arrays by weighted subspace fitting, in Proc. 23rd Asilomar Conf. Sig., Syst., Comput., Monterey, CA (1989) pp. 599-603
4.30 P. Stoica, A. Nehorai: MUSIC, maximum likelihood and Cramer-Rao bound. IEEE Trans. ASSP-37, 720-741 (1989)
4.31 H. Clergeot, S. Tressens, A. Ouamri: Performance of high resolution frequencies estimation methods compared to the Cramer-Rao bounds. IEEE Trans. ASSP-37, 1703-1720 (1989)
4.32 P. Stoica, A. Nehorai: Performance study of conditional and unconditional direction-of-arrival estimation. IEEE Trans. ASSP-38, 1783-1795 (1990)
4.33 K. Sharman, T.S. Durrani, M. Wax, T. Kailath: Asymptotic performance of eigenstructure spectral analysis methods, in Proc. ICASSP 84, San Diego, CA (1984) pp. 45.5.1-4
4.34 D.J. Jeffries, D.R. Farrier: Asymptotic results for eigenvector methods. IEE Proc. F 132, 589-594 (1985)
4.35 M. Kaveh, A.J. Barabell: The statistical performance of the MUSIC and the minimum-norm algorithms in resolving plane waves in noise. IEEE Trans. ASSP-34, 331-341 (1986)
4.36 B. Porat, B. Friedlander: Analysis of the asymptotic relative efficiency of the MUSIC algorithm. IEEE Trans. ASSP-36, 532-544 (1988)
4.37 B.D. Rao, K.V.S. Hari: Performance analysis of Root-MUSIC. IEEE Trans. ASSP-37, 1939-1949 (1989)
4.38 B.D. Rao, K.V.S. Hari: Performance analysis of ESPRIT and TAM in determining the direction of arrival of plane waves in noise. IEEE Trans. ASSP-37, 1990-1995 (1989)
4.39 P. Stoica, A. Nehorai: MUSIC, maximum likelihood, and Cramer-Rao bound: Further results and comparisons. IEEE Trans. ASSP-38, 2140-2150 (1990)
4.40 P. Stoica, A. Nehorai: Performance comparison of subspace rotation and MUSIC methods for direction estimation. IEEE Trans. SP-39, 446-453 (1991)
4.41 P. Stoica, K. Sharman: A novel eigenanalysis method for direction estimation. IEE Proc. F, 19-26 (1990)
4.42 P. Stoica, K. Sharman: Maximum likelihood methods for direction-of-arrival estimation. IEEE Trans. ASSP-38, 1132-1143 (1990)
4.43 D. Starer: Algorithms for polynomial-based signal processing. Ph.D. Thesis, Yale University (1990)
4.44 M. Viberg, B. Ottersten: Sensor array processing based on subspace fitting. IEEE Trans. SP-39, 1110-1121 (1991)
4.45 B. Ottersten, M. Viberg, T. Kailath: Performance analysis of the total least squares ESPRIT algorithm. IEEE Trans. SP-39, 1122-1135 (1991)
4.46 Y. Bresler: Maximum likelihood estimation of linearly structured covariance with application to antenna array processing, in Proc. 4th ASSP Workshop on Spectrum Estimation and Modeling, Minneapolis, MN (1988) pp. 172-175
4.47 B. Ottersten, R. Roy, T. Kailath: Signal waveform estimation in sensor array processing, in Proc. 23rd Asilomar Conf. Sig., Syst., Comput., Pacific Grove, CA (1989) pp. 787-791
4.48 M. Wax, I. Ziskind: On unique localization of multiple sources by passive sensor arrays. IEEE Trans. ASSP-37, 996-1000 (1989)
4.49 A. Nehorai, D. Starer, P. Stoica: Direction-of-arrival estimation in applications with multipath and few snapshots. Circuits, Syst., Signal Proc. 10, 327-342 (1991)
4.50 K.V. Mardia, J.T. Kent, J.M. Bibby: Multivariate Analysis (Academic, London 1979)
4.51 T.W. Anderson: An Introduction to Multivariate Statistical Analysis, 2nd edn. (Wiley, New York 1984)
4.52 N.R. Goodman: Statistical analysis based on a certain multivariate complex Gaussian distribution (an introduction). Ann. Math. Stat. 34, 152-176 (1963)
4.53 A.G. Jaffer: Maximum likelihood direction finding of stochastic sources: A separable solution, in Proc. ICASSP 88, New York (1988) pp. 2893-2896
4.54 G. Golub, V. Pereyra: The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate. SIAM J. Numer. Anal. 10, 413-432 (1973)
4.55 S.D. Silvey: Statistical Inference (Penguin, London 1970)
4.56 E.L. Lehmann: Theory of Point Estimation (Wiley, New York 1983)
4.57 A.J. Weiss, E. Weinstein: Lower bounds in parameter estimation: Summary of results, in Proc. ICASSP 86, Tokyo, Japan (1986) pp. 569-572
4.58 A. Graham: Kronecker Products and Matrix Calculus with Applications (Ellis Horwood, Chichester, UK 1981)
4.59 A.J. Weiss, B. Friedlander: On the Cramer-Rao bound for direction finding of correlated signals. Technical Report, Signal Processing Technology, Ltd., Palo Alto, CA (1990)
4.60 B. Ottersten, M. Viberg, T. Kailath: Analysis of subspace fitting and ML techniques for parameter estimation from sensor array data. IEEE Trans. SP-40, 590-600 (1992)
4.61 T. Söderström, P. Stoica: System Identification (Prentice-Hall International, London, UK 1989)
4.62 R.H. Roy: ESPRIT, estimation of signal parameters via rotational invariance techniques. Ph.D. Thesis, Stanford University (1987)