
Estimation of Mathematical Expectation

As applied to the stationary stochastic process with the spectral density given by (12.19), we have that d = 1. For this reason, we can write

υ(t) = α/(2σ²) + b0δ(t) + c0δ(t − T).   (12.32)

Substituting (12.13) and (12.32) into (12.5), we obtain

∫₀ᵀ 0.5α exp{−α|t − τ|}dτ + b0σ² exp{−αt} + c0σ² exp{−α(T − t)} = 1.   (12.33)

Dividing the integration interval into two intervals, namely, 0 ≤ τ < t and t ≤ τ ≤ T, after integration we obtain

(b0σ² − 1)exp{−αt} + (c0σ² − 1)exp{−α(T − t)} = 0.   (12.34)

This equality holds if the coefficients of the terms exp{−αt} and exp{−α(T − t)} are equal to zero, that is, b0 = c0 = σ⁻². Substituting b0 and c0 into (12.32), we obtain the formula given by (12.17).

Now, consider the exponential function in (12.9). The formula

ρ1² = ∫₀ᵀ s(t)υ(t)dt   (12.35)

 

is the deterministic component or, in other words, the signal when the estimated parameter E0 = 1. The random component

∫₀ᵀ x0(t)υ(t)dt   (12.36)

 

 

is the noise component. The variance of the noise component taking into consideration (12.8) is defined as

⟨[∫₀ᵀ x0(t)υ(t)dt]²⟩ = ∫₀ᵀ∫₀ᵀ ⟨x0(t1)x0(t2)⟩υ(t1)υ(t2)dt1dt2 = ∫₀ᵀ s(t)υ(t)dt = ρ1².   (12.37)

 

As we can see from (12.37), ρ1² is the ratio between the power of the signal and the power of the noise. Because of this, we can say that ρ1² in (12.37) is the signal-to-noise ratio (SNR) when the estimated parameter value is E0 = 1.
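A minimal numerical sketch can make the role of υ(t) and ρ1² concrete. It discretizes the integral equation (12.8) on a uniform grid for the exponential correlation function (12.13) with s(t) = 1 and solves the resulting linear system; the grid size, the trapezoidal quadrature, and all variable names are choices of this sketch, not anything prescribed by the text.

```python
# Discrete sketch: weight function and SNR for R(tau) = sigma^2*exp(-alpha*|tau|), s(t) = 1.
import numpy as np

sigma2, alpha, T, N = 1.0, 2.0, 10.0, 2000
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

# Correlation kernel (12.13) on the grid and trapezoidal quadrature weights.
R = sigma2 * np.exp(-alpha * np.abs(t[:, None] - t[None, :]))
w = np.full(N, dt)
w[0] = w[-1] = 0.5 * dt

# Discretized integral equation (12.8) with s(t) = 1:  sum_j R[i, j]*w[j]*v[j] = 1.
s = np.ones(N)
v = np.linalg.solve(R * w[None, :], s)

rho1_sq = np.sum(w * s * v)        # discrete analogue of (12.35)
print("rho1^2 (discrete)        :", rho1_sq)
print("(2 + alpha*T)/(2*sigma^2):", (2.0 + alpha * T) / (2.0 * sigma2))
```

The second printed value is the reciprocal of the variance 2σ²/(2 + αT) obtained in (12.48) below for the same correlation function, so the two numbers should nearly coincide.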

12.2  MAXIMUM LIKELIHOOD ESTIMATE OF MATHEMATICAL EXPECTATION

Consider the conditional functional given by (12.9) of the observed stochastic process. Solving the likelihood equation with respect to the parameter E, we obtain the mathematical expectation estimate

EE = ∫₀ᵀ x(t)υ(t)dt / ∫₀ᵀ s(t)υ(t)dt.   (12.38)

 

 


As applied to analysis of stationary stochastic process, (12.38) becomes simple, namely,

EE = ∫₀ᵀ x(t)υ(t)dt / ∫₀ᵀ υ(t)dt.   (12.39)

 

 

In doing so, under the condition Tτcor⁻¹ → ∞ we can neglect the values of the stochastic process and its derivatives at t = 0 and t = T when estimating the mathematical expectation; in other words, we can assume that the following approximation is correct:

υ(t) = S⁻¹(ω = 0).   (12.40)

In this case, we obtain the asymptotical formula for the mathematical expectation estimate of stationary stochastic process, namely,

EE = lim_{T→∞} (1/T) ∫₀ᵀ x(t)dt,   (12.41)

 

 

which is widely used in the theory of stochastic processes to define the mathematical expectation of ergodic stochastic processes with arbitrary pdf. At large but finite values of Tτcor⁻¹, we can neglect the effect of the values of the stochastic process and its derivatives at t = 0 and t = T on the mathematical expectation estimate. As a result, we can write

EE ≈ (1/T) ∫₀ᵀ x(t)dt.   (12.42)
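As a small illustration, the asymptotic estimate (12.42) is just the time average of one realization; the sketch below assumes the realization is available as uniformly spaced samples and uses a trapezoidal sum, which is an implementation choice rather than part of the text.

```python
import numpy as np

def time_average_estimate(x, dt):
    """Trapezoidal approximation of (1/T) * integral of x(t) over [0, T], per (12.42)."""
    x = np.asarray(x, dtype=float)
    T = dt * (len(x) - 1)
    return (0.5 * (x[0] + x[-1]) + x[1:-1].sum()) * dt / T
```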

 

 

Although the obtained formulae for the mathematical expectation estimate are optimal in the case of a Gaussian stochastic process, they remain optimal within the class of linear estimates for stochastic processes whose pdf differs from the Gaussian one. Equations 12.38 and 12.39 are true if the a priori interval of possible values of the mathematical expectation is not limited. Equation 12.38 allows us to define the optimal device structure to estimate the mathematical expectation of the stochastic process (Figure 12.1). The main operation is the linear integration of the received realization x(t) with the weight υ(t), which is defined by the solution of the integral equation (12.8). The decision device issues the output at the instant t = T.

FIGURE 12.1  Optimal structure to define the mathematical expectation estimate. (In the diagram, the correlation function K(τ) and the signal s(t) feed the block that solves Equation 12.8 for the weight υ(t); the received realization x(t) is integrated with the weight υ(t), the result ∫x(t)υ(t)dt is normalized by ∫s(t)υ(t)dt, and the decision device issues the estimate EE.)


To obtain the current value of the mathematical expectation estimate, the limits of integration in (12.38) must be t − T and t, respectively. Then the parameter estimate is defined as

EE(t) = ∫_{t−T}^{t} x(τ)υ(τ)dτ / ∫_{t−T}^{t} s(τ)υ(τ)dτ.   (12.43)

 

 

 

The weight integration can be done by the linear filter with corresponding impulse response. For this purpose, we introduce the function

υ(τ) = h(t − τ) or h(τ) = υ(t − τ)

(12.44)

and substitute this function into (12.43) instead of υ(τ), introducing a new variable t − τ = z. Then (12.43) can be transformed into the following form:

EE(t) = ∫₀ᵀ x(t − z)h(z)dz / ∫₀ᵀ s(t − z)h(z)dz.   (12.45)

 

 

The integrals in (12.45) are the output responses of the linear filter with the impulse response h(t) given by (12.44) when the filter input is excited by x(t) and s(t), respectively.
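A discrete sketch of this filtering interpretation of (12.45) is given below: both weighted integrals over the sliding window are approximated by convolutions with a vector of window weights h, which is assumed to be given (for instance, from a discretization of υ(t) such as the one sketched after (12.37)); the function and variable names are illustrative.

```python
import numpy as np

def running_estimate(x, s, h, dt):
    """EE(t) of (12.45) on the part of the record where the whole window fits."""
    num = np.convolve(x, h, mode="valid") * dt   # approximates integral of x(t - z)h(z)dz
    den = np.convolve(s, h, mode="valid") * dt   # approximates integral of s(t - z)h(z)dz
    return num / den
```

For the mathematical expectation s(t) = 1, so the denominator reduces to the constant dt·Σh and only normalizes the window.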

The mathematical expectation of the estimate is

⟨EE⟩ = (1/ρ1²) ∫₀ᵀ ⟨x(t)⟩υ(t)dt = E0,   (12.46)

 

 

0

 

that is, the maximum likelihood estimate of the mathematical expectation of the stochastic process is both conditionally and unconditionally unbiased. The conditional variance of the mathematical expectation estimate can be presented in the following form:

 

Var{EE | E0} = ⟨(EE − ⟨EE⟩)²⟩ = ρ1⁻⁴ ∫₀ᵀ∫₀ᵀ ⟨x0(t1)x0(t2)⟩υ(t1)υ(t2)dt1dt2 = ρ1⁻²,   (12.47)

 

 

that is, the variance of the estimate is unconditional. Since, according to (12.38), the integration of a Gaussian stochastic process is a linear operation, the estimate EE obeys the Gaussian distribution.

Let the analyzed stochastic process be stationary and possess the correlation function given by (12.13). Substituting the function υ(t) with its value from (12.17) into (12.47) and integrating with the delta functions, we obtain

Var{EE} = 2σ²/(2 + Tτcor⁻¹) = 2σ²/(2 + αT) = 2σ²/(2 + p),   (12.48)

 

 

 

 

 


where p is a ratio between the time required to analyze the stochastic process and the correlation interval of the same stochastic process. In doing so, according to (12.38), the formula for the optimal estimate takes the following form:

EE = [x(0) + x(T) + α∫₀ᵀ x(t)dt] / (2 + p).   (12.49)

 

 

 

If p ≫ 1, we have

Var{EE} ≈ 2σ²/p.   (12.50)
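The closed-form estimate (12.49) and the variance (12.48) can be checked by simulation. The sketch below generates a stationary Gaussian process with the correlation function (12.13) as an exact AR(1) recursion on a uniform grid — an implementation choice of this sketch — and compares the empirical variance of the estimate with 2σ²/(2 + αT).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, alpha, T, N, E0, trials = 1.0, 2.0, 10.0, 1001, 3.0, 4000
dt = T / (N - 1)
rho = np.exp(-alpha * dt)                       # one-step correlation coefficient

# Exact sampling of a zero-mean Gaussian process with R(tau) = sigma^2*exp(-alpha*|tau|).
x = np.empty((trials, N))
x[:, 0] = rng.normal(0.0, np.sqrt(sigma2), trials)
drv = rng.normal(0.0, np.sqrt(sigma2 * (1.0 - rho ** 2)), (trials, N - 1))
for i in range(1, N):
    x[:, i] = rho * x[:, i - 1] + drv[:, i - 1]
x += E0                                         # add the mathematical expectation

integral = (0.5 * (x[:, 0] + x[:, -1]) + x[:, 1:-1].sum(axis=1)) * dt
est = (x[:, 0] + x[:, -1] + alpha * integral) / (2.0 + alpha * T)   # (12.49) with p = alpha*T

print("mean of estimates :", est.mean(), "(true E0 =", E0, ")")
print("empirical variance:", est.var())
print("theory (12.48)    :", 2.0 * sigma2 / (2.0 + alpha * T))
```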

 

Formulae (12.48) and (12.49) can be obtained without determination of the pdf functional. For this purpose, the value defined by the following equation:

Ẽ = ∫₀ᵀ h(t)x(t)dt   (12.51)

can be considered as the estimate. Here h(t) is the weight function defined based on the condition of unbiasedness of the estimate that is equivalent to

 

∫₀ᵀ h(t)dt = 1,   (12.52)

 

and minimization of the variance of estimate,

 

Var{Ẽ} = ∫₀ᵀ∫₀ᵀ h(t1)h(t2)R(t1, t2)dt1dt2.   (12.53)

 

Transform the formula for the variance of estimate into a convenient form. For this purpose, introduce new variables in the double integral, namely,

τ = t2 − t1 and t1 = z,   (12.54)

and change the order of integration. Taking into consideration that R(τ) = R(−τ), we obtain

Var{Ẽ} = 2∫₀ᵀ R(τ) ∫₀^{T−τ} h(z)h(z + τ)dz dτ.   (12.55)

 

 

As was shown in Ref. [1], a definition of optimal form of the weight function h(t) is reduced to a solution of the integral Wiener–Hopf equation

∫₀ᵀ h(τ)R(τ − s)dτ − Varmin{Ẽ} = 0,  0 ≤ s ≤ T,   (12.56)

 


where Varmin{Ẽ} is the minimal variance of the estimate, jointly with the condition given by (12.52). However, the solution of (12.56) is complicated.
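Although the continuous equation (12.56) is hard to solve in closed form, its discrete counterpart is straightforward: minimizing the quadratic form (12.53) under the unbiasedness condition (12.52) with a Lagrange multiplier gives weights proportional to R⁻¹·1. The sketch below does this for the exponential correlation function (12.13); the grid, the parameter values, and the names are assumptions of the sketch.

```python
import numpy as np

sigma2, alpha, T, N = 1.0, 2.0, 10.0, 2000
t = np.linspace(0.0, T, N)
R = sigma2 * np.exp(-alpha * np.abs(t[:, None] - t[None, :]))

# g[k] plays the role of h(t_k)*dt; it is proportional to R^{-1}*1 and rescaled
# so that its sum equals one, the discrete form of the condition (12.52).
g = np.linalg.solve(R, np.ones(N))
g /= g.sum()

var_min = g @ R @ g                  # discrete analogue of (12.53)
print("minimal variance (discrete):", var_min)
print("2*sigma^2/(2 + alpha*T)    :", 2.0 * sigma2 / (2.0 + alpha * T))
```

The endpoint weights come out much larger than the interior ones, mirroring the delta-function terms of the continuous weight (12.32).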

Define the formula for an optimal estimate of mathematical expectation of the stationary stochastic process possessing the correlation function given by (12.14) and weight function given by (12.18). Substituting (12.18) into the formula for mathematical expectation estimate of the stochastic process defined as (12.38) and calculating the corresponding integrals, the following is obtained:

 

EE = [(1/T)∫₀ᵀ x(t)dt + (2α/(ω0²T))[x(0) + x(T)] + (1/(ω0²T))[x′(0) − x′(T)]] / [1 + 4α/(ω0²T)].   (12.57)

In doing so, the variance of the mathematical expectation estimate is defined as

Var{EE} = 4ασ²/(ω0²T + 4α).   (12.58)

If α ≤ ω0 and ω0T ≫ 1, the formula for the mathematical expectation estimate of the stationary Gaussian stochastic process transforms into the well-known formula for the mathematical expectation of the ergodic stochastic process given by (12.42), and the variance of the mathematical expectation estimate is defined as

Var{EE} ≈ 4ασ²/(ω0²T).   (12.59)

 

At ω1 = 0 (ω0 = α), the correlation function given by (12.14) can be transformed into the following form

R(τ) = σ² exp{−α|τ|}(1 + α|τ|),  τcor = 2/α,   (12.60)

 

 

by a limiting process. In particular, the given correlation function corresponds to the stationary stochastic process at the output of two RC circuits connected in series when “white” noise excites the input. In this case, the formulae for the mathematical expectation estimate and variance take the following form:

EE = [(1/T)∫₀ᵀ x(t)dt + (2/(αT))[x(0) + x(T)] + (1/(α²T))[x′(0) − x′(T)]] / [1 + 4/(αT)],   (12.61)

Var{EE} = 4σ²/(αT + 4).   (12.62)

The estimate and the estimate variance of the mathematical expectation of stochastic processes with other types of correlation functions can be defined analogously.


Until now we have assumed that the a priori domain of definition of the mathematical expectation is not limited. Now we consider how a limited domain of possible values of the mathematical expectation affects the mathematical expectation estimate. Let the a priori domain of definition of the mathematical expectation be limited both by an upper bound and by a lower bound, that is,

EL ≤ E ≤ EU.   (12.63)

In the considered case, the mathematical expectation estimate Ê cannot lie outside the interval given by (12.63), even though it is defined as the position of the absolute maximum of the likelihood functional logarithm (12.9). The likelihood functional logarithm reaches its maximum at E = EE. As a result, when EE < EL the likelihood functional logarithm is a monotonically decreasing function within the limits of the interval [EL, EU] and reaches its maximum value at E = EL. If EE > EU, the likelihood functional logarithm is a monotonically increasing function within the limits of the interval [EL, EU] and, consequently, reaches its maximum value at E = EU. Thus, in the case of the limited a priori domain of definition of the mathematical expectation, the estimate of the mathematical expectation of the stochastic process can be presented in the following form:

 

Ê = { EU, if EE > EU;   EE, if EL ≤ EE ≤ EU;   EL, if EE < EL }.   (12.64)

Taking into consideration the last relationship, the structure of optimal device for the mathematical expectation estimate determination in the case of the limited a priori domain of mathematical expectation definition can be obtained by the addition of a linear limiter with the following characteristic:

g(z) = { EU, if z > EU;   z, if EL ≤ z ≤ EU;   EL, if z < EL }   (12.65)

to the circuit shown in Figure 12.1. Using the well-known relationships [2] for the transformation of a Gaussian random variable pdf by a nonlinear inertialess system with the characteristic g(z), we can define the conditional pdf of the mathematical expectation estimate as follows:

 

 

 

 

 

 

 

 

p(Ê | E0) = PLδ(Ê − EL) + PUδ(Ê − EU) + [2πVar(EE | E0)]^{−1/2} exp{−(Ê − E0)²/(2Var(EE | E0))}   at EL ≤ Ê ≤ EU,
p(Ê | E0) = 0   at Ê < EL or Ê > EU.   (12.66)

Here

PL = 1 − Q[(EL − E0)/√Var(EE | E0)],   PU = Q[(EU − E0)/√Var(EE | E0)];   (12.67)


where

 

Q(z) = (1/√(2π)) ∫_z^∞ exp{−0.5y²}dy   (12.68)

is the Gaussian Q function [3,4]; Var(EE | E0) is the variance given by (12.47). The conditional bias is defined as

b(Ê | E0) = ⟨Ê⟩ − E0 = ∫_{−∞}^{∞} (Ê − E0)p(Ê | E0)dÊ
= PL(EL − E0) + PU(EU − E0) + √(Var(EE | E0)/(2π)) [exp{−(EL − E0)²/(2Var(EE | E0))} − exp{−(EU − E0)²/(2Var(EE | E0))}].   (12.69)

Thus, in the case of the limited a priori domain of possible values of the mathematical expectation of the stochastic process, the maximum likelihood estimate of the stochastic process mathematical expectation is conditionally biased. However, at small variance values of the maximum likelihood estimate, that is, Var(EE | E0) → 0, it follows from (12.67) and (12.69) that we obtain the asymptotic expression

lim_{Var(EE | E0)→0} b(Ê | E0) = 0;   (12.70)

 

that is, at Var(EE | E0) → 0, the maximum likelihood estimate of the mathematical expectation of the stochastic process is asymptotically unbiased. At high variance values of the maximum likelihood estimate, that is, Var(EE | E0) → ∞, the bias of the maximum likelihood estimate of the stochastic process mathematical expectation tends to approach

b(Ê | E0) → 0.5(EL + EU − 2E0).   (12.71)
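The probabilities (12.67) and the conditional bias (12.69) are easy to evaluate numerically. The sketch below implements them with the Gaussian Q function (12.68) expressed through the complementary error function and checks the result against a Monte Carlo average of the clipped Gaussian estimate (12.64); all parameter values are illustrative.

```python
import numpy as np
from math import erfc, exp, pi, sqrt

def Q(z):                                    # Gaussian Q function, (12.68)
    return 0.5 * erfc(z / sqrt(2.0))

def conditional_bias(E0, EL, EU, var):       # (12.67) and (12.69)
    s = sqrt(var)
    PL = 1.0 - Q((EL - E0) / s)
    PU = Q((EU - E0) / s)
    tail = sqrt(var / (2.0 * pi)) * (exp(-(EL - E0) ** 2 / (2.0 * var))
                                     - exp(-(EU - E0) ** 2 / (2.0 * var)))
    return PL * (EL - E0) + PU * (EU - E0) + tail

E0, EL, EU, var = 0.8, 0.0, 1.0, 0.09
rng = np.random.default_rng(1)
clipped = np.clip(rng.normal(E0, sqrt(var), 200_000), EL, EU)   # estimate (12.64)
print("analytic bias (12.69):", conditional_bias(E0, EL, EU, var))
print("Monte Carlo bias     :", clipped.mean() - E0)
```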

The conditional dispersion of the maximum likelihood estimate of stochastic process mathematical expectation is defined as

D(Ê | E0) = ∫_{−∞}^{∞} (Ê − E0)² p(Ê | E0)dÊ = PL(EL − E0)² + PU(EU − E0)² + Var(EE | E0)(1 − PL − PU)
+ √(Var(EE | E0)/(2π)) [(EL − E0)exp{−(EL − E0)²/(2Var(EE | E0))} − (EU − E0)exp{−(EU − E0)²/(2Var(EE | E0))}].   (12.72)

At small variance values of the maximum likelihood estimate of stochastic process mathematical expectation

Var(EE | E0)/(EU − EL)² ≪ 1  and  EL < E0 < EU,   (12.73)

 


if the limiting process is carried out at EL → −∞ and EU → ∞, the conditional dispersion of the maximum likelihood estimate of the stochastic process mathematical expectation coincides with the variance of the estimate given by (12.47). If the true value of the mathematical expectation coincides with one of the two bounds of the a priori domain of possible values of the mathematical expectation, then the following approximation is true:

D(Ê | E0) ≈ 0.5Var(EE | E0);   (12.74)

that is, the dispersion of the estimate is half of that in the unlimited a priori domain case. With increasing variance of the maximum likelihood estimate, Var(EE | E0) → ∞, the conditional dispersion of the maximum likelihood estimate of the stochastic process mathematical expectation tends to approach a finite value, since PL = PU = 0.5:

D(Ê | E0) → 0.5[(EL − E0)² + (EU − E0)²],   (12.75)

whereas the dispersion of the maximum likelihood estimate of the stochastic process mathematical expectation within the unlimited a priori domain of possible values increases without limit as Var(EE | E0) → ∞. It is important to note that although the bias and dispersion of the maximum likelihood estimate of the stochastic process mathematical expectation are defined as conditional values, they are nevertheless independent of the true value of the mathematical expectation E0 and are the unconditional estimates simultaneously.

Determine the unconditional bias and dispersion of maximum likelihood estimate of stochastic process mathematical expectation in the case of the limited a priori domain of possible estimate values. For this purpose, it is necessary to average the conditional characteristics given by (12.69) and (12.72) with respect to possible values of estimated parameter, assuming that the a priori pdf of estimated parameter is uniform within the limits of the interval [EL, EU]. In this case, we observe that the unconditional estimate is unbiased, and the unconditional dispersion is determined in the following form:

 

 

 

 

D(Ê) = Var(EE | E0)[1 − 2Q((EU − EL)/√Var(EE | E0))] + (2(EU − EL)²/3) Q((EU − EL)/√Var(EE | E0))
− [8Var(EE | E0)√Var(EE | E0) / (3√(2π)(EU − EL))] [1 − exp{−(EU − EL)²/(2Var(EE | E0))}]
− [2(EU − EL)√Var(EE | E0) / (3√(2π))] exp{−(EU − EL)²/(2Var(EE | E0))}.   (12.76)

At the same time, it is not difficult to see that at small values of the variance, that is, Var(EE | E0) → 0, the unconditional dispersion transforms into the dispersion of the estimate obtained under the unlimited a priori domain of possible values, D(Ê) → Var(EE | E0). Otherwise, at high values of the variance, that is, Var(EE | E0) → ∞, the dispersion of the estimate given by (12.47) increases without limit, while the unconditional dispersion given by (12.76) has a limit equal to the average square of the a priori domain of possible values of the estimate, that is, (EU − EL)²/3.
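The two limiting values of the unconditional dispersion can be illustrated directly by simulation: the sketch below draws the true mean from a uniform prior on [EL, EU], forms the clipped Gaussian estimate (12.64), and averages the squared error for a very small and a very large value of Var(EE | E0); the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
EL, EU, n = 0.0, 1.0, 1_000_000

for var in (1e-4, 1e4):                               # small and large Var(EE | E0)
    E0 = rng.uniform(EL, EU, n)                       # uniform a priori pdf of E0
    Ehat = np.clip(rng.normal(E0, np.sqrt(var)), EL, EU)
    D = np.mean((Ehat - E0) ** 2)
    print(f"Var = {var:g}: D = {D:.4g}  "
          f"(limits: Var itself = {var:g}, (EU - EL)^2/3 = {(EU - EL) ** 2 / 3:.4g})")
```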


12.3  BAYESIAN ESTIMATE OF MATHEMATICAL EXPECTATION: QUADRATIC LOSS FUNCTION

As before, we analyze the realization x(t) of stochastic process given by (12.2). The a posteriori pdf of estimated stochastic process parameter E can be presented in the following form:

 

 

 

 

 

 

ppost(E) = pprior(E) exp{E∫₀ᵀ x(t)υ(t)dt − (E²/2)∫₀ᵀ s(t)υ(t)dt} / ∫_{−∞}^{∞} pprior(E) exp{E∫₀ᵀ x(t)υ(t)dt − (E²/2)∫₀ᵀ s(t)υ(t)dt} dE,   (12.77)

where
pprior(E) is the a priori pdf of the estimated stochastic process parameter;
υ(t) is the solution of the integral equation given by (12.8).

In accordance with the definition given in Section 11.4, the Bayesian estimate γE is the estimate minimizing the unconditional average risk given by (11.29) at the given loss function. As applied to the quadratic loss function defined as

ℒ(γ, E) = (γ − E)²,   (12.78)

the average risk coincides with the dispersion of estimate. In doing so, the Bayesian estimate γE is obtained based on minimization of the a posteriori risk at each fixed realization of observed data

γE = ∫_{−∞}^{∞} E ppost(E)dE.   (12.79)

 

To define the estimate characteristics, that is, the bias and dispersion, it is necessary to determine the first two moments of the random variable γE. However, in the case of an arbitrary a priori pdf of the estimated stochastic process parameter E, it is impossible to determine these moments in a general form. In accordance with this statement, we consider the discussed problem for the case of an a priori Gaussian pdf of the estimated parameter; that is, we assume [5]

 

pprior(E) = [2πVarprior(E)]^{−1/2} exp{−(E − Eprior)²/(2Varprior(E))},   (12.80)

where Eprior and Varprior(E) are the a priori values of the mathematical expectation and variance of the mathematical expectation estimate. Substituting (12.80) into the formula defining the Bayesian

estimate and carrying out the integration, we obtain

γE = [Varprior(E)∫₀ᵀ x(t)υ(t)dt + Eprior] / [Varprior(E)∫₀ᵀ s(t)υ(t)dt + 1].   (12.81a)

It is not difficult to note that if Varprior(E) → ∞, the a priori pdf of the estimated parameter is approximated by a uniform pdf and the estimate becomes the maximum likelihood estimate (12.38).


In the opposite case, that is, Varprior(E) → 0, the a priori pdf degenerates into the Dirac delta function δ(E − Eprior) and, naturally, the estimate γE matches Eprior.
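A minimal sketch of the Bayesian estimate (12.81a) is given below; the observation integral ∫x(t)υ(t)dt and ρ1² = ∫s(t)υ(t)dt are assumed to be already available (for example, from a discretization such as the one sketched after (12.37)), and the two print statements illustrate the limiting cases just discussed.

```python
def bayes_estimate(xv_integral, rho1_sq, E_prior, var_prior):
    """(12.81a): xv_integral = integral of x(t)*v(t)dt, rho1_sq = integral of s(t)*v(t)dt."""
    return (var_prior * xv_integral + E_prior) / (var_prior * rho1_sq + 1.0)

# A very wide prior reproduces the maximum likelihood estimate (12.38),
# a very narrow prior returns the a priori mean E_prior.
print(bayes_estimate(3.3, 1.1, 0.0, 1e9) - 3.3 / 1.1)   # close to 0: ML limit
print(bayes_estimate(3.3, 1.1, 0.0, 1e-9))              # close to 0 = E_prior
```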

The mathematical expectation of estimate can be presented in the following form:

⟨γE⟩ = [Varprior(E)ρ1²E0 + Eprior] / [Varprior(E)ρ1² + 1],   (12.81b)

where ρ1² is given by (12.35). In doing so, the conditional bias of the considered estimate is defined as

b(γE | E0) = ⟨γE⟩ − E0 = (Eprior − E0) / [Varprior(E)ρ1² + 1].   (12.82)

Averaging the conditional bias over all possible a priori values E0, we obtain that in the case of the quadratic loss function the Bayesian estimate for the Gaussian a priori pdf is the unconditionally unbiased estimate.

The conditional dispersion of the obtained estimate can be presented in the following form:

D(γE | E0) = ⟨(γE − E0)²⟩ = [(Eprior − E0)² + Varprior²(E)ρ1²] / [Varprior(E)ρ1² + 1]².   (12.83)

 

We see that the unconditional dispersion coincides with the unconditional variance and is defined as

Var(γE) = D(γE) = Varprior(E) / [Varprior(E)ρ1² + 1].   (12.84)

If Varprior(E)ρ1² ≫ 1, then the variance of the considered Bayesian estimate coincides with the variance of the maximum likelihood estimate given by (12.47). In the opposite case, if Varprior(E)ρ1² ≪ 1, the variance of the estimate tends to approach

Var(γE) ≈ Varprior(E){1 − Varprior(E)ρ1²}.   (12.85)

As applied to an arbitrary a priori pdf of the estimated parameter, we can obtain approximate formulae for the bias and dispersion of the estimate. For this purpose, we transform (12.77) by substituting the realization x(t) given by (12.2). Then, we can write

E∫₀ᵀ x(t)υ(t)dt − (E²/2)∫₀ᵀ s(t)υ(t)dt = ρ²S(E) + ρN(E),   (12.86)

where

ρ² = E0²ρ1²;   (12.87)

S(E) = E(2E0 − E)/(2E0²);   (12.88)

N(E) = [E/(E0ρ1)] ∫₀ᵀ x0(t)υ(t)dt.   (12.89)
