Estimation of Mathematical Expectation
The introduced functions S(E) and N(E) can be called the normalized signal and noise components, respectively. They are normalized in such a way that S(E) reaches its maximum, equal to 0.5, at E = E0:

S(E)max = S(E = E0) = 0.5.    (12.90)
The noise component N(E) has zero mathematical expectation, and its correlation function is defined as
⟨N(E1)N(E2)⟩ = E1E2/E0².    (12.91)
At E = E0, the variance of the noise component can be presented in the following form:

⟨N²(E0)⟩ = 1.    (12.92)
As a result, the Bayesian estimate of the mathematical expectation of stochastic process can be written in the following form:
γE = ∫_{−∞}^{∞} E pprior(E) exp{ρ²S(E) + ρN(E)} dE / ∫_{−∞}^{∞} pprior(E) exp{ρ²S(E) + ρN(E)} dE.    (12.93)
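The ratio of integrals in (12.93) can be evaluated by direct quadrature. The sketch below assumes a Gaussian prior and concrete illustrative forms for the normalized components, not taken from the text: S(E) = (2EE0 − E²)/(2E0²), which satisfies (12.90), and N(E) = (E/E0)ξ with ξ a standard Gaussian draw, which reproduces the correlation (12.91).

```python
import numpy as np

def bayes_estimate(rho, xi, E0=1.0, E_prior=0.8, var_prior=0.25, n=20001):
    """Evaluate the Bayesian estimate (12.93) by direct quadrature.

    Assumed illustrative model: Gaussian prior pprior(E),
    S(E) = (2*E*E0 - E**2)/(2*E0**2)  (peaks at 0.5 for E = E0, cf. (12.90)),
    N(E) = (E/E0)*xi with xi ~ N(0, 1) (reproduces <N(E1)N(E2)> = E1*E2/E0**2).
    """
    E = np.linspace(E_prior - 6*np.sqrt(var_prior),
                    E_prior + 6*np.sqrt(var_prior), n)
    p_prior = np.exp(-(E - E_prior)**2 / (2*var_prior))
    S = (2*E*E0 - E**2) / (2*E0**2)
    N = (E / E0) * xi
    expo = rho**2 * S + rho * N
    w = p_prior * np.exp(expo - expo.max())   # stabilized weights
    # Ratio of the two integrals in (12.93); the grid step cancels
    return np.sum(E * w) / np.sum(w)

# rho -> 0: the estimate tends to the prior mean, cf. (12.94)
print(bayes_estimate(1e-6, xi=0.0))   # ~0.8
# High SNR: the estimate concentrates near the true value E0
print(bayes_estimate(50.0, xi=0.0))   # ~1.0
```

The two printed values illustrate the two limiting cases analyzed in the text: the prior mean at vanishing SNR and the true parameter value at high SNR.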
Consider two limiting cases: the weak and strong signals or, in other words, the low and high SNR ρ².
12.3.1 Low Signal-to-Noise Ratio (ρ² ≪ 1)
As we can see from (12.93), at low values of the SNR (ρ → 0), the exponential function tends to unity and, as a result, the Bayesian estimate γE coincides with the a priori mathematical expectation
γE(ρ → 0) = γ0 = ∫_{−∞}^{∞} E pprior(E) dE = Eprior.    (12.94)
At finite values of the SNR, the difference γE − γ0 is not equal to zero. The closeness of γE to γ0 at ρ ≪ 1 allows us to find the estimate characteristics if we can define the deviation of γE from γ0 in the form of corresponding approximations and, consequently, the deviation of γE from the true value of the estimated parameter E0, since in general Eprior ≠ E0. At ρ ≪ 1, the estimate γE can be presented in the following approximate form [6]:
γE = γ0 + ργ1 + ρ²γ2 + ρ³γ3 + ….    (12.95)
Signal Processing in Radar Systems
Considering the exponential function exp{ρ²S(E) + ρN(E)} in (12.93) as a function of ρ, we can expand it in a Maclaurin series in ρ. Then, neglecting terms of order ρ⁴ and higher, we can write
∫_{−∞}^{∞} (γE − E) pprior(E) exp{ρ²S(E) + ρN(E)} dE
= ∫_{−∞}^{∞} (γ0 − E + ργ1 + ρ²γ2 + ρ³γ3 + …) pprior(E)
× [1 + ρN(E) + (ρ²/2)[N²(E) + 2S(E)] + (ρ³/6)[N³(E) + 6N(E)S(E)] + …] dE = 0.    (12.96)
Equating to zero the coefficients of the terms with the same order of ρ, we obtain the formulae for the corresponding approximations:
γ0 = ∫_{−∞}^{∞} E pprior(E) dE = Eprior;    (12.97)

γ1 = ∫_{−∞}^{∞} (E − Eprior) pprior(E) N(E) dE;    (12.98)

γ2 = ∫_{−∞}^{∞} (E − Eprior) pprior(E)[0.5N²(E) + S(E)] dE − γ1 ∫_{−∞}^{∞} pprior(E) N(E) dE;    (12.99)

γ3 = (1/6) ∫_{−∞}^{∞} (E − Eprior) pprior(E)[N³(E) + 6N(E)S(E)] dE − γ2 ∫_{−∞}^{∞} pprior(E) N(E) dE − γ1 ∫_{−∞}^{∞} pprior(E)[0.5N²(E) + S(E)] dE.    (12.100)
To define the approximate values of bias and dispersion of the estimate, it is necessary to determine the corresponding moments of approximations γ1, γ2, and γ3. Taking into consideration that all odd moments of the stochastic process x0(t) are equal to zero, we obtain
⟨γ1⟩ = ⟨γ3⟩ = 0,    (12.101)

⟨γ2⟩ = Varprior (E0 − Eprior)/E0²,    (12.102)

⟨γ1²⟩ = Varprior²/E0²,    (12.103)
where
Varprior = ∫_{−∞}^{∞} E² pprior(E) dE − Eprior²    (12.104)
is the variance of the a priori distribution.
Based on (12.101) through (12.104), we obtain the conditional estimate bias in the following form:
b(γE | E0) = Eprior + ρ²Varprior (E0 − Eprior)/E0² − E0 = (Eprior − E0)(1 − ρ1²Varprior),    (12.105)

where ρ1² = ρ²/E0².
The formula for the conditional bias coincides with the approximation given by (12.82) at low values of the SNR ρ1², that is, Varprior ρ1² ≪ 1. We can see that the unconditional estimate of the mathematical expectation, averaged with respect to all possible values of E0, is unbiased. The conditional dispersion of the estimate, neglecting terms of order ρ⁴ and higher, is defined in the following form:
D(γE | E0) = ⟨(γE − E0)²⟩ ≈ (Eprior − E0)² + ρ²[⟨γ1²⟩ + 2(Eprior − E0)⟨γ2⟩].    (12.106)
Substituting the determined moments, we obtain
D(γE | E0) ≈ (Eprior − E0)²(1 − 2Varprior ρ1²) + ρ1²Varprior².    (12.107)
Averaging (12.107) over all possible values of the estimated parameter E0 with the a priori pdf pprior(E0) matched with the pdf pprior(E), we can define the unconditional dispersion of the mathematical expectation estimate given by the approximation in (12.85).
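The expansion (12.95) with the coefficients (12.97) through (12.100) can be checked against the exact ratio (12.93) numerically for a fixed noise realization. As before, the quadratic S(E) and linear N(E) below are assumed illustrative forms satisfying (12.90) and (12.91), with a Gaussian prior.

```python
import numpy as np

E0, E_prior, var_prior, xi, rho = 1.0, 0.8, 0.25, 0.7, 0.2

E = np.linspace(E_prior - 6*np.sqrt(var_prior),
                E_prior + 6*np.sqrt(var_prior), 40001)
dE = E[1] - E[0]
p = np.exp(-(E - E_prior)**2 / (2*var_prior))
p /= p.sum() * dE                      # normalized prior pdf on the grid
S = (2*E*E0 - E**2) / (2*E0**2)        # assumed form, satisfies (12.90)
N = (E / E0) * xi                      # assumed form, consistent with (12.91)

# Exact Bayesian estimate (12.93) by quadrature
w = p * np.exp(rho**2*S + rho*N)
gamma_exact = np.sum(E*w) / np.sum(w)

# Approximations (12.97)-(12.100) for the same noise realization
g0 = np.sum(E*p) * dE
g1 = np.sum((E - g0)*p*N) * dE
g2 = np.sum((E - g0)*p*(0.5*N**2 + S)) * dE - g1*np.sum(p*N)*dE
g3 = (np.sum((E - g0)*p*(N**3 + 6*N*S)) * dE / 6
      - g2*np.sum(p*N)*dE
      - g1*np.sum(p*(0.5*N**2 + S))*dE)

gamma_series = g0 + rho*g1 + rho**2*g2 + rho**3*g3   # expansion (12.95)
print(abs(gamma_exact - gamma_series))               # small, O(rho**4)
```

At ρ = 0.2 the residual is of order ρ⁴, confirming that the truncated series tracks the exact estimate at low SNR.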
12.3.2 High Signal-to-Noise Ratio (ρ² ≫ 1)
The Bayesian estimate of the stochastic process mathematical expectation given by (12.93) can be written in the following form:

γE = ∫_{−∞}^{∞} E pprior(E) exp{−ρ²Z(E)} dE / ∫_{−∞}^{∞} pprior(E) exp{−ρ²Z(E)} dE,    (12.108)
where

Z(E) = [S(EE) + ρ⁻¹N(EE)] − [S(E) + ρ⁻¹N(E)];    (12.109)
EE is the maximum likelihood estimate given by (12.38). We can see that at the maximum likelihood point E = EE, the function Z(E) reaches its minimum, equal to zero, that is, Z(EE) = 0.
At high values of the SNR ρ², we can use the asymptotic Laplace formula [7] to evaluate the integrals in (12.108):

lim_{λ→∞} ∫_a^b φ(x) exp{λh(x)} dx ≈ √(2π/(λ|h″(x0)|)) exp{λh(x0)} φ(x0),    (12.110)
where a < x0 < b and the function h(x) has a maximum at x = x0. Substituting (12.110) into an initial equation for the Bayesian estimate (12.108), we obtain γE ≈ EE. Thus, at high values of the SNR, the Bayesian estimate of the stochastic process mathematical expectation coincides with the maximum likelihood estimate of the same parameter.
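A quick numerical illustration of the Laplace formula (12.110); the particular pair φ(x) and h(x) below is an assumed test example.

```python
import numpy as np

def laplace_approx(phi, h, h2_at_x0, x0, lam):
    # Right-hand side of (12.110): sqrt(2*pi/(lam*|h''(x0)|)) * exp(lam*h(x0)) * phi(x0)
    return np.sqrt(2*np.pi/(lam*abs(h2_at_x0))) * np.exp(lam*h(x0)) * phi(x0)

def quadrature(phi, h, a, b, lam, n=200001):
    # Trapezoidal evaluation of the left-hand side integral
    x = np.linspace(a, b, n)
    y = phi(x) * np.exp(lam*h(x))
    return np.sum(0.5*(y[1:] + y[:-1])) * (x[1] - x[0])

phi = lambda x: 1.0/(1.0 + x**2)     # smooth, slowly varying weight (assumed example)
h = lambda x: -(x - 1.0)**2/2.0      # maximum at x0 = 1, with h''(x0) = -1
lam = 200.0

approx = laplace_approx(phi, h, -1.0, 1.0, lam)
exact = quadrature(phi, h, -4.0, 6.0, lam)
print(exact/approx)                  # close to 1; the relative error decays as 1/lam
```

Increasing λ drives the ratio toward unity, which is the sense in which γE ≈ EE at high SNR.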
12.4 APPLIED APPROACHES TO ESTIMATE THE MATHEMATICAL EXPECTATION
Optimal methods of estimating the stochastic process mathematical expectation require accurate and complete knowledge of the other statistical characteristics of the considered stochastic process. Therefore, as a rule, various nonoptimal procedures based on (12.51) are used in practice. In doing so, the weighting function is selected in such a way that the variance of the estimate tends asymptotically to the variance of the optimal estimate.
Thus, let the estimate be defined in the following form:
E* = ∫_0^T h(t)x(t) dt.    (12.111)
The function of the following form

h(t) = T⁻¹ if 0 ≤ t ≤ T,  h(t) = 0 if t < 0 or t > T,    (12.112)
is widely used as the weighting function h(t). In doing so, the mathematical expectation estimate of the stochastic process is defined as
E* = (1/T) ∫_0^T x(t) dt.    (12.113)
The estimation procedure given by (12.113) coincides with the approximation in (12.42), which was derived from the optimal estimation rule in the case of an observation interval that is large in comparison with the correlation interval of the considered stochastic process. A device operating according to the rule given by (12.113) is called the ideal integrator.
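On uniformly sampled data, the ideal integrator (12.113) reduces to the sample mean; a minimal sketch (the signal below is an assumed example):

```python
import numpy as np

def ideal_integrator(x, dt, T):
    """Discrete approximation of E* = (1/T) * integral_0^T x(t) dt, formula (12.113)."""
    n = int(round(T/dt))
    return np.sum(x[:n]) * dt / T    # Riemann sum; equals the sample mean when n*dt = T

# Example: a constant level 2 plus one full period of a sinusoid averages back to 2
dt, T = 1e-3, 1.0
t = np.arange(0.0, T, dt)
x = 2.0 + np.sin(2*np.pi*t)
print(ideal_integrator(x, dt, T))    # ~2.0
```

The zero-mean sinusoid integrates away over a full period, leaving the constant component, which is exactly what the estimator is designed to extract.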
The variance of mathematical expectation estimate is defined as
Var(E*) = (1/T²) ∫_0^T ∫_0^T R(t2 − t1) dt1 dt2.    (12.114)
We can transform the double integral by introducing the new variables τ = t2 − t1 and t = t2. Then,
Var(E*) = (1/T²) ∫_0^T [ ∫_{−t}^{0} R(τ) dτ + ∫_0^{T−t} R(τ) dτ ] dt.    (12.115)
FIGURE 12.2 Integration domains (the (τ, t) plane; the domain is bounded by the lines t = −τ and t = T − τ, with τ running from −T to T).
The integration domain is shown in Figure 12.2. Changing the integration order, we obtain
Var(E*) = (2/T) ∫_0^T (1 − τ/T) R(τ) dτ.    (12.116)
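For the exponential normalized correlation function R(τ) = exp{−ατ}, τ ≥ 0 (an assumed example here), (12.116) can be evaluated in closed form, giving 2(p − 1 + e^{−p})/p² with p = αT; the sketch below confirms this numerically.

```python
import numpy as np

alpha, T = 1.0, 5.0
p = alpha * T

tau = np.linspace(0.0, T, 400001)
R = np.exp(-alpha*tau)                         # exponential normalized correlation
y = (1.0 - tau/T) * R
# Trapezoidal evaluation of (12.116)
var_quad = (2.0/T) * np.sum(0.5*(y[1:] + y[:-1])) * (tau[1] - tau[0])

var_closed = 2.0*(p - 1.0 + np.exp(-p))/p**2   # closed-form normalized variance
print(var_quad, var_closed)                    # the two agree
```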
If the observation interval [0, T] is much greater than the correlation interval τcor, we can replace the upper integration limit in (12.116) by infinity and neglect the term τ/T in the integrand in comparison with unity. Then
Var(E*) = (2Var(EE | E0)/T) ∫_0^∞ R(τ) dτ.    (12.117)
If the normalized correlation function R(τ) is not a sign-changing function of the argument τ, the formula (12.117) takes a simple and obvious form:
Var(E*) = 2Var(EE | E0) τcor/T.    (12.118)
Consequently, if the integration time of the ideal integrator is sufficiently large in comparison with the correlation interval of the stochastic process, then to determine the variance of the mathematical expectation estimate we only need to know the process variance and the ratio between the observation interval and the correlation interval.
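Relation (12.118) can be checked by simulation. The sketch below uses a discrete AR(1) approximation of a process with exponential normalized correlation R(τ) = exp{−|τ|/τcor}, for which ∫_0^∞ R(τ)dτ = τcor; the specific parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12345)
tau_cor, dt, T, sigma2 = 1.0, 0.02, 100.0, 1.0
n, trials = int(T/dt), 600
a = np.exp(-dt/tau_cor)                  # AR(1) coefficient for R(tau) = exp(-|tau|/tau_cor)

x = np.empty((trials, n))
x[:, 0] = np.sqrt(sigma2) * rng.standard_normal(trials)
e = np.sqrt(sigma2*(1 - a*a)) * rng.standard_normal((trials, n))
for i in range(1, n):
    x[:, i] = a*x[:, i-1] + e[:, i]      # stationary process with variance sigma2

var_emp = x.mean(axis=1).var()           # variance of the ideal-integrator estimate
var_theory = 2.0*sigma2*tau_cor/T        # formula (12.118)
print(var_emp, var_theory)               # agree within Monte Carlo scatter
```

With T/τcor = 100, the finite-T correction to (12.118) is about 1%, well below the Monte Carlo scatter of a few hundred trials.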
In the case of a sign-changing normalized correlation function of the argument τ, we can write
∫_0^T |R(τ)| dτ > ∫_0^∞ R(τ) dτ.    (12.119)
Thus, at T ≫ τcor the formula for the variance of the mathematical expectation estimate in the case of an arbitrary correlation function of a stationary stochastic process can be written in the following form:
Var(E*) ≤ 2Var(EE | E0) τcor/T.    (12.120)
FIGURE 12.3 Normalized variance of the mathematical expectation estimate versus p at various values of λ.
Dependences of the normalized mathematical expectation estimate variance Var(E*)/σ² on the ratio p between the observation time interval and the correlation interval for the ideal integrator (the continuous line) and the RC filter (the dashed lines) are presented in Figure 12.3, where λ serves as the parameter. As we can see from Figure 12.3, in the case of the ideal integrator, the variance of the estimate decreases in proportion to the increase in the observation time interval, but in the case of the RC filter, the variance of the estimate is limited by the value given by (12.141). At λ ≪ 1, the normalized variance of the mathematical expectation estimate is limited, in the limiting case, by a value equal to the ratio between the correlation interval and the RC filter time constant.
It is worthwhile to compare the mathematical expectation estimate using the ideal integrator with the optimal estimate, the variance of which Var(EE) is given by (12.48). Relative increase in the variance using the ideal integrator in comparison with the optimal estimate is defined as
κ = [Var1(E*) − Var(EE)]/Var(EE) = (2 + p)[p − 1 + exp{−p}]/p² − 1.    (12.142)
Relative increase in the variance as a function of T/τcor is shown in Figure 12.4.

FIGURE 12.4 Relative increase in variance as a function of T/τcor.

As we can see from Figure 12.4, the relative increase in the variance of the mathematical expectation estimate of a stochastic process possessing the correlation function given by (12.13) using the ideal integrator is less than 0.15 in comparison with the optimal estimate. The maximum relative increase in the variance is 0.14 and corresponds to p ≈ 2.7. This maximum increase is caused by a rapid decrease in the optimal estimate variance in comparison with the estimate obtained by the ideal integrator at small values of the observation time interval. However, as p → ∞, both estimates are equivalent, as was expected.
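The behavior described above can be reproduced directly from (12.142):

```python
import numpy as np

p = np.linspace(0.1, 100.0, 200001)
kappa = (2.0 + p)*(p - 1.0 + np.exp(-p))/p**2 - 1.0   # formula (12.142)

i = int(np.argmax(kappa))
print(p[i], kappa[i])   # maximum near p ~ 2.7, kappa ~ 0.14
print(kappa[-1])        # tends to 0 as p grows: both estimates become equivalent
```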
Consider the normalized variances of the mathematical expectation estimates of the stochastic process using the ideal integrator for the following normalized correlation functions that are widely used in practice. We analyze two RC filters connected in series, with "white" noise exciting the input of this linear system. In this case, the normalized correlation function takes the following form:
R(τ) = (1 + α|τ|) exp{−α|τ|},  α = 1/RC.    (12.143)
In doing so, the normalized variance of the mathematical expectation estimate of stochastic process is defined as
Var3(E*)/σ² = 2[2p1 − 3 + (3 + p1) exp{−p1}]/p1²,    (12.144)
where
p1 = αT = 2T/τcor  and  τcor = 2/α.    (12.145)
The set of considered stochastic processes has the normalized correlation functions that are approximated in the following form:
R(τ) = exp{−α|τ|} cos ϖτ.    (12.146)
Depending on the relationship between the parameters α and ϖ, the normalized correlation function (12.146) describes both low-frequency (α ≫ ϖ) and high-frequency (α ≪ ϖ) stochastic processes. The normalized variance of the mathematical expectation estimate of a stochastic process with the normalized correlation function given by (12.146) takes the following form:
Var4(E*)/σ² = {2[p1(1 + η²) − (1 − η²)] + 2 exp{−p1}[(1 − η²) cos p1η − 2η sin p1η]}/[p1²(1 + η²)²],    (12.147)
where η = ϖ/α. At ϖ = 0 (η = 0), (12.147) reduces, as a particular case, to (12.138); that is, we obtain the normalized variance of the mathematical expectation estimate of a stochastic process with the exponential correlation function given by (12.13) under integration by the ideal integrator. Formula (12.147) is essentially simplified at ϖ = α:
Var4(E*)/σ² = [p1 − exp{−p1} sin p1]/p1²  at ϖ = α.    (12.148)
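The limiting cases of (12.147) can be verified numerically: η = 0 recovers the exponential-correlation result, η = 1 recovers (12.148), and for any η the expression matches a direct quadrature of (12.116) with the correlation function (12.146). The particular values p1 = 7 and η = 2 below are assumed for illustration.

```python
import numpy as np

def var4(p1, eta):
    # Normalized variance (12.147) for R(tau) = exp(-alpha|tau|)cos(w*tau), eta = w/alpha
    num = (2.0*(p1*(1.0 + eta**2) - (1.0 - eta**2))
           + 2.0*np.exp(-p1)*((1.0 - eta**2)*np.cos(p1*eta) - 2.0*eta*np.sin(p1*eta)))
    return num/(p1**2*(1.0 + eta**2)**2)

p1 = 7.0
# eta = 0: exponential correlation function, cf. the remark after (12.147)
print(var4(p1, 0.0), 2.0*(p1 - 1.0 + np.exp(-p1))/p1**2)
# eta = 1 (w = alpha): formula (12.148)
print(var4(p1, 1.0), (p1 - np.exp(-p1)*np.sin(p1))/p1**2)

# Direct quadrature of (12.116) with R(tau) = exp(-tau)cos(2*tau)  (alpha = 1, eta = 2)
T, eta = p1, 2.0
tau = np.linspace(0.0, T, 400001)
y = (1.0 - tau/T)*np.exp(-tau)*np.cos(eta*tau)
var_quad = (2.0/T)*np.sum(0.5*(y[1:] + y[:-1]))*(tau[1] - tau[0])
print(var_quad, var4(p1, eta))
```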
