[Figure 12.5 plot: normalized variance Var(E*)/σ² on a logarithmic scale (10⁻³ to 10⁰) versus p1 from 1 to 10³, with curves for η = 1, 5, and 10.]
FIGURE 12.5 Normalized variances given by (12.144), (12.147), and (12.149) as functions of p1 with the parameter η.
As applied to the correlation function given by (12.14), the normalized variance of the mathematical expectation estimate of stochastic process is defined by
\[
\frac{\operatorname{Var}(E^*)}{\sigma^2}
= \frac{2\bigl[2p_1(1+\eta_1^2) - (3-\eta_1^2)\bigr]
      + 2\exp\{-p_1\}\bigl[(3-\eta_1^2)\cos(p_1\eta_1) - (3\eta_1 - \eta_1^{-1})\sin(p_1\eta_1)\bigr]}
      {p_1^2\,(1+\eta_1^2)^2}\,,
\quad (12.149)
\]
where η1 = ϖ1/α. As ϖ1 → 0 (η1 → 0), the correlation function given by (12.14) reduces to the correlation function given by (12.143), and the formula (12.149) reduces to (12.144).
The normalized variances of the mathematical expectation estimate of the stochastic process given by (12.144), (12.147), and (12.149) are shown in Figure 12.5 as functions of the parameter p1 for several values of the parameter η. As expected, at the same value of the parameter p1, the normalized variance decreases as η increases, since η characterizes the presence of quasiharmonic components in the considered stochastic process.
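To illustrate (12.149) and the trend visible in Figure 12.5, the following minimal Python sketch evaluates the normalized variance for several values of p1 and η1; the η1 → 0 limiting expression is obtained directly from (12.149) and, per the text above, should coincide with (12.144). All numerical values are chosen only for illustration.

```python
import numpy as np

def norm_var(p1, eta1):
    """Normalized variance Var(E*)/sigma^2 from (12.149)."""
    num = (2.0 * (2.0 * p1 * (1.0 + eta1**2) - (3.0 - eta1**2))
           + 2.0 * np.exp(-p1) * ((3.0 - eta1**2) * np.cos(p1 * eta1)
                                  - (3.0 * eta1 - 1.0 / eta1) * np.sin(p1 * eta1)))
    return num / (p1**2 * (1.0 + eta1**2)**2)

def norm_var_limit(p1):
    """eta1 -> 0 limit of (12.149), i.e., the form the text identifies with (12.144)."""
    return 2.0 * (2.0 * p1 - 3.0 + (p1 + 3.0) * np.exp(-p1)) / p1**2

for p1 in (1.0, 10.0, 100.0):
    for eta1 in (1.0, 5.0, 10.0):
        print(f"p1={p1:6.1f}  eta1={eta1:4.1f}  Var/sigma^2={norm_var(p1, eta1):.4e}")
    # the variance grows toward the eta1 -> 0 limiting value as eta1 decreases
    print(f"p1={p1:6.1f}  eta1 -> 0 limit  Var/sigma^2={norm_var_limit(p1):.4e}")
```

At a fixed p1 the printed values decrease as η1 grows, which is the behavior shown in Figure 12.5.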
The procedures for measuring the mathematical expectation discussed so far assume that the instantaneous values of the considered stochastic process are not limited in the course of measurement. The presence of limitation leads to additional errors in measuring the mathematical expectation of the stochastic process.
Let us determine the bias and variance of the estimate as applied both to the symmetric inertialess signal limiter (see Figure 12.6) and to the asymmetric inertialess signal limiter (see Figure 12.7) when the input of the signal limiter is excited by a Gaussian or Rayleigh stochastic process. In doing so, we assume that the mathematical expectation is defined according to (12.113), where y(t) = g[x(t)] is used instead of x(t) and g(x) is the characteristic of the transformation. The variance of the mathematical expectation estimate is defined by (12.116), where the correlation function R(τ) should be understood as the correlation function Ry(τ) defined as
\[
R_y(\tau) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(x_1)\,g(x_2)\,p_2(x_1,x_2;\tau)\,dx_1\,dx_2 - E_y^2 .
\quad (12.150)
\]
FIGURE 12.6 Symmetric inertialess signal limiter performance.
FIGURE 12.7 Asymmetric inertialess signal limiter performance.
Let a Gaussian stochastic process excite the input of the nonlinear device (Figure 12.6), and let the transformation be described by the following function:
\[
y = g(x) =
\begin{cases}
a, & x > a,\\
x, & -a \le x \le a,\\
-a, & x < -a.
\end{cases}
\quad (12.151)
\]
The bias of estimate is defined as
\[
b(E^*) = \int_{-\infty}^{\infty} g(x)\,p(x)\,dx - E_0
= -a\!\int_{-\infty}^{-a} p(x)\,dx + \int_{-a}^{a} x\,p(x)\,dx + a\!\int_{a}^{\infty} p(x)\,dx - E_0 .
\quad (12.152)
\]
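The integrals in (12.152) can also be evaluated numerically. The sketch below is a minimal illustration, assuming p(x) is a Gaussian pdf with mean E0 and variance σ²; the limiting level a and the variable names are chosen only for this example.

```python
import numpy as np
from math import erf, sqrt, pi

def gauss_pdf(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

def gauss_cdf(x, mean, sigma):
    return 0.5 * (1.0 + erf((x - mean) / (sigma * sqrt(2.0))))

def limiter_bias(a, e0, sigma, n=20_000):
    """Bias b(E*) of the symmetric-limiter output mean, evaluated from (12.152)."""
    # tail terms: -a * P(x < -a) + a * P(x > a)
    tails = -a * gauss_cdf(-a, e0, sigma) + a * (1.0 - gauss_cdf(a, e0, sigma))
    # middle term: integral of x p(x) over [-a, a] (trapezoidal rule)
    xs = np.linspace(-a, a, n)
    middle = np.trapz(xs * gauss_pdf(xs, e0, sigma), xs)
    return tails + middle - e0

# example: limiting level a = 3*sigma, true mean E0 = 0.5*sigma
print(limiter_bias(a=3.0, e0=0.5, sigma=1.0))
```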
Computing the integral in the braces, we obtain
\[
\operatorname{Var}(E^*) = \sigma^2 \sum_{\nu=1}^{\infty} \frac{1}{\nu!}
\bigl[Q^{(\nu-1)}(\chi - q) - Q^{(\nu-1)}(-\chi - q)\bigr]^2\,
\frac{2}{T}\int_0^T \Bigl(1 - \frac{\tau}{T}\Bigr)\rho^{\nu}(\tau)\,d\tau .
\quad (12.160)
\]
As χ → ∞, the derivatives of the Gaussian Q function tend to approach zero. As a result, only the term at ν = 1 remains and we obtain the initial formula (12.116).
In practical applications, the stochastic process measurements are carried out, as a rule, under the conditions of "weak" limitation of instantaneous values, that is, under the condition (χ − q) ≥ 1.5–2. In this case, the term at ν = 1 in (12.160) is dominant:
\[
\operatorname{Var}(E^*) \approx \bigl[1 - Q(\chi - q) - Q(\chi + q)\bigr]^2\,
\frac{2\sigma^2}{T}\int_0^T \Bigl(1 - \frac{\tau}{T}\Bigr)\rho(\tau)\,d\tau ,
\quad (12.161)
\]
where, for sufficiently large values, that is, (χ − q) ≥ 3, the term in square brackets is very close to unity, and we may use (12.116) to determine the variance of the mathematical expectation estimate of the stochastic process.
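As a quick numerical check of this statement, the sketch below evaluates the square-bracketed factor of (12.161), assuming Q(·) denotes the Gaussian tail probability Q(x) = 0.5 erfc(x/√2); the values of χ and q are treated here simply as free parameters chosen for illustration.

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

q = 0.5  # illustrative value of the normalized mean
for chi_minus_q in (1.5, 2.0, 3.0, 4.0):
    chi = chi_minus_q + q
    bracket = 1.0 - q_func(chi - q) - q_func(chi + q)
    print(f"chi - q = {chi_minus_q:.1f}: bracket^2 = {bracket**2:.6f}")
```

For χ − q = 3 the squared bracket already differs from unity by less than one percent, which is why (12.116) can be applied directly in that regime.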
In practice, Rayleigh stochastic processes are widely employed because they arise in a wide range of applications. In particular, the narrow-band Gaussian stochastic process, whose envelope is described by the Rayleigh pdf, can be presented in the following form:
\[
z(t) = x(t)\cos[2\pi f_0 t + \varphi(t)],
\quad (12.162)
\]
where
x(t) is the envelope
φ(t) is the phase of stochastic process
Representation in (12.162) assumes that the spectral density of the narrow-band stochastic process is concentrated within the limits of a narrow bandwidth Δf about the central frequency f0, under the condition f0 ≫ Δf. As applied to the symmetrical spectral density, the correlation function of the stationary narrow-band stochastic process takes the following form:
\[
R_z(\tau) = \sigma^2 \rho(\tau)\cos(2\pi f_0 \tau).
\quad (12.163)
\]
In doing so, the one-dimensional Rayleigh pdf can be written as
\[
f(x) = \frac{x}{\sigma^2}\exp\Bigl\{-\frac{x^2}{2\sigma^2}\Bigr\}, \quad x \ge 0 .
\quad (12.164)
\]
The first and second initial moments and the normalized correlation function of the Rayleigh stochastic process can be presented in the following form:
\[
\langle \xi(t) \rangle = \sqrt{\frac{\pi\sigma^2}{2}}, \qquad
\langle \xi^2(t) \rangle = 2\sigma^2, \qquad
\rho_\xi(\tau) \approx \rho^2(\tau).
\quad (12.165)
\]
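The first two moments in (12.165) are easy to verify by simulation. The sketch below draws samples from the Rayleigh pdf (12.164) and compares the sample moments with √(πσ²/2) and 2σ²; it is an illustrative check only, with an arbitrary value of σ.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
# Rayleigh samples whose pdf matches (12.164) with parameter sigma
x = rng.rayleigh(scale=sigma, size=1_000_000)

print("sample mean      :", x.mean(), " theory:", np.sqrt(np.pi * sigma**2 / 2.0))
print("sample 2nd moment:", np.mean(x**2), " theory:", 2.0 * sigma**2)
```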
As applied to the Rayleigh stochastic process and nonlinear transformation (see Figure 12.7) given as
\[
y = g(x) =
\begin{cases}
a, & x > a,\\
x, & 0 \le x \le a,
\end{cases}
\quad (12.166)
\]
the bias of the mathematical expectation estimate takes the following form:
\[
b(E^*) = E_y - E_0 = \int_0^{\infty} g(x) f(x)\,dx - E_0 = -\sqrt{2\pi\sigma^2}\,Q(\chi),
\quad (12.167)
\]
where χ is given by (12.155). As χ → ∞, the mathematical expectation estimate, as would be expected, is unbiased.
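The bias (12.167) can likewise be checked by simulation: clip Rayleigh samples at the level a and compare the empirical bias of the sample mean with −√(2πσ²) Q(χ). In this sketch χ = a/σ is assumed to be the normalized limiting level of (12.155), and Q(·) is taken as the Gaussian tail probability; the numerical values are illustrative only.

```python
import numpy as np
from math import erfc, sqrt, pi

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

rng = np.random.default_rng(1)
sigma, a = 1.0, 2.0
chi = a / sigma  # assumed normalized limiting level, cf. (12.155)

x = rng.rayleigh(scale=sigma, size=2_000_000)
y = np.minimum(x, a)                 # asymmetric inertialess limiter of Figure 12.7
e0 = sigma * sqrt(pi / 2.0)          # mathematical expectation of the Rayleigh process, per (12.165)

empirical_bias = y.mean() - e0
theoretical_bias = -sqrt(2.0 * pi * sigma**2) * q_func(chi)
print(empirical_bias, theoretical_bias)
```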
Determining the variance of the mathematical expectation estimate in accordance with (12.116) and (12.150) is very difficult in the case of the Rayleigh stochastic process. It is evident that, in the first approximation, the formula (12.116) can be used to determine the variance of the mathematical expectation estimate provided the condition χ ≥ 2–3 is satisfied, which is analogous to the case of the Gaussian stochastic process under weak limitation.
12.5 ESTIMATE OF MATHEMATICAL EXPECTATION AT STOCHASTIC PROCESS SAMPLING
In practice, we use digital measuring devices to measure the parameters of stochastic processes after sampling. Naturally, the information contained in the stochastic process between the sampling instants is not used.
Let the Gaussian stochastic process ξ(t) be observed at some discrete instants ti. Then there is a set of samples xi = x(ti), i = 1, 2,…, N at the input of the digital measuring device. As a rule, the sampling of the observed stochastic process is carried out over equal time intervals Δ = ti+1 − ti. Each sample value can be presented in the following form:
\[
x_i = E_i + x_{0i} = E s_i + x_{0i}
\quad (12.168)
\]
as in (12.2), where E_i = E s_i = E s(t_i) is the mathematical expectation and x_{0i} = x_0(t_i) is the realization of the centralized Gaussian stochastic process at the instant t = t_i. A set of samples x_i is characterized by the conditional N-dimensional pdf
\[
f_N(x_1,\ldots,x_N \mid E) =
\frac{1}{(2\pi)^{0.5N}\sqrt{\det R_{ij}}}
\exp\Bigl\{-0.5\sum_{i=1}^{N}\sum_{j=1}^{N}(x_i - E_i)(x_j - E_j)\,C_{ij}\Bigr\},
\quad (12.169)
\]
where
det Rij is the determinant of the N × N correlation matrix Rij = R
Cij are the elements of the matrix Cij = C, the reciprocal matrix with respect to the correlation matrix; the elements Cij are defined from the following equation:
\[
\sum_{l=1}^{N} C_{il} R_{lj} = \delta_{ij} =
\begin{cases}
1, & i = j,\\
0, & i \ne j.
\end{cases}
\quad (12.170)
\]
The conditional multidimensional pdf in (12.169) is the multidimensional likelihood function of the parameter E of stochastic process. Solving the likelihood equation with respect to the parameter E, we obtain the formula for the mathematical expectation estimate of stochastic process:
\[
E_E = \frac{\sum_{i,j=1}^{N} x_i s_j C_{ij}}{\sum_{i,j=1}^{N} s_i s_j C_{ij}} .
\quad (12.171)
\]
This formula can be written in a simpler form if we introduce the weight coefficients
\[
\upsilon_i = \sum_{j=1}^{N} s_j C_{ij},
\quad (12.172)
\]
which, like the function υ(t) given by (12.7), satisfy the system of equations
\[
\sum_{l=1}^{N} R_{il}\,\upsilon_l = s_i, \quad i = 1,2,\ldots,N.
\quad (12.173)
\]
In doing so, the mathematical expectation estimate can be presented in the following form:
\[
E_E = \frac{\sum_{i=1}^{N} x_i \upsilon_i}{\sum_{i=1}^{N} s_i \upsilon_i} .
\quad (12.174)
\]
The mathematical expectation of estimate takes the following form:
\[
\langle E_E \rangle = \frac{\sum_{i=1}^{N} \langle x_i \rangle \upsilon_i}{\sum_{i=1}^{N} s_i \upsilon_i} = E_0 .
\quad (12.175)
\]
The variance of estimate in accordance with (12.172) can be presented in the following form:
\[
\operatorname{Var}(E_E) =
\frac{\sum_{i,j=1}^{N} R_{ij}\,\upsilon_i \upsilon_j}{\bigl(\sum_{i=1}^{N} s_i \upsilon_i\bigr)^2}
= \frac{1}{\sum_{i=1}^{N} s_i \upsilon_i} .
\quad (12.176)
\]
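Once the weights υi are found from (12.173), the estimate (12.174) and its variance (12.176) follow directly. The sketch below is a minimal numerical illustration, assuming a stationary process with exponential correlation Rij = σ²ψ^|i−j| and si = 1 (constant mathematical expectation); these particular choices, and the sample data, are made only for the example.

```python
import numpy as np

def optimal_weights(R, s):
    """Solve sum_l R_il * v_l = s_i, i.e., the system (12.173)."""
    return np.linalg.solve(R, s)

def mean_estimate(x, v, s):
    """Mathematical expectation estimate (12.174)."""
    return np.dot(x, v) / np.dot(s, v)

def estimate_variance(v, s):
    """Variance of the estimate, second form of (12.176)."""
    return 1.0 / np.dot(s, v)

# example data: exponentially correlated samples with constant mean (s_i = 1, E = 0.7)
N, sigma, psi = 8, 1.0, 0.6
idx = np.arange(N)
R = sigma**2 * psi ** np.abs(idx[:, None] - idx[None, :])
s = np.ones(N)

rng = np.random.default_rng(2)
x = 0.7 * s + rng.multivariate_normal(np.zeros(N), R)

v = optimal_weights(R, s)
print("estimate :", mean_estimate(x, v, s))
print("variance :", estimate_variance(v, s))
```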
The weight coefficients are determined using a set of linear equations:
\[
\begin{cases}
\sigma^2 \upsilon_1 + R_1 \upsilon_2 + R_2 \upsilon_3 + \cdots + R_{N-1}\upsilon_N = s_1,\\
R_1 \upsilon_1 + \sigma^2 \upsilon_2 + R_1 \upsilon_3 + \cdots + R_{N-2}\upsilon_N = s_2,\\
\qquad\vdots\\
R_{N-1}\upsilon_1 + R_{N-2}\upsilon_2 + R_{N-3}\upsilon_3 + \cdots + \sigma^2 \upsilon_N = s_N.
\end{cases}
\quad (12.177)
\]
The determinant of this matrix and its reciprocal matrix are defined in the following form [10]:
\[
\det R_{ij} = \sigma^{2N}\,(1 - \psi^2)^{N-1},
\quad (12.185)
\]
\[
C_{ij} = \frac{1}{(1 - \psi^2)\,\sigma^2}
\begin{pmatrix}
1 & -\psi & 0 & \cdots & 0\\
-\psi & 1+\psi^2 & -\psi & \cdots & 0\\
0 & -\psi & 1+\psi^2 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & 1
\end{pmatrix}.
\quad (12.186)
\]
It is important to note that all elements of the reciprocal matrix are equal to zero, except for the elements of the main diagonal and the elements immediately adjacent to the main diagonal. As we can see from (12.172) and (12.186), the optimal values of the weight coefficients are defined as
\[
\upsilon_1 = \upsilon_N = \frac{1}{(1+\psi)\,\sigma^2}; \qquad
\upsilon_2 = \upsilon_3 = \cdots = \upsilon_{N-1} = \frac{1-\psi}{(1+\psi)\,\sigma^2}.
\quad (12.187)
\]
Substituting the obtained weight coefficients into (12.174) and (12.176), we have
\[
E_E = \frac{(x_1 + x_N) + (1-\psi)\sum_{i=2}^{N-1} x_i}{N - (N-2)\psi};
\quad (12.188)
\]
\[
\operatorname{Var}(E_E) = \sigma^2\,\frac{1+\psi}{N - (N-2)\psi}.
\quad (12.189)
\]
The dependence of the normalized variance of the optimal mathematical expectation estimate on the value ψ of the normalized correlation function between the samples is shown in Figure 12.8 for various numbers of samples N. As we can see from Figure 12.8, starting from ψ ≥ 0.5, the variance of the estimate increases rapidly with increasing normalized correlation and tends to the variance of the observed stochastic process as ψ → 1.
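The closed-form weights (12.187) and variance (12.189) can be cross-checked against a direct numerical solution of (12.177), that is, of (12.173) with Rij = σ²ψ^|i−j| and si = 1. The sketch below performs such a check; the particular N, σ, and ψ are chosen only for illustration.

```python
import numpy as np

N, sigma, psi = 10, 2.0, 0.5
idx = np.arange(N)
R = sigma**2 * psi ** np.abs(idx[:, None] - idx[None, :])  # exponential correlation matrix
s = np.ones(N)

# direct solution of (12.173)/(12.177)
v = np.linalg.solve(R, s)
var_direct = 1.0 / np.dot(s, v)

# closed forms (12.187) and (12.189)
v_closed = np.full(N, (1.0 - psi) / ((1.0 + psi) * sigma**2))
v_closed[0] = v_closed[-1] = 1.0 / ((1.0 + psi) * sigma**2)
var_closed = sigma**2 * (1.0 + psi) / (N - (N - 2) * psi)

print(np.allclose(v, v_closed), np.isclose(var_direct, var_closed))
```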
We can obtain the formulae (12.188) and (12.189) in another way, without using the maximum likelihood method. For this purpose, we suppose that
\[
E^* = \sum_{i=1}^{N} x_i h_i
\quad (12.190)
\]
[Figure 12.8 plot: normalized variance Var(EE)/σ² on a logarithmic scale (10⁻³ to 0.5) versus ψ from 0.01 to 1.0, with curves for N = 2, 5, 10, 20, 50, 100, 200, 500, and 1000.]
FIGURE 12.8 Normalized variance of the optimal mathematical expectation estimate versus ψ and the number of samples N.
can be used as the estimate, where hi are the weight coefficients satisfying the following condition
\[
\sum_{i=1}^{N} h_i = 1
\quad (12.191)
\]
for unbiased estimation. The weight coefficients are chosen from the condition of minimization of the variance of the mathematical expectation estimate. As applied to the observation of a stationary stochastic process possessing the correlation function given by (12.13), the weight coefficients hi are defined in Ref. [11] and are related to the weight coefficients obtained in (12.187) by the following relationship:
\[
h_i = \frac{\upsilon_i}{\sum_{i=1}^{N} \upsilon_i} .
\quad (12.192)
\]
In the limiting case, as Δ → 0, the formulae in (12.188) and (12.189) turn into (12.48) and (12.49), respectively. Actually, as Δ → 0 with (N − 1)Δ = T = const and exp{−αΔ} ≈ 1 − αΔ, the summation in (12.188) is replaced by integration, and x1 and xN are replaced by x(0) and x(T), respectively. In practice, the equidistributed estimate (the mean) given by (12.182) is widely used as the mathematical expectation estimate of a stationary stochastic process; it corresponds to the constant weight coefficients hi = N⁻¹, i = 1, 2,…, N, in (12.190).
Let us determine the variance of the mathematical expectation estimate assuming that the samples are equidistant from each other with spacing Δ. The variance of the mathematical expectation estimate of the stochastic process is defined as
\[
\operatorname{Var}(E^*) = \frac{1}{N^2}\sum_{i=1,\,j=1}^{N} R(t_i - t_j)
= \frac{1}{N^2}\sum_{i=1,\,j=1}^{N} R[(i-j)\Delta].
\quad (12.193)
\]
[Figure 12.9 plot: the summation-index domain in the (l, j) plane, with l = i − j on the horizontal axis (−4 to 4 shown) and j on the vertical axis (1 up to N).]
FIGURE 12.9 Domain of indices.
The double summation in (12.193) can be transformed into a more convenient form. For this purpose, we change indices, namely, l = i − j and j = j, and change the order of summation over the domain shown in Figure 12.9. In this case, we can write
\[
\operatorname{Var}(E^*) = \frac{\sigma^2}{N^2}\sum_{j=1}^{N}\;\sum_{l=1-j}^{N-j} \rho(l\Delta)
= \frac{1}{N^2}\Bigl[N\sigma^2 + 2\sigma^2\sum_{i=1}^{N-1}(N-i)\,\rho(i\Delta)\Bigr]
= \frac{\sigma^2}{N}\Bigl[1 + 2\sum_{i=1}^{N-1}\Bigl(1 - \frac{i}{N}\Bigr)\rho(i\Delta)\Bigr],
\quad (12.194)
\]
where ρ(iΔ) is the normalized correlation function of the observed stochastic process. As we can see from (12.194), if the samples are uncorrelated, the formula (12.183) follows as a particular case.
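The change of indices leading to (12.194) is easy to verify numerically: the single-sum form must reproduce the double sum in (12.193) exactly. A minimal check, assuming an exponential normalized correlation ρ(kΔ) = exp{−α|k|Δ} purely for illustration:

```python
import numpy as np

N, sigma, alpha, delta = 50, 1.0, 0.3, 0.1
rho = lambda k: np.exp(-alpha * np.abs(k) * delta)   # normalized correlation rho(k*Delta)

# double sum, (12.193)
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
var_double = sigma**2 * rho(i - j).sum() / N**2

# single sum, (12.194)
k = np.arange(1, N)
var_single = (sigma**2 / N) * (1.0 + 2.0 * np.sum((1.0 - k / N) * rho(k)))

print(var_double, var_single, np.isclose(var_double, var_single))
```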
If the correlation function of observed stochastic process is described by (12.13), the variance of the equidistributed estimate of mathematical expectation is defined as
\[
\operatorname{Var}(E^*) = \sigma^2\,\frac{N(1-\psi^2) + 2\psi(\psi^N - 1)}{N^2\,(1-\psi)^2},
\quad (12.195)
\]
where, as before, ψ = exp{−αΔ}. Formula (12.195) has been obtained by taking into consideration the summation formula [12]
\[
\sum_{i=0}^{N-1}(a + ir)q^i = \frac{a - [a + (N-1)r]\,q^N}{1-q} + \frac{rq\,(1-q^{N-1})}{(1-q)^2}.
\quad (12.196)
\]
Computations made by the formula (12.195) show that the variance of the equidistributed estimate of mathematical expectation differs from the variance of the optimal estimate (12.189). Figure 12.10 represents a relative increase in the variance of the equidistributed estimate of mathematical expectation in comparison with the variance of the optimal estimate
\[
\varepsilon = \frac{\operatorname{Var}(E^*) - \operatorname{Var}(E_E)}{\operatorname{Var}(E_E)}
\quad (12.197)
\]
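A short numerical comparison of the equidistributed-estimate variance (12.195) with the optimal-estimate variance (12.189), together with the relative loss ε defined in (12.197), is sketched below; the particular values of N and ψ are chosen only for illustration.

```python
import numpy as np

def var_optimal(N, psi, sigma=1.0):
    """Variance of the optimal estimate, (12.189)."""
    return sigma**2 * (1.0 + psi) / (N - (N - 2) * psi)

def var_equidistributed(N, psi, sigma=1.0):
    """Variance of the equidistributed (sample-mean) estimate, (12.195)."""
    return sigma**2 * (N * (1.0 - psi**2) + 2.0 * psi * (psi**N - 1.0)) / (N**2 * (1.0 - psi)**2)

for N in (10, 100):
    for psi in (0.1, 0.5, 0.9):
        v_opt = var_optimal(N, psi)
        v_eq = var_equidistributed(N, psi)
        eps = (v_eq - v_opt) / v_opt          # relative loss, (12.197)
        print(f"N={N:4d} psi={psi:.1f}  eps={eps:.4f}")
```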