Consider the statistical characteristics of the discrete readings of the mathematical expectation estimate Ei* at the adaptive filter output in the stationary mode. The mathematical expectation of the estimate Ei* is defined as
$$
\overline{E_i^*} = \sum_{j=0}^{N-1} W_j E_{i-j-d}. \qquad (12.381)
$$
Substituting for Ei and Wj their magnitudes from (12.360) and neglecting the sum of rapidly oscillating terms, we obtain
$$
\overline{E_i^*} = 0.5P \sum_{\mu=1}^{M} \frac{1}{q_\mu^{-1} + 0.5P}\,\bigl[a_\mu \cos(i\omega_\mu) + b_\mu \sin(i\omega_\mu)\bigr], \qquad (12.382)
$$
where

$$
q_\mu = \frac{a_\mu^2 + b_\mu^2}{2\sigma^2} \qquad (12.383)
$$
is the SNR for the μth component (or harmonic) of the mathematical expectation. As we can see from (12.382), as the number of channels P tends to infinity, that is, P → ∞, the mathematical expectation estimate becomes unbiased.
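To make the unbiasedness statement explicit, the gain applied to each harmonic in (12.382) can be rewritten as follows (a short worked step, not from the original text):

$$
0.5P \cdot \frac{1}{q_\mu^{-1} + 0.5P} = \frac{1}{1 + 2/(P q_\mu)} \xrightarrow[P \to \infty]{} 1,
$$

so that the averaged output tends to the sum of the true harmonic components, and the relative bias of the μth harmonic is approximately 2/(P q_μ) for large P.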
By analogy, determining the second central moment, which is defined completely by the centralized component of the stochastic process, it is easy to obtain the variance of the estimate:
$$
\mathrm{Var}\{E_i^*\} = \sigma^2 \sum_{j=0}^{P-1} W_j^2. \qquad (12.384)
$$
We can see from (12.384) that the variance of the investigated mathematical expectation estimate Ei* decreases with a decrease in the number of harmonics M in the observed stochastic process. In the limiting case of a high number of channels (P → ∞), the variance of the mathematical expectation estimate tends to zero in the stationary mode; that is, the considered estimate of the periodically changing mathematical expectation Ei is consistent.
To illustrate the obtained results, a simulation of the described adaptation algorithm was carried out for the example of the mathematical expectation E(t) = a cos(ωt) given in the form of readings Ei, with the period T0 corresponding to four sampling periods, that is, T0 = 4Ts. As the centralized component x0i of the stochastic process realization, uncorrelated samples of a Gaussian stochastic process are used. To obtain uncorrelated samples in the main and reference channels, a delay corresponding to one sampling period (d = 1) is introduced. The initial magnitudes of the weight coefficients, except for the channel with j = 0, are chosen equal to zero. The initial magnitude of the weight coefficient at j = 0 is taken equal to unity, that is, W0[0] = 1.
The memory of the microprocessor system is continuously updated with the discrete samples xi = x0i + Ei, and in accordance with the algorithm given by (12.357) a tuning of the weight coefficients is carried out. The tuning is considered complete when the components of the weight vector differ by no more than 10% between two neighboring adaptation steps. The realization formed at the output of the adaptive interference and noise canceller is then investigated.
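The weight-tuning rule (12.357) is not reproduced in this excerpt, so the following is a minimal sketch assuming a conventional LMS update for the canceller; the step size mu, the form of the stopping check, and all variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters of the experiment described in the text; the update rule (12.357)
# is not reproduced there, so a conventional LMS update is assumed here.
a, T0 = 1.0, 4          # E(t) = a*cos(2*pi*t/T0), period T0 = 4 sampling periods
P = 16                  # number of parallel channels (filter weights)
d = 1                   # one-sample delay decorrelating main and reference channels
mu = 0.005              # LMS step size (an assumption, not from the book)
n_iter = 50_000
check = 200             # how often the 10% stopping rule is tested

n = np.arange(n_iter)
E = a * np.cos(2 * np.pi * n / T0)        # periodic mathematical expectation E_i
x = rng.standard_normal(n_iter) + E       # x_i = x0_i + E_i, x0 uncorrelated Gaussian

w = np.zeros(P)
w[0] = 1.0                                # initial condition W0[0] = 1, others zero
w_prev = w.copy()
y = np.zeros(n_iter)

for i in range(P + d, n_iter):
    ref = x[i - d : i - d - P : -1]       # delayed reference samples x_{i-d-j}
    y[i] = w @ ref                        # canceller output: estimate of E_i
    w += mu * (x[i] - y[i]) * ref         # assumed LMS form of the update
    if i % check == 0:
        # stopping rule from the text: weight components change by <= 10%
        if np.allclose(w, w_prev, rtol=0.1, atol=1e-3):
            break
        w_prev = w.copy()

tail = slice(max(P + d, i - 4000), i)     # steady-state stretch of the output
a_hat = 2 * np.mean(y[tail] * np.cos(2 * np.pi * n[tail] / T0))
print(f"stopped at step {i}; amplitude estimate a* = {a_hat:.3f} (true a = {a})")
```

Because the gain factor in (12.382) is below unity for finite P, the recovered amplitude should come out slightly low, which is consistent with the bias discussion above.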
The normalized variance of the estimate of the harmonic component amplitude, that is, Var{a*}/a², at the SNR equal to unity (q = 1), as a function of the number of adaptation cycles N and for two values of the number of parallel channels (P = 4 and P = 16), is depicted in Figure 12.20.
[Figure: curves of Var{a*}/a² (vertical axis, 0 to 2.0) versus the number of adaptation cycles N (horizontal axis, logarithmic, 10² to 10⁴), for P = 4 and P = 16.]
FIGURE 12.20 The normalized variance of the amplitude estimate as a function of the number of adaptation cycles.
The dashed lines correspond to the theoretical asymptotic values of the variance of the mathematical expectation estimate computed according to (12.378) and (12.384). As we can see from Figure 12.20, the adaptation process is evident, and its onset depends on the number of parallel channels of the adaptive filter.
12.11 SUMMARY AND DISCUSSION
Let us summarize briefly the main results discussed in this chapter.
In spite of the fact that the formulae for the mathematical expectation estimate (12.41) and (12.42) are optimal in the case of a Gaussian stochastic process, these formulae are also optimal, within the class of linear estimates, for stochastic processes that differ from the Gaussian pdf. Equations 12.38 and 12.39 are true if the a priori interval of changes of the mathematical expectation is not limited. Equation 12.38 allows us to define the optimal device structure to estimate the mathematical expectation of the stochastic process (Figure 12.1). The main operation is the linear integration of the received realization x(t) with the weight υ(t) that is defined from the integral equation (12.8). The decision device issues the output process at the instant t = T. To obtain the current value of the mathematical expectation estimate, the limits of integration in (12.38) must be t − T and t, respectively; the parameter estimate is then given by (12.43).
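The integral equation (12.8) and the weight υ(t) are not reproduced in this summary. As a hedged discrete-time illustration of the same idea, the sketch below computes the best linear unbiased estimate of a constant mean from correlated Gaussian samples, with the weights obtained from the inverse of the covariance matrix (the exponential covariance model and all names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete analogue of the optimal linear estimator: for samples with covariance
# matrix K, the unbiased weights v minimizing the estimate variance solve
# K v ∝ 1 (the standard BLUE/GLS result), by analogy with the weight function
# v(t) defined by the integral equation in the text.
N, E0 = 200, 2.0
i = np.arange(N)
K = np.exp(-0.2 * np.abs(i[:, None] - i[None, :]))  # exponential covariance (assumed)

L = np.linalg.cholesky(K)
x = E0 + L @ rng.standard_normal(N)                 # correlated realization, mean E0

Kinv1 = np.linalg.solve(K, np.ones(N))
v = Kinv1 / Kinv1.sum()                             # optimal weights, sum(v) = 1

print(f"optimal estimate : {v @ x:.3f}")
print(f"plain average    : {x.mean():.3f}")
print(f"variance of optimal estimate: {1.0 / Kinv1.sum():.4f}")
```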
The maximum likelihood estimate of the mathematical expectation of a stochastic process is both conditionally and unconditionally unbiased. The conditional variance of the mathematical expectation estimate can be presented by (12.47), from which we can see that it does not depend on the estimated parameter; that is, it coincides with the unconditional variance. Since, according to (12.38), the integration of a Gaussian stochastic process is a linear operation, the estimate E* is subjected to the Gaussian distribution.
The procedure to define the optimal estimate of the mathematical expectation of a stationary stochastic process in the case of a limited a priori domain of definition of the mathematical expectation is presented in Figure 12.1. In this case, the maximum likelihood estimate of the mathematical expectation of the stochastic process is conditionally biased. The unconditional estimate is unbiased, and the unconditional dispersion is given by (12.76).
The Bayesian estimate of the mathematical expectation of stochastic process is a function of the SNR. At low SNR, the conditional estimate bias coincides with the approximation given by (12.82). We can see that the unconditional estimate of mathematical expectation averaged with
respect to all possible values E0 is unbiased. At high SNR, the Bayesian estimate of the mathematical expectation of stochastic process coincides with the maximum likelihood estimate of the same parameter.
Optimal methods to estimate the mathematical expectation of a stochastic process envisage accurate and complete knowledge of the other statistical characteristics of the considered stochastic process. For this reason, as a rule, various nonoptimal procedures are used in practice. In doing so, the weight function is selected in such a way that the variance of the estimate asymptotically approaches the variance of the optimal estimate. If the integration time of an ideal integrator is sufficiently large in comparison with the correlation interval of the stochastic process, then to determine the variance of the mathematical expectation estimate we need to know only the variance and the ratio between the observation interval and the correlation interval. When the ideal integrator is used as a smoothing circuit, the variance of the mathematical expectation estimate is proportional to the value of the spectral density of the fluctuation component of the considered stochastic process at ω = 0. In other words, in the considered case, the variance of the mathematical expectation estimate is defined by the spectral components at zero frequency. To obtain the current value of the mathematical expectation estimate and to investigate the realization of the stochastic process within the limits of a large observation interval, we use the estimate given by (12.124). Evidently, this estimate has the same statistical characteristics as the estimate defined by (12.111). The previously discussed procedures that measure the mathematical expectation suppose that there are no limitations on the instantaneous values of the considered stochastic process in the course of measurement. The presence of such limitations leads to additional errors while measuring the mathematical expectation of the stochastic process.
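For reference, the standard result behind this statement can be written out (the book's own expressions (12.111) and (12.124) are not reproduced here; R(τ) denotes the correlation function of the fluctuation component and S(ω) its two-sided spectral density):

$$
\mathrm{Var}\{E^*\} = \frac{2}{T} \int_0^T \Bigl(1 - \frac{\tau}{T}\Bigr) R(\tau)\, d\tau
\;\approx\; \frac{2}{T} \int_0^\infty R(\tau)\, d\tau = \frac{S(0)}{T},
\qquad T \gg \tau_c,
$$

where τc is the correlation interval; the approximation is exactly the statement that only the spectral components at zero frequency define the estimate variance.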
The variance of the mathematical expectation estimate is defined by the interval of possible values of the additional stochastic sequence, is independent of the variance of the observed stochastic process, and is always greater than the variance of the equidistributed estimate of the mathematical expectation based on independent samples. For example, if the observed stochastic sequence is subjected to the uniform pdf coinciding in the limiting case with (12.218), then the variance of the mathematical expectation estimate for the considered procedure is defined by (12.226), and the variance of the equidistributed estimate of the mathematical expectation is given by (12.227). That is, in this limiting case, when the observed and additional random sequences are both subjected to the uniform pdf, the variance of the mathematical expectation estimate obtained with additional stochastic signals is three times greater than the variance of the equidistributed estimate. Under other conditions, the difference between the variances of the two estimates is even higher.
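Equations (12.218) through (12.227) are not reproduced here, so the following Monte Carlo sketch only illustrates the factor of three under the stated limiting assumptions: the observed and the additional samples are both uniform over the same interval, and the mean is measured by counting threshold crossings (a common dither-type scheme; the book's exact procedure may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(2)

# Limiting case from the text: both the observed samples x and the additional
# (auxiliary) samples u are uniform over the same interval [E0 - D, E0 + D].
# The dither-type estimate counts threshold crossings {x > u}:
#   E* = (E0 - D) + 2*D*mean(x > u),
# which is unbiased; its variance is expected to be 3x that of the plain mean.
E0, D, N, trials = 0.5, 1.0, 1000, 20_000

est_dither = np.empty(trials)
est_plain = np.empty(trials)
for t in range(trials):
    x = rng.uniform(E0 - D, E0 + D, N)   # observed uniform sequence
    u = rng.uniform(E0 - D, E0 + D, N)   # additional stochastic sequence
    est_dither[t] = (E0 - D) + 2 * D * np.mean(x > u)
    est_plain[t] = x.mean()

print(f"variance ratio (dither / plain): {est_dither.var() / est_plain.var():.2f}")
# expected: close to 3
```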
In analyzing the effect of amplitude quantization of a stochastic process on the estimate of its mathematical expectation, we assume that quantization can be considered as an inertialess nonlinear transformation with a constant quantization step, and that the number of quantization levels is so high that the quantized stochastic process cannot fall outside the limits of the staircase characteristic of the transform g(x), the approximate form of which is shown in Figure 12.12. For clarity, Figure 12.12 also shows the pdf p(x) of the observed stochastic process whose mathematical expectation does not coincide with the middle point between the quantization thresholds xi and xi+1. The mathematical expectation of the realization y(t) formed at the output of the inertialess element (transducer) with the transform characteristic given by (12.228), when the realization x(t) of the stochastic process ξ(t) excites its input, is equal to the mathematical expectation of the mathematical expectation estimate given by (12.229) and is defined by (12.230). In general, the mathematical expectation of the estimate E* differs from the true value E0; that is, as a result of quantization we obtain a bias of the mathematical expectation estimate, given by (12.231). Since the mathematical expectation and the correlation function of the process formed at the transducer output depend on the pdf of the observed stochastic process, the characteristics of the mathematical expectation estimate of the stochastic process quantized by amplitude depend both on the correlation function and on the pdf of the observed stochastic process.
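The staircase characteristic (12.228) is not reproduced here; the sketch below assumes a mid-tread uniform quantizer and simply measures the bias of the sample mean of the quantized process for a Gaussian input whose mean is offset from the mid-point between thresholds (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def quantize(x, step):
    """Mid-tread uniform quantizer: one assumed form of the staircase g(x)."""
    return step * np.round(x / step)

step = 1.0      # quantization step, comparable to sigma to make the bias visible
sigma = 0.4     # std of the observed Gaussian process
E0 = 0.25       # true mean, offset from the mid-point between thresholds

x = E0 + sigma * rng.standard_normal(2_000_000)
y = quantize(x, step)

print(f"true mean E0              = {E0}")
print(f"mean of quantized process = {y.mean():.4f}")
print(f"quantization bias         = {y.mean() - E0:+.4f}")
```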
In the case of the time-varying mathematical expectation of a stochastic process, the problem of defining the mathematical expectation E(t) of the stochastic process ξ(t) from a single realization x(t) within the limits of the interval [0, T] is reduced to an estimation of the coefficients αi of the series given by (12.256). In doing so, the bias and dispersion of the mathematical expectation estimate E*(t) of the observed stochastic process caused by measurement errors of the coefficients αi are given by (12.259) and (12.260). Statistical characteristics (the bias and dispersion) of the mathematical expectation estimate averaged within the limits of the observation interval are defined by (12.261) and (12.262), respectively. The higher the number of series expansion terms in (12.256) used for the approximation of the mathematical expectation, the higher, under otherwise equal conditions, the variance of the time-varying mathematical expectation estimate averaged within the limits of the observation interval. Note that, in general, the number N of series expansion terms increases essentially in parallel with an increase in the observation interval [0, T] within the limits of which the approximation is carried out.
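The series (12.256) and its particular basis are not given in this summary. As an illustration of the coefficient-estimation idea, the sketch below fits a truncated Fourier basis to a single noisy realization by least squares and shows how the averaged estimation error grows with the number of terms once the basis is larger than the true mean requires (the basis choice, noise model, and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

T, n_samples, trials = 1.0, 400, 500
t = np.linspace(0, T, n_samples)
E_true = 1.0 + 0.8 * np.cos(2 * np.pi * t / T)   # slowly varying true mean (assumed)

def basis(t, N):
    """Truncated Fourier basis on [0, T]: one assumed choice for (12.256)."""
    cols = [np.ones_like(t)]
    for k in range(1, N + 1):
        cols += [np.cos(2 * np.pi * k * t / T), np.sin(2 * np.pi * k * t / T)]
    return np.column_stack(cols)

for N in (1, 3, 8):
    Phi = basis(t, N)
    err2 = 0.0
    for _ in range(trials):
        x = E_true + 0.5 * rng.standard_normal(n_samples)  # single realization
        alpha, *_ = np.linalg.lstsq(Phi, x, rcond=None)    # estimated coefficients
        err2 += np.mean((Phi @ alpha - E_true) ** 2)
    print(f"N = {N}: averaged squared error = {err2 / trials:.4f}")
```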
Thus, for a sufficiently large number of eigenfunctions in the sum given by (12.256), the variance of the time-varying mathematical expectation estimate E*(t), averaged over the observation interval, is equal to the variance of the initial stochastic process. In doing so, the estimate bias caused by the finite number of terms in the series (12.256) tends to zero. In practice, however, the number of terms in the series (12.256) must be chosen in such a way that the dispersion of the time-varying mathematical expectation estimate, caused both by the bias and by the estimate variance, is minimal.
When estimating the time-varying mathematical expectation of a stochastic process from a single realization, we meet difficulties caused by the definition of the optimal averaging (integration) time, or of the time constant of the smoothing filter for a given filter impulse response. In doing so, two conflicting requirements arise. On the one hand, to decrease the variance of the estimate caused by the finite measurement time, this time interval must be large. On the other hand, to better distinguish variations of the mathematical expectation in time, the integration time should be chosen as short as possible. Evidently, there is an optimal averaging time, or an optimal bandwidth of the smoothing filter under the given impulse response, that corresponds to the minimal dispersion of the mathematical expectation estimate caused by the factors listed previously.
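This trade-off can be made concrete with a short numerical experiment (the drifting-mean model and the window sizes below are illustrative, not from the book): a moving-average smoother is applied to a process with a slowly varying mean, and the total mean-square error exhibits a minimum at an intermediate window length.

```python
import numpy as np

rng = np.random.default_rng(5)

n, trials = 2000, 300
t = np.arange(n)
E_true = np.sin(2 * np.pi * t / 1000)        # slowly time-varying mean (assumed)

def moving_average(x, win):
    """Sliding-window mean; 'valid' part only, to avoid edge effects."""
    return np.convolve(x, np.ones(win) / win, mode="valid")

for win in (5, 25, 101, 401):                # odd window lengths (illustrative)
    mse = 0.0
    for _ in range(trials):
        x = E_true + rng.standard_normal(n)  # single noisy realization
        E_hat = moving_average(x, win)
        # compare against the true mean at the window centers
        center = E_true[win // 2 : win // 2 + E_hat.size]
        mse += np.mean((E_hat - center) ** 2)
    print(f"window = {win:4d}: MSE = {mse / trials:.4f}")
```

Short windows leave the noise variance high; long windows bias the estimate by smoothing away the variations of the mean, so the minimum MSE falls at an intermediate window.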
In practice, nonstationary stochastic processes with time-varying mathematical expectation, or variance, or both are widely encountered. In doing so, the mathematical expectation and variance vary slowly in comparison with variations of the investigated stochastic process; in other words, the mathematical expectation and variance are effectively constant within the limits of the correlation interval. In this case, to define the variance of the time-varying mathematical expectation estimate, we can assume that the centralized stochastic process ξ0(t) = ξ(t) − E(t) is a stationary stochastic process within the limits of the interval t0 ± 0.5T with the correlation function given by (12.317).
In some applications, we can suppose that the time-varying mathematical expectation of the stochastic process is a periodic function with an unknown period. Of practical interest is the case when the period is much longer than the correlation interval of the observed stochastic process. To carry out the measurement, we employ the adaptive filtering methods widely used in practice for interference cancellation. We consider the discrete sample that can be presented in the form discussed in (12.168). The estimated mathematical expectation is a periodic function and can be approximated by a Fourier series with a finite number of terms. We can assume that the sampling interval is chosen in such a way that the sample readings remain uncorrelated with each other. Taking into consideration the orthogonality between the components of the mathematical expectation and the definition of the covariance function (the ambiguity function) of the deterministic signal with a constant component given by (12.362), the covariance matrix elements given by (12.358) can be presented in the form (12.363).
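In the notation used earlier for (12.382), the finite Fourier approximation of the periodic mathematical expectation has the standard form (written here for reference; the constant term and the expression for the harmonic frequencies are the usual assumptions):

$$
E_i \approx \frac{a_0}{2} + \sum_{\mu=1}^{M} \bigl[ a_\mu \cos(i\omega_\mu) + b_\mu \sin(i\omega_\mu) \bigr],
\qquad \omega_\mu = \frac{2\pi \mu T_s}{T_0},
$$

where Ts is the sampling period and T0 is the unknown period of the mathematical expectation.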
The random variables vi possess zero mean and are subjected to the Gaussian pdf. Let us show that the random variables vi are mutually dependent on each other but are independent of the samples xp at p ≠ i. Actually,
$$
\overline{v_i v_q} = \sum_{j=1}^{N} \sum_{p=1}^{N} C_{ij} C_{qp}\, \overline{x_j x_p}
= \sigma^2 \sum_{j=1}^{N} C_{ij} \sum_{p=1}^{N} C_{qp} R_{jp}
= \sigma^2 \sum_{j=1}^{N} C_{ij}\, \delta_{jq} = \sigma^2 C_{iq}; \qquad (13.7)
$$

$$
\overline{x_p v_i} = \sum_{j=1}^{N} C_{ij}\, \overline{x_j x_p}
= \sigma^2 \sum_{j=1}^{N} C_{ij} R_{jp} = \sigma^2 \delta_{ip}, \qquad (13.8)
$$
where σ² is the true variance of the stochastic process ξ(t), Rjp is the normalized correlation function between the samples xj and xp, and the matrix of coefficients Cij is the inverse of the correlation matrix Rjp.
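A quick numerical sketch of these orthogonality properties follows; the exponential correlation model is an arbitrary choice for illustration. With C taken as the inverse of the normalized correlation matrix R, the sample statistics reproduce σ²Ciq and σ²δip:

```python
import numpy as np

rng = np.random.default_rng(6)

N, sigma, trials = 6, 2.0, 200_000
idx = np.arange(N)
R = 0.7 ** np.abs(idx[:, None] - idx[None, :])  # normalized correlation matrix (assumed)
C = np.linalg.inv(R)                            # C is the inverse of R

L = np.linalg.cholesky(sigma**2 * R)
x = L @ rng.standard_normal((N, trials))        # zero-mean Gaussian, cov = sigma^2 R
v = C @ x                                       # v_i = sum_j C_ij x_j

vv = (v @ v.T) / trials                         # sample averages of v_i v_q
xv = (x @ v.T) / trials                         # sample averages of x_p v_i

print("max |<v_i v_q> - sigma^2 C_iq| :", np.abs(vv - sigma**2 * C).max())
print("max |<x_p v_i> - sigma^2 I|    :", np.abs(xv - sigma**2 * np.eye(N)).max())
```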
Let us determine the statistical characteristics of the variance estimate, namely, its mathematical expectation and variance. The mathematical expectation of the variance estimate is given by
$$
E\{\mathrm{Var}^*\} = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{N} C_{ij}\, \overline{x_i x_j} = \sigma^2, \qquad (13.9)
$$
that is, the variance estimate is unbiased.
The variance of the variance estimate can be presented in the following form:
$$
\mathrm{Var}\{\mathrm{Var}^*\} = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{p=1}^{N} \sum_{q=1}^{N} \overline{x_i x_j x_p x_q}\; C_{ij} C_{pq} - \sigma^4. \qquad (13.10)
$$
Determining the mixed fourth moment of the Gaussian random variables
$$
\overline{x_i x_j x_p x_q} = \sigma^4 \bigl[ R_{ij} R_{pq} + R_{ip} R_{jq} + R_{iq} R_{jp} \bigr] \qquad (13.11)
$$
and substituting (13.11) into (13.10) and taking into consideration (13.4), we obtain
$$
\mathrm{Var}\{\mathrm{Var}^*\} = \frac{2\sigma^4}{N}. \qquad (13.12)
$$
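A short Monte Carlo check of (13.12) under an assumed exponential correlation model follows; the measured value should stay close to 2σ⁴/N regardless of the correlation coefficient, which is exactly the property discussed next:

```python
import numpy as np

rng = np.random.default_rng(7)

N, sigma, trials = 8, 1.5, 200_000
idx = np.arange(N)

for rho in (0.0, 0.5, 0.9):                 # neighbor correlation (assumed model)
    R = rho ** np.abs(idx[:, None] - idx[None, :])  # normalized correlation matrix
    C = np.linalg.inv(R)
    L = np.linalg.cholesky(sigma**2 * R)
    x = L @ rng.standard_normal((N, trials))
    # Var* = (1/N) sum_ij C_ij x_i x_j, computed per trial
    var_est = np.einsum("it,ij,jt->t", x, C, x) / N
    print(f"rho = {rho}: Var{{Var*}} = {var_est.var():.4f},"
          f"  theory 2*sigma^4/N = {2 * sigma**4 / N:.4f}")
```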
As we can see from (13.12), the variance of the optimal variance estimate is independent of the values of the normalized correlation function between the samples of the observed stochastic process. This fact may lead to results that are difficult or even impossible to explain from the physical viewpoint. Actually, by increasing the number of samples within the limits of a finite small time interval, we can obtain the variance estimate with an infinitely small estimate variance according to (13.12). The tendency of the variance of the optimal variance estimate of the Gaussian stochastic process toward zero is especially evident when passing from discrete to continuous observation of the stochastic process. Considering
$$
N = \sum_{i=1}^{N} \sum_{j=1}^{N} C_{ij} R_{ij}, \qquad (13.13)
$$
