
Estimation of Stochastic Process Variance


When the variances of the investigated and reference stochastic processes are the same, the bias of the considered variance estimate is zero. If the variance of the reference stochastic process can be controlled, the measurement is reduced to fixation of zero readings. This method is called the modulation method of measurement with zero reading. The modulation method with zero reading can be automated by a tracking device.

Determine the variance of the variance estimate of the modulation method assuming, as before, that Varβ ≪ 1, neglecting oscillating terms under integration, and taking into consideration that τcor ≪ τβ. In this case, the variance of the variance estimate of the investigated stochastic process can be presented in the following form:

Var{Vars*} = [32/(π²T)] Σ_{k=1}^∞ 1/(2k − 1)² ∫_0^T (1 − τ/T) [Rs²(τ) + Rs0²(τ) + 2Rs(τ)Rn(τ) + 2Rs0(τ)Rn(τ) + 4Rn²(τ)] cos[(2k − 1)Ωτ] dτ

  + [64/(π²T)] [Vars0² + Vars² + 2Vars0Vars + 4Vars0Varn + 4VarsVarn + 4Varn²] Σ_{k=1}^∞ 1/(2k − 1)² ∫_0^T (1 − τ/T) Rβ(τ) cos[(2k − 1)Ωτ] dτ

  + (4/T) ∫_0^T (1 − τ/T) [Rs²(τ) + 2Rs(τ)Rn(τ) + Rs0²(τ) + 2Rs0(τ)Rn(τ)] dτ

  + [8(Vars − Vars0)²/T] ∫_0^T (1 − τ/T) Rβ(τ) dτ.    (13.225)

As applied to the zero method of variance measurement, we have

Rs(τ) = Rs0(τ) = r(τ)Vars cos ω0τ.    (13.226)

Taking into consideration (13.226), we obtain

Var{Vars*} = [32Vars²(1 + 2q + 2q²)/(π²T)] Σ_{k=1}^∞ 1/(2k − 1)² ∫_0^T (1 − τ/T) r²(τ) cos[(2k − 1)Ωτ] dτ

  + [128Vars²Varβ(1 + q)²/(π²T)] Σ_{k=1}^∞ 1/(2k − 1)² ∫_0^T (1 − τ/T) rβ(τ) cos[(2k − 1)Ωτ] dτ

  + [4Vars²(1 + 2q)/T] ∫_0^T (1 − τ/T) r²(τ) dτ.    (13.227)

In the considered case, when τcor ≪ τβ and τcor ≪ T0, we have

Var{Vars*} = [8Vars²(1 + q)²/T] ∫_0^T (1 − τ/T) r²(τ) dτ

  + [256Vars²Varβ(1 + q)²/(π²T)] Σ_{k=1}^∞ 1/(2k − 1)² ∫_0^T (1 − τ/T) rβ(τ) cos[(2k − 1)Ωτ] dτ.    (13.228)

If, in addition to the foregoing statements, the random variations of the amplification coefficients are absent or the conditions τβ ≪ T and τβ ≫ T0 are satisfied, we can write

Var0{Vars*} ≈ [8Vars²(1 + q)²/T] ∫_0^T r²(τ) dτ.    (13.229)

 

 

 

Comparing (13.229) and (13.162) at Varβ = 0 in the case of the compensation method, we can see that, due to the twofold decrease in the total time interval of observation of the investigated stochastic process and the presence of the reference stochastic process, the variance of the variance estimate under the modulation method is four times higher than the variance of the variance estimate measured via the compensation method under the same conditions of measurement.
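This factor-of-four comparison can be checked with a rough Monte Carlo sketch. The Python code below (with illustrative parameters and an AR(1) Gaussian process standing in for the investigated and reference processes) does not model the actual modulation measurer with its switch and square-law detector; it only reproduces the two effects named above, namely halving the observation time devoted to the investigated process and adding an independent reference channel.

```python
import numpy as np

# Rough Monte Carlo sketch (not the measurer itself): compare the spread of a
# variance estimate that uses the whole observation interval (compensation-like)
# with one that spends half the interval on the investigated process and half on
# an independent reference process of equal variance (modulation-like, zero
# reading). AR(1) model and all parameters are illustrative assumptions.
rng = np.random.default_rng(0)
trials, n, rho = 2000, 4096, 0.9            # runs, samples per interval, AR(1) coefficient

def ar1(shape):
    """Zero-mean AR(1) Gaussian process with unit variance along the last axis."""
    w = rng.standard_normal(shape) * np.sqrt(1.0 - rho**2)
    x = np.empty(shape)
    x[..., 0] = rng.standard_normal(shape[:-1])
    for i in range(1, shape[-1]):
        x[..., i] = rho * x[..., i - 1] + w[..., i]
    return x

comp = (ar1((trials, n)) ** 2).mean(axis=1)              # full-interval estimate
half = (ar1((trials, n // 2)) ** 2).mean(axis=1)         # half interval, investigated process
ref = (ar1((trials, n // 2)) ** 2).mean(axis=1)          # half interval, reference process
modu = half - ref                                        # zero-reading difference

print("variance of compensation-like estimate:", comp.var())
print("variance of modulation-like estimate:  ", modu.var())
print("ratio (expected to be close to 4):     ", modu.var() / comp.var())
```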

When we use the deterministic harmonic signal

s0 (t) = A0 cos(ω0t + ϕ0 )

(13.230)

as the reference signal, we are able to obtain the variance of the variance estimate analogously. As applied to the modulation method of variance measurement with zero reading, in the absence of random variations of the amplification coefficients, the stochastic process variance estimate is unbiased and the variance of the variance estimate can be presented in the following form:

Var{Vars*} = [4Vars²(1 + 2q + 2q²)/T] ∫_0^T (1 − τ/T) r²(τ) dτ.    (13.231)

 

 

 

 

 

 

 

Comparison between (13.231) and (13.191) shows that, due to the twofold decrease in the total time interval of observation, the variance of the variance estimate obtained via the modulation method is twice as high as that of the compensation method.

13.6  SUMMARY AND DISCUSSION

We summarize briefly the main results discussed in this chapter.

As we can see from (13.12), the variance of the optimal variance estimate is independent of the values of the normalized correlation function between the samples of the observed stochastic process. This fact may lead to some results that are difficult to explain from the physical viewpoint or cannot be explained at all. As a matter of fact, by increasing the number of samples within the limits of a finite small time interval, we can obtain, according to (13.12), a variance estimate with infinitely small estimate variance. The fact that the variance of the optimal variance estimate of a Gaussian stochastic process approaches zero is especially evident while moving from discrete to continuous observation of the stochastic process.


The optimal variance estimate of a Gaussian stochastic process based on a discrete sample is equivalent, in accuracy, to the estimate with the error given by (13.24) for the same sample size (the number of samples). This can be explained by the fact that under optimal signal processing the initial sample is multiplied with a newly formed uncorrelated sample. However, if the normalized correlation function is unknown or known with error, then the optimal variance estimate of the stochastic process has a finite variance depending on the true value of the normalized correlation function. To simplify consideration and investigation of this problem, we need to compute the variance of the variance estimate of a Gaussian stochastic process with zero mean based on two samples, applied to the maximum likelihood estimate, for the following cases: the normalized correlation function, or, as it is often called, the correlation coefficient, is completely known, unknown, or known with error.

Experimental statistical characteristics of the variance estimate obtained with sufficiently high accuracy match the theoretical values. The maximal relative errors in defining the mathematical expectation estimate and the variance estimate do not exceed 1% and 2.5%, respectively. Table 13.1 presents the experimental data on the mathematical expectation and variance of the variance estimate at various values of the correlation coefficient ρ0 between samples. The theoretical values of the variance of the variance estimate based on the algorithm given by (13.24) and determined in accordance with formula (13.40) are also presented for comparison in Table 13.1. The theoretical value of the variance of the optimal variance estimate is Var{Var*} = 7.756, according to (13.5). Thus, the experimental data confirm the previously discussed theoretical definitions of the estimates and their characteristics, at least at N = 2.

A difference between the transform characteristic and the square-law function can lead to large errors in defining the stochastic process variance. Because of this, while using measurers to define the stochastic process variance, we need to pay serious attention to the transform performance. Verification of the transform performance is carried out, as a rule, by sending a harmonic signal of known amplitude to the measurer input. The methods of variance measurement discussed thus far assume an absence of limitations on the instantaneous values of the investigated stochastic process. The presence of such limitations leads to additional errors while measuring the variance. The ideal integrator h(t) = T⁻¹ plays the role of an averaging or smoothing filter.
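As a small numerical illustration of this calibration procedure: for a harmonic test signal of amplitude A0 the variance (the mean square of the zero-mean oscillation) equals A0²/2, so an ideal square-law measurer fed with a test tone of known amplitude should produce exactly this reading. The sketch below assumes an ideal square-law transform and purely illustrative parameters.

```python
import numpy as np

# Calibration check sketch: pass a harmonic signal of known amplitude through an
# ideal square-law transform followed by time averaging and compare the reading
# with the expected value A0**2 / 2. All parameters are illustrative.
A0, f0, fs, T = 2.0, 1.0e3, 1.0e5, 0.1      # amplitude, frequency [Hz], sampling rate [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)
s = A0 * np.cos(2.0 * np.pi * f0 * t)

reading = np.mean(s**2)                      # square-law transform plus averaging
print("measurer reading :", reading)
print("expected A0**2/2 :", A0**2 / 2.0)
```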

In the majority of cases, we need to define the variance estimate by investigating a single realization of the stochastic process. Measuring a variance that varies in time based on a single realization of the stochastic process poses the same problems as the estimation of a time-varying mathematical expectation. Actually, on one hand, to decrease the variance of the estimate caused by the finite observation time interval, the latter must be as large as possible. On the other hand, there is a need to choose an integration time as short as possible for the best definition of the variance variations. Evidently, there must be a compromise.

The simplest way to define the variance of a time-varying stochastic process at the instant t0 is averaging the transformed input data of the stochastic process within the limits of a finite time interval. Thus, let x(t) be the realization of a stochastic process ξ(t) with zero mathematical expectation. Measurement of the variance of this stochastic process at the instant t0 is carried out by averaging the squared ordinates of x(t) within the limits of the interval about the given value of the argument, [t0 − 0.5T, t0 + 0.5T]. In this case, the variance estimate is defined by (13.106). The average of the variance estimate over realizations can be defined by (13.107). Thus, as in the case of a time-varying mathematical expectation, the mathematical expectation of the estimate of the time-varying variance does not coincide with its true value in the general case, but is obtained by smoothing the variance within the limits of the finite time interval [t0 − 0.5T, t0 + 0.5T]. As a result of such averaging, the bias of the variance estimate of the stochastic process can be presented by (13.108).
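A minimal sketch of this sliding-window estimator is given below; it assumes discrete samples of a zero-mean process and a rectangular averaging window of width T centered at t0. The test process (Gaussian noise with a slowly varying variance) and the window lengths are illustrative, not taken from the text; they only show the compromise between smoothing and statistical scatter.

```python
import numpy as np

# Sliding-window variance estimate of a zero-mean process: the average of the
# squared samples inside [t0 - T/2, t0 + T/2], evaluated for every admissible t0.
# A long window smooths the true variance variations; a short one is noisy.
rng = np.random.default_rng(1)
fs, dur = 1000.0, 10.0                                   # sampling rate [Hz], record length [s]
t = np.arange(0.0, dur, 1.0 / fs)
sigma_t = 1.0 + 0.5 * np.sin(2.0 * np.pi * 0.2 * t)      # slowly varying true standard deviation
x = sigma_t * rng.standard_normal(t.size)                # single zero-mean realization

def sliding_variance(x, win_samples):
    """Average of x**2 over a centered rectangular window (valid part only)."""
    kernel = np.ones(win_samples) / win_samples
    return np.convolve(x**2, kernel, mode="valid")

for T_win in (0.1, 1.0):                                 # window lengths [s]
    est = sliding_variance(x, int(T_win * fs))
    print(f"window {T_win:4.1f} s: variance estimates range "
          f"from {est.min():.2f} to {est.max():.2f}")
```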

Comparing (13.187) with (13.162), we can see that in the case of the comparison method, the variance of the variance estimate is twice as high as the variance of the variance estimate obtained via the compensation method. This increase is explained by the presence of the second channel, because of which the total variance of the output signal increases. We need to note that although there is an increase in the variance of the variance estimate for the considered procedure, the present method is a better choice compared to the compensation method if the variance of the intrinsic noise of the amplifier changes after the compensation procedure. This is also true when the random variations of the amplification coefficient are absent. In doing so, the time interval of observation corresponding to the sensitivity threshold increases twofold compared to the time interval of observation of the investigated stochastic process while using the compensation method to measure the variance.

Comparing (13.198) through (13.200) with (13.162), (13.166), and (13.169), we can see that the sensitivity of the correlation method of stochastic process variance measurement is higher compared to the sensitivity of the compensation method. This difference is caused by the compensation of high-order noise components while using the correlation method and by the compensation of errors caused by random variations of the amplification coefficients. However, when the channels are not identical and there is a statistical relationship between the intrinsic receiver noise and the random variations of the amplification coefficients, there is an estimate bias and an increase in the variance of the estimate, which is undesirable.

Comparing (13.219) and (13.162) at Varβ = 0 in the case of the compensation method of variance measurement, we can see that the relative value of the variance of the variance estimate, which defines the sensitivity of the modulation method, is twice as high as the relative value for the compensation method under the same conditions of measurement. Physically, this phenomenon is caused by the twofold decrease in the time interval of observation of the investigated stochastic process due to switching.

When the random variations of the amplification coefficients are absent, the process at the modulation measurer output can be calibrated using the values of the difference between the variance of the investigated stochastic process and the variance of the reference stochastic process. This measurement method is called the modulation method of variance measurement with direct reading. When the variances of the investigated and the reference stochastic processes are the same, the bias of the considered variance estimate is zero. If the variance of the reference stochastic process can be controlled, the measurement is reduced to fixation of zero readings. This method is called the modulation method of measurement with zero reading. The modulation method with zero reading can be automated by a tracking device.

Comparing (13.229) and (13.162) at Varβ = 0 in the case of the compensation method of variance measurement, we can see that, owing to the twofold decrease in the total time interval of observation of the investigated stochastic process and the presence of the reference stochastic process, the variance of the variance estimate while using the modulation method is four times higher than the variance of the variance estimate of the compensation method under the same conditions of measurement.

Comparison between (13.231) and (13.191) shows that, owing to the twofold decrease in the total time interval of observation, the variance of the variance estimate while using the modulation method is two times higher compared to the compensation method.

REFERENCES

1. Slepian, D. 1958. Some comments on the detection of Gaussian signals in Gaussian noise. IRE Transactions on Information Theory, 4(2): 65–68.

2. Kay, S. 2006. Intuitive Probability and Random Processes Using MATLAB. New York: Springer Science + Business Media, LLC.

3. Vilenkin, S.Ya. 1979. Statistical Processing of Stochastic Functions Investigation Results. Moscow, Russia: Energy.

4. Esepkina, N.A., Korolkov, D.V., and Paryisky, Yu.N. 1973. Radar Telescopes and Radiometers. Moscow, Russia: Nauka.

14  Estimation of Probability Distribution and Density Functions of Stochastic Process

14.1  MAIN ESTIMATION REGULARITIES

Experimental definition of the one-dimensional probability distribution function F(x) and pdf p(x) is carried out in the simplest way for ergodic stochastic processes and is based on an investigation of the stochastic process within the limits of the time interval [0, T]. Actually, a researcher observes the realization x(t) of a continuous ergodic process ξ(t). An example of the realization x(t) is shown in Figure 14.1a. According to the definition of an ergodic stochastic process [1], the probability distribution function F(x) can be defined approximately in the following form (see Figure 14.1a):

F*(x) ≈ (1/T) Σ_{i=1}^N τi,    (14.1)

where F*(x) is the estimate of the probability distribution function F(x) defined by the total time when the value of the realization x(t) is below the level x within the limits of the observation interval [0, T].

It is natural to suppose that the larger the observation interval and the total time when the value of the realization x(t) is below the level x, the closer the estimate F*(x) is to the true value F(x). Thus, in the limiting case, when the observation interval [0, T] is large, the following equality

F(x) = P[ξ(t) ≤ x] = lim_{T→∞} (1/T) Σ_{i=1}^N τi    (14.2)

is satisfied.

The approximation accuracy of the experimentally defined estimate F*(x) with respect to the true probability distribution function F(x) is characterized by the random variable

∆F(x) = F(x) − F*(x).    (14.3)

Based on (14.3), we obtain the bias of the probability distribution function estimate

b[F*(x)] = ⟨∆F(x)⟩ = F(x) − ⟨F*(x)⟩    (14.4)

and the variance of the probability distribution function estimate

Var{F*(x)} = ⟨[F*(x) − ⟨F*(x)⟩]²⟩,    (14.5)


FIGURE 14.1  (a) Realization of stochastic process within the limits of the observation interval [0, T]; (b) Sequence of rectangular pulses formed in accordance with the nonlinear inertialess transformation (14.13).

which are the functions both of the averaging time and of the reference level.

The one-dimensional pdf of ergodic stochastic process is defined based on the following relationship:

 

 

 

 

 

p(x) = dF(x)/dx = lim_{Δx→0, T→∞} [1/(TΔx)] Σ_{i=1}^N τ′i.    (14.6)

 

 

 

 

 

 

 

As we can see, (14.6) is the ratio between the probability of the event that the values of the stochastic process ξ(t) are within the limits of the interval [x − 0.5dx, x + 0.5dx] and the interval length dx. In other words,

 

p[x(t)] = P{(x − 0.5dx) ≤ x(t) ≤ (x + 0.5dx)}/dx.    (14.7)

As we can see from (14.6), Σ_{i=1}^N τ′i is the total time when the values of the realization x(t) of the stochastic process are within the limits of the interval [x ± 0.5dx] (see Figure 14.2a).


FIGURE 14.2  (a) Realization of stochastic process within the limits of the observation interval [0, T] with quantization by amplitude within the limits of the interval [x ± 0.5∆x]; (b) Sequence of rectangular pulses formed in accordance with the nonlinear transformation (14.23).


In practice, a measurement of the pdf is carried out within the limits of the fixed time interval [0, T], and the interval Δx takes a finite, nonzero value. In this case, the pdf defined by experiment can be presented in the following form:

 

 

 

 

 

p*(x) = [1/(TΔx)] Σ_{i=1}^N τ′i    (14.8)

and is different from the true value of the pdf p(x) by the random variable defined as

∆p(x) = p(x) − p*(x).    (14.9)

The bias and variance of the pdf estimate can be presented in the following forms:

b[p*(x)] = ⟨∆p(x)⟩ = p(x) − ⟨p*(x)⟩,    (14.10)

Var{p*(x)} = ⟨[p*(x) − ⟨p*(x)⟩]²⟩.    (14.11)

According to (14.8), the pdf is defined experimentally as the average value of the pdf within the limits of the interval [x ± 0.5Δx]. Because of this, from the viewpoint of approximation of the measured pdf to its true value at the level x, it is desirable to decrease the interval Δx. However, a decrease in Δx at the fixed observation time interval [0, T] leads to a decrease in the time spent by the stochastic process within the limits of the interval Δx and, consequently, to an increase in the variance of the pdf estimate owing to a decrease in the statistical data sample based on which the decision about the pdf p(x) magnitude is made. Therefore, there is an optimal value of the interval Δx, under which the dispersion of the pdf estimate

 

 

 

D[p*(x)] = b²[p*(x)] + Var{p*(x)}    (14.12)

takes the minimal magnitude at the fixed observation time T.
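The existence of this optimum can be seen numerically. The sketch below estimates the pdf of a Gaussian sample at a fixed level x for several interval widths Δx, using an estimate of the form (14.8), and splits the dispersion (14.12) into squared bias and variance over repeated trials; independent Gaussian samples and the particular widths are illustrative assumptions.

```python
import numpy as np
from math import sqrt, pi, exp

# Illustration of (14.12): dispersion of the pdf estimate at a fixed level x as a
# function of the interval width dx, split into squared bias and variance over
# repeated trials. Independent Gaussian samples stand in for the observed data.
rng = np.random.default_rng(2)
x_level, n, trials = 0.0, 2000, 400
true_pdf = exp(-x_level**2 / 2.0) / sqrt(2.0 * pi)

for dx in (0.02, 0.2, 1.0, 2.0):
    xs = rng.standard_normal((trials, n))
    inside = np.abs(xs - x_level) <= dx / 2.0            # samples falling into [x ± 0.5*dx]
    est = inside.mean(axis=1) / dx                       # pdf estimate of the form (14.8)
    bias2 = (est.mean() - true_pdf) ** 2
    var = est.var()
    print(f"dx = {dx:4.2f}   bias^2 = {bias2:.2e}   variance = {var:.2e}   "
          f"dispersion = {bias2 + var:.2e}")
```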

There are several ways to measure the total time of stochastic process observation below the given level or within given limits. Consider these ways as applied to an experimental definition of the probability distribution function estimate F*(x). The first way is the direct measurement of the total time when the magnitude of the realization x(t) of a continuous ergodic process ξ(t) is below the fixed level x. In this case, the realization x(t) is transformed into the sequence of rectangular pulses η(t) (see Figure 14.1b) with unit amplitude and durations τi equal to the times when the magnitude of the realization x(t) is below the fixed level x. This transformation takes the following form:

η(t) = 1 at x(t) ≤ x;  η(t) = 0 at x(t) > x.    (14.13)

 

 

 

Then, the area occupied by all rectangular pulses is equal by value to the total time when the realization x(t) of continuous ergodic process ξ(t) is below the fixed level x:

Σ_{i=1}^N τi = ∫_0^T η(t) dt.    (14.14)

 


As a result, the probability distribution function estimate takes the following form:

 

 

F*(x) = (1/T) ∫_0^T η(t) dt.    (14.15)

 

The nonlinear inertialess transformation (14.13) is carried out without any problems by means of appropriate amplitude limiters and threshold devices.
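A minimal discrete-time sketch of this first way is given below: the indicator η(t) of (14.13) is formed by a threshold comparison and then averaged over the observation interval as in (14.15). A discretized AR(1) Gaussian realization stands in for the continuous ergodic process, and the level and parameters are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

# Sketch of the first way: form the indicator eta of (14.13) by a threshold
# comparison and average it over the observation interval as in (14.15).
# A discretized AR(1) Gaussian realization with unit variance stands in for the
# continuous ergodic process; level and parameters are illustrative.
rng = np.random.default_rng(3)
n, rho, level = 100_000, 0.95, 0.5

w = rng.standard_normal(n) * np.sqrt(1.0 - rho**2)
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + w[i]

eta = (x <= level).astype(float)                 # eta = 1 where x(t) <= x, else 0, as in (14.13)
F_est = eta.mean()                               # (1/T) * integral of eta(t) dt, as in (14.15)
F_true = 0.5 * (1.0 + erf(level / sqrt(2.0)))    # Gaussian F(x) of the unit-variance process

print("F*(x) =", F_est, "  F(x) =", F_true)
```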

The second way to define and measure the probability distribution function is based on counting sampled pulses quantized by amplitude and duration. In this case, at first, the continuous realization x(t) of the stochastic process ξ(t) is transformed by a corresponding pulse modulator into the sampled sequence xi, which comes in at the input of the threshold device with the threshold level x. Then, the ratio of the number of pulses Nx that do not exceed the threshold to the total number of pulses N corresponding to the observation time interval [0, T] is equal by value to the probability distribution function estimate, i.e.,

F*(x) = Nx/N.    (14.16)

 

 

 

 

 

The total number of pulses is associated with the observation time accurate within a single pulse by the following relationship:

N = T/Tp,    (14.17)

 

where Tp is the period of pulse repetition.

The number of pulses Nx that do not exceed the threshold x can be presented in the form of summation of the pulses with unit amplitude

 

 

 

Nx = Σ_{i=1}^N ηi,    (14.18)

 

 

where

 

 

 

 

 

 

ηi = 1 at xi ≤ x;  ηi = 0 at xi > x.    (14.19)

 

 

 

 

 

 

 

 

Then the probability distribution function estimate takes the following form:

 

 

 

 

 

 

F*(x) = (1/N) Σ_{i=1}^N ηi.    (14.20)

 

 

 

 

 

 

 

The numbers of pulses N and Nx are defined by various analog and digital counters; digital counters are preferable. In practice, it is convenient to employ the nonlinear transformation of the following form:

 

η1(t) = 1 − η(t) = 1 at x(t) ≥ x;  η1(t) = 0 at x(t) < x.    (14.21)

 

 

 

 


In this case, we use the following probability distribution function:

F1(x) = 1 − F(x).

(14.22)

This statement is correct with respect to the discrete stochastic process. Experimental definition of the pdf of ergodic stochastic process is carried out in an analogous way. In the case of the first method, we carry out the following nonlinear transformation (see Figure 14.2b):

χ(t) = 1 at x − 0.5Δx ≤ x(t) ≤ x + 0.5Δx;  χ(t) = 0 at x(t) < x − 0.5Δx or x(t) > x + 0.5Δx.    (14.23)

 

 

 

In the case of the second method, based on pulse counting, we carry out the following nonlinear transformation:

χi = 1 at x − 0.5Δx ≤ xi ≤ x + 0.5Δx;  χi = 0 at xi < x − 0.5Δx or xi > x + 0.5Δx.    (14.24)

 

 

 

 

 

Then, an experimental measurement of the stochastic process pdf is reduced to the following procedures:

p*(x) = [1/(TΔx)] ∫_0^T χ(t) dt    (14.25)

 

and

 

 

 

 

 

 

 

 

 

 

 

 

 

p*(x) = [1/(NΔx)] Σ_{i=1}^N χi.    (14.26)
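As a short illustration of the counting variant, the sketch below applies the transformation (14.24) at a grid of levels and forms the estimate (14.26); independent Gaussian samples are an illustrative stand-in for the sampled sequence xi.

```python
import numpy as np
from math import sqrt, pi

# Sketch of the pulse-counting pdf estimate: apply the transformation (14.24) at a
# grid of levels x and form p*(x) by (14.26). Independent Gaussian samples are an
# illustrative stand-in for the sampled realization x_i.
rng = np.random.default_rng(4)
xs = rng.standard_normal(100_000)
dx = 0.25
levels = np.arange(-3.0, 3.0 + dx, dx)

p_est = np.array([np.mean(np.abs(xs - x0) <= dx / 2.0) / dx for x0 in levels])   # (14.26)
p_true = np.exp(-levels**2 / 2.0) / sqrt(2.0 * pi)

for x0, pe, pt in zip(levels[::6], p_est[::6], p_true[::6]):
    print(f"x = {x0:5.2f}   p*(x) = {pe:6.4f}   p(x) = {pt:6.4f}")
```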

 

 

 

 

 

 

 

 

14.2  CHARACTERISTICS OF PROBABILITY DISTRIBUTION FUNCTION ESTIMATE

Determine the bias and variance of the probability distribution function estimate based on the direct measurement of the total time when the realization x(t) of continuous ergodic process ξ(t) is below the fixed level x according to (14.15). Deviation of measured value of the probability distribution function estimate F*(x) with respect to the true value F(x) can be presented as follows:

∆F(x) = (1/T) ∫_0^T η(t) dt − F(x).    (14.27)

 

According to (14.13), the mathematical expectation of realizations η(t) can be presented in the following form:

⟨η(t)⟩ = P[ξ(t) ≤ x] = ∫_{−∞}^x p(x′) dx′ = F(x).    (14.28)

 


As we can see from (14.27), the probability distribution function estimate is unbiased. In (14.28), p(x) is the one-dimensional pdf of the investigated stochastic process.

Define the correlation function of the probability distribution function estimate F*(x) at various levels x1 and x2:

 

 

 

⟨∆F(x1)∆F(x2)⟩ = RF(x1, x2) = (1/T²) ∫_0^T ∫_0^T ⟨η(t1)η(t2)⟩ dt1 dt2 − F(x1)F(x2).    (14.29)

 

The instantaneous function ⟨η(t1)η(t2)⟩ of the stochastic process obtained as a result of the nonlinear operation given by (14.13) is numerically equal to the probability of the joint event that ξ(t1) ≤ x1 and ξ(t2) ≤ x2, i.e., to the value of the two-dimensional probability distribution function F(x1, x2; τ) at the points x1 and x2:

F(x1, x2; τ) = ⟨η(t1)η(t2)⟩ = P[ξ(t1) ≤ x1, ξ(t2) ≤ x2] = ∫_{−∞}^{x1} ∫_{−∞}^{x2} p2(x′1, x′2; τ) dx′1 dx′2,    (14.30)

where p2(x′1, x′2; τ) is the two-dimensional pdf of the investigated stochastic process. In the case of a stationary stochastic process, the two-dimensional pdf depends only on the absolute difference in time instants τ = t2 − t1. For this reason, the instantaneous function given by (14.30) also depends only on the absolute difference in time instants τ = t2 − t1. Based on this fact, the double integral in (14.29) can be transformed into a single integral. For this purpose, introduce the new variables τ = t2 − t1 and t1 = t and change the order of integration taking into consideration the parity of the two-dimensional probability distribution function F(x1, x2; τ) = F(x1, x2; −τ). As a result, we obtain

∫_0^T ∫_0^T ⟨η(t1)η(t2)⟩ dt1 dt2 = 2T ∫_0^T (1 − τ/T) F(x1, x2; τ) dτ.    (14.31)

Substituting (14.31) into (14.29), we obtain

RF(x1, x2) = (2/T) ∫_0^T (1 − τ/T) F(x1, x2; τ) dτ − F(x1)F(x2).    (14.32)

The variance of the probability distribution function estimate at the level x is defined by (14.32) substituting x1 = x2 = x:

Var{F*(x)} = (2/T) ∫_0^T (1 − τ/T) F(x, x; τ) dτ − F²(x).    (14.33)

 

 

 

 

 

 

By analogous method, we can define characteristics under discrete transformation of the stochastic process. In this case, the deviation of the probability distribution function estimate with respect to its true value can be presented in the following form:

∆F(x) = (1/N) Σ_{i=1}^N ηi − F(x).    (14.34)
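A rough empirical companion to (14.33) and (14.34): instead of evaluating the two-dimensional distribution function, the sketch below estimates Var{F*(x)} directly by repeating the measurement (14.20) over many independent realizations of a correlated Gaussian process for two observation lengths; for observation times much longer than the correlation time the variance is expected to fall roughly as 1/T. The AR(1) model and all parameters are illustrative assumptions.

```python
import numpy as np

# Empirical check of the spread of F*(x): repeat the counting estimate (14.20)
# over many independent realizations of an AR(1) Gaussian process and compute the
# variance of the estimate for two observation lengths. Parameters are illustrative.
rng = np.random.default_rng(5)
rho, level, trials = 0.95, 0.0, 500

def realizations(trials, n):
    """trials independent AR(1) realizations of length n with unit variance."""
    w = rng.standard_normal((trials, n)) * np.sqrt(1.0 - rho**2)
    x = np.empty((trials, n))
    x[:, 0] = rng.standard_normal(trials)
    for i in range(1, n):
        x[:, i] = rho * x[:, i - 1] + w[:, i]
    return x

for n in (2000, 8000):
    F_star = (realizations(trials, n) <= level).mean(axis=1)     # (14.20) per realization
    print(f"n = {n}:  Var{{F*(x)}} = {F_star.var():.2e}")
```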
