An Introduction to Statistical Signal Processing

5.2. SECOND-ORDER LINEAR SYSTEMS I/O RELATIONS

equals the constant, m, and

EY_n = m \sum_k h_k ,   (5.4)

which is the dc response of the filter times the mean. For reference we specify the precise limits for the two-sided random process where T = Z and for the one-sided input random process where T = Z+:

EY_n = m \sum_{k=0}^{\infty} h_k ;   T = Z   (5.5)

EY_n = m \sum_{k=0}^{n} h_k ;   T = Z+ .   (5.6)

Thus, for weakly stationary input random processes, the output mean exists if the input mean is finite and the filter is stable. In addition, it can be seen that for two-sided weakly stationary random processes, the expected value of the output process does not depend on the time index n since then the limits of the summation do not depend on n. For one-sided weakly stationary random processes, however, the output mean is not constant with time but approaches a constant value as n → ∞ if the filter is stable. Note that this means that if a one-sided stationary process is put into a linear filter, the output is in general not stationary!

If the filter is not stable, the magnitude of the output mean is unbounded with time. For example, if we set hk = 1 for all k in (5.6) then EYn = (n + 1)m, which very strongly depends on the time index n and which is unbounded.
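As a concrete illustration of (5.4)-(5.6), the following sketch (with an arbitrary mean m = 2 and two illustrative pulse responses of my choosing, one stable and one not) computes the one-sided output mean:

```python
# Sketch of the one-sided output mean EY_n = m * sum_{k=0}^{n} h_k from (5.6).
# The mean m and the two pulse responses below are illustrative assumptions.
def output_mean(m, h, n):
    """Mean of Y_n for a one-sided input with constant mean m and pulse response h(k)."""
    return m * sum(h(k) for k in range(n + 1))

m = 2.0
# Stable filter: h_k = (1/2)^k, so EY_n approaches m * 2 = 4 as n grows.
stable = lambda k: 0.5 ** k
# Unstable filter: h_k = 1 for all k, so EY_n = (n + 1) * m, unbounded in n.
unstable = lambda k: 1.0

print(output_mean(m, stable, 50))    # close to 4.0
print(output_mean(m, unstable, 50))  # (50 + 1) * 2 = 102.0
```

The stable case converges to the dc response of the filter times the mean, while the unstable running sum grows without bound, matching the discussion below.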

Turning to the calculation of the output covariance function, we use equations (5.2) and (5.3) to evaluate the covariance with some bookkeeping as

K_Y(k, j) = E[(Y_k - EY_k)(Y_j - EY_j)]
          = E[ ( \sum_n h_n (X_{k-n} - m_{k-n}) ) ( \sum_m h_m (X_{j-m} - m_{j-m}) ) ]
          = \sum_n \sum_m h_n h_m E[(X_{k-n} - m_{k-n})(X_{j-m} - m_{j-m})]
          = \sum_n \sum_m h_n h_m K_X(k - n, j - m) .   (5.7)

CHAPTER 5. SECOND-ORDER MOMENTS

A careful reader might note the similarity between (5.7) and the corresponding matrix equation (4.28) derived during the consideration of Gaussian vectors (but true generally for covariance matrices of linear functions of random vectors).

As before, the range of the sums depends on the index set used. Since we have specified causal filters, the sums run from 0 to ∞ for two-sided processes and from 0 to k and 0 to j for one-sided random processes.

It can be shown that the sum of (5.7) converges if the filter is stable in the sense of (A.30) and if the input process has bounded variance; i.e., there is a constant σ2 < ∞ such that |KX(n, n)| < σ2 for all n (problem 5.19).

If the input process is weakly stationary, then K_X depends only on the difference of its arguments. This is made explicit by replacing K_X(m, n) by K_X(m - n). Then (5.7) becomes

K_Y(k, j) = \sum_n \sum_m h_n h_m K_X((k - j) - (n - m)) .   (5.8)

Specifying the limits of the summation for the one-sided and two-sided cases, we have that

K_Y(k, j) = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} h_n h_m K_X((k - j) - (n - m)) ;   T = Z   (5.9)

and

K_Y(k, j) = \sum_{n=0}^{k} \sum_{m=0}^{j} h_n h_m K_X((k - j) - (n - m)) ;   T = Z+ .   (5.10)

If the sum of (5.9) converges (e.g., if the filter is stable and KX(n, n) = KX(0) < ∞), then two interesting facts follow: First, if the input random process is weakly stationary and if the processes are two-sided, then the covariance of the output process depends only on the time lag; i.e., KY (k, j) can be replaced by KY (k −j). Note that this is not the case for a one-sided process, even if the input process is stationary and the filter stable! This fact, together with our earlier result regarding the mean, can be summarized as follows:

Given a two-sided random process as input to a linear filter, if the input process is weakly stationary and the filter is stable, the output random process is also weakly stationary. The output mean and covariance functions are given by

EY_n = m \sum_{k=0}^{\infty} h_k   (5.11)

and

K_Y(k) = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} h_n h_m K_X(k - (n - m)) .   (5.12)

The second observation is that (5.8), (5.9), (5.10) or (5.12) is a double discrete convolution! The direct evaluation of (5.8), (5.9), and (5.10) while straightforward in concept, can be an exceedingly involved computation in practice. As in other linear systems applications, the evaluations of convolutions can often be greatly simplified by resort to transform techniques, as shall be considered shortly.
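For a sense of the bookkeeping involved, here is a minimal sketch of the direct evaluation of (5.12); the filter length, pulse response, and input covariance below are illustrative assumptions, with the filter truncated so the double sum is finite:

```python
# Direct evaluation of the double discrete convolution (5.12) for a two-sided
# weakly stationary input. The filter is truncated to length L (an assumption
# so the sums are finite); K_X here is an arbitrary illustrative covariance.
L = 64
h = [0.5 ** n for n in range(L)]        # causal pulse response h_n = (1/2)^n

def K_X(k):
    """Example input covariance, even in k as any covariance must be."""
    return 0.9 ** abs(k)

def K_Y(k):
    """Evaluate K_Y(k) = sum_n sum_m h_n h_m K_X(k - (n - m)) directly."""
    return sum(h[n] * h[m] * K_X(k - (n - m))
               for n in range(L) for m in range(L))

# K_Y inherits the evenness of K_X: K_Y(-k) = K_Y(k).
print(K_Y(0), K_Y(3), K_Y(-3))
```

Even for this short filter the direct evaluation costs L² terms per lag, which is why the transform techniques mentioned above are usually preferred.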

Continuous Time Systems

For each of the discrete time filter results there is an analogous continuous time result. For simplicity, however, we consider only the simpler case of two-sided processes. Let {X(t)} be a two-sided continuous time input random process to a linear time-invariant filter with impulse response h(t).

We can evaluate the mean and covariance functions of the output process in terms of the mean and covariance functions of the input random process by using the same development as was used for discrete random processes. This time we will have integrals instead of sums. Let m(t) and KX(t, s) be the respective mean and covariance functions of the input

process. Then the mean function of the output process is

 

EY(t) = \int E[X(t - s)] h(s) \, ds = \int m(t - s) h(s) \, ds .   (5.13)

The covariance function of the output random process is obtained by computations analogous to (5.7) as

 

 

 

K_Y(t, s) = \int d\alpha \int d\beta \, K_X(t - \alpha, s - \beta) h(\alpha) h(\beta) .   (5.14)

Thus if {X(t)} is weakly stationary with mean m = m(t) and covariance function KX(τ), then

EY(t) = m \int h(t) \, dt   (5.15)

and

K_Y(t, s) = \int d\alpha \int d\beta \, K_X((t - s) - (\alpha - \beta)) h(\alpha) h(\beta) .   (5.16)

In analogy to the discrete time result, the output mean is constant for a two-sided random process, and the covariance function depends only on the time difference. Thus a weakly stationary two-sided process into a stable linear time-invariant filter yields a weakly stationary output process in both discrete and continuous time. We leave it to the reader to develop conclusions that are parallel to the discrete time results for one-sided processes.

Transform I/O Relations

In both discrete and continuous time, the covariance function of the output can be found by first convolving the input autocorrelation with the pulse response hk or h(t) and then convolving the result with the reflected pulse response h−k or h(−t). A way of avoiding the double convolution is found in Fourier transforms. Taking the Fourier transform (continuous or discrete time) of the double convolution yields the transform of the covariance function, which can be used to arrive at the output covariance function — essentially the same result with (in many cases) less overall work.

We shall show the development for discrete time; a similar sequence of steps provides the proof for continuous time by replacing the sums with integrals. Using (5.12),

F_f(K_Y) = \sum_k ( \sum_n \sum_m h_n h_m K_X(k - (n - m)) ) e^{-j2\pi fk}
         = \sum_n \sum_m h_n h_m e^{-j2\pi f(n-m)} \sum_k K_X(k - (n - m)) e^{-j2\pi f(k-(n-m))}
         = ( \sum_n h_n e^{-j2\pi fn} ) ( \sum_m h_m e^{+j2\pi fm} ) F_f(K_X)
         = F_f(K_X) F_f(h) F_f(h)^* ,   (5.17)

where the asterisk denotes complex conjugation. If we define H(f) = F_f(h), the transfer function of the filter, then the result can be abbreviated for both continuous and discrete time as

F_f(K_Y) = |H(f)|^2 F_f(K_X) .   (5.18)

We can also conveniently describe the mean and autocorrelation functions in the frequency domain. From (5.5) and (5.15) the mean mY of the output is related to the mean mX of the input simply as

m_Y = H(0) m_X .   (5.19)


Since K_X(k) = R_X(k) - |m_X|^2 and K_Y(k) = R_Y(k) - |m_Y|^2, (5.18) implies that

F_f(R_Y - |m_Y|^2) = |H(f)|^2 F_f(R_X - |m_X|^2)

or

F_f(R_Y) - |m_Y|^2 \delta(f) = |H(f)|^2 [ F_f(R_X) - |m_X|^2 \delta(f) ]
                             = |H(f)|^2 F_f(R_X) - |H(f)|^2 |m_X|^2 \delta(f)
                             = |H(f)|^2 F_f(R_X) - |H(0)|^2 |m_X|^2 \delta(f) ,

where we have used the property of Dirac deltas that g(f)δ(f) = g(0)δ(f) (provided g(f) has no jumps at f = 0). Thus the autocorrelation function satisfies the same transform relation as the covariance function. This result is abbreviated by giving a special notation to the transform of an autocorrelation function: Given a weakly stationary process {X(t)} with autocorrelation function RX, the power spectral density of the process is defined by

 

 

S_X(f) = F_f(R_X) = \begin{cases} \sum_k R_X(k) e^{-j2\pi fk} , & \text{discrete time} \\ \int R_X(\tau) e^{-j2\pi f\tau} \, d\tau , & \text{continuous time ,} \end{cases}   (5.20)

the Fourier transform of the autocorrelation function. The reason for the name will be given in the next section and discussed at further length later in the chapter. Given the definition we have now proved the following result.

If a weakly stationary process {X(t)} with power spectral density SX(f) is the input to a linear time invariant filter with transfer function H, then the output process {Y (t)} is also weakly stationary and has mean

m_Y = H(0) m_X   (5.21)

and power spectral density

S_Y(f) = |H(f)|^2 S_X(f) .   (5.22)

This result is true for both discrete and continuous time.
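A quick numerical check of (5.22), using an assumed concrete case rather than anything proved so far: white input with S_X(f) = 1 into the causal filter h_k = r^k (this filter is worked out as example [5.1] later in the chapter), for which H(f) = 1/(1 - r e^{-j2πf}) and K_Y(k) = r^{|k|}/(1 - r^2):

```python
import cmath

# Hedged numerical check of S_Y(f) = |H(f)|^2 S_X(f) for an assumed case:
# white input with S_X(f) = 1 into the causal filter h_k = r^k, so that
# H(f) = 1/(1 - r e^{-j 2 pi f}) and K_Y(k) = r^{|k|} / (1 - r^2).
r, f = 0.5, 0.2

H = 1.0 / (1.0 - r * cmath.exp(-2j * cmath.pi * f))
S_Y_pred = abs(H) ** 2          # |H(f)|^2 S_X(f) with S_X(f) = 1

# Fourier-transform the output covariance directly (truncated over lags).
S_Y_direct = sum(
    (r ** abs(k) / (1 - r * r)) * cmath.exp(-2j * cmath.pi * f * k)
    for k in range(-200, 201)
).real

print(S_Y_pred, S_Y_direct)     # the two agree
```

The truncation at lag 200 is an assumption; the tail terms decay like r^k and are negligible here.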

5.3 Power Spectral Densities

Under suitable technical conditions the Fourier transform can be inverted to obtain the autocorrelation function from the power spectral density. Thus


the reader can verify from the definitions (5.20) that

 

R_X(\tau) = \begin{cases} \int_{-1/2}^{1/2} S_X(f) e^{j2\pi f\tau} \, df , & \text{discrete time, integer } \tau \\ \int_{-\infty}^{\infty} S_X(f) e^{j2\pi f\tau} \, df , & \text{continuous time, continuous } \tau . \end{cases}   (5.23)

The limits of -1/2 to +1/2 for the discrete time integral correspond to the fact that time is measured in units; e.g., adjacent outputs are one second or one minute or one year apart. Sometimes, however, the discrete time process is formed by sampling a continuous time process every, say, T seconds, and it is desired to retain seconds as the unit of measurement. Then it is more convenient to incorporate the scale factor T into the time units and scale (5.20) and the limits of (5.23) accordingly; i.e., kT replaces k in (5.20), and the limits become -1/(2T) to 1/(2T).
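The discrete time inversion can be checked numerically; in this sketch the spectrum S_X(f) = (1 - r^2)/(1 - 2r cos 2πf + r^2) is an assumed example whose inverse transform is known to be R_X(k) = r^{|k|}:

```python
import math

# Hedged sketch of the discrete-time inversion: recover R_X(tau) by
# numerically integrating S_X(f) e^{j 2 pi f tau} over [-1/2, 1/2].
# The spectrum below is an assumed example with known inverse R_X(k) = r^{|k|}.
r = 0.5

def S_X(f):
    # (1 - r^2)/|1 - r e^{-j 2 pi f}|^2 = (1 - r^2)/(1 - 2 r cos(2 pi f) + r^2)
    return (1 - r * r) / (1 - 2 * r * math.cos(2 * math.pi * f) + r * r)

def R_X(tau, steps=4000):
    """Midpoint-rule approximation of the integral over [-1/2, 1/2].
    Since S_X is real and even, only the cosine part of e^{j2 pi f tau} survives."""
    df = 1.0 / steps
    total = 0.0
    for i in range(steps):
        f = -0.5 + (i + 0.5) * df
        total += S_X(f) * math.cos(2 * math.pi * f * tau) * df
    return total

print(R_X(0), R_X(3))   # approximately 1 and r^3 = 0.125
```

The midpoint rule on a smooth periodic integrand converges very quickly, so 4000 grid points (an arbitrary choice) are far more than enough here.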

Power spectral densities inherit the property of symmetry from autocorrelation functions. As seen from the definition in chapter 4, covariance and autocorrelation functions are symmetric (RX(t, s) = RX(s, t)). Therefore RX(τ) is an even function. From (5.20) it can be seen with a little juggling that SX(f) is also even; that is, SX(−f) = SX(f) for all f.

The reason for the name “power spectral density” comes from observing how the average power of a random process is distributed in the frequency domain. The autocorrelation function evaluated at 0 lag, P_X = R_X(0) = E(|X(t)|^2), can be interpreted as the average power dissipated in a unit resistor by a voltage X(t). Since the autocorrelation is the inverse Fourier transform of the power spectral density, this means that

 

P_X = \int S_X(f) \, df ,   (5.24)

that is, the total average power in the process can be found by integrating SX(f). Thus if SX were nonnegative, it could be considered as a density of power analogous to integrating a probability or mass density to find total probability or mass. For the probability and mass analogues, however, we know that integrating over any reasonable set will give the probability or mass of that set, i.e., we do not wish to confine interest to integrating over all possible frequencies. The analogous consideration for power is to look at the total average power within an arbitrary frequency band, which we do next. The fact that power spectral densities are nonnegative can be derived from the fact that the autocorrelation function is nonnegative definite (which can be shown in the same way it was shown for covariance functions) — a result known as Bochner’s theorem. We shall prove nonnegativity of the power spectral density as part of the development.


Suppose that we wish to find the power of a process, say {Xt}, in some frequency band f ∈ F. Then a physically natural way to accomplish this would be to pass the given process through a bandpass filter with transfer function H(f) equal to 1 for f ∈ F and 0 otherwise and then to measure the output power. This is depicted in Figure 5.1 for the special case of a frequency interval F = {f : f0 ≤ |f| < f0 + ∆f}. Calling the output

 

 

[Figure 5.1: Power spectral density. The input X_t passes through a bandpass filter with transfer function H(f) equal to 1 for f_0 ≤ |f| < f_0 + ∆f and 0 elsewhere, producing the output Y_t with E[Y_t^2] = \int_{f : f_0 ≤ |f| < f_0 + ∆f} S_X(f) \, df.]

process {Yt}, we have from (5.24) that the output power is

R_Y(0) = \int S_Y(f) \, df = \int |H(f)|^2 S_X(f) \, df = \int_F S_X(f) \, df .   (5.25)

Thus to find the average power contained in any frequency band we integrate the power spectral density over the frequency band. Because the average power must be nonnegative for any choice of f0 and ∆f, it follows that any power spectral density must be nonnegative; i.e.,

S_X(f) ≥ 0 ,   all f .   (5.26)

To elaborate further, suppose that this is not true; i.e., suppose that SX(f) is negative for some range of frequencies. If we put {Xt} through a filter that passes only those frequencies, the filter output power would have to be negative — clearly an impossibility.

From the foregoing considerations it can be deduced that the name power spectral density derives from the fact that SX(f) is a nonnegative function that is integrated to get power; that is, a “spectral” (meaning frequency content) density of power. Keep in mind the analogy to evaluating probability by integrating a probability density.


5.4 Linearly Filtered Uncorrelated Processes

If the input process {Xn} to a discrete time linear filter with δ response {hk} is a weakly stationary uncorrelated process with mean m and variance σ^2 (for example, if it is iid), then K_X(k) = σ^2 δ_k and R_X(k) = σ^2 δ_k + m^2. In this case the power spectral density is easily found to be

S_X(f) = \sigma^2 \sum_k \delta_k e^{-j2\pi fk} + m^2 \delta(f) = \sigma^2 + m^2 \delta(f) ,   all f ,

since the only nonzero term in the sum is the k = 0 term. The presence of the Dirac delta is due to the nonzero mean. When the mean is zero, this simplifies to

S_X(f) = \sigma^2 ,   all f .   (5.27)

Because the power spectral density is flat in this case, in analogy to the flat electromagnetic spectrum of white light, such a process (a discrete time, weakly stationary, zero mean, uncorrelated process) is said to be white or white noise. The inverse Fourier transform of the white noise spectral density is found from (5.23) (or simply by uniqueness) to be RX(k) = σ2δk. Thus a discrete time random process is white if and only if it is weakly stationary, zero mean, and uncorrelated.
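A simulation sketch of the fact just stated, using an assumed iid Gaussian sequence: the sample autocorrelation of a white sequence is approximately σ^2 at lag 0 and approximately 0 at other lags:

```python
import random

# Hedged sketch: an iid zero-mean sequence is white, so its sample
# autocorrelation is roughly sigma^2 at lag 0 and roughly 0 elsewhere.
# The Gaussian marginal, seed, sigma, and length are illustrative choices.
random.seed(1)
N = 200_000
sigma = 2.0
x = [random.gauss(0.0, sigma) for _ in range(N)]

def sample_autocorr(x, k):
    """Estimate R_X(k) = E[X_n X_{n+k}] by a time average."""
    return sum(x[n] * x[n + k] for n in range(len(x) - k)) / (len(x) - k)

print(sample_autocorr(x, 0))   # near sigma^2 = 4
print(sample_autocorr(x, 5))   # near 0
```

The estimates fluctuate on the order of sigma^2/sqrt(N), so longer records give flatter estimated spectra.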

For the two-sided case we have from (5.12) that the output covariance is

K_Y(k) = \sigma^2 \sum_{n=0}^{\infty} h_n h_{n-k} = \sigma^2 \sum_{n=k}^{\infty} h_n h_{n-k} ;   T = Z ,   (5.28)

 

where the lower limit of the sum follows from the causality of the filter. If we assume for simplicity that m = 0, the power spectral density in this case reduces to

S_Y(f) = \sigma^2 |H(f)|^2 .   (5.29)

For a one-sided process, (5.10) yields

K_Y(k, j) = \sigma^2 \sum_{n=0}^{k} h_n h_{n-(k-j)} ;   T = Z+ .   (5.30)

Note that if k > j, then the sum can be taken over the limits n = k − j to k since causality of the filter implies that the first few terms are 0. If k < j, then all of the terms in the sum may be needed. The covariance for the one-sided case appears to be asymmetric, but recalling that hl is 0 for


negative l, we can write the terms of the sum of (5.30) in descending order to obtain

\sigma^2 ( h_k h_j + h_{k-1} h_{j-1} + \cdots + h_0 h_{j-k} )

if j ≥ k and

\sigma^2 ( h_k h_j + h_{k-1} h_{j-1} + \cdots + h_{k-j} h_0 )

if j ≤ k. By defining the function min(k, j) to be the smaller of k and j, we can rewrite (5.30) in two symmetric forms:

K_Y(k, j) = \sigma^2 \sum_{n=0}^{\min(k,j)} h_{k-n} h_{j-n} ;   T = Z+   (5.31)

 

 

 

 

and

K_Y(k, j) = \sigma^2 \sum_{n=0}^{\min(k,j)} h_n h_{n+|k-j|} .   (5.32)

The one-sided process is not weakly stationary because of the distinct presence of k and j in the sum, so the power spectral density is not defined.
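The two symmetric forms can be checked against each other numerically, and the lack of weak stationarity made concrete; the response h_n = (1/2)^n and σ^2 = 1 here are illustrative assumptions:

```python
# Hedged sketch comparing the two symmetric forms of the one-sided output
# covariance, with sigma^2 = 1 and an assumed causal response h_n = (1/2)^n.
h = lambda n: 0.5 ** n if n >= 0 else 0.0

def K_Y_first(k, j):
    """First form: sigma^2 sum_{n=0}^{min(k,j)} h_{k-n} h_{j-n}."""
    return sum(h(k - n) * h(j - n) for n in range(min(k, j) + 1))

def K_Y_second(k, j):
    """Second form: sigma^2 sum_{n=0}^{min(k,j)} h_n h_{n+|k-j|}."""
    return sum(h(n) * h(n + abs(k - j)) for n in range(min(k, j) + 1))

# The two forms agree, yet K_Y depends on k and j separately rather than
# only on k - j, so the one-sided output is not weakly stationary.
print(K_Y_first(4, 6), K_Y_second(4, 6))
print(K_Y_first(4, 6), K_Y_first(14, 16))   # same lag k - j, different values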

In the two-sided case, the expression (5.28) for the output covariance function is the convolution of the unit pulse response with its reflection h−k. Such a convolution between a waveform or sequence and its own reflection is also called a sample autocorrelation.

We next consider specific examples of this computation. These examples point out how two processes, one one-sided and the other two-sided, can be apparently similar and yet have quite different properties.

[5.1] Suppose that an uncorrelated discrete time two-sided random process {Xn} with mean m and variance σ^2 is put into a linear filter with causal pulse response h_k = r^k, k ≥ 0, with |r| < 1. Let {Yn} denote the output process, i.e.,

Y_n = \sum_{k=0}^{\infty} r^k X_{n-k} .   (5.33)

Find the output mean and covariance.

From the geometric series summation formula,

\sum_{k=0}^{\infty} |r|^k = \frac{1}{1 - |r|} ,


and hence the filter is stable. From (5.4), (5.5), and (5.6)

EY_n = m \sum_{k=0}^{\infty} r^k = \frac{m}{1 - r} ;   n ∈ Z .

From (5.28), the output covariance for nonnegative k is

K_Y(k) = \sigma^2 \sum_{n=k}^{\infty} r^n r^{n-k} = \sigma^2 r^{-k} \sum_{n=k}^{\infty} (r^2)^n = \sigma^2 \frac{r^k}{1 - r^2} ,

using the geometric series formula. Repeating the development for negative k (or appealing to symmetry) we find in general the covariance function is

K_Y(k) = \sigma^2 \frac{r^{|k|}}{1 - r^2} ;   k ∈ Z .

Observe in particular that the output variance is

\sigma_Y^2 = K_Y(0) = \frac{\sigma^2}{1 - r^2} .

As |r| → 1 the output variance grows without bound. However, as long as |r| < 1, the variance is defined and the process is clearly weakly stationary.
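The closed form just derived can be checked against a truncated version of the two-sided sum; the values of σ^2 and r below are arbitrary:

```python
# Hedged check of example [5.1]: with white input of variance sigma^2 and
# h_k = r^k, the covariance is K_Y(k) = sigma^2 r^{|k|} / (1 - r^2).
# sigma2 and r are arbitrary illustrative values; the sum is truncated.
sigma2, r = 3.0, 0.8

def K_Y_sum(k, terms=2000):
    """Truncated direct sum: sigma^2 sum_{n >= max(0, k)} r^n r^{n-k}."""
    lo = max(0, k)
    return sigma2 * sum(r ** n * r ** (n - k) for n in range(lo, lo + terms))

def K_Y_closed(k):
    return sigma2 * r ** abs(k) / (1 - r * r)

print(K_Y_sum(2), K_Y_closed(2))
print(K_Y_sum(-2), K_Y_closed(-2))
```

Both positive and negative lags land on the same closed form, reflecting the evenness of the covariance.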

The previous example has an alternative construction that demonstrates how two models that appear quite different can lead to the same thing. From (5.33) we have

 

Y_n - r Y_{n-1} = \sum_{k=0}^{\infty} r^k X_{n-k} - r \sum_{k=0}^{\infty} r^k X_{n-1-k}
              = X_n + \sum_{k=1}^{\infty} r^k X_{n-k} - r \sum_{k=0}^{\infty} r^k X_{n-1-k}
              = X_n ,

 

since the two sums are equal. This yields a di erence equation relating the two processes, expressing the output process Yn in a recursive form:

Y_n = X_n + r Y_{n-1} .   (5.34)
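The equivalence of the recursive form and the convolution form (5.33) can be checked numerically; this sketch uses a simulated one-sided Gaussian input (an assumption for illustration), with the initial condition Y_{-1} = 0:

```python
import random

# Hedged sketch verifying that the recursion Y_n = X_n + r Y_{n-1}
# reproduces the convolution Y_n = sum_k r^k X_{n-k}. The input here is a
# simulated one-sided sequence starting at n = 0, so the convolution is
# the truncated (one-sided) version with Y_{-1} = 0.
random.seed(7)
r = 0.5
x = [random.gauss(0.0, 1.0) for _ in range(50)]

# Convolution form: Y_n = sum_{k=0}^{n} r^k X_{n-k}.
y_conv = [sum(r ** k * x[n - k] for k in range(n + 1)) for n in range(len(x))]

# Recursive form.
y_rec, prev = [], 0.0
for xn in x:
    prev = xn + r * prev
    y_rec.append(prev)

print(max(abs(a - b) for a, b in zip(y_conv, y_rec)))   # ~ 0
```

The recursion computes each output in constant time per sample, while the convolution form costs O(n) per sample, which is the practical appeal of the recursive model.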
