
11.13 Noise


FIGURE 11.36. (a) The solid line shows the function yj that was calculated from a one-dimensional random walk with a Gaussian-distributed step length. The dashed line shows the function calculated from the Fourier coefficients of yj based on the first half of the time interval. It does not fit the second half of the function. This is characteristic of random functions.

(b) The power spectrum calculated from the first half of yj. The zero-frequency component has been suppressed because it depends on the starting value of y.

A very important property of noise can be seen from the data shown in Fig. 11.36(a). The data consist of 460 discrete values that appear to have several similar peaks. A discrete Fourier transform of the first 230 values gives fairly large values for the first few coefficients ak and bk. Yet these values of ak and bk fail to describe subsequent values of yj. The reason is that the yj are actually random. In this case they are the net displacement after j steps in a random walk in which each step length is Gaussian distributed with standard deviation σ = 5. The Fourier transform of a random function does not exist. We can apply the recipe to the data and calculate the coefficients. But if we apply the same recipe to some other set of data points from the random function we get different values of the coefficients, although the sum of their squares, (ak² + bk²)^{1/2}, would be nearly the same. The sum of the squares of the coefficients is plotted in Fig. 11.36(b). It is the phases that change randomly, while the amount of energy at a particular frequency remains constant or fluctuates slightly about some average value.
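The behavior in Fig. 11.36 is easy to reproduce numerically. The short sketch below is not from the text; the record length (460 points) and step size (σ = 5) follow the description above, and everything else (the random seed, the use of NumPy's FFT routines) is an arbitrary choice. It fits the first half of a Gaussian random walk exactly with its discrete Fourier coefficients and shows that the same coefficients say nothing about the second half.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 460, 5.0
y = np.cumsum(rng.normal(0.0, sigma, N))      # net displacement after j steps

coeffs = np.fft.rfft(y[:N // 2])              # Fourier coefficients of the first 230 points
rebuilt = np.fft.irfft(coeffs, n=N // 2)      # reproduces the first half exactly
extended = np.tile(rebuilt, 2)                # the same periodic sum over the full record

print(np.max(np.abs(extended[:N // 2] - y[:N // 2])))   # ~1e-13: perfect fit to the first half
print(np.max(np.abs(extended[N // 2:] - y[N // 2:])))   # large: no predictive power

Phi_k = np.abs(coeffs) ** 2                   # power spectrum of the first half
print(Phi_k[1:6])                             # the zero-frequency term Phi_k[0] is ignored
```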

FIGURE 11.37. Some possible autocorrelation functions of noise.

Noise is not periodic, but neither is it a pulse. It has finite power, but it will have infinite energy if the noise goes on "forever." To describe noise we must use averages, calculated over a time interval that is "long enough" so that the average does not change. Suppose that we are measuring the electrical potential between a pair of electrodes on the scalp. Assume that there is no obvious periodicity, and we think it is noise. If we measure the potential for only a few milliseconds, we will get one average value. If we measure for the same length of time a few minutes later, we may get a different average. But if we average for two or three minutes, then a repetition gives almost the same average.

In general, random signals may vary with time in such a way that this average changes. (If we repeat the measurements on the scalp in a few hours, the averages may be different.) We will assume that properties such as the mean and standard deviation and power spectrum do not change with time, so that if we average over a "long enough" interval and repeat the average at a later time, we get the same result. Processes that generate data with these properties are called stationary. We limit our discussion to stationary random processes.

The correlation functions are not particularly useful for well-defined periodic signals, but they are very useful to describe noise or a signal that is contaminated by noise. (In fact, they allow us to detect a periodic signal that is completely hidden by noise. The technique is described in the next section.)

Space limitations require us to state some properties of the autocorrelation function of noise without proof, though the results are plausible. Many discussions of noise are available. An excellent one with a biological focus is by DeFelice (1981).

The autocorrelation function is given by Eq. 11.45:

φ11(τ) = ⟨y1(t) y1(t + τ)⟩ = lim_{T→∞} (1/2T) ∫_{−T}^{T} y1(t) y1(t + τ) dt.

The properties of the autocorrelation function depend on the details of the noise. Some possible shapes for the autocorrelation function are shown in Fig. 11.37.

The following properties of the autocorrelation function can be proved:

1. The autocorrelation function is an even function of τ. This follows from the definition.

2. The autocorrelation function for τ = 0, φ11(0), measures the average power in the signal. This also follows from the definition.

3. For a random signal with no constant or periodic components, the autocorrelation function goes to zero as τ → ∞. This is plausible, since for large shifts, if the signal is completely random, there is no correlation.

4. The autocorrelation function has its peak value at τ = 0. This is also plausible, since for any shift of a random signal there will be some loss of correlation.
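These properties are easy to check numerically for a record of white noise. The sketch below is an illustration only (the estimator and the noise parameters are choices, not the book's); it estimates φ(τ) at a few shifts and prints the average power at τ = 0 and the near-zero values at nonzero shifts.

```python
import numpy as np

def phi(y, k):
    """Estimate the autocorrelation <y(t) y(t + k)> of a zero-mean record at integer shift k."""
    return float(np.mean(y[:len(y) - k] * y[k:])) if k else float(np.mean(y * y))

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 2.0, 200_000)     # purely random signal, sigma = 2

print(phi(noise, 0))                                   # ~4.0 = sigma^2: the average power (property 2)
print([round(phi(noise, k), 3) for k in (1, 5, 20)])   # ~0 for any nonzero shift (properties 3 and 4)
# Property 1 (evenness) holds by construction: a shift of -k pairs up the same products.
```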

11.14 Correlation Functions and Noisy Signals

11.14.1 Detecting Signals in Noise

The autocorrelation function is useful for detecting a periodic signal in the presence of noise. We assume that the system that measures these is linear: the response to two simultaneous signals is the sum of the responses to each individually. Section 11.18 will consider what happens when the response is non-linear. Suppose that the periodic signal is s(t), the random noise is n(t), and the average of both is zero. The combination of signal and noise is

y(t) = s(t) + n(t).

(11.73)

The autocorrelation of the combination is

φyy(τ) = ⟨[s(t) + n(t)] [s(t + τ) + n(t + τ)]⟩
= ⟨s(t)s(t + τ)⟩ + ⟨s(t)n(t + τ)⟩ + ⟨n(t)s(t + τ)⟩ + ⟨n(t)n(t + τ)⟩.

Each term in the average can be identified as a correlation function:

φyy (τ ) = φss(τ ) + φsn(τ ) + φns(τ ) + φnn(τ ).

Since the noise is random, the cross correlations φns and φsn should be zero if the averages were taken over a sufficiently long time. Therefore,

φyy (τ ) = φss(τ ) + φnn(τ ).

(11.74)

The autocorrelation of a periodic signal is periodic in τ , while the autocorrelation of the noise approaches zero if τ is long enough.

If we suspect that a periodic signal is masked by noise, we can calculate the autocorrelation function. If the autocorrelation function shows periodicity that persists for long shift times τ , a periodic signal is present. The period of the correlation function is the same as that of the signal. Acquisition of the data and calculation of the correlation function are done with digital techniques. Press et al. (1992) have an excellent discussion of the techniques and pitfalls.
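A small numerical illustration of this idea, with made-up numbers (a 5 Hz sinusoid of amplitude 0.5 buried in Gaussian noise four times larger; none of these values come from the text): the autocorrelation at shifts beyond a sample or two is dominated by φss, which oscillates with the 0.2 s period of the hidden signal.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.001                                   # sampling interval, s
t = np.arange(0, 100.0, dt)
y = 0.5 * np.sin(2 * np.pi * 5.0 * t) + rng.normal(0.0, 2.0, t.size)

def phi(y, k):
    return float(np.mean(y[:y.size - k] * y[k:])) if k else float(np.mean(y * y))

for tau in (0.0, 0.1, 0.2, 0.3, 0.4):        # shift in seconds
    print(f"tau = {tau:.1f} s   phi = {phi(y, int(round(tau / dt))):+.3f}")
# phi(0) ~ 4.1 (mostly noise power); at larger shifts phi alternates between about
# -0.12 and +0.12 with a 0.2 s period, revealing the buried 5 Hz signal.
```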

FIGURE 11.38. An example of signal averaging. An evoked response is recorded along with the EEG from a scalp electrode. As the number of repetitions N is increased, the EEG background decreases and the evoked response stands out. Copyright © 2000 from L. T. Mainardi, A. M. Bianchi, and S. Cerutti (2000). Digital biomedical signal acquisition and processing, in J. D. Bronzino, ed. Biomedical Engineering Handbook. 2nd. ed. Boca Raton, FL, CRC Press. Vol. 1, pp. 53-1–53-25. Reproduced by permission of Routledge/Taylor & Francis LLC.

11.14.2 Signal Averaging

If the period of a signal is known to be T, perhaps from the autocorrelation function or more likely because one is looking for the response evoked by a periodic stimulus, it is possible to take consecutive segments of the combined signal plus noise of length T, place them one on top of another, and average them. One can also do this for the response evoked by a stimulus. The signal will be the same in each segment, while the noise will be uncorrelated. After N sampling periods, the noise is reduced by 1/√N.

Examples of this are the visual or auditory evoked response. The signal in the electroencephalogram (EEG) or magnetoencephalogram is measured in response to a flash of light or an audible click. (In other experiments the subject may perform a repetitive task.) The stimulus is repeated over and over while the signal plus noise is recorded and averaged. The average reproduces the shape

of the signal. Figure 11.38 shows an example of signal averaging for an evoked response in the EEG for increasing values of N .
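Signal averaging is equally easy to demonstrate. The sketch below is illustrative only; the Gaussian-shaped "evoked response," the noise level, and the segment length are invented for the example. The residual noise in the average falls roughly as 1/√N.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200                                                  # samples per stimulus period
evoked = np.exp(-((np.arange(T) - 60.0) ** 2) / 200.0)   # the repeatable response

def averaged(n_repeats, noise_sigma=3.0):
    """Average n_repeats segments of evoked response plus independent noise."""
    segments = evoked + rng.normal(0.0, noise_sigma, (n_repeats, T))
    return segments.mean(axis=0)

for N in (1, 16, 256):
    residual = averaged(N) - evoked
    print(f"N = {N:4d}   rms residual noise = {residual.std():.3f}")
# roughly 3.0, 0.75, and 0.19: the leftover noise amplitude shrinks as 1/sqrt(N)
```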

The signal-averaging procedure can also be described in terms of a cross correlation with a series of δ functions at the stimulus times. Suppose a local signal l(t) is produced in synchrony with the stimulus. The cross correlation of l(t) with y(t) is

φyl(τ) = ⟨[s(t) + n(t)] l(t + τ)⟩ = φsl + φnl.

Whatever the local signal is, its cross correlation with the noise approaches zero for long averaging times, so

φyl(τ ) = φsl(τ ).

(11.75)

If the local signal is a series of narrow spikes approximated by δ functions, then

l(t) = δ(t) + δ(t − T ) + δ(t − 2T ) + · · · .

Since both s(t) and l(t) are periodic with the same period, the average can be taken over a single period. The integral then contains one δ function:

φyl(−τ) = φsl(−τ) = (1/T) ∫_{0}^{T} s(t) δ(t − τ) dt = s(τ)/T.


[Figure 11.39 (caption below). (a) Periodic signal: y(t) → Fourier series → ak, bk → Φk = (1/2)(ak² + bk²), the discrete power spectrum; or y(t) → autocorrelation → φ11(τ) → Fourier series → Φk. (b) Pulse signal: y(t) → Fourier transform → C(ω), S(ω) → Φ′ = C² + S², the continuous energy spectrum; or y(t) → autocorrelation → φ11(τ) → Fourier transform → Φ′(ω). (c) Random signal: y(t) → autocorrelation → φ11(τ) → Fourier transform → Φ(ω), the continuous power spectrum.]

11.14.3 Power Spectral Density

We have already seen that the Fourier transform of a random signal does not exist. Because the phases of a random signal are continually changing, we were unable to predict the future behavior of a time series in Fig. 11.36. If the signal is stationary, averages, including the average power, do not change with time and have meaning. The autocorrelation function of a random signal does exist, and so does the Fourier transform of the autocorrelation. If Φ(ω) is the Fourier transform of the autocorrelation function of a random signal, then

lim_{T→∞} (1/2T) ∫_{−T}^{T} y²(t) dt = (1/2π) ∫_{−∞}^{∞} Φ(ω) dω,    (11.76)

and we can think of Φ as giving the power between frequencies f and f + df . This is called the Wiener theorem for random signals. The quantity Φ is often called the power spectral density or PSD. Figure 11.39 summarizes how the power or energy spectrum can be obtained for a periodic signal, a pulse, and a random signal.

In the digital realm there are several ways to calculate the power spectral density.7 The Blackman–Tukey method makes a digital estimate of the correlation function and takes its discrete Fourier transform, as described in Fig. 11.39(c). The periodogram uses the discrete Fourier transform directly.

7See Press et al. (1992), Cohen (2000), or Mainardi et al. (2000).

FIGURE 11.39. The relationships between the power spectrum or energy spectrum and (a) a periodic signal, (b) a pulse, (c) a random signal. The Fourier transform and series are bidirectional; the other processes are not.

Though the Fourier transform of a random signal does not exist because of the randomly changing phases, the sum of the squares of the coefficients is stable. In fact, we plotted Φk calculated from the discrete Fourier transform in Fig. 11.36(b). Figure 11.40 shows both ways of calculating Φ(f) for a surface electromyogram, the signal from a muscle measured on the surface of the skin. Slight differences can be seen, but they are not significant.
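The two routes can be compared with a short calculation. This sketch is not the book's code; the test record (a 50 Hz tone plus unit-variance noise), the 200-lag window, and the Hanning taper are arbitrary choices made for the example. Both estimates put the dominant power at the tone frequency.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000.0                                    # sampling rate, Hz
t = np.arange(0, 4.0, 1 / fs)
y = np.sin(2 * np.pi * 50.0 * t) + rng.normal(0.0, 1.0, t.size)
N = y.size

# Periodogram: squared magnitude of the discrete Fourier transform of the data.
psd_per = np.abs(np.fft.rfft(y)) ** 2 / (fs * N)
f_per = np.fft.rfftfreq(N, 1 / fs)

# Blackman-Tukey: estimate the autocorrelation, taper it, then transform it.
max_lag = 200
acf = np.array([np.mean(y[:N - k] * y[k:]) for k in range(max_lag)])
acf_sym = np.concatenate([acf[::-1], acf[1:]])          # phi(-tau) = phi(tau)
psd_bt = np.abs(np.fft.rfft(acf_sym * np.hanning(acf_sym.size))) / fs
f_bt = np.fft.rfftfreq(acf_sym.size, 1 / fs)

print(f_per[np.argmax(psd_per)], f_bt[np.argmax(psd_bt)])   # both peak near 50 Hz
```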

Figure 11.41 shows the power spectrum of an EEG signal and also the effect of aliasing. The original signal has no frequency components above 40 Hz. Sampling was done at 80 Hz. A 50-Hz power frequency signal was added, and the Fourier transform shows a spurious response at 30 Hz. The second panel also shows the mirror-image power spectrum from 40 to 80 Hz that should be thought of as occurring at negative frequencies (the factor of 2 again).
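Aliasing is easy to reproduce: sample a 50 Hz sine at 80 Hz and its power shows up at 80 − 50 = 30 Hz. A minimal sketch (the two-second record length is arbitrary):

```python
import numpy as np

fs = 80.0                                  # sampling rate, Hz; Nyquist frequency 40 Hz
t = np.arange(0, 2.0, 1 / fs)
y = np.sin(2 * np.pi * 50.0 * t)           # 50 Hz interference, above the Nyquist limit

freqs = np.fft.rfftfreq(y.size, 1 / fs)
power = np.abs(np.fft.rfft(y)) ** 2
print(freqs[np.argmax(power)])             # 30.0: the 50 Hz line appears as a spurious 30 Hz peak
```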

11.14.4 Units

It is worth pausing to review the units of the various functions we have introduced. They become confusing because we have three different cases: a periodic signal that is infinite in extent, a pulse signal that is of finite duration, and a random-noise signal that is also infinite in extent but not periodic. For both signals that are infinite in extent we must use the "power," and for the pulse we must use "energy."


TABLE 11.5. Units used in the various functions in this chapter, assuming that y is measured in [unit].

  Type of function     Signal     Expansion coefficients   Correlation functions   Power or energy
  Discrete periodic    y [unit]   ak, bk [unit]            φ [unit²]               Power [unit²]; Φk [unit²]
  Pulse                y [unit]   C, S [unit s]            φ [unit² s]             Energy [unit² s]; Φ′(ω) [unit² s²]
  Random               y [unit]                            φ [unit²]               Power [unit²]; Φ(ω) [unit² s]

FIGURE 11.40. The power spectrum from a surface electromyogram calculated two different ways. The top panel shows the Blackman–Tukey method, which is a fast Fourier transform of a digital estimate of the autocorrelation function. The lower panel is the sum of the squares of the coefficients in a direct fast Fourier transform of the discrete data. Copyright © 2000 from A. Cohen (2000). Biomedical signals: origin and dynamic characteristics; frequency-domain analysis, in J. D. Bronzino, ed. Biomedical Engineering Handbook, 2nd. ed. Boca Raton, FL, CRC Press. Vol. 1, pp. 52-1–52-24. Reproduced by permission of Routledge/Taylor & Francis Group, LLC.

Often in signal analysis the units of "power" and "energy" may not be watts or joules. If the signal is a voltage, then the power dissipated in resistance R is v²/R in watts. Our "power" defined from the equations above would be just v².

Suppose that the signal y is measured in "units." Then the "power" is in (units)² and the "energy" for a pulse is in (units)² s. The correlation functions for the infinite signals are in (units)² while those for pulses are in (units)² s. Table 11.5 summarizes the situation.

FIGURE 11.41. The power spectrum of an electroencephalogram signal showing the problem with aliasing, and also the presence of negative frequencies appearing as positive frequencies above the Nyquist frequency. Copyright © 2000 from L. T. Mainardi, A. M. Bianchi, and S. Cerutti. Digital biomedical signal acquisition and processing, in J. D. Bronzino, ed. Biomedical Engineering Handbook. 2nd. ed. Boca Raton, FL, CRC Press. Vol. 1, pp. 53-1–53-25. Reproduced by permission of Routledge/Taylor & Francis Group, LLC.


11.15 Frequency Response of a Linear System

Chapter 10 discussed feedback in a linear system in terms of the solution of a differential equation that described the response of the system as a function of time. The simplest system treated there was described by Eq. 10.20:

τ1 dx/dt + x = ap(t) + G1y(t).    (11.77)

Function p(t) is the input signal. This equation was combined with Eq. 10.21 to obtain

τ1 dx/dt + (1 − G1G2)x = ap(t).    (11.78)

It is often useful to characterize the behavior of a system by its response to sine waves of different frequencies instead of by its time response. The most familiar example is the audio amplifier: the output signal x(t) is some function of an input signal p(t) that is seldom a pure sinusoid. An equation analogous to Eq. 11.78 relates x and p. The amplifier is usually described as having "a frequency response of 0.5 dB at 10 Hz and 30 kHz." It is easy to feed a sinusoidal signal of different frequencies into the amplifier and measure the amplitude ratio of the

output sine wave to the input sine wave.8 To describe the amplifier completely, it is also necessary to measure the phase delay or the time delay at each frequency. The combination of amplitude and phase response is called the transfer function of the amplifier.

In principle, once the properties of a linear system are known, either in terms of a differential equation or the transfer function, its response to any input can be calculated. In the time domain, one solves the differential equation with input p(t) on the right-hand side. In the frequency domain, one computes the Fourier transform of the input, makes the appropriate changes in amplitude and phase at every frequency according to the transfer function, and takes the inverse Fourier transform of the result. The inverse transform gives the output response as a function of time. Sometimes the differential equation may be impossible to solve analytically or the inverse Fourier transform cannot be obtained, and numerical solutions are all that can be obtained.

The frequency-response technique may be particularly useful if the system has several stages (a microphone, an amplifier, one or more loudspeakers); one can multiply the amplitudes and add the phases of each stage.

If the differential equation is known, the frequency response can be calculated. Conversely, if the frequency and phase responses are known, the differential equation can be deduced. We give an example of the former approach in this section. The latter technique requires more mathematics than we have developed.

11.15.1 Example of Calculating the Frequency Response

As an example of the frequency response method of describing the system, consider Eq. 11.78. With G2 = 0, the results apply to the case without feedback, Eq. 11.77. Let p(t) = cos ωt and a = 1. We want a solution of the form

x(t) = G(ω) cos(ωt − θ),

(11.79)

where G(ω) is the overall gain or amplitude ratio, and θ(ω) the phase shift, at frequency ω. We can show by substitution that Eq. 11.79 is a solution of Eq. 11.78 if

G(ω) = [1/(1 − G1G2)] [1/(1 + ω²τ1²/(1 − G1G2)²)]^{1/2},
tan θ = ωτ1/(1 − G1G2).    (11.80)

The behavior of the gain is plotted in Fig. 11.42, both without feedback (1 − G1G2 = 1) and with feedback (1 − G1G2 = 3). At low frequencies the gain is constant. It falls at high frequencies (ωτ1 ≫ 1) as ω⁻¹.


FIGURE 11.42. Plot of G(ω) for a system described by Eq. 11.80. Two cases are shown: without feedback (1 − G1G2 = 1) and with feedback (1 − G1G2 = 3). The dots mark the half-power frequencies (see text).

When ω = 1/τ1 (without feedback) or ω = 3/τ1 (with feedback), the gain is 1/√2 times its value at zero frequency. This frequency is called the half-power frequency because the power is proportional to the square of the signal, and its value at the half-power frequency is 1/2 times its value at zero frequency.

Negative feedback reduces the gain and also raises the half-power frequency from 1/τ1 to (1 − G1G2)/τ1. The time constant is reduced by the feedback from τ1 to τ1/(1 − G1G2). Recall Eq. 10.23.
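A quick numerical check of Eq. 11.80 (a sketch, not the book's code; τ1 = 1 and the feedback value 1 − G1G2 = 3 are just the two cases plotted in Fig. 11.42): at the predicted half-power frequency the gain has dropped to 1/√2 of its zero-frequency value.

```python
import numpy as np

def gain(omega, tau1, one_minus_G1G2):
    """Amplitude ratio of Eq. 11.80 for the first-order system of Eq. 11.78."""
    return (1.0 / one_minus_G1G2) / np.sqrt(1.0 + (omega * tau1 / one_minus_G1G2) ** 2)

tau1 = 1.0
for d in (1.0, 3.0):                         # without feedback, and with 1 - G1G2 = 3
    w_half = d / tau1                        # predicted half-power frequency
    print(d, gain(w_half, tau1, d) / gain(0.0, tau1, d))   # 0.7071... in both cases
```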

11.15.2 The Decibel

The gain is often expressed in decibels9 (dB):

gain(dB) = 20 log10 G(ω).

(11.81)

A gain ratio of unity is equivalent to 0 dB; a gain of 1,000 is 20 log10(1,000) = 60 dB. One advantage to expressing gain in decibels is that the gains in dB for several stages add. If the first process has a gain of 2 (6 dB) and the second has a gain of 100 (40 dB), the overall gain is 200 (46 dB). For the amplifier whose gain has fallen by 0.5 dB at 10 Hz and 30 kHz, the ratio G(ω)/Gmax is given by solving

−0.5 = 20 log10(G/Gmax),
G/Gmax = 10^{−0.025} = 0.944.

The gain has fallen to 94.4% of its maximum value at 10 Hz and 30 kHz.

8The technique works only for a linear system. If the system is not linear, the output will not be sinusoidal.

9The bel is the logarithm to the base 10 of the power ratio. The decibel is one-tenth as large as the bel. Since the power ratio is the square of the voltage ratio or gain, the factor in Eq. 11.81 is 20.


If the maximum gain were 1,000 (60 dB), then the gain would have fallen to 944 (59.5 dB) at 10 Hz and 30 kHz.

The fall in gain is called the roll-off, in this case the high-frequency roll-off. At high frequencies the gain is proportional to ω⁻¹, so it drops by a factor of 2 (6 dB) when the frequency doubles (1 octave). Therefore the gain has a high-frequency roll-off of 6 dB per octave. A roll-off of 6 dB per octave is characteristic of systems with a single time constant, as in Eq. 11.78.
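The decibel arithmetic in this subsection can be checked in a few lines. A sketch using the numbers quoted above:

```python
import math

def to_db(gain_ratio):
    return 20 * math.log10(gain_ratio)

print(to_db(2) + to_db(100), to_db(2 * 100))   # 46.02 and 46.02: gains in dB add
print(10 ** (-0.5 / 20))                       # 0.944: the gain ratio after a 0.5 dB drop
print(to_db(0.5))                              # -6.02: halving the gain is one octave of 6 dB/octave roll-off
```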

11.15.3 Example: Impulse Response

As an example we show that the response of the system to a δ function calculated in the time domain is consistent with the frequency response. Let the input be p(t) = δ(t). The Fourier transform of the input is

Cin(ω) = ∫_{−∞}^{∞} δ(t) cos ωt dt = 1,
Sin(ω) = ∫_{−∞}^{∞} δ(t) sin ωt dt = 0.

The δ function contains constant power at all frequencies. The sine coefficients are zero because a δ function at t = 0 is an even function. The gain and phase delay are applied to C(ω) to get the Fourier transform of the output signal. Although we started with a purely even function (only cosine terms) the phase shift means that the output contains both sine and cosine terms. To calculate the output, we write Eq. 11.79 as

x(t) = (1/2π) ∫_{−∞}^{∞} [G(ω) cos θ cos ωt + G(ω) sin θ sin ωt] dω,

from which

Cout(ω) = G(ω) cos θ,
Sout(ω) = G(ω) sin θ.

From Eq. 11.80 we get (letting G2 = 0 and doing a fair amount of algebra)

Cout(ω) = 1/(1 + ω²τ1²),
Sout(ω) = ωτ1/(1 + ω²τ1²).    (11.82)

It is easier to solve the differential equation, take the Fourier transform of the solution, and compare it to Eq. 11.82 than it is to find the inverse transform with the mathematical tools at our disposal. For G2 = 0 the equation to be solved is

τ1 dx/dt + x = δ(t).

For all positive t a steady-state solution is x(t) = 0. The solution of the homogeneous equation is x(t) = Ae^{−t/τ1}. The value A is obtained by integrating the equation from −ε to ε as ε → 0:

τ1 ∫_{−ε}^{ε} (dx/dt) dt + ∫_{−ε}^{ε} x dt = ∫_{−ε}^{ε} δ(t) dt.

The first term is τ1[x(ε) − x(−ε)] → τ1x(0⁺). The second term vanishes in the limit, since x is finite and the width of the interval goes to zero. From the definition of the δ function the right-hand side of the equation is 1. Therefore A = 1/τ1 and

x = 0,   t < 0,
x = (1/τ1) e^{−t/τ1},   t > 0.    (11.83)

The Fourier coefficients of this function were calculated in Eqs. 11.59. They are

C(ω) = 1/(1 + ω²τ1²),
S(ω) = ωτ1/(1 + ω²τ1²).

These agree with Eqs. 11.82. We have demonstrated that the response of this particular linear system to a δ function is the Fourier transform of the transfer function of the system.

Although the system is not linear, one can see the frequency response of a physiological system in Figure 11.43. Glucose was administered intravenously to two subjects in a sinusoidal fashion with a period of 144 min, as shown in the top panel. The middle panel shows the resulting insulin secretion rate in a normal subject. Insulin secretion adjusts rapidly to the changing glucose level, and the normalized spectral power density has a peak at a period of 144 min. (Note that the spectrum is plotted vs. period, not frequency.) The bottom panel shows the results for a subject with impaired glucose tolerance (diabetes). There are oscillations in the insulin secretion rate, but they are irregular and at a shorter period, as can be seen in the normalized spectral power density.

FIGURE 11.43. An example of the frequency response of a system. Glucose was administered intravenously to two subjects in a sinusoidal fashion with a period of 144 min, as shown in the top panel. The responses of the two subjects are discussed in the text. From K. S. Polonsky, J. Sturis, and G. I. Bell (1996). Non-insulin-dependent diabetes mellitus—A genetically programmed failure of the beta cell to compensate for insulin resistance. New Engl. J. Med. 334(12): 777–783. Modified from N. M. O'Meara et al. (1993). Lack of control by glucose of ultradian insulin secretory oscillations in impaired glucose tolerance and in non-insulin-dependent diabetes mellitus. J. Clin. Invest. 92: 262–271. Used by permission of the New England Journal of Medicine and the Journal of Clinical Investigation.
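The agreement can also be verified numerically. The sketch below (not from the text; τ1 = 0.5 s and the crude Riemann sum are arbitrary choices) integrates the impulse response of Eq. 11.83 against cos ωt and sin ωt and recovers the transfer-function components of Eq. 11.82.

```python
import numpy as np

tau1, dt = 0.5, 1e-4
t = np.arange(0.0, 50 * tau1, dt)
x = np.exp(-t / tau1) / tau1                    # impulse response, Eq. 11.83

for omega in (0.5, 2.0, 10.0):
    C = np.sum(x * np.cos(omega * t)) * dt      # cosine transform of the response
    S = np.sum(x * np.sin(omega * t)) * dt
    print(f"{C:.4f}", f"{1 / (1 + (omega * tau1) ** 2):.4f}")               # agree to ~1e-4 (Eq. 11.82)
    print(f"{S:.4f}", f"{omega * tau1 / (1 + (omega * tau1) ** 2):.4f}")
```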

11.16 The Frequency Spectrum of Noise

In Sec. 9.8 we introduced Johnson noise and shot noise. Both are inescapable. Johnson noise arises from the Brownian motion of charge carriers in a conductor; shot noise arises from fluctuations due to the discrete nature of the charge carriers.

11.16.1 Johnson Noise

When we introduced Johnson noise we said nothing about its frequency spectrum. We used the equipartition theorem to argue that since the energy on a capacitor depends on the square of the voltage, there would be fluctuations in a capacitor whose average voltage is zero given by (in the notation of this chapter)

(1/2) C ⟨v²⟩ = (1/2) kBT.    (11.84)

(In this section we will use T both for time and, when immediately following the Boltzmann constant, for temperature. We also have, briefly, C for capacitance as well as for the Fourier cosine coefficient. We will eliminate the use of C for capacitance as much as possible.)

If the capacitor is completely isolated the charge on its plates, and hence the voltage between them, cannot fluctuate. The equipartition theorem applies to the capacitor only when it is in thermal equilibrium with its surroundings. This thermal contact can be provided by a resistor R between the plates of the capacitor. It is actually the Brownian movement of the charge carriers in this resistor that causes the Johnson noise.


FIGURE 11.44. The circuit for analyzing the noise produced by a resistance R connected to capacitance C. The circuit assumes that the noise is generated in a voltage source e(t) in series with the resistance. The voltage across the capacitance is v.

In analyzing the noise in electric circuits, it is customary to imagine that the noise arises in an ideal voltage source: a "battery" that maintains the voltage across its terminals, fluctuating randomly with time, regardless of how much current flows through it. It is placed in series with the resistor. This is not a real source. It is a fictitious source that gives the correct results in circuit analysis. We call the voltage across this noise source e(t) and we want to learn about its properties.

Imagine that we place the noise source and its associated resistor across the plates of a capacitor, as shown in Fig. 11.44. We want to relate the voltage across the capacitor, v, to the voltage across the noise source, e. We know that e(t) = v(t) + Ri(t), and that i = Cdv/dt. (See the discussion surrounding Eqs. 6.36 and 6.37.) Therefore

e(t) = v(t) + RC dv/dt = τ1 dv/dt + v.    (11.85)

(By introducing τ1 = RC we eliminate the need to use C for capacitance until the very end of the argument. We use the subscript on τ1 to distinguish it from the argument of the correlation function.)

Even though the voltage is random, let us assume we can write it as a Fourier integral. Our final results depend only on the power spectrum and not on the phases. We write

v(t) = (1/2π) ∫_{−∞}^{∞} [C(ω) cos ωt + S(ω) sin ωt] dω.    (11.86)

Differentiating this gives an expression for dv/dt:

dv/dt = (1/2π) ∫_{−∞}^{∞} [−ωC(ω) sin ωt + ωS(ω) cos ωt] dω.    (11.87)

Combining these with Eq. 11.85 gives us the Fourier transform of e(t):

e(t) = (1/2π) ∫_{−∞}^{∞} {[C(ω) + ωτ1S(ω)] cos ωt + [S(ω) − ωτ1C(ω)] sin ωt} dω
     = (1/2π) ∫_{−∞}^{∞} [α(ω) cos ωt + β(ω) sin ωt] dω.


We now need to calculate ⟨v²(t)⟩ and ⟨e²(t)⟩. The calculation is exactly the same as what we did to derive Parseval's theorem, in Eqs. 11.63–11.66, except that we are dealing with random signals instead of pulses and we have to introduce

lim_{T→∞} (1/2T)

on each side of the equation. When we do this, we find

⟨v²(t)⟩ = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} [C²(ω) + S²(ω)]/2T dω = (1/2π) ∫_{−∞}^{∞} Φv(ω) dω,    (11.88)

⟨e²(t)⟩ = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} [α²(ω) + β²(ω)]/2T dω = (1/2π) ∫_{−∞}^{∞} Φe(ω) dω.

If we expand Φe, we find that

Φe(ω) = α²(ω) + β²(ω) = [C²(ω) + S²(ω)](1 + ω²τ1²) = Φv(ω)(1 + ω²τ1²).    (11.89)

FIGURE 11.45. The power spectrum of the noise source e and the voltage across the capacitor v. The left panel plots Φ/R vs f. The right panel plots vrms in each frequency interval. The parameters are described in the text.

Johnson noise was discovered experimentally by J. B. Johnson in 1926. The next year Nyquist explained its origin using thermodynamic arguments and showed that until one reaches frequencies high enough so that quantum-mechanical effects are important, Φe is a constant independent of frequency [Nyquist (1928)]. We will not reproduce his argument; rather we will assume that Φe is a constant and find the value of Φe for which the mean square voltage across the capacitor satisfies the equipartition theorem, Eq. 11.84.

The expression for Φv becomes

Φv(ω) = Φe/(1 + ω²τ1²),    (11.90)

and from the first of Eqs. 11.88,

⟨v²(t)⟩ = (1/2π) ∫_{−∞}^{∞} Φv(ω) dω = (1/2π) ∫_{−∞}^{∞} Φe/(1 + ω²τ1²) dω    (11.91)
        = (Φe/2πτ1) ∫_{−∞}^{∞} dx/(1 + x²) = (Φe/2πτ1) [tan⁻¹(∞) − tan⁻¹(−∞)] = Φe/2τ1.

Putting this expression in the equipartition statement, Eq. 11.84, and remembering that τ1 = RC, we obtain

(C/2)⟨v²(t)⟩ = (C/2)(Φe/2RC) = kBT/2,

Φe = 2RkBT.    (11.92)

The units of Φe are V² s or V² Hz⁻¹. This is for frequencies that extend from −∞ to ∞. If we were dealing with only positive frequencies, we would have

Φe = 4RkBT   (using positive frequencies only).    (11.93)

Either way, this says that the power spectrum for the fictitious source e(t) is constant so there is equal power at all frequencies (up to the limits imposed by quantum mechanical effects). For this reason, Johnson noise is called white noise, in analogy with white light that contains all frequencies. The voltage fluctuations across the capacitor have the power spectrum

Φv(ω) = 2RkBT/(1 + ω²τ1²),   −∞ < ω < ∞,
Φv(ω) = 4RkBT/(1 + ω²τ1²),   0 < ω < ∞.    (11.94)

Figure 11.45 shows the Johnson-noise power spectra and rms voltage spectra plotted vs frequency. These are based on T = 300 K, R = 10⁶ Ω, C = 10⁻⁹ F, and τ1 = RC = 10⁻³ s. The labels on the ordinates are worth discussion. On the left we have Φ/R, which from Eq. 11.94 is in joules, which is W s or W Hz⁻¹. The units for the graph on the right that are consistent with this are W^{1/2} s^{1/2} = W^{1/2} Hz^{−1/2} = V Ω^{−1/2} Hz^{−1/2}. The resistance has been included to make the units V Hz^{−1/2}. The 1/f² falloff at high frequencies is due to the frequency response of the RC circuit and is not characteristic of the noise.
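The numbers in Fig. 11.45 can be checked directly from Eqs. 11.84, 11.93, and 11.94. A sketch (not from the text) using the stated parameters T = 300 K, R = 10⁶ Ω, C = 10⁻⁹ F:

```python
import numpy as np

kB = 1.380649e-23                       # Boltzmann constant, J/K
T, R, C = 300.0, 1.0e6, 1.0e-9
tau1 = R * C                            # 1e-3 s

print(np.sqrt(kB * T / C))              # ~2.0e-6 V: total rms voltage on the capacitor, Eq. 11.84
print(np.sqrt(4 * R * kB * T))          # ~1.3e-7 V Hz^(-1/2): flat (white) source spectrum, Eq. 11.93

for f in (1.0, 1.0e3, 1.0e5):           # rms voltage spectrum across the capacitor, Eq. 11.94
    omega = 2 * np.pi * f
    print(f, np.sqrt(4 * R * kB * T / (1 + (omega * tau1) ** 2)))
```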

Figure 11.46 shows an example: the spectral density of the magnetic field from an article on the magnetoencephalogram. The units are femtotesla Hz^{−1/2} (1 femtotesla = 1 fT = 10⁻¹⁵ T).

FIGURE 11.46. Spectral density of various sources of the magnetic field, expressed in terms of the magnetic field in femtotesla (1 fT = 10⁻¹⁵ T). Reprinted with permission from M. Hämäläinen, R. Hari, R. J. Ilmoniemi, J. Knuutila, and O. V. Lounasmaa. Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys. 65(2): 413–497. 1993. Copyright 1993 by the American Physical Society.

We can determine the autocorrelation functions φee(τ) and φvv(τ). Equation 11.72 gave the Fourier transform of the autocorrelation function for a pulse. For a random signal the autocorrelation is very similar but involves the power instead of the energy:

φee(τ) = (1/2π) ∫_{−∞}^{∞} Φe(ω) cos ωτ dω,
φvv(τ) = (1/2π) ∫_{−∞}^{∞} Φv(ω) cos ωτ dω.    (11.95)

For the voltage source the autocorrelation function is

φee(τ) = (2RkBT/2π) ∫_{−∞}^{∞} cos ωτ dω.    (11.96)

To evaluate this, consider Eq. 11.65a, which shows the Fourier transform of the δ function. The integral there is over time. Interchange the time and angular frequency variables to write

∫_{−∞}^{∞} cos ωτ cos ωτ′ dω = 2πδ(τ − τ′).    (11.97)

Let τ′ = 0:

∫_{−∞}^{∞} cos ωτ dω = 2πδ(τ).    (11.98)

The final expression for the autocorrelation function of the noise source is

φee(τ) = 2RkBT δ(τ).    (11.99)

To find φvv(τ), consider the discussion surrounding Eqs. 11.69 and 11.70. There we discussed the Fourier transform pair (letting a = 1/τ1)

A²/(1 + ω²τ1²)   ←(Fourier transform)→   (A²/2τ1) e^{−|τ|/τ1},    (11.100)


from which we obtain the autocorrelation function for the voltage across the capacitor:

φvv(τ) = (RkBT/τ1) e^{−|τ|/τ1}.    (11.101)

Let us compare these two results. The autocorrelation of the noise source is a δ function. Any shift at all destroys the correlation. The noise equivalent voltage source and resistor, isolated from anything else, respond instantaneously to random noise changes, the correlation function is infinitely narrow, and all frequencies are present. When the source and resistor are connected to a capacitor, the voltage across the capacitor cannot change instantaneously. There is a high-frequency roll-off, and the voltage at one time is correlated with the voltage at surrounding times. As the time constant of the circuit becomes smaller, φvv(τ) becomes narrower and taller, approaching the δ function.

The power spectrum across the capacitor has the same form as the square of the magnitude of the gain (transfer function) of Eq. 11.80. This is the transfer function for an RC circuit, as can be seen by comparing Eq. 11.78 with Eq. 11.85. This is a special case of a general result, that linear systems can be analyzed by measuring how they respond to white noise.
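That statement can be illustrated with a short simulation (a sketch, not from the text: the sampling step, time constant, and segment averaging are arbitrary choices). White noise is passed through a first-order stage like Eq. 11.85, and the averaged output power spectrum reproduces the 1/(1 + ω²τ1²) shape of the transfer function.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, tau1 = 1e-4, 1e-2
nseg, lseg = 100, 4096
f = np.fft.rfftfreq(lseg, dt)

psd = np.zeros(f.size)
for _ in range(nseg):
    e = rng.normal(0.0, 1.0, lseg)                  # white-noise source e(t)
    v = np.zeros(lseg)
    for k in range(1, lseg):                        # tau1 dv/dt + v = e, simple Euler step
        v[k] = v[k - 1] + dt * (e[k - 1] - v[k - 1]) / tau1
    psd += np.abs(np.fft.rfft(v)) ** 2
psd /= nseg

def h2(freq):
    return 1.0 / (1.0 + (2 * np.pi * freq * tau1) ** 2)

for f0 in (2.5, 16.0, 160.0):                       # half-power frequency 1/(2 pi tau1) ~ 16 Hz
    i = np.argmin(np.abs(f - f0))
    print(f[i], psd[i] / psd[1], h2(f[i]) / h2(f[1]))
# the measured ratio follows the transfer-function shape to within ~10% statistical scatter
```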

11.16.2 Shot Noise

Chapter 9 also mentioned shot noise, which occurs because the charge carriers have a finite charge, so the number of them passing a given point in a circuit in a given time fluctuates about an average value. One can show that shot noise is also white noise.

11.16.3 1/f Noise

Johnson noise and shot noise are fundamental and independent of the details of the construction of the resistance. The former depends on the Brownian motion of the charge carriers, and the latter depends on the number of charge carriers required to transport a given amount of charge. They are irreducible lower limits on the noise (for a given resistance and temperature). If one measures the noise in a real resistor in a circuit, one finds additional or “excess” noise that can be reduced by changing the materials or construction of the resistor. This excess noise often has a 1/f frequency dependence. For white noise the power in every frequency interval is proportional to the width of the interval, so there is 10 times as much power in the frequency decade from 10 to 100 Hz as in the decade from 1 to 10 Hz. For 1/f noise, on the other hand, there is equal power in each frequency decade. This kind of noise is sometimes called “pink noise” in allusion to the fact that pink light has more power in the red (lower frequency) part of the spectrum than the rest.
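The "equal power per decade" statement is easy to verify by shaping white noise in the frequency domain. This sketch is illustrative only (record length, sampling rate, and the decades examined are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n, fs = 2 ** 20, 1000.0
f = np.fft.rfftfreq(n, 1 / fs)

white = np.fft.rfft(rng.normal(0.0, 1.0, n))
pink = white.copy()
pink[1:] /= np.sqrt(f[1:])                 # scale amplitudes by 1/sqrt(f) -> 1/f power spectrum
pink[0] = 0.0

def band_power(spectrum, f_lo, f_hi):
    band = (f >= f_lo) & (f < f_hi)
    return float(np.sum(np.abs(spectrum[band]) ** 2))

for lo, hi in [(0.1, 1.0), (1.0, 10.0), (10.0, 100.0)]:
    print(f"{lo}-{hi} Hz  white: {band_power(white, lo, hi):.3g}   pink: {band_power(pink, lo, hi):.3g}")
# white noise gains a factor of ~10 in power per decade; 1/f noise has nearly equal power per decade
```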

Noise with a 1/f spectrum had been discovered in many places: resistors, transistors, and the fluctuations


in the flow of sand in an hourglass, in traffic flow, in the heartbeat, and even in human cognition. It is thought that there might be some universal principle underlying 1/f noise, possibly related to chaos, but this is still an area of active investigation.

11.17 Testing Data for Chaotic Behavior

A major problem in data analysis is to find the meaningful signal due to the physical or biological process in the presence of noise. We have introduced some of the analysis techniques in this chapter. A problem that has only become important in recent years is to determine whether a variable that is apparently random is due to truly random behavior in the underlying process or whether the process is displaying chaotic behavior. The techniques for determining this are still under development and are beyond the scope of this book. An excellent introduction is found in Chapter 6 of Kaplan and Glass (1995). We close by mentioning two of the tools used in this analysis: embedding and surrogate data.

11.17.1 Embedding

One of the problems in analyzing data from complex systems is that we may not be able to measure all of the variables. For example, we may have the electrocardiogram or even an intracellular potential recording but have no information about the details of the ionic currents of several species through the membrane that change the potential. We may measure the level of thyroid hormones T3 and T4 but have no information about the other hormones in the thyroid–hypothalamus–pituitary feedback system. Fortunately, we do not need to measure all the variables. There is a data-reduction technique that can be applied to a few of the variables that shows the dynamics of the full system.

To see how embedding works, consider a system with two degrees of freedom described by a set of nonlinear differential equations with the form of Eqs. 10.32. In order to make the subscript on x available to index measurements of the variable at different times, we write the variables as x and y instead of x1 and x2:

dx/dt = f1(x, y),   dy/dt = f2(x, y).

A phase-space plot would be in the xy plane. Suppose we only measure variable x, and that we obtain a sequence of measurements xj = x(tj). The time derivative is approximately

(xj+h − xj)/(tj+h − tj) ≈ dx/dt = f1(x, y).

A series of measurements at different times gives us information about how function f1 depends on x. A remarkable result that we state without proof is that it also gives information about the entire system. [See Kaplan and Glass (1995) for a more detailed discussion and references to the literature.] Figure 11.47 shows this in a specific case. It is a calculation using the van der Pol oscillator. This nonlinear oscillator has been used to model many systems since it was first proposed in the 1920s. It can be written as the pair of first-order equations

dx/dt = (1/a)(y − x³/3 + x),   dy/dt = −ax,

where a is a very small positive number. The top panel of Fig. 11.47 shows values of xj vs j (labeled as Dt vs t). The middle panel shows a phase-plane plot of y vs x. The bottom panel plots xj+10 vs xj. Shading is used to identify some of the early data points in all three panels. The trajectory in the bottom panel has all the same characteristics as the phase-plane plot.

This is an example of a general technique called time-lag embedding. The set of differential equations with two degrees of freedom has been converted into a nonlinear map in one degree of freedom.

For a system with three degrees of freedom, we could make a three-dimensional plot by creating sets of three numbers from the n measured values, which we can think of and plot as the three components of a vector

xj = (xj, xj−h, xj−2h),   j = 2h, 2h + 1, . . . , n − 1.

In general, we can construct a p-dimensional set of vectors

xj = (xj, xj−h, . . . , xj−ph),   j = ph, . . . , n − 1.

We call p the embedding dimension and h the embedding lag. There are a number of further calculations that can be done to the embedded vector to help decide on the behavior of the underlying system. These are described in Kaplan and Glass (1995).
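A minimal version of the embedding construction (a sketch, not the book's code: the oscillator is the van der Pol pair written above with an arbitrary a = 0.5, integrated with crude Euler steps, and only the x variable is kept). Plotting the two columns of the embedded vectors against each other traces out a loop with the same character as the phase-plane plot, as in Fig. 11.47.

```python
import numpy as np

def embed(x, p, h):
    """Time-lag embedding: row j is (x_j, x_{j-h}, ..., x_{j-p*h})."""
    x = np.asarray(x, dtype=float)
    j = np.arange(p * h, len(x))
    return np.column_stack([x[j - m * h] for m in range(p + 1)])

# Integrate the van der Pol pair, keeping only the measured variable x.
a, dt = 0.5, 0.001
x, y = 0.5, 0.0
record = []
for _ in range(100_000):
    x, y = x + dt * (y - x ** 3 / 3 + x) / a, y + dt * (-a * x)
    record.append(x)

vectors = embed(record[::50], p=1, h=10)   # pairs (x_j, x_{j-10}) from the sampled record
print(vectors.shape, vectors[:3])
```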

 

11.17.2 Surrogate Data

In general, a fully conclusive answer to the question of whether the data are due to a random process or a chaotic process cannot be obtained, though strong indications can be. The most rigorous way to test for the presence of chaotic behavior is to make the hypothesis, called the null hypothesis, that the data are explained by a linear process plus random noise. One then develops a test statistic (several standard tests are used) and compares the value of the test statistic for the real data to its value for sets of data that are consistent with the null hypothesis. These sets are called surrogate data. We examined one linear system with noise: the random walk of Fig. 11.36. The next value in the sequence was the previous value plus random noise. We saw that the power spectrum was defined, but the phases changed randomly. We