Intermediate Physics for Medicine and Biology - Russell K. Hobbie & Bradley J. Roth


FIGURE 11.47. Plots of the solution to the van der Pol equation with a certain set of initial conditions. The top panel shows values of xj vs time tj = jΔt. The middle panel is a phase-plane plot of y vs x. The bottom panel plots xj+10 vs xj. Shading is used to identify some of the early data points in all three panels. The trajectory in the bottom panel has the same characteristics as the phase-plane plot. Used by permission from D. Kaplan and L. Glass. Understanding Nonlinear Dynamics. New York, Springer-Verlag, 1995.

FIGURE 11.48. A sine wave of unit amplitude drives a threshold detector. A spike is generated every time the signal rises through 0.9.

can think of any linear system driven by random noise as having a defined transfer function G(ω) with random phases. Therefore, one can generate sets of surrogate data by taking the transform of the original data in the form of an amplitude and phase, related to C and S by Eq. 11.13. One then randomizes the phases and calculates the inverse Fourier transform of the randomized coefficients to generate the surrogate data sequence. The surrogate data have the same power spectrum and autocorrelation function as the original data. One then applies the various test statistics. If we were to do this to the data from Fig. 11.36, we would find the tests the same for the original data and the sets of surrogate data, because the original data set is consistent with the null hypothesis.
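A sketch of the phase-randomization step described above (the function name and test signal are invented for illustration): transform the data, replace the Fourier phases with random ones, and invert. The surrogate then has exactly the original power spectrum, and therefore the original autocorrelation function, but a different waveform:

```python
import numpy as np

def surrogate(data, rng):
    """Phase-randomized surrogate with the same power spectrum as `data`."""
    Y = np.fft.rfft(data)
    phases = rng.uniform(0.0, 2.0 * np.pi, Y.size)
    phases[0] = 0.0                 # keep the DC term real
    if data.size % 2 == 0:
        phases[-1] = 0.0            # keep the Nyquist term real
    return np.fft.irfft(np.abs(Y) * np.exp(1j * phases), n=data.size)

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)       # stand-in for the original data
s = surrogate(x, rng)
```

Any test statistic that depends only on the power spectrum or autocorrelation will give the same value for x and s.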

11.18 Stochastic Resonance

A nonlinear phenomenon called stochastic resonance has been recognized in recent years. In stochastic resonance, random fluctuations increase the sensitivity with which weak signals can be detected, or allow some other desirable process, such as ion transport, to take place. Stochastic resonance takes many forms. It was first invoked in 1981 to explain why the earth has periodic ice ages.10 It has been proposed as a mechanism in biological processes, but the models are rather complicated.11 We discuss two simple physical examples.

11.18.1 Threshold Detection

In a linear system, any amount of noise decreases the signal-to-noise ratio. In a nonlinear system, weak noise can enhance signal detection. The simplest nonlinear system that shows this is a threshold detector: an output signal is generated when the input (signal plus noise) exceeds a fixed threshold.

Suppose that a sine-wave signal is sent to a threshold detector. Every time the signal rises above the threshold, a pulse is generated, as shown in Fig. 11.48. The output signal is a series of pulses spaced by T , the period of the sine wave. Problem 24 shows that for a series of

10References can be found in the articles by Wiesenfeld and Jaramillo (1998) and by Astumian and Moss (1998).

11See Astumian (1997); Astumian and Moss (1998); Wiesenfeld and Jaramillo (1998); Gammaitoni et al. (1998); Adair, Astumian and Weaver (1998); Glass (2001).

318 11. The Method of Least Squares and Signal Analysis


FIGURE 11.49. Power spectrum for a train of rectangular pulses of width 2d when d/T = 1/20.

FIGURE 11.50. Stochastic resonance. (a) The two curves show the sinusoidal signal and the combination of Gaussian noise plus signal. The latter occasionally exceeds the threshold value shown by the straight line. (b) The pulses generated when the combination of signal plus noise rises above threshold. (c) The averaged power spectrum of the pulse train. From Z. Gingl, L. B. Kiss and F. Moss. Nondynamical stochastic resonance: Theory and experiments with white and arbitrarily coloured noise. Europhys. Lett. 29(3): 191–196 (1995). Used by permission.

pulses of width 2d separated by time T, the power at frequency ω0 = 2πk/T is Φk = (2/π²k²) sin²(2πkd/T). This power spectrum is plotted in Fig. 11.49.
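This expression is easy to verify numerically, taking the power at harmonic k to be (ak² + bk²)/2 and sampling one period of a unit-height pulse train assumed centered on t = 0 (d/T = 1/20, as in Fig. 11.49):

```python
import numpy as np

T, d, N = 1.0, 0.05, 200_000
t = (np.arange(N) + 0.5) * T / N               # midpoint samples of one period
y = (np.minimum(t, T - t) < d).astype(float)   # pulse of width 2d, centered on t = 0

def power(k):
    # Fourier coefficients of the sampled period, then (ak^2 + bk^2)/2
    ak = 2.0 / N * np.sum(y * np.cos(2 * np.pi * k * t / T))
    bk = 2.0 / N * np.sum(y * np.sin(2 * np.pi * k * t / T))
    return (ak**2 + bk**2) / 2.0

def analytic(k):
    return 2.0 / (np.pi**2 * k**2) * np.sin(2 * np.pi * k * d / T)**2
```

The numerical and analytic values agree to several decimal places for any harmonic k.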

If the amplitude of the sine wave in Fig. 11.48 is less than 0.9, the threshold will never be exceeded. However, if sufficient noise is added to a sine wave that is below threshold, the signal and noise combined will occasionally exceed the threshold. This will happen more frequently when the sine-wave signal is positive than when it is negative, so output pulses will occur more frequently during peaks of the signal.

FIGURE 11.51. The results of an electronics experiment and a theoretical calculation of threshold detection. One curve shows the square of the output sinusoidal signal, Ps. The other shows the signal-to-noise ratio. From Z. Gingl, L. B. Kiss, and F. Moss. Non-dynamical stochastic resonance: Theory and experiments with white and arbitrarily coloured noise. Europhys. Lett. 29(3): 191–196 (1995). Used by permission.

Experiments were done with an electronic circuit that behaves as we have described. The results are shown in Figures 11.50 and 11.51. Figure 11.50 shows the weak sinusoidal signal with and without the noise added to it, along with the resulting pulses and the power spectrum. Figure 11.51 shows the power in the pulse train at the signal frequency and the signal-to-noise ratio, as a function of noise level. The amplitude of the sine wave is 0.1 V. As the noise level increases, both the signal and the SNR increase, reach a maximum, and decrease. The signal-to-noise ratio peaks when the rms noise level is about 0.25 V; the power at the signal frequency peaks at about 0.3 V. As the noise increases above these values the SNR and signal decrease. The lines are theoretical fits; both the theory and the data are described by Gingl et al. (1995).
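The behavior can be reproduced with a simple simulation (all of the parameter values below are illustrative, not those of Gingl et al.; the power convention is the squared FFT magnitude per sample). The power of the pulse train at the signal frequency is essentially zero for weak noise, largest at an intermediate noise level, and small again when the noise dominates:

```python
import numpy as np

def pulse_power_at_f0(noise_rms, amp=0.1, threshold=0.9, f0=5.0,
                      fs=1000.0, duration=20.0, trials=20, seed=0):
    """Average power at the signal frequency of the threshold-crossing
    pulse train, for a subthreshold sine plus Gaussian white noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)
    k = int(round(f0 * duration))            # FFT bin at the signal frequency
    power = 0.0
    for _ in range(trials):
        x = amp * np.sin(2 * np.pi * f0 * t) + noise_rms * rng.standard_normal(t.size)
        above = x > threshold
        pulses = (above[1:] & ~above[:-1]).astype(float)   # upward crossings
        power += np.abs(np.fft.rfft(pulses)[k]) ** 2 / pulses.size
    return power / trials

low = pulse_power_at_f0(0.05)    # noise too weak: the threshold is never crossed
mid = pulse_power_at_f0(0.7)     # intermediate noise: stochastic resonance
high = pulse_power_at_f0(5.0)    # noise swamps the signal
```

With these (arbitrary) parameters the signal power at f0 rises from zero, peaks near an rms noise level comparable to the gap between the sine peak and the threshold, and then falls.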

11.18.2 Feynman’s Ratchet

Perpetual motion machines violate either the first or second law of thermodynamics (or both). In his Lectures on Physics, Richard Feynman (1963) analyzed a microscopic cog wheel (ratchet) and pawl as shown in Fig. 11.52. Feynman's analysis is elegant, full of insight, and well worth reading. The analysis here follows that in Astumian and Moss (1998). An amount of energy ∆U is required to compress the spring enough to lift the pawl over the tooth. This energy can come either from an imbalance of the molecular bombardment of the paddle wheel at temperature T1, or from molecular bombardment of the pawl spring, which is at temperature T2. Clockwise rotation will result if the pawl rides up the ramped side of the ratchet and will occur with a probability proportional to exp(−∆U/kB T1); counterclockwise rotation requires energy


FIGURE 11.52. Feynman’s ratchet. (a) A cog wheel is attached to a paddlewheel in a reservoir at temperature T1. A pawl is attached to a spring located in a reservoir at temperature T2. (b) The net rate of clockwise motion vs. T = (T1 + T2)/2. The details are discussed in the text. Reproduced by permission from Astumian, R. D. and F. Moss. Overview: The constructive role of noise in fluctuation driven transport and stochastic resonance. Chaos. 8(3): 533–538. (1998). Copyright 1998 American Institute of Physics.

transfer to the pawl spring, with a probability proportional to exp(−∆U/kB T2). With T1 = T + ∆T and T2 = T − ∆T, one can show (see Problem 38) that the net rate is

net rate ∝ (2∆U ∆T / kB T²) exp(−∆U/kB T).   (11.102)

Fig. 11.52b plots the net rate for the parameters ∆U = 0.05 eV and ∆T = 10 K. While thermal gradients are not found in the body, Astumian and Moss show that particles in similar asymmetrically shaped potentials can be driven by having the barrier height vary randomly with time.
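Equation 11.102 is easy to explore numerically. With the proportionality constant set to 1 and kB in eV/K (a sketch, not the authors' calculation), the net rate vanishes at low temperature (no thermal energy to lift the pawl) and at high temperature (the 1/T² prefactor wins), peaking in between at T = ∆U/2kB ≈ 290 K for these parameters:

```python
import numpy as np

KB = 8.617e-5          # Boltzmann constant, eV/K

def net_rate(T, dU=0.05, dT=10.0):
    """Relative net clockwise rate, Eq. 11.102 (proportionality constant = 1)."""
    return (2.0 * dU * dT / (KB * T**2)) * np.exp(-dU / (KB * T))

T = np.linspace(100.0, 600.0, 5001)
T_peak = T[np.argmax(net_rate(T))]   # analytically, the maximum is at T = dU / (2 KB)
```

Setting the derivative of T⁻² exp(−∆U/kB T) to zero gives the peak condition T = ∆U/2kB directly.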

Symbols Used in Chapter 11

Symbol     Use                                              Units      First used on page
a          Coefficient in polynomial fit                               285
a          Slope                                                       285
a          Coefficient of even (cosine) term                           290
a          Parameter in exponential                                    288
a          Arbitrary constant                                          304
b          Intercept                                                   285
b          Parameter in exponential                                    288
b          Coefficient of odd (sine) term                              290
e          Noise voltage source                             V          313
f, f0      Frequency                                        Hz         290
f          Function                                                    316
h          Small quantity                                              288
h          Shift index                                                 316
i          √−1                                                         292
i          Current                                          A          296
j          Index, usually denoting a data point                        285
k          Index denoting terms in a sum                               285
kB         Boltzmann constant                               J K−1      313
l, m       Particular values of index k                                290
l          Local signal                                                309
n          Maximum value of index k                                    290
n          Noise                                                       308
p          Parameter or input signal                                   310
p          Dimension of a vector                                       316
s          Signal                                                      308
t          Time                                             s          289
v          log y                                                       288
v          Voltage                                          V          296
x          Independent variable                                        285
x          Vector of data points                                       316
y          Dependent variable                                          285
A          Amplitude                                                   289
C, Ck      Amplitude of cosine term                                    289
C          Capacitance                                      F          313
G          Gain                                                        310
N          Number of data points                                       286
Q          Goodness of fit or mean square residual                     286
R          Residual                                                    292
R          Resistance                                       Ω          296
S, Sk      Amplitude of sine term                                      289
Sxx, Sxy   Sums of residuals and their products                        287
T          Period                                           s          289
T          Temperature                                      K          313
U          Energy                                           J          318
Y, Yk      Complex Fourier transform or series of y                    292
α          Fourier coefficient in autocorrelation function             301
α, β       Fourier coefficients                             V Hz−1/2   313
δy         Uncertainty in y                                            288
δ          Delta function                                              304
ε          Error                                                       297
ε          Small number (limit of integration)                         312
φ, θ       Phase                                                       289
φ          Correlation function                                        299
τ          Shift time                                       s          299
τ1         Time constant                                    s          310
ω, ω0      Angular frequency                                s−1        289
Φk         Power at frequency ω0                                       298
Φ(ω)       Power in frequency interval                                 309
Φ′(ω)      Energy in frequency interval                                305
⟨ ⟩        Time average                                                297

Problems

Section 11.1

Problem 1 Find the least squares straight line fit to the following data:

x   y
0   2
1   5
2   8
3   11
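As a check on the algebra, the fit can be evaluated numerically (np.polyfit performs the same least-squares minimization as Eqs. 11.5):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 5.0, 8.0, 11.0])

a, b = np.polyfit(x, y, 1)   # least-squares slope and intercept
```

These four points happen to lie exactly on a line, so the residual Q of the fit is zero.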


Problem 2 Suppose that you wish to pick one number to characterize a set of data x1, x2, . . . , xN. Prove that the mean x̄, defined by

x̄ = (1/N) Σ_{j=1}^{N} xj,

minimizes the mean square error

Q = (1/N) Σ_{j=1}^{N} (xj − x̄)².

Problem 3 Derive Eqs. 11.5.

Problem 4 Suppose that the experimental values y(xj) are exactly equal to the calculated values plus random noise for each data point: y(xj) = ycalc(xj) + nj. What is Q?

Problem 5 You wish to fit a set of data (xj, yj) with an expression of the form y = Bx². Differentiate the expression for Q to find an equation for B.

Problem 6 Assume a dipole p is located at the origin and is directed in the xy plane. The z component of the magnetic field, Bz, produced by this dipole is measured at nine points on the surface z = 50 mm. The data are

i   xi (mm)   yi (mm)   Bzi (fT)
1   50        50        154
2   0         50        170
3   50        50        31
4   50        0         113
5   0         0         0
6   50        0         113
7   50        50        31
8   0         50        170
9   50        50        154

The magnetic field of a dipole is given by Eq. 8.15, which in this case is

Bz = (µ0/4π) (px yi − py xi) / (xi² + yi² + zi²)^{3/2}.

Use the method of least squares to fit the data to the equation, and determine px and py.

Problem 7 Consider the data

x     y
100   4004
101   4017
102   4039
103   4063

(a) Fit these data with a straight line y = ax + b using Eqs. 11.5a and 11.5b to find a.

(b) Use Eq. 11.5c to determine a. Your result should be the same as in part (a).

(c) Repeat parts (a) and (b) while rounding all the intermediate numbers to 4 significant figures. Do Eqs. 11.5a and 11.5b give the same result as Eq. 11.5c? If not, which is more accurate?

Problem 8 This problem is designed to show you what happens when the number of parameters exceeds the number of data points. Suppose that you have two data points:

x   y
0   1
1   4

Find the best fits for one parameter (the mean) and two parameters (y = ax + b). Then try to fit the data with three parameters (a quadratic). What happens when you try to solve the equations?

Problem 9 The strength-duration curve for electrical stimulation of a nerve is described by Eq. 7.45: i = iR(1 + tC /t), where i is the stimulus current, iR is the rheobase, and tC is the chronaxie. During an experiment you measure the following data:

t (ms)   i (mA)
0.5      2.004
1.0      1.248
1.5      0.997
2.0      0.879
2.5      0.802
3.0      0.749

Determine the rheobase and chronaxie by fitting these data with Eq. 7.45. Hint: let a = iR and b = iR tC, so that the equation is linear in a and b: i = a + b/t. Use the linear least squares method to determine a and b. Plot i vs. t, showing both the theoretical expression and the measured data points.
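Following the hint, the fit becomes an ordinary straight-line fit in the variable 1/t; a sketch (np.polyfit standing in for the linear least squares equations):

```python
import numpy as np

t = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])                 # ms
i = np.array([2.004, 1.248, 0.997, 0.879, 0.802, 0.749])     # mA

b, a = np.polyfit(1.0 / t, i, 1)    # fit i = a + b * (1/t)
rheobase = a                        # iR, in mA
chronaxie = b / a                   # tC = b / iR, in ms
```

The fitted values come out very close to iR = 0.5 mA and tC = 1.5 ms for this data set.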

Section 11.2

Problem 10 (a) Obtain equations for the linear least-squares fit of y = Bx^m to data by making a change of variables.

(b) Apply the results of (a) to the case of Problem 5. Why does it give slightly different results?

(c) Carry out a numerical comparison of Problems 5 and (b) with the data points

x   y
1   3
2   12
3   27

Repeat with

x   y
1   2.9
2   12.1
3   27.1

Problem 11 Consider the data given in Problem 2.36 relating molecular weight M and molecular radius R. Assume the radius is determined from the molecular weight by a power law: R = BM n. Fit the data to this expression to determine B and n. Hint: Take logarithms of both sides of the equation.
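A sketch of the log-transform trick with hypothetical (M, R) pairs manufactured from R = 2M^(1/3) (the real data are in Problem 2.36); on log–log axes the fit recovers n as the slope and log B as the intercept:

```python
import numpy as np

# Hypothetical data generated from R = 2 M^(1/3), for illustration only
M = np.array([1.0e3, 1.0e4, 1.0e5, 1.0e6])
R = 2.0 * M ** (1.0 / 3.0)

# Taking logs of both sides: log R = n log M + log B, a straight line
n, logB = np.polyfit(np.log(M), np.log(R), 1)
B = np.exp(logB)
```

Note that least squares applied to the logarithms weights the points differently from least squares on R itself; with real, noisy data the two fits give slightly different answers.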

Problem 12 In Prob. 6 the dipole strength and orientation were determined by fitting the equation for the magnetic field of a dipole to the data, using the linear least squares method. In that problem the location of the dipole was known. Now, suppose the location of the dipole (x0, y0, z0) is not known. Derive an equation for Bz (px, py , x0, y0, z0) in this more general case. Determine which parameters can be found using linear least squares, and which must be determined using nonlinear least squares.

Section 11.4

Problem 13 Write a computer program to verify Eqs. 11.20–11.24.

Problem 14 Consider Eqs. 11.17–11.19 when n = N and show that all equations for m > N/2 reproduce the equations for m < N/2.

Problem 15 The secretion of the hormone cortisol by the adrenal gland is subject to a 24-hour (circadian) rhythm [Guyton (1991)]. Suppose the concentration of cortisol in the blood, K (in µg per 100 ml) is measured as a function of time, t (in hours, with 0 being midnight and 12 being noon), resulting in the following data:

t    K
0    10.3
4    16.1
8    18.3
12   13.7
16   7.9
20   6.0

Fit these data to the function K = a + b cos(2πt/24) + c sin(2πt/24) using the method of least squares, and determine a, b, and c.

Problem 16 Verify that Eqs. 11.29 follow from Eqs. 11.27.

Problem 17 This problem provides some insight into the Fast Fourier Transform. Start with the expression for an N-point Fourier transform in complex notation, Yk in Eq. 11.29a. Show that Yk can be written as the sum of two N/2-point Fourier transforms: Yk = Yk^e + W^k Yk^o, where W = exp(i2π/N), superscript e stands for even values of j, and o stands for odd values.

Section 11.5

Problem 18 Use Eqs. 11.33 to derive Eq. 11.34.

Problem 19 The following data from Kaiser and Halberg (1962) show the number of spontaneous births vs. time of day. Note that the point for 2300 to 2400 is much higher than for 0000–0100. This is probably due to a bias: if a woman has been in labor for a long time and the baby is born a few minutes after midnight, the birth may be recorded in the previous day. Fit these data with a 24-hr period and again including an 8-hr period as well. Make a correction for the midnight bias.

Time        Births    Time        Births
0000-0100   23 847    1200-1300   24 038
0100-0200   28 088    1300-1400   22 234
0200-0300   28 338    1400-1500   21 900
0300-0400   28 664    1500-1600   21 903
0400-0500   28 452    1600-1700   21 789
0500-0600   27 912    1700-1800   21 927
0600-0700   27 489    1800-1900   21 761
0700-0800   26 852    1900-2000   21 995
0800-0900   26 421    2000-2100   22 913
0900-1000   26 947    2100-2200   23 671
1000-1100   26 498    2200-2300   24 149
1100-1200   25 615    2300-2400   27 819

Section 11.7

Problem 20 Suppose that y(x, t) = y(x − vt). Calculate the cross correlation between signals y(x1) and y(x2).

Problem 21 Calculate the cross-correlation, φ12, for the example in Fig. 11.20:

y1(t) = +1,  0 < t < T/2,
y1(t) = −1,  T/2 < t < T,

y2(t) = sin(2πt/T).

Both functions are periodic.
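The decimation identity of Problem 17 is easy to check numerically. Note that numpy's FFT uses the opposite sign convention, exp(−i2πjk/N), so W carries a negative exponent here:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.standard_normal(16)
N = y.size

Ye = np.fft.fft(y[0::2])            # N/2-point transform of the even-indexed samples
Yo = np.fft.fft(y[1::2])            # N/2-point transform of the odd-indexed samples

k = np.arange(N // 2)
W = np.exp(-2j * np.pi * k / N)     # twiddle factors, numpy's sign convention
Yk = Ye + W * Yo                    # first N/2 terms of the full N-point transform
```

Applying this split recursively, log2 N times, is exactly the fast Fourier transform.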

Section 11.8

Problem 22 Fill in the missing steps to show that the autocorrelation of y1(t) is given by Eq. 11.49.

Problem 23 Consider a square wave of amplitude A and period T.

(a) What are the coefficients in a Fourier-series expansion?

(b) What is the power spectrum?

(c) What is the autocorrelation of the square wave?

(d) Find the Fourier-series expansion of the autocorrelation function and compare it to the power spectrum.

Problem 24 The series of pulses shown is an approximation for the concentration of follicle-stimulating hormone (FSH) released during the menstrual cycle.


(a) Determine a0, ak, and bk in terms of d and T.

(b) Sketch the autocorrelation function.

(c) What is the power spectrum?

Problem 25 Consider the following simplified model for the periodic release of follicle-stimulating hormone (FSH). At t = 0 a substance is released so the plasma concentration rises to value C0. The substance is cleared so that C(t) = C0 e^{−t/τ}. Thereafter the substance is released in like amounts at times T, 2T, and so on. Furthermore, τ ≪ T.

(a) Plot C(t) for two or three periods.

(b) Find general expressions for a0, ak, and bk. Use the fact that integrals from 0 to T can be extended to infinity because τ ≪ T. Use the following integral table:

 

∫_0^∞ e^{−ax} dx = 1/a,

∫_0^∞ e^{−ax} cos mx dx = a/(a² + m²),

∫_0^∞ e^{−ax} sin mx dx = m/(a² + m²).
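These integrals are easy to check numerically; a midpoint-rule sketch with the arbitrary sample values a = 1.3, m = 2.7, truncating the upper limit at x = 30 where e^{−ax} is negligible:

```python
import numpy as np

a, m = 1.3, 2.7
dx = 1e-4
x = (np.arange(300_000) + 0.5) * dx                  # midpoints covering [0, 30]

I1 = np.sum(np.exp(-a * x)) * dx                     # compare with 1/a
I2 = np.sum(np.exp(-a * x) * np.cos(m * x)) * dx     # compare with a/(a^2 + m^2)
I3 = np.sum(np.exp(-a * x) * np.sin(m * x)) * dx     # compare with m/(a^2 + m^2)
```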

(c) What is the “power” at each frequency?

(d) Plot the “power” for k = 1, 10, 100 for two cases: τ/T = 0.1 and 0.01. Compare the results to the results of Problem 24.

(e) Discuss qualitatively the effect that making the pulses narrower has on the power spectrum. Does the use of Fourier series seem reasonable in this case? Which description of the process is easier—the time domain or the frequency domain?

(f) It has sometimes been said that if the transform for a given frequency is written as Ak cos(kω0t − φk) then φk gives timing information. What is φ1 in this case? φ2? Do you agree with the statement?

Problem 26 Calculate the autocorrelation function and the power spectrum for the previous problem.

Section 11.9

Problem 27 Calculate the Fourier transform of exp[−(at)²] using complex notation (Eq. 11.57). Hint: complete the square.

Section 11.10

Problem 28 Prove that

δ(t) = δ(−t),
t δ(t) = 0,
δ(at) = (1/a) δ(t).

Section 11.11

Problem 29 Rewrite Eqs. 11.59 in terms of an amplitude and a phase. Plot them.

Problem 30 Find the Fourier transform of

f(t) = 1,  −a ≤ t ≤ a,
f(t) = 0,  everywhere else.

Problem 31 Find the Fourier transform of

y = e^{−at} sin ω0t,  t ≥ 0,
y = 0,  t < 0.

Determine C(ω), S(ω), and Φ′(ω) for ω > 0 if the term that peaks at negative frequencies can be ignored for positive frequencies.

Section 11.14

Problem 32 Here are some data.

t   y      t   y      t   y
1   1.18   13  1.84   25  0.43
2   1.39   14  5.01   26  0.91
3   0.67   15  0.75   27  1.32
4   1.38   16  0.90   28  1.92
5   0.76   17  0.42   29  0.57
6   5.23   18  3.68   30  2.30
7   1.31   19  4.15   31  1.09
8   2.63   20  1.45   32  0.71
9   1.03   21  2.44   33  1.72
10  4.62   22  4.44   34  4.22
11  1.98   23  0.08   35  3.20
12  0.47   24  2.34   36  1.69

(a) Plot them.

(b) If you are told that there is a signal in these data with a period of 4 s, you can group them together and average them. This is equivalent to taking the cross correlation with a series of δ functions. Estimate the signal shape.

Section 11.15

Problem 33 Verify that Eqs. 11.79 and 11.80 are solutions of Eq. 11.78.

Problem 34 Equation 11.80 is plotted on log–log graph paper in Fig. 11.42. Plot it on linear graph paper.

Problem 35 If the frequency response of a system were proportional to 1/[1 + (ω/ω0)³]^{1/2}, what would be the high-frequency roll-off in decibels per octave for ω ≫ ω0?

Problem 36 Consider a signal y = A cos ωt. What is the time derivative? For a fixed value of A, how does the derivative compare to the original signal as the frequency is increased? Repeat these considerations for the integral of y(t).
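The grouping-and-averaging step of Problem 32(b) amounts to folding the record at the known period; a sketch (y transcribed from Problem 32's table, t = 1, ..., 36 s):

```python
import numpy as np

y = np.array([1.18, 1.39, 0.67, 1.38, 0.76, 5.23, 1.31, 2.63, 1.03,
              4.62, 1.98, 0.47, 1.84, 5.01, 0.75, 0.90, 0.42, 3.68,
              4.15, 1.45, 2.44, 4.44, 0.08, 2.34, 0.43, 0.91, 1.32,
              1.92, 0.57, 2.30, 1.09, 0.71, 1.72, 4.22, 3.20, 1.69])

# Fold the 36-point record into 9 blocks of one 4 s period each and average;
# avg[i] estimates the signal at phase t = i + 1 (mod 4) within the period.
avg = y.reshape(9, 4).mean(axis=0)
```

Averaging over the 9 periods suppresses the noise while the periodic component survives, so the phase with the large values stands out clearly.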

Section 11.16

Problem 37 Show that integration of Eq. 11.101 over all shift times is consistent with the integration of the δ function that is obtained in the limit τ1 → 0.

Section 11.18

Problem 38 Show that the net clockwise rate of rotation of the Feynman ratchet is given by Eq. 11.102.

References

Adair, R. K., R. D. Astumian, and J. C. Weaver (1998). Detection of weak electric fields by sharks, rays and skates. Chaos. 8(3): 576–587.

Anderka, M., E. R. Declercq, and W. Smith (2000). A time to be born. Amer. J. Pub. Health 90(1): 124–126.

Astumian, R. D. (1997). Thermodynamics and kinetics of a Brownian motor. Science. 276: 917–922.

Astumian, R. D., and F. Moss (1998). Overview: The constructive role of noise in fluctuation driven transport and stochastic resonance. Chaos. 8(3): 533–538.

Bevington, P. R., and D. K. Robinson (1992). Data Reduction and Error Analysis for the Physical Sciences, 2nd ed. New York, McGraw-Hill.

Blackman, R. B., and J. W. Tukey (1958). The Measurement of Power Spectra. AT&T. New York, Dover, pp. 32–33.

Bracewell, R. N. (1990). Numerical transforms. Science 248: 697–704.

Cohen, A. (2000). Biomedical signals: Origin and dynamic characteristics; frequency-domain analysis. In J. D. Bronzino, ed. The Biomedical Engineering Handbook, 2nd. ed. Vol. 1. Boca Raton, FL, CRC Press, pp. 52-1– 52-4.

Cooley, J. W., and J. W. Tukey (1965). An algorithm for the machine calculation of complex Fourier series. Math. Comput. 19: 297–301.

DeFelice, L. J. (1981). Introduction to Membrane Noise. New York, Plenum.


Feynman, R. P., R. B. Leighton, and M. Sands (1963). The Feynman Lectures on Physics, Vol. 1, Chapter 46. Reading, MA, Addison-Wesley.

Gammaitoni, L., P. Hänggi, P. Jung, and F. Marchesoni (1998). Stochastic resonance. Rev. Mod. Phys. 70(1): 223–287.

Gatland, I. R. (1993). A weight-watcher’s guide to least-squares fitting. Comput. Phys. 7(3): 280–285.

Gatland, I. R., and W. J. Thompson (1993). Parameter bias estimation for log-transformed data with arbitrary error characteristics. Am. J. Phys. 61(3): 269–272.

Gingl, Z., L. B. Kiss, and F. Moss (1995). Non-dynamical stochastic resonance: Theory and experiments with white and arbitrarily coloured noise. Europhys. Lett. 29(3): 191–196.

Glass, L. (2001). Synchronization and rhythmic processes in physiology. Nature. 410(825): 277–284.

Guyton, A. C. (1991). Textbook of Medical Physiology, 8th ed. Philadelphia, Saunders.

Kaiser, I. H., and F. Halberg (1962). Circadian periodic aspects of birth. Ann. N. Y. Acad. Sci. 98: 1056– 1068.

Kaplan, D., and L. Glass (1995). Understanding Nonlinear Dynamics. New York, Springer-Verlag.

Lighthill, M. J. (1958). An Introduction to Fourier Analysis and Generalized Functions. Cambridge, England, Cambridge University Press.

Lybanon, M. (1984). A better least-squares method when both variables have uncertainties. Am. J. Phys. 52: 22–26.

Mainardi, L. T., A. M. Bianchi, and S. Cerutti (2000). Digital biomedical signal acquisition and processing. In J. D. Bronzino, ed. The Biomedical Engineering Handbook, 2nd. ed. Vol. 1. Boca Raton, FL, CRC, pp. 53-1–53-25.

Maughan, W. Z., C. R. Bishop, T. A. Pryor, and J. W. Athens (1973). The question of cycling of blood neutrophil concentrations and pitfalls in the analysis of sampled data. Blood. 41: 85–91.

Milnor, W. R. (1972). Pulsatile blood flow. New Eng. J. Med. 287: 27–34.

Nedbal, L., and V. Březina (2002). Complex metabolic oscillations in plants forced by harmonic irradiance. Biophys. J. 83: 2180–2189.

Nyquist, H. (1928). Thermal agitation of electric charge in conductors. Phys. Rev. 32: 110–113.

Orear, J. (1982). Least squares when both variables have uncertainties. Am. J. Phys. 50: 912–916.

Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery (1992). Numerical Recipes in C: The Art of Scientific Computing, 2nd ed., reprinted with corrections, 1995. New York, Cambridge University Press.

Visscher, P. B. (1996). The FFT: Fourier transforming one bit at a time. Comput. Phys. 10(5): 438–443.

Wiesenfeld, K., and F. Jaramillo (1998). Minireview of stochastic resonance. Chaos 8(3): 539–548.

12

Images

Images are very important in the remainder of this book. They may be formed by the eye, a camera, an x-ray machine, a nuclear medicine camera, magnetic resonance imaging, or ultrasound. The concepts developed in Chapter 11 can be used to understand and describe image quality. The same concepts are also used to reconstruct computed tomographic or magnetic resonance slice images of the body. A very complete, advanced mathematical treatment of all kinds of images is found in a 1500-page book by Barrett and Myers (2004).

The convolution integral of Sec. 12.1 shows how the response of a linear system can be related to the input to the system and the impulse (δ-function) response of the system. It forms the basis for the rest of the chapter. The Fourier-transform properties of the convolution are also described in this section. Section 12.2 introduces quantitative ways to relate the image to the object, using the techniques developed in Chapter 11 to describe the blurring that occurs. Section 12.3 shows the importance of different spatial frequencies in an image and their effect on the quality of the image.

Sections 12.4 and 12.5 pose the fundamental problem of reconstructing slices from projections and introduce two techniques for solving it: the Fourier transform and filtered back projection. Section 12.6 provides a numerical example of filtered back projection for a circularly symmetric object.

This chapter is quite mathematical. The key understanding to take from it is the relationship between spatial frequencies and image quality in Sec. 12.3.

12.1 The Convolution Integral and its Fourier Transform

12.1.1 One Dimension

We now apply the techniques developed in Chapter 11 to describe the formation of images. An image is a function of position, usually in two dimensions at an image plane. We start with the simpler case of an image extending along a line. Functions of time are easier to think about, so let’s imagine a one-dimensional example that is a function of time: a high-fidelity sound system. A hi-fi system is (one hopes) linear, which means that the relationship between the output response and a complicated input can be written as a superposition of responses to more elementary input functions. The output might be the instantaneous air pressure at some point in the room; the input might be the air pressure at a microphone or the magnetization on a strip of tape.

It takes a certain amount of time for the signal to propagate through the system. In the simplest case the response at the ear would exactly reproduce the response at the input a very short time earlier. In actual practice the response at time t may depend on the input at a number of earlier times, because of limitations in the electronic equipment or echoes in the room. If the entire system is linear, the output g(t) can be written as a superposition integral, summing the weighted response to inputs at other times. If f(t′) is the input and h is the weighting, the output g(t) is


 

 

g(t) = ∫_{-∞}^{∞} f(t′) h(t, t′) dt′.   (12.1)

Variable t′ is a dummy variable. The integration is over all values of t′ and it does not appear in the final result, which depends only on the functional forms of f and h. Note also that if f and g are expressed in the same units, then h has the dimensions of s−1.

If input f is a δ function at time t0, then

 

 

g(t) = ∫_{-∞}^{∞} δ(t′ − t0) h(t, t′) dt′ = h(t, t0).   (12.2)

We see that h(t, t′) is the impulse response of the system to an impulse at time t′. If the impulse response of a linear system is known, it is possible to calculate the response to any arbitrary input.

If, in addition to being linear, the system responds to an impulse the same way regardless of when it occurs, the system is said to be stationary. In the hi-fi example, this means that no one is adjusting the volume or tone controls. For a stationary system the impulse response depends only on the time difference t − t′:

h(t, t′) = h(t − t′),   (12.3)

and the superposition integral takes the form

 

 

g(t) = ∫_{-∞}^{∞} f(t′) h(t − t′) dt′.   (12.4a)

This is called the convolution integral. It is often abbreviated as

g(t) = f(t) ∗ h(t).   (12.4b)

For the hi-fi system the function h(t − t′) is zero for all t′ larger (later) than t; the response does not depend on future inputs. For the images we will be considering shortly, where the variables represent positions in the object and image, h can exist for negative arguments.

We saw an example of the impulse response in Sec. 11.15, where we found that the solution of the differential equation for the system was a step exponential, Eq. 11.83. For that simple linear system we can write

h(t − t′) = 0,  t < t′,
h(t − t′) = (1/τ1) e^{−(t − t′)/τ1},  t > t′.   (12.5)

We have seen superposition integrals before: for one-dimensional diffusion (Eq. 4.73) and for the potential (Eq. 7.21) and magnetic field (Eq. 8.12) outside a cell.

There is an important relationship between the Fourier transforms of the functions appearing in the convolution integral, which was hinted at in Sec. 11.15. If the sine and cosine transforms of function h are denoted by Ch(ω) and Sh(ω), with similar notation for f and g, the relationships can be written

Cg(ω) = Cf(ω)Ch(ω) − Sf(ω)Sh(ω),
Sg(ω) = Cf(ω)Sh(ω) + Sf(ω)Ch(ω).   (12.6a)

This is called the convolution theorem. If we were using complex exponential notation, the Fourier transforms would be related by

G(ω) = F (ω)H(ω).

(12.6b)

The convolution of two functions in time is equivalent to multiplying their Fourier transforms.
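A quick numerical illustration of the convolution theorem with two arbitrary random sequences (zero-padding the transforms so the circular FFT product reproduces the full linear convolution):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(256)
h = rng.standard_normal(256)

g_direct = np.convolve(f, h)        # direct linear convolution, length 511

M = f.size + h.size - 1             # pad to the full convolution length
g_fft = np.fft.ifft(np.fft.fft(f, M) * np.fft.fft(h, M)).real
```

The two results agree to machine precision; for long records the FFT route is far faster than the direct sum.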

Equations 12.6a are similar to the addition formulas for sines and cosines, which are of course used in the derivation. To derive them, we take the Fourier transforms of f and h:

f(t′) = (1/2π) ∫_{-∞}^{∞} [Cf(ω) cos ωt′ + Sf(ω) sin ωt′] dω,

h(t − t′) = (1/2π) ∫_{-∞}^{∞} [Ch(ω) cos ω(t − t′) + Sh(ω) sin ω(t − t′)] dω.

Then

g(t) = ∫_{-∞}^{∞} f(t′) h(t − t′) dt′
     = (1/2π)² ∫_{-∞}^{∞} dt′ ∫_{-∞}^{∞} dω [Cf(ω) cos ωt′ + Sf(ω) sin ωt′]
       × ∫_{-∞}^{∞} dω′ [Ch(ω′) cos ω′(t − t′) + Sh(ω′) sin ω′(t − t′)].

We can use the trigonometric addition formulas and the fact that sin(−ω′t′) = −sin ω′t′ to rewrite and expand this expression, much as we did in the last chapter. Carrying out the integration over t′ first and using the properties of integrals of the δ function gives

g(t) = (1/2π) ∫_{-∞}^{∞} [Cf(ω)Ch(ω) − Sf(ω)Sh(ω)] cos ωt dω
     + (1/2π) ∫_{-∞}^{∞} [Cf(ω)Sh(ω) + Sf(ω)Ch(ω)] sin ωt dω.

Comparison of this with Eqs. 11.55 proves Eq. 12.6a.

Fourier techniques need not be restricted to frequency and time. The quality and resolution of the image on the retina, an x-ray film, or a photograph are best described in terms of spatial frequency. The distance across the image in some direction is x, and a sinusoidal variation in the image would have the form A(k) sin(kx − φ). The angular spatial frequency k has units of radians per meter. It is k = 2π/λ, where λ is the wavelength, in analogy to ω = 2π/T. Alternatively, we can use the spatial frequency 1/λ, with units of cycles per meter or cycles per millimeter.

12.1.2 Two Dimensions

The convolution and Fourier transform in two dimensions are needed to analyze the response of a system that forms a two-dimensional image of a two-dimensional object. The object can be represented by function f(x′, y′) in the object plane. The image is given by a function g(x, y) in the image plane:

g(x, y) = ∫∫_{-∞}^{∞} f(x′, y′) h(x, x′; y, y′) dx′ dy′.   (12.7)

If the contribution of object point (x′, y′) to the image at (x, y) depends only on the relative distances x − x′ and y − y′, then the two-dimensional impulse response is h(x − x′, y − y′), and the image is obtained by the two-dimensional convolution

g(x, y) = ∫∫_{-∞}^{∞} f(x′, y′) h(x − x′, y − y′) dx′ dy′   (12.8a)

or

g(x, y) = f(x, y) ∗ h(x, y).   (12.8b)

The Fourier transform in two dimensions is defined by

f(x, y) = (1/2π)² ∫_{-∞}^{∞} dkx ∫_{-∞}^{∞} dky [C(kx, ky) cos(kx x + ky y) + S(kx, ky) sin(kx x + ky y)].   (12.9a)

The coefficients are given by

C(kx, ky) = ∫∫_{-∞}^{∞} dx dy f(x, y) cos(kx x + ky y),   (12.9b)

S(kx, ky) = ∫∫_{-∞}^{∞} dx dy f(x, y) sin(kx x + ky y).   (12.9c)

The Fourier transforms of the functions in the convolution are related by equations similar to those for the one-dimensional convolution:

Cg(kx, ky) = Cf(kx, ky)Ch(kx, ky) − Sf(kx, ky)Sh(kx, ky),
Sg(kx, ky) = Cf(kx, ky)Sh(kx, ky) + Sf(kx, ky)Ch(kx, ky).   (12.10)

With complex notation we would define the two-dimensional Fourier transform pair by

F(kx, ky) = ∫∫_{-∞}^{∞} f(x, y) e^{-i(kx x + ky y)} dx dy,

f(x, y) = (1/2π)² ∫∫_{-∞}^{∞} F(kx, ky) e^{i(kx x + ky y)} dkx dky,   (12.11a)

and the convolution theorem would be

G(kx, ky) = F(kx, ky) H(kx, ky).   (12.11b)

12.2 The Relationship Between the Object and the Image

12.2.1 Point-Spread Function

Suppose that an object in the x′y′ plane is described by a function L(x′, y′) that varies from place to place on the object. The image is

Eimage(x, y) = ∫∫ L(x′, y′) h(x, y; x′, y′) dx′ dy′.   (12.12)

Function h is called the point-spread function. The point-spread function tells how information from a point source at (x′, y′) spreads out over the image plane. It receives its name from the following. If we imagine that the object is a point described by L(x′, y′) = δ(x′ − x0)δ(y′ − y0), then integration shows that

Eimage = h(x, y; x0, y0).

The point-spread function has the same functional form as the image from a point source, just as did the impulse response in one dimension.

You can verify that the point-spread function for an ideal imaging system with magnification m is

h(x, y; x′, y′) = m²δ(x − mx′)δ(y − my′).   (12.13)

The δ functions pick out the values (x′ = x/m, y′ = y/m) in the object plane to contribute to the image at (x, y). You can make the verification by substituting Eq. 12.13 in Eq. 12.12 and using the properties of the δ function from Eq. 11.62.

This discussion assumes that intensities add. This is true when the oscillations of the radiant energy (such as the electric field for light waves) have random phases lasting for a time short compared to the measurement time. Such radiant energy is called incoherent.1

We have already seen that when the impulse response in a one-dimensional system depends on coordinate differences such as t − t′ (or x − x′ or x − mx′), the system is stationary. In this case it is also said to be space invariant: changing the position of the object changes the position of the image but not its functional form. Stationarity is easier to obtain in a system such as a hi-fi system than in an imaging system, but we usually assume that it holds in an imaging system as well. For a space-invariant system

Eimage(x, y) = ∫∫ L(x′, y′) h(x − mx′, y − my′) dx′ dy′.   (12.14)

1These arguments also work for coherent radiation, where the phases are important, but the point-spread function is for the amplitude of the wave instead of the square of the amplitude (intensity). The calculation then gives rise to interference and diffraction effects.
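A numerical sketch of the space-invariant case (the Gaussian point-spread function, grid size, width, and source position are all chosen purely for illustration): the image of a point object is just the point-spread function translated to the source position, computed here through the convolution theorem, Eq. 12.11b.

```python
import numpy as np

n = 64
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x, indexing="ij")

obj = np.zeros((n, n))
obj[n // 2 + 5, n // 2 - 3] = 1.0              # point source, off-center

sigma = 2.0                                    # assumed PSF width, for illustration
psf = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
psf /= psf.sum()                               # normalize so total intensity is preserved
psf = np.fft.ifftshift(psf)                    # move the PSF peak to the array origin

# Convolution theorem, Eq. 12.11b: multiply the transforms, then invert
img = np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)).real
```

The resulting image peaks at the source position and has the Gaussian shape of the point-spread function, and the total intensity of the object is preserved.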