
Noise and the Design of Low-Noise Amplifiers for Biomedical Applications


Thus, F for this LIA architecture is found to be:

F = \frac{SNR_{in}}{SNR_{out}} = \frac{\pi^2}{2^3}\,\frac{B_{out}}{B_{in}} = 1.23\,\frac{B_{out}}{B_{in}} \qquad (9.126)

 

which can be made quite small by adjusting the ratio of B_out to B_in.
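To get a feel for Equation 9.126, the short sketch below evaluates F and the corresponding SNR improvement for a pair of hypothetical bandwidths (the values are illustrative, not taken from the text):

```python
import math

def lia_noise_factor(B_out_hz: float, B_in_hz: float) -> float:
    """Noise factor of the LIA per Eq. 9.126: F = (pi^2 / 2^3) * (B_out / B_in)."""
    return (math.pi ** 2 / 8.0) * (B_out_hz / B_in_hz)

# Hypothetical bandwidths: 1-Hz output LPF noise bandwidth, 10-kHz input noise bandwidth.
F = lia_noise_factor(1.0, 10e3)
print(f"F = {F:.3e}")                                   # much less than 1: the LIA improves the SNR
print(f"SNR improvement = {-10 * math.log10(F):.1f} dB")
```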

As an example of signal recovery by an LIA, consider an LIA of the analog multiplier architecture. Let the signal to be measured be v_s(t) = 20×10⁻⁹ cos(2π×50×10³ t) V. The Gaussian white noise present with the signal has a root power spectrum of η = 4 nV RMS/√Hz. The input amplifier has a voltage gain of K_v = 10⁴ with a 100-kHz noise and signal bandwidth. Thus, the raw input MS SNR is:

SNR_{in} = \frac{(20 \times 10^{-9})^2 / 2}{(4 \times 10^{-9})^2 \times 10^5} = \frac{2 \times 10^{-16}\ \text{MSV}}{1.6 \times 10^{-12}\ \text{MSV}} = 1.25 \times 10^{-4}, \ \text{or} \ -39\ \text{dB} \qquad (9.127)

If the sinusoidal signal is conditioned by an (ideal) unity-gain BPF with Q = 50 = center frequency/bandwidth, the noise bandwidth is B = 50×10³/50 = 1 kHz. The MS SNR at the output of the ideal BPF is still poor:

SNR_{filt} = \frac{2 \times 10^{-16}\ \text{MSV}}{(4 \times 10^{-9})^2 \times 10^3} = 1.25 \times 10^{-2}, \ \text{or} \ -19\ \text{dB} \qquad (9.128)

 

 

 

 

 

Now the signal plus noise is amplified and passed directly into the LIA; the LIA's output LPF has a noise bandwidth of 0.125 Hz. The dc component of the output signal is K_v·V_s/2 V. Thus, the MS dc output signal is v²_so = K_v²·V_s²/4 = K_v²×1×10⁻¹⁶ MSV. The MS noise output is v²_no = K_v²×(4×10⁻⁹)²×0.125 MSV; thus the MS SNR_out = 50, or +17 dB.
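The arithmetic of this example is easy to verify; the following sketch recomputes Equations 9.127 and 9.128 and the LIA output SNR using only the values given in the text:

```python
import math

Vs   = 20e-9        # signal amplitude, V (20 nV at 50 kHz)
eta  = 4e-9         # white-noise root power spectrum, V RMS / sqrt(Hz)
Kv   = 1e4          # input amplifier voltage gain
B_in = 100e3        # input noise and signal bandwidth, Hz
B_bp = 1e3          # ideal BPF noise bandwidth (Q = 50 at 50 kHz), Hz
B_lp = 0.125        # LIA output LPF noise bandwidth, Hz

db = lambda ms_ratio: 10 * math.log10(ms_ratio)   # MS SNR -> dB

snr_in   = (Vs**2 / 2) / (eta**2 * B_in)          # Eq. 9.127: 1.25e-4, about -39 dB
snr_filt = (Vs**2 / 2) / (eta**2 * B_bp)          # Eq. 9.128: 1.25e-2, about -19 dB
# Analog-multiplier LIA: dc output Kv*Vs/2, so MS dc signal = Kv^2 * Vs^2 / 4.
snr_out  = (Kv**2 * Vs**2 / 4) / (Kv**2 * eta**2 * B_lp)   # 50, about +17 dB

print(f"SNR_in   = {snr_in:.3g} ({db(snr_in):+.0f} dB)")
print(f"SNR_filt = {snr_filt:.3g} ({db(snr_filt):+.0f} dB)")
print(f"SNR_out  = {snr_out:.3g} ({db(snr_out):+.0f} dB)")
```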

The costs of using an LIA are that the ac signal is reduced to a proportional dc level and that, because of the output LPF, the LIA does not reach a new steady-state output following a change in V_s until about three time constants of the LPF have elapsed (about 3 sec in the preceding example). The benefit of using an LIA is that coherent signals buried in noise as much as 60 dB greater than the signal can be measured.

9.8.7 Signal Averaging of Evoked Signals for Signal-to-Noise Ratio Improvement

9.8.7.1 Introduction

The signal-to-noise ratio (SNR) of a periodic signal recorded with additive noise is an important figure of merit that characterizes the expected resolution of the signal. SNRs are typically given at the input to a signal conditioning system, as well as at its output. SNR can be expressed as a positive real number or in decibels (dB). The SNR can be calculated from the MS, RMS, or peak signal voltage divided by the MS or RMS noise voltage in a defined noise bandwidth. If the MS SNR is computed, SNR(dB) = 10 log₁₀(msSNR); if the RMS SNR is used, SNR(dB) = 20 log₁₀(rmsSNR).
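Because the MS SNR is simply the square of the RMS SNR, the two dB conventions agree; a one-line check with a hypothetical ratio:

```python
import math

rms_snr = 7.5                  # hypothetical RMS signal-to-noise ratio
ms_snr = rms_snr ** 2          # the MS SNR is the square of the RMS SNR
assert abs(10 * math.log10(ms_snr) - 20 * math.log10(rms_snr)) < 1e-12
print(f"SNR = {20 * math.log10(rms_snr):.2f} dB by either convention")
```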

Signal averaging is widely used in experimental and clinical electrophysiology in order to extract a repetitive, quasi-deterministic, electrophysiological transient response buried in broadband noise. One example of the type of signal extracted is the evoked cortical response recorded directly from the surface of the brain (or from the scalp) by an electrode pair while the subject is given a repetitive periodic sensory stimulus, such as a tone burst, flash of light, or tachistoscopically presented picture. Every time the stimulus is given, a "hardwired" electrophysiological transient voltage, s_j(t), lasting several hundred ms is produced in the brain by the massed activity of cortical and deeper neurons. When viewed directly on an oscilloscope or recorder, each individual evoked cortical response is invisible to the eye because of the accompanying noise.

Signal averaging is generally used to extract the evoked potential, s(t), from the noise accompanying it. Signal averaging is also used to extract evoked cortical magnetic field transients recorded with SQUID sensors (Northrop, 2002) and to extract multifocal electroretinogram (ERG) signals obtained when testing the competence of macular cones in the retina of the eye. A small spot of light illuminating only a few cones is repetitively flashed on the macular retina. ERG averaging is done over N flashes for a given spot position in order to extract the local ERG flash response, then the spot is moved to a new, known position on the macula and the process is repeated until a 2-D macular ERG response of one eye is mapped (Northrop, 2002).

Signal averaging is ensemble averaging; following each identical periodic stimulus, the response can be written for the jth stimulus:

 

x_j(t) = s_j(t) + n_j(t), \quad 1 \le j \le N \qquad (9.129)

where s_j(t) is the jth evoked transient response and n_j(t) is the noise following the jth stimulus; t is local time, taken as zero at the instant the jth stimulus is given.

The noise is assumed to be generally nonstationary, i.e., its statistics are affected by the stimulus. Assume that the noise has zero mean, however, regardless of time following any stimulus, i.e., E{n(t)} = 0, 0 ≤ t < Ti. Ti is the interstimulus interval. Also, to be general, assume that the evoked response varies from stimulus to stimulus; that is, sj(t) is not exactly the same as sj+1(t), etc.

Each xj(t) is assumed to be sampled and digitized beginning with each stimulus; the sampling period is Ts and M samples are taken following each stimulus. Thus, there are N sets of sampled xj(k), 0 ≤ k ≤ (M − 1); also, (M − 1)Ts = TD < Ti. TD is the total length of analog xj(t) digitized following each input stimulus (epoch length).
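To make the epoch bookkeeping concrete, here is a minimal sketch (hypothetical parameter values and function names, not from the text) that slices a continuous sampled record into N epochs of M samples, one per stimulus trigger, so that x_epochs[j, k] plays the role of x_j(k) in Equation 9.129:

```python
import numpy as np

def extract_epochs(x: np.ndarray, trigger_indices, M: int) -> np.ndarray:
    """Return an (N, M) array of epochs; row j holds x_j(k), k = 0..M-1,
    starting at the sample where the jth stimulus trigger occurred."""
    epochs = [x[i0:i0 + M] for i0 in trigger_indices if i0 + M <= len(x)]
    return np.asarray(epochs)

# Hypothetical numbers: fs = 1 kHz (Ts = 1 ms), epoch length TD = 0.5 s -> M = 500,
# interstimulus interval Ti = 1 s -> one trigger every 1000 samples.
fs, M = 1000, 500
x = np.random.randn(10 * fs)                 # stand-in for the recorded x(t)
triggers = np.arange(0, len(x), fs)          # one stimulus per second
x_epochs = extract_epochs(x, triggers, M)    # shape (N, M)
print(x_epochs.shape)
```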

When the jth stimulus is given, the jth sample x_j(k) is summed into the kth data register with the preceding values of x(k), for each k. At the end of an experimental run, the kth data register holds [x_1(k) + x_2(k) + x_3(k) + … + x_N(k)]. Figure 9.21 illustrates the organization of a signal averager. Early signal averagers were stand-alone, dedicated instruments.


 

 

FIGURE 9.21
Block diagram of a signal averager. (Signal path: input x = s + n → anti-aliasing filter (AAF) → sample-and-hold and ADC (sample, hold, convert and readout) → samples x_j(k) → 12-bit × M shift-register buffer → M-register main memory, k = 0 … M − 1, with a running normalizer forming W = (1/N) Σ_{j=1}^{N} x_j(k) for the display. An averaging controller (set to N averages) and a trigger comparator, whose output is HI if V_T > V_φ with threshold V_φ set by the controlling computer, synchronize sampling with the stimulus.) The memory and averaging controller are actually in the computer and are drawn outside for clarity.

Modern signal averagers typically use a PC or laptop computer with a dual-channel A/D interface to handle the trigger event that initiates sampling of the evoked transient plus noise, x_j(t) = s_j(t) + n_j(t). Modern averagers give a running display in which the main register contents are continually divided by the running j value as j goes from 1 to N.
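The accumulate-and-normalize operation just described is straightforward to simulate. The sketch below (synthetic data and hypothetical parameter values) buries a repeatable "evoked" transient in white noise and shows the running average cleaning up roughly as 1/√N:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, M, N = 1000, 500, 256                 # sample rate (Hz), samples per epoch, epochs
k = np.arange(M)
s = 5e-6 * np.exp(-k / 100) * np.sin(2 * np.pi * 10 * k / fs)   # 5-uV "evoked" transient
sigma_n = 50e-6                                                  # 50-uV RMS additive noise

accum = np.zeros(M)                              # the M-register main memory
for j in range(1, N + 1):
    x_j = s + sigma_n * rng.standard_normal(M)   # x_j(k) = s_j(k) + n_j(k)
    accum += x_j                                 # sum into the k registers
    xbar = accum / j                             # running normalizer: divide by j
    if j in (1, 16, N):
        resid_rms = np.sqrt(np.mean((xbar - s) ** 2))
        print(f"N = {j:3d}: residual noise = {resid_rms*1e6:6.2f} uV RMS "
              f"(expected ~{sigma_n/np.sqrt(j)*1e6:.2f})")
```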

9.8.7.2 Analysis of SNR Improvement by Averaging

The average contents of the kth register after N epochs are sampled can be written formally

 

 

\overline{x(k)}_N = \frac{1}{N}\sum_{j=1}^{N} s_j(k) + \frac{1}{N}\sum_{j=1}^{N} n_j(k), \quad 0 \le k \le (M-1) \qquad (9.130)

 

 


where the left-hand summation is the signal sample mean at t = kTs after N stimuli and the right-hand summation is the noise sample mean at t = kTs after N stimuli.

It has been shown that the variance of the sample mean is a statistical measure of its noisiness. In general, the larger N is, the smaller the noise variance will be. The variance of the sample mean of x(k) is written as:

 

 

 

 

\mathrm{Var}\{\overline{x(k)}_N\} = E\left\{\left[\frac{1}{N}\sum_{j=1}^{N} x_j(k)\right]^2\right\} - \overline{x(k)}^2

= E\left\{\left[\frac{1}{N}\sum_{j=1}^{N} x_j(k)\right]\left[\frac{1}{N}\sum_{i=1}^{N} x_i(k)\right]\right\} - \overline{x(k)}^2

= \frac{1}{N^2}\sum_{j=1}^{N} E\{x_j^2(k)\} + \frac{1}{N^2}\sum_{j=1}^{N}\sum_{i=1,\, i \ne j}^{N} E\{x_j(k)\,x_i(k)\} - \overline{x(k)}^2 \qquad (9.131)

(The first sum contains the N terms with like indices, j = i; the double sum contains the N² − N terms with unlike indices.)

 

 

 

 

 

 

 

Now, for the N squared (j = i) terms:

 

E\{x_j^2(k)\} = E\{[s_j(k) + n_j(k)]^2\} = E\{s_j^2(k)\} + 2\,E\{s_j(k)\}\,E\{n_j(k)\} + E\{n_j^2(k)\}

= \sigma_s^2(k) + \overline{s(k)}^2 + \sigma_n^2(k) \qquad (9.132)

and, for the (N² − N) terms with unlike indices:

E\{x_j(k)\,x_i(k)\} = E\{[s_j(k) + n_j(k)][s_i(k) + n_i(k)]\}

= E\{s_j(k)\,s_i(k)\} + E\{n_j(k)\,n_i(k)\} + E\{s_j(k)\,n_i(k) + s_i(k)\,n_j(k)\} \qquad (9.133)

 

Several important assumptions are generally applied to the preceding equation. First, noise and signal are uncorrelated and statistically independent, which means that E{s·n} = E{s}·E{n} = E{s}·0 = 0. Also, assuming that noise samples taken at or more than T seconds apart are uncorrelated leads to E{n_j(k)·n_i(k)} = E{n_j(k)}·E{n_i(k)} = 0. So E{x_j(k)·x_i(k)} = E{s_j(k)·s_i(k)}. It is also assumed that s_j(k) and s_i(k) taken at or more than T seconds apart are independent. So, finally:

E\{x_j(k)\,x_i(k)\} = E\{s_j(k)\,s_i(k)\} = E\{s_j(k)\}\,E\{s_i(k)\} = \overline{s(k)}^2 \qquad (9.134)

Now, putting all the terms together:

 

 

 

\mathrm{Var}\{\overline{x(k)}_N\} = \frac{1}{N^2}\Big\{N\big[\sigma_s^2(k) + \overline{s(k)}^2 + \sigma_n^2(k)\big] + (N^2 - N)\,\overline{s(k)}^2\Big\} - \overline{s(k)}^2 \qquad (9.135A)

\mathrm{Var}\{\overline{x(k)}_N\} = \frac{\sigma_s^2(k) + \sigma_n^2(k)}{N} \qquad (9.135B)

The variance of the sample mean for the kth sample following a stimulus is a measure of the noisiness of the averaging process. The variance of the averaged signal, x̄(k)_N, is seen to decrease as 1/N, where N is the total number of stimuli given and of responses averaged.
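Equation 9.135B can be confirmed by a quick Monte Carlo experiment; the sketch below (hypothetical statistics for a single sample index k) compares the measured variance of the sample mean with (σ_s²(k) + σ_n²(k))/N:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 64, 20000
s_mean, sigma_s, sigma_n = 2.0, 0.5, 3.0     # hypothetical statistics of s_j(k) and n_j(k)

# Each trial: average N independent epochs at the same sample index k.
s = s_mean + sigma_s * rng.standard_normal((trials, N))   # s_j(k), varies epoch to epoch
n = sigma_n * rng.standard_normal((trials, N))            # n_j(k), zero mean
xbar_N = (s + n).mean(axis=1)                             # sample mean over N epochs

print("measured Var{xbar_N}              =", xbar_N.var())
print("predicted (sigma_s^2+sigma_n^2)/N =", (sigma_s**2 + sigma_n**2) / N)
```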

Another measure of the effectiveness of signal averaging is the noise factor, F ≡ (S_in/N_in)/(S_o/N_o), where S_in is the mean-squared input signal to the averaging process, N_in is the MS input noise, S_o is the MS output signal, and N_o is the MS output noise. The noise factor is normally used as a figure of merit for amplifiers. Because amplifiers generally add noise to the input signal and noise, the output signal-to-noise ratio (SNR) is less than the input SNR; thus, for a nonideal amplifier, F > 1. For an ideal noiseless amplifier, F = 1. The exception to this behavior is in signal averaging, in which the averaging process generally produces an output SNR greater than the input SNR, making F < 1. Note that the noise figure of a signal conditioning system is defined as:

NF \equiv 10 \log_{10}(F)\ \text{dB} \qquad (9.136)

From the preceding calculations on the averaging process:

 

S_{in}(k) = E\{s_j^2(k)\} = \sigma_s^2(k) + \overline{s(k)}^2 \qquad (9.137)

N_{in}(k) = E\{n_j^2(k)\} = \sigma_n^2(k) \qquad (9.138)

S_o(k) = E\{\overline{s(k)}_N^2\} = \frac{\sigma_s^2(k)}{N} + \overline{s(k)}^2 \qquad (9.139)

 

 

 

 

 

 

 

 

 

 

 

 

 

 


 

N_o(k) = \frac{\sigma_n^2(k) + \sigma_s^2(k)}{N} \qquad (9.140)

 

These terms can be put together to calculate the noise factor of the averaging process:

 

F = \frac{\big[\sigma_s^2(k) + \overline{s(k)}^2\big]\big[1 + \sigma_s^2(k)/\sigma_n^2(k)\big]}{\sigma_s^2(k) + N\,\overline{s(k)}^2} \qquad (9.141)

 

 

Note that if the evoked transient is exactly the same for each stimulus, σ_s²(k) = 0 and F = 1/N.
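A small numerical check of Equation 9.141 (with hypothetical values for σ_s², σ_n², and s̄²) shows F approaching 1/N as the response variability σ_s²(k) goes to zero:

```python
def averaging_noise_factor(N, var_s, var_n, s_mean_sq):
    """Eq. 9.141: F = [(var_s + s^2)(1 + var_s/var_n)] / (var_s + N*s^2)."""
    return (var_s + s_mean_sq) * (1 + var_s / var_n) / (var_s + N * s_mean_sq)

for var_s in (0.0, 0.1):                       # deterministic vs. slightly variable response
    for N in (16, 256):
        F = averaging_noise_factor(N, var_s, var_n=4.0, s_mean_sq=1.0)
        print(f"var_s = {var_s:.1f}, N = {N:3d}: F = {F:.4f} (1/N = {1/N:.4f})")
```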

The reader should appreciate that this is an idealized situation; in practice, a constant level of noise, σ_a², is present on the output of the signal averager. This noise comes from signal conditioning amplifiers, quantization accompanying analog-to-digital conversion, and arithmetic round-off. The averager MS output noise can thus be written:

N_o = \frac{\sigma_n^2(k) + \sigma_s^2(k)}{N} + \sigma_a^2 \ \ \text{mean-squared volts} \qquad (9.142)

and, as before, the MS signal is:

S_o(k) = \frac{\sigma_s^2(k)}{N} + \overline{s(k)}^2 \qquad (9.143)

The averaged MS output SNR is just:

SNR_o = \frac{\sigma_s^2(k) + N\,\overline{s(k)}^2}{\sigma_s^2(k) + \sigma_n^2(k) + N\,\sigma_a^2} \qquad (9.144)

and, if the evoked response is deterministic, σ_s²(k) = 0, then:

SNR_o = \frac{N\,\overline{s(k)}^2}{\sigma_n^2(k) + N\,\sigma_a^2} \qquad (9.145)

Note that if the number, N, of stimuli and responses averaged becomes very large, then

 

 

 

 

 

SNR_o \rightarrow \frac{\overline{s(k)}^2}{\sigma_a^2} \qquad (9.146)
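Equation 9.145 implies that once Nσ_a² dominates σ_n²(k), further averaging buys little, and SNR_o saturates at the Equation 9.146 limit. A brief sketch with hypothetical mean-squared values:

```python
s_mean_sq = 1.0e-12      # hypothetical MS evoked-response value at sample k, V^2
var_n     = 1.0e-9       # hypothetical MS physiological/amplifier noise at the input, V^2
var_a     = 1.0e-13      # hypothetical MS averager noise (quantization, round-off), V^2

for N in (1, 10, 100, 1000, 10000):
    snr_o = N * s_mean_sq / (var_n + N * var_a)       # Eq. 9.145 (deterministic response)
    print(f"N = {N:5d}: SNR_o = {snr_o:8.3f}")
print("plateau s^2/sigma_a^2 =", s_mean_sq / var_a)   # Eq. 9.146 limit
```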

 
