Full-Field OCM | 311
Second, because the use of a polarization interferometer is imposed by the modulation method, artifacts may occur when imaging structures having a very strong polarization backscattering dependence, such as collagen or nerve fibers.
Finally, in contrast with scanning OCM systems, full-field illumination precludes the use of a confocal spatial filter to enhance scattered light rejection. Nevertheless, we show in Section 11.4.2 that when using high NA objectives a resolution similar to that of confocal microscopes is achieved.
11.3.4 Signal Acquisition and Processing
Instrumentation Principles
When the round-trip optical path difference between the object and reference beams is smaller than the coherence length of the source (see Section 11.4.1), the two beams interfere. The intensity I(t) as a function of time on each pixel of the CCD camera can then be expressed as

I(t) = I_0 + A_S^2 + A_R^2 + 2 A_S A_R \cos(\varphi + \psi \sin \omega t)        (11)

where I_0 is the intensity of the incoherent light (which does not interfere with the light from the reference beam), A_S \exp(i\varphi_S) and A_R \exp(i\varphi_R) are the complex amplitudes of the mutually coherent waves reflected by the object and by the reference mirror, respectively, and \varphi = \varphi_R - \varphi_S. As described earlier, a photoelastic modulator introduces a sinusoidal phase variation of amplitude \psi and frequency f = \omega/2\pi = 50 kHz between the object and reference waves. I(t) contains a constant noninterference term and a time-modulated interference term that is proportional to the amplitude A_S. Using the nth Bessel function of the first kind, J_n, the intensity I(t) can be expressed as
I(t) = I_0 + A_S^2 + A_R^2 + 2 A_S A_R J_0(\psi) \cos\varphi
       + 4 A_S A_R \cos\varphi \sum_{n=1}^{+\infty} J_{2n}(\psi) \cos(2n\omega t)          (12)
       - 4 A_S A_R \sin\varphi \sum_{n=0}^{+\infty} J_{2n+1}(\psi) \sin((2n+1)\omega t)
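This expansion is the Jacobi-Anger identity applied to the cosine of Eq. (11). As a quick sanity check, the truncated series can be compared numerically against the closed form (a sketch; the values of psi and phi are arbitrary):

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

psi, phi = 1.8, 0.6                      # arbitrary modulation depth and phase
wt = np.linspace(0.0, 2 * np.pi, 1001)   # omega*t over one modulation period
exact = np.cos(phi + psi * np.sin(wt))

n = np.arange(1, 30)[:, None]            # truncation order, column for broadcasting
series = (np.cos(phi) * jv(0, psi)
          + 2 * np.cos(phi) * np.sum(jv(2 * n, psi) * np.cos(2 * n * wt), axis=0)
          - 2 * np.sin(phi) * np.sum(jv(2 * n - 1, psi) * np.sin((2 * n - 1) * wt), axis=0))

print(np.max(np.abs(exact - series)))    # machine-precision agreement
```

Thirty terms are far more than needed here; J_n(psi) decays rapidly once n exceeds psi.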
The light emitted by the LED is actually also modulated, at the modulator's resonant frequency f = 50 kHz. Four successive square-wave modulations M_p(t) are applied to the LED supply current (Fig. 6). The square modulations M_p(t) can be written as a Fourier series decomposition:

M_{p=0,1,2,3}(t) = \frac{1}{4} + \frac{1}{2} \sum_{n=1}^{+\infty} \frac{\sin(n\pi/4)}{n\pi/4} \cos(n(\omega t + p\pi/2))        (13)
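A short numerical check of this decomposition for p = 0 (a sketch; the period T is arbitrary, and the gate is taken as the quarter-period window centered on t = 0 implied by the series coefficients):

```python
import numpy as np

T = 1.0                                   # arbitrary period
w = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 2001)
gate = (np.abs(t) < T / 8).astype(float)  # 25% duty-cycle window, centered on 0

n = np.arange(1, 2000)[:, None]
series = 0.25 + 0.5 * np.sum(
    np.sin(n * np.pi / 4) / (n * np.pi / 4) * np.cos(n * w * t), axis=0)

# Away from the two edges the partial sum converges to the gate
interior = np.abs(np.abs(t) - T / 8) > T / 50
print(np.max(np.abs(series - gate)[interior]))   # small residual; Gibbs ringing sits at the edges
```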
The intensity received by each pixel of the CCD camera is then the product I(t) M_p(t). Because the camera readout frequency of 200 Hz is much lower than the modulator frequency of 50 kHz, the signal delivered by each pixel of the CCD camera array is proportional to the time average \langle I(t) M_p(t) \rangle. Four images corresponding to the four time shifts applied to the modulation M_p(t) are successively recorded, the signal delivered by each pixel of these four images being
312 |
Saint-Jalmes et al. |
Figure 6 Phase sequencing. Square-wave modulations MpðtÞ are applied to the current supply of the light-emitting diode (LED).
S_{p=0,1,2,3} \propto \langle I(t) M_p(t) \rangle
  = \frac{1}{4} \left[ I_0 + A_S^2 + A_R^2 + 2 A_S A_R J_0(\psi) \cos\varphi \right]
    + \frac{4}{\pi} A_S A_R \cos\varphi \sum_{n=1}^{+\infty} \frac{J_{2n}(\psi)}{2n} \sin(n\pi/2) \cos(np\pi)        (14)
    + \frac{4}{\pi} A_S A_R \sin\varphi \sum_{n=0}^{+\infty} \frac{J_{2n+1}(\psi)}{2n+1} \sin((2n+1)\pi/4) \sin((2n+1)p\pi/2)
Calculation of the linear combinations Y and Z in Eqs. (15) and (16) gives access to the product of the amplitude A_S of the wave backscattered by the object and the constant amplitude A_R of the wave reflected by the reference mirror. The optical phase \varphi can also be obtained [see Eq. (17)].
Y = 2(S_3 - S_1) = \frac{16}{\pi} \Gamma_1 A_S A_R \sin\varphi        (15)

Z = S_0 - S_1 + S_2 - S_3 = \frac{16}{\pi} \Gamma_2 A_S A_R \cos\varphi

with

\Gamma_1 = \sum_{n=0}^{+\infty} (-1)^n \frac{J_{2n+1}(\psi)}{2n+1} \sin((2n+1)\pi/4)        (16)

\Gamma_2 = \sum_{n=0}^{+\infty} (-1)^n \frac{J_{4n+2}(\psi)}{4n+2}

A_S \propto \left( \frac{Y^2}{\Gamma_1^2} + \frac{Z^2}{\Gamma_2^2} \right)^{1/2}        (17)

\varphi = \arctan\left( \frac{\Gamma_2 Y}{\Gamma_1 Z} \right)
We point out that interference microscopes detect the amplitude of the optical wave reflected by the object rather than its intensity. Intensity images (as produced by a classical microscope) can, of course, be obtained by calculating the squared
amplitude images. An interference microscope can also provide phase images, proportional (modulo \lambda/2) to the height between the surface of the object and the surface of the reference mirror [23,24]. Unwrapping the phase images gives a 3D representation of the surface.
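That last step can be sketched in a few lines. The surface profile and wavelength below are invented for illustration, and the phase-to-height conversion assumes the usual double-pass convention (2*pi of phase per lambda/2 of height):

```python
import numpy as np

lam = 0.84e-6                                    # illustrative wavelength (m)
x = np.linspace(0.0, 1e-3, 500)                  # lateral position (m)
height = 2e-6 * np.sin(2 * np.pi * x / 1e-3)     # invented smooth surface profile

phase = 4 * np.pi * height / lam                 # double pass: phi = 4*pi*h/lambda
wrapped = np.angle(np.exp(1j * phase))           # what a phase image actually delivers
recovered = np.unwrap(wrapped) * lam / (4 * np.pi)

print(np.max(np.abs(recovered - height)))        # essentially zero: profile recovered
```

Unwrapping succeeds here because the phase step between adjacent samples stays below pi; a rougher surface or sparser sampling would break this condition.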
In practice, our camera is operated at 200 frames per second (fps). Processed images can thus be produced at the rate of 50 per second. Several images are usually averaged to improve the signal-to-noise ratio (see Section 11.4.4).
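The whole four-phase chain of Eqs. (11)-(17) can be exercised numerically for a single pixel. The sketch below uses arbitrary values for psi, phi, and the amplitudes, and assumes each LED gate is delayed by pT/4 (with the opposite delay convention, Y changes sign):

```python
import numpy as np
from scipy.special import jv

f = 50e3                                   # primary modulation frequency (Hz)
T = 1.0 / f
w = 2 * np.pi * f
psi = 2.45                                 # arbitrary modulation amplitude (rad)
I0, AS, AR, phi = 0.3, 0.2, 1.0, 0.7       # arbitrary pixel values

N = 1 << 16                                # midpoint samples over one period
t = (np.arange(N) + 0.5) * (T / N)
I = I0 + AS**2 + AR**2 + 2 * AS * AR * np.cos(phi + psi * np.sin(w * t))  # Eq. (11)

def gate(t, p):
    """25% duty-cycle LED gate, assumed delayed by p*T/4 (Eq. 13)."""
    x = ((t / T - p / 4.0 + 0.5) % 1.0) - 0.5
    return np.abs(x) < 0.125

S = [np.mean(I * gate(t, p)) for p in range(4)]    # Eq. (14): <I(t) M_p(t)>

Y = 2 * (S[3] - S[1])                              # Eq. (15)
Z = S[0] - S[1] + S[2] - S[3]

n = np.arange(40)                                  # truncated sums of Eq. (16)
G1 = np.sum((-1.0)**n * jv(2*n + 1, psi) / (2*n + 1) * np.sin((2*n + 1) * np.pi / 4))
G2 = np.sum((-1.0)**n * jv(4*n + 2, psi) / (4*n + 2))

amp = (np.pi / 16) * np.hypot(Y / G1, Z / G2)      # Eq. (17), scaled to give AS*AR
phase = np.arctan2(Y / G1, Z / G2)                 # Eq. (17)

print(round(amp, 4), AS * AR)   # recovers the product AS*AR
print(round(phase, 4), phi)     # recovers the optical phase
```

Averaging over one full modulation period suffices because both I(t) and the gates are T-periodic; the camera effectively averages over hundreds of periods per exposure.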
Architecture Overview
Optical coherence microscopy hardware and software specifications are tabulated below.
Frequency reference     50 kHz
Light source            Switched LED
Primary modulation      Photoelastic (polarization)
Secondary modulation    Light source
Detection technique     Amplitude and phase (four-phase process)
Camera                  CA-D1-256S (Dalsa, Waterloo, ON, Canada)
Our design consists of three devices (Fig. 7): an image sensor (camera), a home-made electronic controller (‘‘sequencer’’), and a computer equipped with a frame grabber. To perform the signal detection, the camera is synchronized with the reference frequency by the sequencer. The frame grabber transfers the camera frames into the computer main memory. In real time, the computer performs the linear combinations of frames involved in the ‘‘multiplexed lock-in detection’’ and displays the result. Demodulated images are usually averaged to increase their SNR.
‘‘Multiplexed lock-in detection’’ is a general method used for several physical experiments (see Table 2). Various modulation frequencies, modulation methods, light sources, and cameras are used. Consequently, the system design has to be flexible, and, from this standpoint, several choices are made.
Figure 7 Synchronous imaging kernel architecture. Three devices, two elemental tasks (grab + processing), one hardware + software kernel.
Table 2  ‘‘Multiplexed Lock-In Detection’’ Based Experiments

                        Biological media        OCM experiments        Photothermal
                        speckle imaging [29]    [13,22,24]             imaging [30]
Frequency reference     2.25 MHz                50 kHz                 Variable
Primary modulation      Ultrasound              Photoelastic           Voltage
Secondary modulation    Light source            Light source           Light source
Light source            Switched laser          Switched LED           Switched LED
Camera                  256 x 256 pixels,       256 x 256 pixels,      256 x 256 pixels,
                        203 fps                 203 fps                203 fps
Detection technique     Amplitude and phase     Amplitude and phase    Amplitude
                        (four-image process)    (four-image process)   or phase
First, a simple frame grabber is used rather than one with an embedded computing unit such as a DSP or programmable logic device. The latter category obviously provides image processing facilities and computation power, but such boards have a major drawback: changing the frame grabber model means rewriting the processing code entirely, because each board has its own specific software architecture (language, function libraries). On the other hand, performing the image processing on the host computer allows the frame grabber to be replaced as easily as the camera, without important changes in the software.
Second, we use a conventional PC architecture (Wintel). Since all frame grabber manufacturers provide software drivers for this architecture, we can easily switch between different cameras and acquisition boards. This advantage can be extended to other PC boards necessary in a given experiment (motor controller boards, PIA boards, etc.).
Description of the Acquisition Elements
Both frame grabbing and processing are executed in parallel to achieve the best efficiency allowed by the experimental setups (see Section 11.2.2). This real-time process is the heart of the multiplexed lock-in technique. The frame grabber, the computer, and the sequencer together form a hardware and software kernel that we will subsequently refer to as the synchronous image kernel (SIK).
Obviously, the SIK technical part is complex, and it will not be described here. Three relevant technical topics are outlined in the next paragraphs:
1. Camera synchronization: camera operation, exposure time control
2. Sequencer: secondary modulation synthesis, camera synchronization, design
3. Frame grabbing and processing: camera data stream acquisition, memory management, double-buffering algorithm, computation speed optimization
This discussion points out some problems that are generally encountered with any real-time imaging instrument.
Camera (Image Sensor Subset)
At this time, all experimental setups use a Dalsa (Ontario, Canada) CA-D1-256S camera, embedding an IA-D1 Dalsa CCD image sensor. Figure 8 shows its spectral response and some specifications.
The frame transfer CCD architecture provides both measurement time optimization and a 100% filling factor. Typical frame transfer operation is described below, and CA-D1 timings are given.
Reading of the CCD matrix consists of two steps: high speed storage (HSS) and frame transfer (FT). Following an internal trigger, the HSS period starts and the image wells are transferred from the image area to the storage area. This period takes about M cycles of the pixel clock (M^2 being the number of pixels in the CCD matrix) and corresponds to the parallel shifting of all columns of the image sensor (Fig. 9a). Actual image readout (FT) immediately follows the HSS period. FT takes about M x M cycles of the pixel clock and corresponds to the serial shifting of all the pixels of the CCD matrix (Fig. 9b). Overall, CCD readout takes the sum of the HSS and FT times.
Without external synchronization the camera operates in ‘‘free running mode,’’ meaning that each readout (HSS + FT) is immediately followed by another readout (Fig. 10a). In free running mode, the exposure time and the FT time are equal. This mode gives the maximum readout frequency of the camera (203 fps for the CA-D1-256). To control the exposure time, the readout internal trigger may be replaced by an external synchronization signal (Fig. 10b).
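These timings can be put into numbers. The pixel clock below is not a manufacturer figure; it is inferred from the quoted 203 fps free-running rate and the M + M^2 cycle count:

```python
M = 256                       # the CCD matrix is M x M pixels
fps = 203                     # free-running frame rate quoted for the CA-D1-256

cycles_hss = M                # high speed storage: parallel shift of all columns
cycles_ft = M * M             # frame transfer: serial shift of every pixel
cycles = cycles_hss + cycles_ft

# In free-running mode one readout immediately follows another, so the
# implied pixel clock is the total cycle count per frame times the frame rate.
f_pix = cycles * fps
print(f_pix)                  # implied pixel clock in Hz, roughly 13.4 MHz
print(cycles_hss / f_pix)     # HSS lasts ~19 microseconds
print(cycles_ft / f_pix)      # FT lasts ~4.9 ms, i.e. almost the whole frame time
```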
In our system, the exposure time of the CCD array is controlled by an electronic device (sequencer) to match the multiplexed lock-in detection requirements.
Sequencer (Synchronization Subset)
Because square waveforms permit demodulation of a wide range of modulated signals, the sequencer design is essentially digital. It can be separated into two parts: generator and trigger/counter. The generator part synthesizes a TTL-compatible secondary modulation square wave, whereas the trigger/counter block synchronizes the multiplexed lock-in detection operation.
Digital designs provide a simple way to delay square waves. The generator essentially consists of two counters that are shifted relative to one another. Thus the most significant bits (MSBs) of the counters are two TTL signals delayed with respect to each other (Fig. 11). The shifted counts are obtained thanks to a parallel-load counter (COUNTER 2). In practice, the MSB of COUNTER 1 (MSB 1) is phase-locked onto the primary modulation (Fig. 12), and the MSB of COUNTER 2 (MSB 2) provides the delayed secondary modulation.

Figure 8 Features of the Dalsa CA-D1-256S camera.

Figure 9 Frame transfer architecture. (a) The active region collects the photoelectrons; then, in a very brief time (the high speed storage period), the charge is shifted to the storage region. (b) The image region is then ready to accumulate new charges while the pixels of the storage region are read out (frame transfer period).

Figure 10 Camera exposure time control. (a) A frame transfer camera operating in ‘‘free running mode.’’ No exposure control signal is needed, and the exposure time matches the frame transfer time. During frame transfer, the camera activates a FRAME VALID signal. (b) With an exposure control signal (EXTERNAL SYNCHRO), the exposure time can be controlled by an external electronic device.

Figure 11 Generator block, delayed secondary modulation. (a) Generator block synoptic. (b) Timing diagrams; MSB 2 provides the expected secondary modulation.
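The shifted-counter idea can be illustrated with a toy model (the counter width and delay value are illustrative, not those of the actual FPGA design):

```python
WIDTH = 8                     # toy 8-bit counters; the MSB toggles every 2**7 clocks
PERIOD = 1 << WIDTH           # length of one full count sequence

def msb(count):
    """Most significant bit of a WIDTH-bit counter value."""
    return (count >> (WIDTH - 1)) & 1

delay = PERIOD // 4           # parallel-load offset applied to COUNTER 2

msb1 = [msb(k % PERIOD) for k in range(2 * PERIOD)]            # MSB 1
msb2 = [msb((k + delay) % PERIOD) for k in range(2 * PERIOD)]  # MSB 2, shifted count

# MSB 2 is the same square wave as MSB 1, advanced by `delay` clock cycles
assert msb2[:PERIOD] == msb1[delay:PERIOD + delay]
print("delay between MSBs:", delay, "clock cycles")
```

Changing the value loaded into the second counter changes the delay in steps of one clock cycle, which is exactly what the DELAY REGISTER exploits during the four-phase sequence.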
According to Eq. (7) (Section 11.5.2), correct multiplexed lock-in detection is achieved only if the CCD exposure time lasts an integral number of primary modulation periods. With a square-wave primary modulation, accurate timings are easily achieved with a programmable binary counter (EXPOSURE COUNTER in Fig. 13) receiving the primary modulation on its clock input. Triggering this counter then allows synchronization of the EXPOSURE COUNTER and the camera (HSS + FT periods) (trigger block in Fig. 13). The trigger/counter block also provides a TTL signal used to avoid light flux integration during the HSS period of the camera (i.e., to switch off the light source during HSS).

Figure 12 Generator block or PLL frequency multiplier. To be frequency- and phase-locked on the primary modulation, COUNTER 1 of the secondary modulation generator is included in the feedback loop of a phase-locked loop (PLL).

Figure 13 Trigger/counter block: exposure control.
Finally, the trigger/counter block manages the successive phases of the acquisition process by incrementing the wraparound register (DELAY REGISTER in Fig. 11) each time EXPOSURE COUNTER (Fig. 13) overflows. In addition, it gives a ‘‘first delay value’’ TTL signal that will be checked by the computer subset to synchronize the software part of the SIK design (Section 11.3.4).
As described earlier, the sequencer has to guarantee critical timings of secondary modulation synthesis and camera exposure time. At the same time, the sequencer has to be flexible enough to match various surrounding devices (Table 2). Flexibility and critical timing are opposing requirements that cannot easily be met by a discrete board design. Thus a field-programmable gate array (FPGA) programmable logic device has been used in the sequencer design. FPGA integrated circuits realize any kind of digital function (combinational and/or sequential) according to a program downloaded by the computer through a standard port (e.g., the parallel port). Moreover, this reprogrammable architecture offers an easy way to debug.
We designed a motherboard to hold a Xilinx (San Jose, CA) FPGA, a power supply, and several TTL and RS422 buffers. These buffers—connected to the FPGA—allow the sequencer to be easily and quickly upgraded to work with different cameras. A daughter board includes some analog circuits used for primary modulation conditioning and for the PLL multiplier (see generator block discussion) (Fig. 14). Because these subsets belong to a daughter board, the most appropriate technology can be used according to the modulation frequency of the experiment.
Frame Grabbing and Processing (Computer Subset)
Prior to any digital design (hardware and software), the data flow rate must be estimated. The CA-D1 maximum frame rate is 203 fps, its frame size is 256 x 256, and its data format is 8 bits, resulting in about 14 Mbytes/s continuous data flow. This data stream is stored in the PC main memory by a frame grabber. Current frame grabbers use a PCI bus, whose maximum bandwidth is 132 Mbytes/s (32 bits at 33 MHz). However, this peak value cannot be obtained in sustained data transfer; for a continuous data stream, a 100 Mbytes/s data flow rate is a realistic value. Such frame grabbers are said to be ‘‘real-time,’’ meaning that frames are stored in PC host memory as fast as they are acquired.
Figure 14 Sequencer design implementation. Mother board: XC4003 Xilinx FPGA running up to 100 MHz provides a hardware programmable kernel to the sequencer. The daughter board brings together all application-specific hardware.
The CA-D1 camera is far from reaching the PCI bus bandwidth limit. The remaining available bandwidth is mainly used for the frame processing. Depending on the data treatment, the bandwidth actually needed by the SIK design can be 2, 3, 4, or more times greater. When the software load becomes too great, frames are missed.
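A rough bandwidth budget makes the point (the 100 Mbytes/s sustained-PCI figure is the estimate quoted above, not a measurement):

```python
frame_bytes = 256 * 256 * 1        # 8-bit, 256 x 256 frames
camera_rate = frame_bytes * 203    # raw CA-D1 stream, bytes per second
pci_sustained = 100e6              # realistic sustained PCI throughput (estimate)

print(camera_rate)                 # roughly the 14 Mbytes/s figure quoted above
# If grabbing-plus-processing multiplies the memory traffic k-fold, the bus
# saturates once k * camera_rate exceeds the sustained PCI throughput:
k_max = int(pci_sustained // camera_rate)
print(k_max)                       # 7: comfortable headroom for the 2x to 4x case
```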
The data stream is asynchronous with respect to the operating system and the applications that are running on the computer. Avoiding frame misses during the acquisition can be guaranteed on a 100% basis only on real-time operating systems, which is not the case with Microsoft Windows. In fact, the data transfer is made by a device driver running on the computer. The device driver responds to a hardware event, an interrupt, indicating that the data must be moved. Windows does not have deterministic behavior, especially with respect to hardware interrupts. Nevertheless, thanks to the power of current PCs, quasi-deterministic behavior is achieved. In practice, the time determinism of the operating system behavior depends on its total software load (PCI þ CPU).
At this point in the description, each frame grabbed by the computer is anonymous in the multiplexed lock-in detection method. This means that the frames have no relationship to the delay value fixed by the sequencer between the modulation signals. Thus, a ‘‘first delay value’’ synchronization signal is used in the SIK. This signal is provided by the sequencer, and it is checked by a high speed digital port of the computer. This input line can be a generic input/output (I/O) available on the frame grabber or, alternatively, it can be provided by a digital I/O board installed in the computer.
In summary, a continuous grab is performed by the SIK. A time-checking algorithm detects possible frames missed during the concurrent grabbing/processing task. If a frame is missed, the current grab is discarded, the software is synchronized on the first delay value signal again, and the grab is repeated.
The SIK software was written in C because C is the language usually chosen by device driver developers; the software is then easily interfaced to the function libraries driving the PC boards. The SIK software performs two basic functions: the graphical user interface (GUI) and the signal processing.
Briefly, the GUI allows control of all experimental parameters: sequencer parameters as well as frame grabbing and processing parameters. The latter concern the type of processing (amplitude, phase, etc.), the number of accumulations, and, in the case of an analog camera, the analog-to-digital converter settings of the frame grabber.
We now focus on the signal processing performed by the SIK software. Briefly, the frames acquired by the frame grabber are processed, and the resulting images are averaged to increase the SNR of the measurement. Real-time processing is performed by a double-buffering algorithm, also called ‘‘ping-pong’’ (Fig. 15). It uses a pair of buffers (B1 and B2) and two processing tasks (TA and TB). The B1 and B2 sets of buffers have the number of buffers needed by the multiplexed lock-in detection to retrieve the useful information (typically four in OCM applications). Each set
Figure 15 Double-buffer acquisition and processing. At the ‘‘reset’’ pass of the algorithm, the B1 buffers have the ‘‘ping’’ attribute and the B2 buffers have the ‘‘pong’’ attribute. While the TA task fills the ‘‘ping’’ B1 buffers, the TB task computes the ‘‘pong’’ B2 buffers. When both tasks terminate their work, the B1 and B2 buffer attributes are switched, and the TA/TB tasks are repeated. Again the TA task fills the ‘‘ping’’ buffers, but now these buffers are the B2 ones. In the same way, the TB task processes the ‘‘pong’’ buffers, which are now the B1 buffers.
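The ping-pong scheme reduces to a few lines of Python (buffer contents and the ‘‘processing’’ are placeholders, not the SIK code):

```python
import threading

NBUF = 4                          # four frames per set, as in the OCM demodulation

def run_pass(fill_set, process_set, results):
    """One pass: task TA fills one buffer set while task TB processes the other."""
    def ta():                     # grab task
        for i in range(NBUF):
            fill_set[i] = i       # stand-in for a grabbed frame
    def tb():                     # processing task
        results.append(sum(process_set))
    t1 = threading.Thread(target=ta)
    t2 = threading.Thread(target=tb)
    t1.start(); t2.start()
    t1.join(); t2.join()          # both tasks must finish before attributes swap

b1, b2 = [0] * NBUF, [0] * NBUF
results = []
run_pass(b1, b2, results)         # B1 is "ping" (filled), B2 is "pong" (processed)
run_pass(b2, b1, results)         # attributes switched: B2 filled, B1 processed
print(results)                    # second pass sees the frames grabbed in the first
```

The two tasks never touch the same buffer set within a pass, so no locking is needed beyond the join that marks the end of each pass.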