Wiley - Encyclopedia of Medical Devices and Instrumentation - Vol. 4

488 MICROSCOPY, FLUORESCENCE

Figure 12. Ultrathin sections of opossum's optic nerve fibers 24 h after crush. Normal fibers (n) are seen among some altered fibers, which exhibit watery degeneration (star) and myelin sheath breakdown (thick arrow). Note demyelinated fibers (thin arrows) with an apparently intact axoplasmic cytoskeleton. Asterisk, astrocytic processes. (From Ref. 17. Reproduced by courtesy of Anais da Academia Brasileira de Ciencias.)

these tomograms remains difficult. To get significant information about specific structures in the cell, the images have to be evaluated using advanced pattern recognition methods. Existing structural models of cellular constituents at lower resolutions can guide the systematic evaluation of the tomograms. The aim is to visualize the complete 3D organization of the cell at molecular resolution. Structural evaluation by single particle analysis, electron crystallography, and electron tomography is slow compared to other structure determination technologies, in particular X-ray crystallography. Processing time for these electron microscopy techniques is typically in the range of several months per solved structure, depending on the resolution achieved. The same task can be accomplished in hours or days with X-ray crystallography, once suitable crystals are available. Continued joint efforts between the research community and manufacturers to develop user-friendly, universal interfaces between electron crystallography, single-particle analysis, and electron tomography would improve this situation and further expand the usefulness of these technologies.

BIBLIOGRAPHY

1. Nellist PD, et al. Direct sub-angstrom imaging of a crystal lattice. Science 2004;305(5691):1741.

2. Freeman MR. Time-resolved scanning tunneling microscopy through tunnel distance modulation. Appl Phys Lett 1993;68(19):2633–2635.

3. Bonetta L. Zooming in on electron tomography. Nature Methods 2005;2(2):139–144.

4. Bozzola JJ, Russell LD. Electron Microscopy: Principles and Techniques for Biologists. Sudbury (MA): Jones and Bartlett Publishers; 1998.

5. Hayat MA. Principles and Techniques of Electron Microscopy: Biological Applications. New York: Van Nostrand Reinhold Company; 1973.

6. Wischnitzer S. Introduction to Electron Microscopy. New York: Pergamon Press; 1981.

7. Meek GA. Practical Electron Microscopy for Biologists. New York: John Wiley & Sons, Inc.; 1976.

8. Joy DC. Beam interactions, contrast and resolution in the SEM. J Microsc 1984;136:241–258.

9. Haine M. The Electron Microscope: The Present State of the Art. London: Spon; 1961.

10. Dickersin GR. Diagnostic Electron Microscopy: A Text/Atlas. New York: Springer-Verlag; 1999.

11. Franchina M, Del Borrello E, Caruso A, Altavilla G. Serous tumors of the ovary: Ultrastructural observations. Eur J Gynaecol Oncol 1992;13(3):268–276.

12. Wolf HK, Garcia JA, Bossen EH. Oncocytic differentiation in intrahepatic biliary cystadenocarcinoma. Modern Pathol 1992;5(6):665–866.

13. Kobayashi TK, et al. Effects of Taxol on ascites cytology from a patient with fallopian tube carcinoma: Report of a case with ultrastructural studies. Diagn Cytopathol 2002;27(2):132–134.

14. Yogi T, et al. Whipple's disease: The first Japanese case diagnosed by electron microscopy and polymerase chain reaction. Intern Med 2004;43(7):566–570.

15. Jensen HL, Norrild B. Herpes simplex virus-cell interactions studied by immunogold cryosection electron microscopy. Methods Mol Biol 2005;292:143–160.

16. Garrison RG, Boyd KS. Electron microscopy of yeastlike cell development from the microconidium of Histoplasma capsulatum. J Bacteriol 1978;133(1):345–353.

17. Narciso MS, Hokoc JN, Martinez AM. Watery and dark axons in Wallerian degeneration of the opossum's optic nerve: Different patterns of cytoskeletal breakdown? An Acad Bras Cienc 2001;73(2):231–243.

See also ANALYTICAL METHODS, AUTOMATED; CELLULAR IMAGING; CYTOLOGY, AUTOMATED.

MICROSCOPY, FLUORESCENCE

SERGE PELET
MICHAEL PREVITE
PETER T. C. SO
Massachusetts Institute of Technology
Cambridge, Massachusetts

INTRODUCTION

Fluorescence microscopy quantifies the distribution of fluorophores and their biochemical environment on the micron length scale and allows in vivo measurement of biological structures and functions (1–3). Heimstädt developed one of the earliest fluorescence microscopes in 1911. Some of the first biochemical applications of this technique include the study of living cells by the protozoologist Provazek in 1914.

Fluorescence microscopy is one of the most ubiquitous tools in biomedical laboratories, and it has three unique strengths. First, the fluorescence microscope has high biological specificity. Based on endogenous fluorophores or exogenous probes, fluorescence microscopy allows the association of a fluorescence signal with a specimen's structural and biochemical state. While fluorescence microscopy has resolution comparable to that of white-light microscopes, its range of applications in biomedicine is much broader.

Second, fluorescence microscopy is highly sensitive in the imaging of cells and tissues. This high sensitivity originates from two factors. One factor is the significant separation between the fluorophores' excitation and emission spectra. This separation allows the fluorescence signal to be detected while efficiently rejecting the excitation radiation background using bandpass filters. The fluorescence microscope has the sensitivity to image even a single fluorophore. The other factor is the weak endogenous fluorescence background in typical biological systems. Since there is minimal background fluorescence, the weak fluorescence signal from even a few exogenous fluorescent labels can be readily observed.

Third, fluorescence microscopy is a minimally invasive imaging technique. In vivo labeling and imaging procedures are well developed. While photodamage may still result from prolonged exposure to short-wavelength excitation radiation, long-term observation of biological processes is possible. Today, a single neuron in the brain of a small animal can be imaged repeatedly over a period of months with no notable damage.

SPECTROSCOPIC PRINCIPLES OF FLUORESCENCE MICROSCOPY

Fluorescence Spectroscopy

An understanding of spectroscopic principles is essential to master fluorescence microscopy (4–6). Fluorescence is a photon emission process that occurs during molecular relaxation from electronic excited states. Historically, Brewster first witnessed the phenomenon of fluorescence in 1838 and Stokes coined the term fluorescence in 1852. These photonic processes involve transitions between electronic and vibrational states of polyatomic fluorescent molecules (fluorophores), driven by the absorption of one or more photons. Electronic states are typically separated by energies on the order of 10,000 cm⁻¹ and vibrational sublevels are separated by 10²–10³ cm⁻¹. In a one-photon excitation process, photons with energies in the ultraviolet (UV) to the blue–green region of the spectrum are needed to trigger an electronic transition, whereas photons in the infrared (IR) spectral range are required for two-photon excitation. The molecules from the lowest vibrational level of the electronic ground state are excited to an accessible


vibrational level in an electronic excited state. After excitation, the molecule quickly relaxes to the lowest vibrational level of the excited electronic state, on the time scale of femtoseconds to picoseconds, via vibrational processes. The energy loss in the vibrational relaxation process is the origin of the Stokes shift, whereby fluorescence photons have longer wavelengths than the excitation radiation. The coupling of the ground and excited states, for both the absorption and emission processes, is governed by the Franck–Condon principle, which states that the probability of transition is proportional to the overlap of the initial and final vibrational wave functions. Since the vibrational level structures of the excited and ground states are similar, the fluorescence emission spectrum is a mirror image of the absorption spectrum, but shifted to longer wavelengths. The shift between the maxima of the absorption and emission spectra is referred to as the Stokes shift. The residence time of a fluorophore in the excited electronic state before returning to the ground state is called the fluorescence lifetime, which is typically on the order of nanoseconds. The Jablonski diagram represents fluorescence excitation and deexcitation processes (Fig. 1).
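The energy bookkeeping of one- versus two-photon excitation can be sketched in the wavenumber units (cm⁻¹) used above: two IR photons at twice the wavelength together supply the energy of one UV–visible photon. The 480/960 nm pair below is a hypothetical example, not a specific fluorophore:

```python
# Energy bookkeeping for one- vs two-photon excitation, in cm^-1.
# The 480/960 nm wavelength pair is a hypothetical illustrative example.

def wavenumber(wavelength_nm):
    """Photon energy in cm^-1: the reciprocal of the wavelength in cm."""
    return 1.0 / (wavelength_nm * 1e-7)   # nm -> cm, then take 1/lambda

one_photon = wavenumber(480.0)        # blue-green photon, ~20,800 cm^-1
two_photon_each = wavenumber(960.0)   # each IR photon carries half the energy

# Two photons at twice the wavelength supply the same transition energy:
assert abs(2 * two_photon_each - one_photon) < 1e-9
```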

Fluorescence deexcitation processes can occur via radiative and nonradiative pathways. Radiative decay describes molecular deexcitation processes accompanied by photon emission. Molecules in the excited electronic states can also relax by nonradiative processes, where the excitation energy is not converted into photons but is dissipated by thermal processes, such as vibrational relaxation and collisional quenching. Let G and k be the radiative and nonradiative decay rates, respectively, and N be the number of fluorophores in the excited state. The temporal evolution of the excited state can be described by

 

dN/dt = −(G + k)N   (1)

N = N0 exp[−(G + k)t] = N0 exp(−t/τ)   (2)
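Equations 1 and 2 can be checked numerically: a forward-Euler integration of the rate equation reproduces the exponential decay. The rate constants below are hypothetical values chosen to give a nanosecond-scale lifetime:

```python
import math

# Check Eqs. 1 and 2 numerically: forward-Euler integration of
# dN/dt = -(G + k)N should reproduce N = N0*exp(-(G + k)t).
# Rate constants are hypothetical, chosen for a nanosecond-scale lifetime.

G = 2.0e8   # radiative decay rate, s^-1 (assumed)
k = 0.5e8   # nonradiative decay rate, s^-1 (assumed)
N0 = 1.0    # initial excited-state population (normalized)

dt = 1e-12       # 1 ps time step
steps = 4000     # integrate out to 4 ns, one lifetime for these rates
N = N0
for _ in range(steps):
    N += -(G + k) * N * dt   # Euler step of Eq. 1

exact = N0 * math.exp(-(G + k) * steps * dt)   # Eq. 2
# A 1 ps step tracks the exponential to better than 1%:
assert abs(N - exact) / exact < 0.01
```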


Figure 1. A Jablonski diagram describing fluorescence (F) and phosphorescence (P) emission and excitation processes based on one-photon (1p) and two-photon (2p) absorption. The parameters S0, S1, and T1 are the electronic singlet ground state, singlet excited state, and triplet excited state, respectively. Here VL denotes vibrational levels, IC denotes internal conversion, and IS denotes intersystem crossing.


The fluorescence lifetime, τ, of the fluorophore is the inverse of the combined rate of the radiative and nonradiative pathways:

τ = 1/(G + k)   (3)

One can define the intrinsic lifetime of the fluorophore in the absence of nonradiative decay processes as τ0:

τ0 = 1/G   (4)

The efficiency of the fluorophore can then be quantified by the fluorescence quantum yield, Q, which measures the fraction of excited fluorophores relaxing via the radiative pathway:

Q = G/(G + k) = τ/τ0   (5)
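A short sketch tying Eqs. 3–5 together; the rate constants are hypothetical illustrative values:

```python
# Lifetime and quantum yield from the radiative and nonradiative rates
# (Eqs. 3-5). The rate values are hypothetical illustrative numbers.

G = 2.0e8   # radiative decay rate, s^-1 (assumed)
k = 0.5e8   # nonradiative decay rate, s^-1 (assumed)

tau = 1.0 / (G + k)   # observed fluorescence lifetime, Eq. 3 (4 ns here)
tau0 = 1.0 / G        # intrinsic lifetime, Eq. 4 (5 ns here)
Q = G / (G + k)       # quantum yield, Eq. 5

# Eq. 5 equivalently expresses Q as the ratio of the two lifetimes:
assert abs(Q - tau / tau0) < 1e-12
print(Q)  # 0.8: 80% of excited fluorophores relax radiatively
```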

Environmental Effect on Fluorescence

A number of factors contribute to the nonradiative decay pathways of the fluorophore and reduce fluorescence intensity. In general, the nonradiative decay rate can be classified as

k = kic + kec + ket + kis   (6)

where kic is the rate of internal conversion, kec is the rate of external conversion, ket is the rate of energy transfer, and kis is the rate of intersystem crossing.

Internal conversion describes the process where the electronic energy is converted to thermal energy via a vibrational process. The more interesting process is external conversion, where fluorophores lose electronic energy in collision processes with other solutes. Several important solute molecules, such as oxygen, are efficient fluorescence quenchers. The external conversion process provides a convenient means to measure the concentration of these molecules in the microenvironment of the fluorophore. The fluorophore is deexcited nonradiatively upon collision. The collisional quenching rate can be expressed as

kec = k0[Q]   (7)

where [Q] is the concentration of the quencher and k0 is related to the diffusivity and the hydrodynamic radii of the reactants.

When collisional quenching is the dominant nonradiative process, equations 3, 4, and 7 predict that the fluorescence lifetime decreases with quencher concentration:

τ0/τ = 1 + k0τ0[Q]   (8)

Collisional quenching also reduces the steady-state fluorescence intensity, F, relative to the fluorescence intensity in the absence of quencher, F0. The Stern–Volmer equation describes this effect:

F0/F = 1 + k0τ0[Q]   (9)
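The Stern–Volmer relation of Eqs. 8 and 9 is a straight line in [Q]. A minimal sketch, with hypothetical values for k0 (near the diffusion limit) and τ0:

```python
# Stern-Volmer collisional quenching (Eqs. 8 and 9): the lifetime ratio
# tau0/tau and the intensity ratio F0/F both grow linearly with quencher
# concentration [Q]. Parameter values are hypothetical.

k0 = 1.0e10    # bimolecular quenching constant, M^-1 s^-1 (assumed)
tau0 = 4.0e-9  # unquenched fluorescence lifetime, s (assumed)

def stern_volmer_ratio(Q_molar):
    """F0/F (equal to tau0/tau for purely collisional quenching)."""
    return 1.0 + k0 * tau0 * Q_molar

# No quencher leaves the intensity unchanged; 50 mM of this quencher
# cuts the steady-state fluorescence to one-third:
assert stern_volmer_ratio(0.0) == 1.0
assert abs(stern_volmer_ratio(0.05) - 3.0) < 1e-9
```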

A related process is steady-state quenching, where the fluorescence signal reduction is due to ground-state processes. A fluorophore can be chemically bound to a quencher to form a dark complex, a product that does not fluoresce. In this case, the steady-state fluorescence intensity also decreases with quencher concentration as

F0/F = 1 + Ks[Q]   (10)

where Ks is the association constant of the quencher and the fluorophore. However, since steady-state quenching is a ground-state process that only reduces the fraction of fluorophores available for excitation, fluorescence lifetime is not affected.

The resonance energy-transfer rate, ket, becomes significant when two fluorophores are in close proximity, within 5–10 nm, as during molecular binding. The energy of an excited donor can be transferred to the acceptor molecule via an induced dipole–induced dipole interaction. Let D represent the donor and A the acceptor. Under illumination at the donor excitation wavelength, the numbers of excited donors and acceptors are ND and NA, respectively. Further, define the donor and acceptor deexcitation rates as kD and kA. The excited-state population dynamics of the donor and acceptor can be described as

 

dND/dt = −(kD + ket)ND   (11)

dNA/dt = −kANA + ketND   (12)

Solving these equations provides the dynamics of donor and acceptor fluorescence:

ND = ND0 exp[−(kD + ket)t]   (13)

NA = ND0 ket/(kA − kD − ket) {exp[−(kD + ket)t] − exp(−kAt)}   (14)

where ND0 is the excited-donor population at t = 0.

The donor decay is a shortened single exponential, but the acceptor dynamics is more complex with two competing exponential processes.
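The donor and acceptor dynamics of Eqs. 11–14 can be verified numerically: a forward-Euler integration of Eq. 12 should track the closed-form acceptor solution of Eq. 14. All rate constants below are hypothetical:

```python
import math

# Donor and acceptor excited-state dynamics under resonance energy transfer
# (Eqs. 11-14): integrate Eq. 12 with forward Euler and compare against the
# closed-form acceptor solution of Eq. 14. All rates are hypothetical.

kD = 2.5e8    # donor deexcitation rate, s^-1 (assumed)
kA = 5.0e8    # acceptor deexcitation rate, s^-1 (assumed)
ket = 1.0e9   # energy-transfer rate, s^-1 (assumed)
ND0 = 1.0     # excited donors at t = 0; no acceptors excited initially

def ND(t):
    """Eq. 13: donor decay shortened by the transfer rate."""
    return ND0 * math.exp(-(kD + ket) * t)

def NA_exact(t):
    """Eq. 14: acceptor population, two competing exponentials."""
    pref = ND0 * ket / (kA - kD - ket)
    return pref * (math.exp(-(kD + ket) * t) - math.exp(-kA * t))

# Forward-Euler integration of dNA/dt = -kA*NA + ket*ND (Eq. 12):
dt, NA = 1e-13, 0.0
for i in range(20000):            # out to t = 2 ns
    NA += (-kA * NA + ket * ND(i * dt)) * dt

# The numerical solution matches the closed form of Eq. 14:
assert abs(NA - NA_exact(20000 * dt)) < 1e-3
```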

The intersystem crossing rate, kis, describes transitions between electronic excited states with wave functions of different symmetries. The normal ground state is a singlet state with an antisymmetric wave function. Excitation of the ground-state molecule via photon absorption results in the promotion of the molecule to an excited state with an antisymmetric wave function, another singlet state. Due to spin–orbit coupling, the excited molecule can transit into a triplet state via intersystem crossing. The subsequent photon emission from the triplet state is called phosphorescence. Since the radiative decay of the triplet state to the singlet ground state is forbidden, the triplet excited state has a very long lifetime, on the order of microseconds to milliseconds.

FLUORESCENCE MICROSCOPE DESIGNS

The components common to most fluorescence microscopes are the light sources, the optical components, and the detection electronics. These components can be configured to create microscope designs with unique capabilities.

Fluorescence Excitation Light Sources

Fluorescence excitation light sources need to produce photons with sufficient energy and flux level. The ability to collimate the emitted rays from a light source further determines its applicability in high resolution imaging. Other less critical factors, such as wavelength selectivity, ease of use, and cost of operation, should also be considered.

Mercury arc lamps are among the most commonly used light sources in fluorescence microscopy. The operation of a mercury arc lamp is based on the photoemission from mercury gas under electric discharge. The photoemission from a mercury arc consists of a broad background punctuated by strong emission lines. A mercury lamp can be treated as a quasimonochromatic light source by utilizing one of these strong emission lines. Since mercury lamps have emission lines throughout the near-UV and visible spectrum, the use of a mercury lamp allows easy matching of the excitation light spectrum with a given fluorophore by using an appropriate bandpass filter. Mercury arc lamps are also low cost and easy to use. However, since the emission of mercury lamps is difficult to collimate, they are rarely used in high resolution techniques, such as confocal microscopy. The advent of high power, energy efficient, light-emitting diodes (LEDs) with a long operation life allows the design of new light sources that are replacing arc lamps in some microscopy applications.

Laser light sources are commonly used in high resolution fluorescence microscopes. Lasers have a number of advantages, including monochromaticity, high radiance, and low divergence. Due to basic laser physics, laser emission is almost completely monochromatic. For fluorescence excitation, a monochromatic light source allows very easy separation of the excitation light from the emission signal. While the total energy emission from an arc lamp may be higher than that of some lasers, the energy within the excitation band is typically a small fraction of the total. In contrast, lasers have high radiance: the energy of a laser is focused within a single narrow spectral band, so the laser emission can be used more efficiently to trigger fluorescence excitation. Furthermore, laser emission has very low divergence and can be readily collimated to form a tight focus at the specimen, permitting high resolution imaging. Gas lasers, such as argon–ion and helium–neon lasers, are commonly used in fluorescence microscopy. Nowadays, they tend to be replaced by solid-state diode lasers, which are more robust and fluctuate less. Lasers can further be characterized as continuous wave or pulsed. While continuous wave lasers are sufficient for most applications, pulsed lasers are used in two-photon microscopes, where high intensity radiation is required for efficient induction of nonlinear optical effects.

Microscope Optical Components

The optical principle underlying fluorescence microscopes can be understood using basic ray tracing (7,8). The ray tracing of light through an ideal lens can be formulated into four rules: (1) A light ray originating from the focal point of a lens will emerge parallel to the optical axis after the lens. (2) A light ray propagating parallel to the optical axis will pass through the focal point after the lens. (3) Light rays



Figure 2. Four basic rules of optical ray tracing. (1) Light emerging from the focal point will become collimated, parallel to the optical axis, after the lens; inversely, (2) a collimated beam parallel to the optical axis will be focused at the focal plane of the lens. (3) A light source in the focal plane of the lens will become collimated after passing through the lens, with an oblique angle determined by its distance from the optical axis; inversely, (4) an oblique collimated beam will be focused in the focal plane by the lens. The numerical aperture of an imaging system is a function of the maximum convergence angle, a, as defined in rule 2. The maximum convergence angle is a function of the lens properties and its aperture (AP) size.

originating from the focal plane of a lens will emerge collimated. (4) Collimated light rays incident upon a lens will focus at its focal plane (Fig. 2). From these rules, one can see that a simple microscope can be formed using two lenses with different focal lengths (Fig. 3). The lens, L1, with focal length f1, images the sample plane and is called the objective. The lens, L2, with focal length f2, projects the image onto the detector plane and is called the tube lens. From simple geometry, two points P1 and P2 separated by x in the sample plane will be separated by x(f2/f1) at the detector plane, where the ratio M = f2/f1 is called the magnification. One can see that the image in the sample plane is enlarged by the magnification factor at the detector.
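The magnification relation can be sketched numerically; the focal lengths below are illustrative values, not taken from the text:

```python
# Magnification of the two-lens microscope of Fig. 3: two sample-plane points
# separated by x appear M = f2/f1 farther apart at the detector. The focal
# lengths are illustrative assumptions.

f1 = 3.0     # objective focal length, mm (assumed)
f2 = 180.0   # tube-lens focal length, mm (assumed)

M = f2 / f1              # magnification, 60x for these focal lengths
x_sample = 0.0005        # 0.5 um separation on the sample plane, in mm
x_image = x_sample * M   # separation on the detector plane, in mm

assert M == 60.0
assert abs(x_image - 0.03) < 1e-12   # 0.5 um is imaged to 30 um
```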

By using the common wide-field fluorescence microscope as an example, we can further examine the components of a


Figure 3. The detection path of a microscope. Lenses L1 and L2 are the objective and the tube lens, respectively. L1 has focal length f1 and L2 has focal length f2. For two points, P1 and P2 with separation, x, on the sample plane (S), these points are projected to points P1’ and P2’ on the detector plane (D).



Figure 4. Two configurations of fluorescence microscopy: (a) trans-illumination and (b) epi-illumination. The objective and detection tube lenses are L1 and L2. The condenser is L3. The excitation relay lenses are L4 and L4′. The sample and detector planes are S and D, respectively. The light source is LS. The dichroic filter and the barrier filter are DC and BR.

complete fluorescence microscope system (Fig. 4a) (9). In addition to the detection optical path, a fluorescence microscope requires an excitation light source. The excitation light source is typically placed in the focal point of a third lens, L3. The lens collimates the excitation light and projects it uniformly on the specimen (Koehler illumination). The lens, L3, is called the condenser. Since the excitation light is typically much stronger than the fluorescence emission, a bandpass filter is needed to block the excitation light. In this trans-illumination configuration, it is often difficult to select a bandpass filter with sufficient blocking power without also losing a significant portion of the fluorescence signal. To overcome this problem, an alternative geometry, epi-illumination, is commonly used (Fig. 4b). In this geometry, lens L1 functions both as the imaging objective and as the condenser for the excitation light. A pair of relay lenses (L4, L4′) is used to focus the excitation light at the back aperture plane of the objective via a dichroic filter that reflects the excitation light but transmits the fluorescence signal. The excitation light is collimated by L1 and uniformly illuminates the sample plane. The fluorescence signal from the sample is collected by the objective and projected onto the detector via the tube lens L2. Since the excitation light is not directed at the detector, the task of rejecting excess excitation radiation at the detector is significantly easier. A barrier filter is still needed to eliminate stray excitation radiation from the optical surfaces.

From Fig. 2, one may assume that arbitrarily small objects can be imaged by increasing the magnification ratio. However, this is erroneous as the interference of light imposes a resolution limit on an optical system (10). The smallest scale features that can be resolved using fluorescence microscopy are prescribed by the Abbe limit.

For an infinitely small emitter at the sample plane, the image at the detector, the point spread function (PSF), is not a single point. Instead, the intensity is distributed according to an Airy function with a diameter, d:

d = 1.22Mλ/NA   (15)

where M is the magnification of the system, λ is the emission wavelength, and NA is the numerical aperture of the objective, which is defined as (Fig. 2):

NA = n sin α   (16)

where α is the half-convergence angle of the light and n is the index of refraction of the material between the lens and the sample. Therefore, the images of two objects on the sample plane will overlap if their separation is <1.22λ/NA. Since NA is always on the order of 1, an optical system can only resolve two separate objects if their separation is on the order of the wavelength of light.
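Equations 15 and 16 can be combined in a short numerical sketch; the magnification, wavelength, and convergence angle below are illustrative assumptions:

```python
import math

# Diffraction-limited imaging (Eqs. 15 and 16): Airy-disk diameter at the
# detector and the minimum resolvable separation at the sample plane.
# Magnification, wavelength, and convergence angle are assumed values.

M = 60.0                      # system magnification (assumed)
wavelength_um = 0.52          # emission wavelength in micrometers (assumed)
n = 1.0                       # refractive index of air (assumed dry objective)
alpha = math.radians(64.0)    # half-convergence angle (assumed)

NA = n * math.sin(alpha)                     # Eq. 16, ~0.9 here
d_detector = 1.22 * M * wavelength_um / NA   # Eq. 15: Airy diameter at detector
d_sample = 1.22 * wavelength_um / NA         # resolvable separation at sample

# As stated in the text, with NA of order 1 the resolution limit is on the
# order of the emission wavelength:
assert 0.5 < d_sample < 1.0
```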

Fluorescence Detectors and Signal Processing

Since the fluorescence signal is relatively weak, sensitive detectors are crucial in the design of a high performance fluorescence microscope. For a wide-field microscope, the most commonly used detectors are charge-coupled device (CCD) cameras, which are area detectors that contain a rectilinear array of pixels. Each pixel is a silicon semiconductor photosensor called a photodiode. When light is incident upon an individual photodiode, electrons are generated in the semiconductor matrix. Electrodes are organized in the CCD camera such that the charges generated by optical photons can be stored capacitively during the data acquisition. After data acquisition, manipulating the voltages of the electrodes on the CCD chip allows the charges stored in each pixel to be extracted from the detector sequentially and read out by the signal conditioning circuit. These cameras are very efficient devices with a quantum efficiency up to 80% (i.e., they can detect up to 8 out of 10 incident photons). Furthermore, CCD cameras can be made very low noise such that even four to five photons stored in a given pixel can be detected above the inherent electronic noise background of the readout electronics.

While CCDs are the detector of choice for wide-field microscopy imaging, there are other microscope configurations (discussed below) where an array detector is not necessary and significantly lower cost single element detectors can be used. Two commonly used single element detectors are avalanche photodiodes (APDs) and photomultiplier tubes (PMTs).

Avalanche photodiodes and photomultiplier tubes have been used in confocal and multiphoton microscopes. Avalanche photodiodes are similar to the photodiode element in a CCD chip. By placing a high voltage across the device, the photoelectron generated by the photon is accelerated across the active area of the semiconductor and collides with other electrons. Some of these electrons gain sufficient mobility from the collision and are themselves accelerated toward the anode of the device. This results in an avalanche effect with a total electron gain on the order

of hundreds to thousands. A sizable photocurrent is generated for each input photon. A normal photodiode or a CCD camera does not have single-photon sensitivity because the readout electronic noise is higher than the single-electron level. The gain in the avalanche photodiode allows single-photon detection. Photomultiplier tubes operate on a similar concept. A photomultiplier is not a solid-state device, but a vacuum tube where photons impact the cathode and generate photoelectrons via the photoelectric effect. Each electron generated is accelerated by a high voltage toward a second electrode, called a dynode. The impact of the first electron results in the generation of a cascade of new electrons that are then accelerated toward the next dynode. A photomultiplier typically has 5–10 dynode stages. The electron current generated is collected by the last electrode, the anode, and is extracted. The electron gain of a photomultiplier is typically 1–10 million. While APDs and PMTs are similar devices, they do have some fundamental differences. APDs are silicon devices and have a very high quantum efficiency (~80%) from the visible to the near-IR spectral range. The PMT photocathode material has a typical efficiency of ~20%, but can reach ~40% in the blue–green spectral range. However, PMTs are not sensitive in the red–IR range, with quantum efficiency dropping to a few percent. On the other hand, PMTs have significantly higher gain and better temporal resolution.
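The multiplicative dynode cascade can be sketched in a few lines; the secondary-emission factor per dynode is an assumed illustrative value, not a number from the text:

```python
# Electron gain of a photomultiplier: each of the n dynode stages multiplies
# the electron count by a secondary-emission factor delta, giving delta**n
# overall. The per-stage factor of 4 is an assumed illustrative value.

def pmt_gain(delta, n_dynodes):
    """Total electron gain after n_dynodes multiplication stages."""
    return delta ** n_dynodes

gain = pmt_gain(4, 10)   # 4 secondary electrons per stage, 10 dynodes
print(gain)  # 1048576, i.e., about 10^6, consistent with the gains quoted above
```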

Advanced Fluorescence Microscopy Configurations

In addition to wide-field imaging, fluorescence microscopy can be implemented in other more advanced configurations to enable novel imaging modes. We will cover four other particularly important configurations: wide-field deconvolution microscopy, confocal microscopy, two-photon microscopy, and total internal reflection microscopy.

Wide-Field Deconvolution Microscopy. Wide-field microscopy is a versatile, low cost, and widely used technique. However, cells and tissues are inherently three dimensional (3D). In a thick sample, the signals from multiple sample planes are integrated to form the final image. Since there is little correlation between the structures at different depths, the final image becomes fuzzy. The need for 3D resolved imaging has long been recognized. The iterative deconvolution approach has worked well for relatively thin specimens, such as in the imaging of organelle structures in cultured cells (11) (Fig. 5). In terms of instrument modifications, the main difference between a deconvolution microscope and a wide-field microscope is the incorporation of an automated axial scanning stage allowing a 3D image stack to be acquired from the specimen. An initial estimate of the 3D distribution of fluorophores is convoluted with the known PSF of the optical system. The resultant image is then compared with the measured 3D experimental data. The differences allow a better guess of the actual fluorophore distribution. This modified fluorophore distribution is then convoluted with the system PSF again and allows another comparison with experimental data. This process repeats until an acceptable difference between the convoluted image and the experimental data


Figure 5. A comparison between (a) normal wide-field images and (b) deconvoluted images (11). Green fluorescent protein labeled mitochondria of a cultured cell were imaged by a wide-field fluorescence microscope as a 3D image stack. The image stack was deconvoluted and the significantly improved result is shown. The axial positions of the image planes span 0–3.5 μm in 0.5 μm steps.

is achieved. The deconvolution process in a wide-field fluorescence microscope belongs to the class of mathematical problems called ill-posed problems (12–14). An ill-posed problem does not have a unique solution; the result depends on the constraints selected to reach a final solution. One should consider the deconvoluted images only as the best estimate of the real physical structure given the available data. Furthermore, the deconvolution algorithm is computationally intensive and often fails in thick specimens.
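The convolve-compare-update loop described above can be sketched in one dimension. The toy below uses a Richardson-Lucy-style multiplicative update, one common realization of this iterative scheme rather than the algorithm of any particular package; the signal, PSF, and iteration count are all illustrative:

```python
# 1D toy of the iterative convolve-and-compare deconvolution loop described
# above, using a Richardson-Lucy-style multiplicative update. Illustrative
# only; signal, PSF, and iteration count are arbitrary choices.

def convolve(signal, psf):
    """Circular convolution of a 1D signal with a normalized PSF."""
    n = len(signal)
    return [sum(signal[(i - j) % n] * psf[j] for j in range(len(psf)))
            for i in range(n)]

def correlate(signal, psf):
    """Circular correlation (the adjoint of convolve for this indexing)."""
    n = len(signal)
    return [sum(signal[(i + j) % n] * psf[j] for j in range(len(psf)))
            for i in range(n)]

def deconvolve(measured, psf, iterations=200):
    """Refine a flat initial estimate until estimate * psf matches measured."""
    estimate = [1.0] * len(measured)
    for _ in range(iterations):
        blurred = convolve(estimate, psf)            # forward model
        ratio = [m / max(b, 1e-12) for m, b in zip(measured, blurred)]
        correction = correlate(ratio, psf)           # compare step
        estimate = [e * c for e, c in zip(estimate, correction)]  # update
    return estimate

# A point-like object blurred by a 3-pixel PSF:
truth = [0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0]
psf = [0.25, 0.5, 0.25]
measured = convolve(truth, psf)
restored = deconvolve(measured, psf)
# The restored estimate re-concentrates the intensity at the true position:
assert max(range(len(restored)), key=lambda i: restored[i]) == 2
```

The multiplicative update preserves the total flux of the measured image, which is one reason this family of updates is popular for photon-counting data.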

Confocal Fluorescence Microscopy. Confocal fluorescence microscopy is a powerful method that can obtain 3D resolved sections in thick specimens by completely optical means (15–18). The operation principle of confocal microscopy is relatively straightforward. Consider the following confocal optical system in the transillumination geometry (Fig. 6). Excitation light is first focused at an excitation pinhole aperture. An excitation tube lens collimates the rays and projects them toward the condenser. The excitation light is focused at the specimen. The emitted light from the focal point is collected by the objective and collimated by the emission tube lens. The collimated light is subsequently refocused at the emission pinhole aperture. The detector is placed behind the aperture. As it is clear in the ray tracing illustration, the fluorescence signal produced at the specimen position defined by the excitation

Figure 6. The configuration of a simple confocal microscope. The objective and the detection tube lenses are L1 and L2. The light source is LS. The excitation aperture placed in front of the light source is EXA. The relay lenses that image the excitation aperture and project the image of the pinhole onto the specimen (S) are L3 and L3′. The fluorescence emission from the focal point (red rays) is projected onto the emission aperture (EMA) by L1 and L2. The signal is transmitted through EMA and is detected by the detector (D). Fluorescence generated outside the focal plane in the specimen (blue rays) is defocused at EMA and is mostly blocked.


Figure 7. A comparison between confocal (a) and wide-field (b) imaging of a plasmacytoma cell labeled with fluorescent antiendoplasmin, which binds mainly to the endoplasmic reticulum. In the wide-field image, it is not possible to determine whether the central nuclear region contains endoplasmin, and the structure of the cisternae is unclear (19).

pinhole aperture is exactly transmitted through the conjugate pinhole in the emission light path. However, for a fluorescence signal generated above or below the focal plane, the light is defocused at the emission pinhole aperture and is largely rejected. Hence, a pair of conjugate pinholes allows the selection of a 3D defined volume. One can show that a confocal microscope can image structures in 3D with a volume resolution of 0.1 fl. This system achieves 3D resolution, at the cost of obtaining fluorescence signal from only a single point in the specimen. It is necessary to raster scan the excitation focus to cover a 3D volume. Confocal microscopy has been used extensively to investigate microstructures in cells and in the imaging of tissues (19) (Fig. 7).
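The resolution figures quoted above can be estimated from the objective parameters. The following sketch uses common textbook approximations for confocal lateral and axial FWHM (the specific formulas and example values are assumptions, not taken from this article):

```python
import math

def confocal_resolution(wavelength_nm, na, n=1.515):
    """Approximate lateral and axial FWHM (nm) of a confocal microscope.

    Textbook approximations (assumed): lateral ~ 0.51*lambda/NA,
    axial ~ 0.88*lambda/(n - sqrt(n^2 - NA^2)) for immersion index n.
    """
    lateral = 0.51 * wavelength_nm / na
    axial = 0.88 * wavelength_nm / (n - math.sqrt(n * n - na * na))
    return lateral, axial

# 500 nm emission, 1.25 NA oil-immersion objective (hypothetical values)
lat, ax = confocal_resolution(500.0, 1.25)
```

Note that the axial extent is always several-fold larger than the lateral one, which is why the resolved volume is an elongated ellipsoid rather than a sphere.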

Two-Photon Fluorescence Microscopy. A two-photon microscope is an alternative to confocal microscopy for the 3D imaging of thick specimens. Denk, Webb, and co-workers introduced two-photon excitation microscopy in 1990 (18,20). Fluorophores can be excited by the simultaneous absorption of two photons, each carrying one-half of the energy needed for the excitation transition. Since the two-photon excitation probability is significantly less than the one-photon probability, two-photon excitation occurs at appreciable rates only in regions of high temporal and spatial photon concentration. The high spatial concentration of photons is achieved by focusing the laser beam with a high numerical aperture objective to a diffraction-limited spot; the high temporal concentration is made possible by the availability of high peak power pulsed lasers (Fig. 8). Depth discrimination is the most important feature of multiphoton microscopy. In the two-photon case, >80% of the total fluorescence intensity comes from a 1 μm thick region about the focal point for objectives with a numerical aperture of 1.25. For a 1.25 NA objective at an excitation wavelength of 960 nm, the typical point spread function has a fwhm of 0.3 μm in the radial direction and 0.9 μm in the axial direction (Fig. 8). Two-photon microscopy has a number of advantages over confocal imaging: (1) Since a two-photon microscope obtains 3D resolution by limiting the region of excitation instead of the region of detection as in a confocal system, photodamage of biological specimens is restricted to the focal point. Since out-of-plane chromophores are not excited, they are not subject to photobleaching. (2) Two-photon excitation wavelengths are typically red shifted to about twice the one-photon excitation wavelengths, into the IR spectral range, so the absorption and scattering of the excitation light in thick biological specimens are reduced. (3) The wide separation between the excitation and emission spectra ensures that the excitation light and Raman scattering can be rejected without filtering out any of the fluorescence photons. An excellent demonstration of the ability of two-photon imaging for deep-tissue imaging is in the neurobiology area (21) (Fig. 9).
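The localization of two-photon excitation follows directly from its quadratic intensity dependence. The toy model below (an assumed, highly simplified beam geometry, not from the article) integrates the excitation generated in a transverse plane at defocus z: for linear absorption the per-plane total is constant, whereas for quadratic absorption it falls off as the beam spreads, confining the signal to the focus:

```python
# Simplified model: beam cross-section grows as 1 + (z/z0)^2 away from focus.
def plane_excitation(z_um, z0_um=0.5, power=1.0):
    area = 1.0 + (z_um / z0_um) ** 2      # relative beam cross-section
    intensity = power / area              # photons spread over larger area
    one_photon = intensity * area         # linear absorption: constant per plane
    two_photon = intensity ** 2 * area    # quadratic absorption: ~1/area
    return one_photon, two_photon

op0, tp0 = plane_excitation(0.0)   # at the focal plane
op5, tp5 = plane_excitation(5.0)   # 5 um out of focus
```

In this model the one-photon excitation is identical in every plane (hence the double inverted cones of fluorescence in Fig. 8), while the two-photon excitation 5 μm from focus is about 1% of its focal value.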

Total Internal Reflection Microscopy. Confocal and two-photon microscopy can obtain 3D resolved images from specimens up to a few hundred micrometers in thickness. However, both types of microscopy are technically challenging, require expensive instrumentation, and can only acquire data sequentially from single points. Total internal reflection microscopy (TIRM) is an interesting alternative if 3D-resolved information is required only at the bottom surface of the specimen, such as the basal membrane of a cell (22–24). Total internal reflection occurs at an interface between materials with distinct indices of refraction (Fig. 10). If a light ray is incident from a high index prism, n2, toward the lower index region, n1, at an angle θ, the light will be completely reflected at the interface if θ exceeds the critical angle θc.

sin θc = n1/n2 (17)

While the light is completely reflected at the interface, the electric field intensity just above the interface is nonzero; it decays exponentially into the low index medium, with a decay length on the order of tens to hundreds of nanometers. Compared with other forms of 3D resolved microscopy, TIRM allows the selection of the thinnest optical section, but only at the lower surface of the sample. While prism-launch TIRM as described is simpler to construct, the bulky prism complicates the routine use of TIRM for cell biology studies. Instead, ultrahigh numerical aperture objectives have been produced (1.45–1.6 NA). Light rays focused at the back aperture plane of the objective sufficiently off axis will emerge collimated, but at an oblique angle. If a specimen grown on a high index coverglass is placed upon the objective, total internal reflection can occur at the specimen-coverglass interface if the oblique angle is sufficiently large. This approach has been described as objective-launch TIRM and has been very successful in the study of exocytosis processes (23) (Fig. 11).
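Eq. 17 and the evanescent decay length can be evaluated numerically. The sketch below uses the standard TIRF penetration-depth expression d = λ/(4π√(n2²sin²θ − n1²)), which is a well-known result assumed here rather than derived in the article; the indices model a water-based specimen (n1 = 1.33) on glass (n2 = 1.52):

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle from Eq. 17: sin(theta_c) = n1/n2."""
    return math.degrees(math.asin(n1 / n2))

def penetration_depth_nm(wavelength_nm, n1, n2, theta_deg):
    """1/e decay length of the evanescent intensity (standard TIRF formula)."""
    s = (n2 * math.sin(math.radians(theta_deg))) ** 2 - n1 ** 2
    if s <= 0:
        raise ValueError("angle below the critical angle: no evanescent field")
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s))

theta_c = critical_angle_deg(1.33, 1.52)               # ~61 degrees
d = penetration_depth_nm(488.0, 1.33, 1.52, 70.0)      # tens of nanometers
```

The computed depth of well under 100 nm for 488 nm light is consistent with the "tens to hundreds of nanometers" decay length quoted in the text.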

FLUORESCENT PROBES

Fluorescence microscopy has found many applications in biomedicine. This wide acceptance is a direct result of the availability of an ever growing set of fluorescence probes designed to measure cell and tissue structure, metabolism, signaling processes, gene expression, and protein distribution (25,26). The synthesis of fluorescent probes dates back to 1856, when William Perkin made the first synthetic probe from a coal tar dye. Thereafter, many more synthetic dyes became available: pararosaniline, methyl violet, malachite green, safranin O, methylene blue, and numerous azo dyes. While most of these early dyes are weakly fluorescent, more fluorescent ones based on the xanthene and acridine heterocyclic ring systems soon became available.

Figure 8. Two-photon microscopy optically sections and produces a fluorescent signal originating only from the focal point. (a) The geometry of two-photon fluorescence. In traditional one-photon excitation, fluorescence is generated throughout the double inverted cones (blue arrow). Two-photon excitation generates fluorescence only at the focal point (red arrow). (b) The submicron PSF of two-photon excitation at 960 nm: the full-widths at half-maximum (fwhm) are 0.3 μm radially and 0.9 μm axially. (c) An experimental visualization of the small excitation volume of two-photon fluorescence. One- and two-photon excitation beams are focused by two objectives (equal numerical aperture) onto a fluorescein solution. Fluorescence is generated all along the path in the one-photon excitation case (blue arrow), whereas fluorescence is generated only in a 3D confined focal spot for two-photon excitation (red arrow). The reduced excitation volume is thought to lead to less photodamage. (Please see online version for color figure.)

Optical Factors in the Selection of Fluorescent Probes

Before surveying the wide variety of fluorescent probes, it is worth first discussing the optical properties of fluorescent probes that matter most for microscopic imaging: extinction coefficient, quantum yield, fluorescence lifetime, photobleaching rate, and spectral characteristics.

One of the most important characteristics of a fluorescent probe is its extinction coefficient, e, which measures the probability that the fluorophore absorbs the excitation light. Consider excitation light transmitted through a solution containing a fluorophore at concentration c with a path length l. If the light intensities before and after the solution are I0 and I, the extinction coefficient is defined by Beer's law:

log10(I0/I) = ecl (18)

Fluorescent probes with high extinction coefficients can be excited by lower incident light intensity, allowing the use of lower cost light sources and reducing the background noise originating from scattered excitation light.
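A quick numerical illustration of Eq. 18 follows; the extinction coefficient and concentration are representative made-up values (roughly fluorescein-like), not figures from the article:

```python
def fraction_absorbed(epsilon_M_cm, conc_M, path_cm):
    """Fraction of incident light absorbed, from Beer's law (Eq. 18).

    A = epsilon*c*l = log10(I0/I), so the transmitted fraction is 10**(-A).
    """
    absorbance = epsilon_M_cm * conc_M * path_cm
    return 1.0 - 10.0 ** (-absorbance)

# Hypothetical dye: epsilon = 80,000 M^-1 cm^-1, 1 uM solution, 1 cm cuvette
f = fraction_absorbed(80000.0, 1e-6, 1.0)
```

Even a micromolar solution of a strongly absorbing dye captures a sizable fraction (here roughly 17%) of the incident light over a 1 cm path, which is why high-e probes tolerate dim light sources.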

Quantum yield, Q, measures the relative contributions of the radiative versus nonradiative decay pathways. High quantum efficiency maximizes the fluorescent signal for each photon absorbed. The combination of probe extinction coefficient and quantum efficiency quantifies the total conversion efficiency of excitation light into fluorescent signal.

While e and Q determine the excitation light conversion efficiency, the maximum rate of fluorescent photon generation also depends on the lifetime, τ, of the probe. Since a molecule that has been excited cannot be reexcited until it returns to the ground state, the fluorescence lifetime determines the rate at which a single probe molecule can be recycled. In general, for fluorescent probes with equal e and Q, the fluorescent photon production rate is an inverse function of probe lifetime. The intersystem crossing rate also plays a role in determining the photon generation rate: since the triplet state has a very long lifetime, probes with high intersystem crossing rates become trapped in the triplet state and have a relatively lower photon generation rate.

Figure 9. Fluorescein dextran labeled blood vessels in the primary vibrissa cortex of a living rat brain imaged using a two-photon microscope down to a depth of >500 μm, demonstrating the ability of this technique to image deep into tissue (21).

Figure 10. The configuration of a total internal reflection fluorescence microscope. L1 and L2 are the objective and tube lens, respectively. The barrier filter is BR and the detector is D. The prism is P. The excitation light (green) is incident upon the prism at an angle, θ, greater than the critical angle, and is totally internally reflected from the surface. A magnified view is shown on the left. The evanescent electric field induced by the excitation light above the prism surface decays exponentially and induces a strong fluorescence signal only for probes close to the surface of the prism. (Please see online version for color figure.)

Figure 11. Epi-illuminated wide-field (EPI) and total internal reflection (TIR) microscopy of bovine chromaffin cells containing secretory granules marked with GFP atrial natriuretic protein (23). Only the lower plane of the cells contributes to the fluorescence signal in the TIR set-up.

Photobleaching rate measures the probability that a probe will undergo an excited-state chemical reaction and irreversibly become nonfluorescent. The photobleaching rate of a probe therefore limits the maximum number of fluorescence photons that a single fluorophore can produce. Photobleaching rates vary greatly among fluorophores: rhodamine can survive up to 100,000 excitation events, fluorescein a few thousand, and tryptophan only a few. Photobleaching can be caused by a variety of processes. Generally, it is the result of a photochemical reaction in the excited state of the probe. For example, a common bleaching pathway is the generation of a triplet state that reacts with oxygen dissolved in solution to generate singlet oxygen and an oxidized molecule incapable of undergoing the same electronic transition as before.

Spectral properties are also important in probe selection for a number of reasons. First, selecting fluorescent probes with well-separated excitation and emission spectra allows more efficient separation of the fluorescence signal from the excitation light background. Second, fluorescent probes should be selected to match the detector used in the microscope, which may have very different sensitivity across the spectral range. For example, most PMTs have maximum efficiency in the green spectral range, but very low efficiency in the red; therefore, green emitting probes are often better matches for microscopes using PMTs as detectors. Third, probes with narrow emission spectra allow a specimen to be simultaneously labeled with different colors, providing a method to analyze multiple biochemical components simultaneously in the specimen.

Classification of Fluorescent Probes

There is no completely concise and definitive way to classify the great variety of fluorescent probes. A classification can be made based on how the fluorophores are deployed in biomedical microscopy: intrinsic probes, extrinsic probes, and genetic expressible probes.

Intrinsic Probes. Intrinsic probes refer to the class of endogenous fluorophores found in cells and tissues. Many biological components, such as deoxyribonucleic acid (DNA), proteins, and lipid membranes, are weakly fluorescent. For example, protein fluorescence is due to the presence of the amino acids tryptophan, tyrosine, and phenylalanine. Among them, tryptophan is the only member with a marginal quantum yield for microscopic imaging. However, fluorescent imaging based on tryptophan provides very limited information: the prevalence of this amino acid in proteins distributed throughout cellular systems affords no specificity or contrast. The most useful intrinsic probes for microscopic imaging are a number of enzymes and proteins, such as the reduced pyridine nucleotides [NAD(P)H], flavoproteins, and protoporphyrin IX. Both NAD(P)H and flavoproteins are present in the cellular redox pathway. NAD(P)H becomes highly fluorescent when reduced, whereas flavoproteins become fluorescent when oxidized; their redox counterparts are nonfluorescent. These enzymes thus provide a powerful method to monitor cell and tissue metabolism. Protoporphyrin IX (PPIX) is a natural byproduct of the heme production pathway that is highly fluorescent. Certain cancer cells have been shown to upregulate PPIX production relative to normal tissue, which may be useful in the optical detection of cancer. Another class of important intrinsic fluorophores includes elastin and collagen, which reside in the extracellular matrix, allowing structural imaging of tissues. Finally, natural pigment molecules, such as lipofuscin and melanin, are also fluorescent and have been used in assaying aging in the ocular system and malignancy in the dermal system, respectively.

Extrinsic Probes. Many extrinsic fluorescent probes have been created over the last century. A majority of these extrinsic fluorophores are small aromatic organic molecules (25–28). Many probe families, such as the xanthenes, cyanines, Alexa dyes, coumarins, and acridines, have been created. These probes are designed to span the emission spectrum from the near-UV to the near-IR range with optimized optical properties. Since these molecules have no intrinsic biological activity, they must be conjugated to biological molecules of interest, which may be proteins or structural components, such as lipid molecules.


The most common linkages are through reactions to amine and thiol residues. Reactions to amines are based on acylating reactions to form carboxamides, sulfonamides, or thioureas. Targeting the thiol residues in the cysteines of proteins can be accomplished via iodoacetamides or maleimides. Other approaches to conjugating fluorophores to biological components may be based on general purpose linker molecules, such as biotin-avidin pairs, or on photoactivatable linkers. A particularly important class of fluorophore-conjugated proteins is fluorescent antibodies, which allow biologically specific labeling.

In addition to maximizing the fluorescent signal, the greater challenge in the design of small molecular probes is to provide environmental sensitivity. An important class of environmentally sensitive probes distinguishes hydrophilic versus hydrophobic environments, showing a significant quantum yield change or spectral shift based on solvent interaction. This class of probes includes DAPI, laurdan, and ANS, which have been used to specifically label DNA, measure membrane fluidity, and sense protein folding states, respectively. Another important class of environmentally sensitive probes senses intracellular ion concentrations, such as pH, Ca2+, Mg2+, and Zn2+. The most important members of this class are the calcium concentration sensitive probes, because of the importance of calcium as a secondary messenger. Changes in intracellular calcium levels have been measured by using probes that show either an intensity or a spectral response upon calcium binding. These probes are predominantly analogues of calcium chelators. Members of the Fluo-3 and Rhod-2 series allow fast measurement of the calcium level based upon intensity changes. More quantitative measurements can be based on the Fura-1 and Indo-1 series, which are ratiometric. These probes exhibit a shift in the excitation or emission spectrum with the formation of isosbestic points upon calcium binding. The intensity ratio between the emission maximum and the isosbestic point allows a quantitative measurement of calcium concentration without influence from the differential partitioning of the dyes into cells.
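The ratiometric calibration commonly used with such dyes can be sketched with the standard Grynkiewicz-type equation (a widely used calibration formula assumed here for illustration; the Kd, ratio limits, and scaling factor below are hypothetical values, not from the article):

```python
def calcium_nM(R, Rmin, Rmax, beta, Kd_nM=224.0):
    """Free calcium from a ratiometric measurement.

    R is the measured intensity ratio, Rmin/Rmax the ratios at zero and
    saturating calcium, beta the free/bound dye brightness scaling, and
    Kd the dye's calcium dissociation constant (all assumed values).
    """
    return Kd_nM * beta * (R - Rmin) / (Rmax - R)

# Hypothetical calibration: Rmin = 0.3, Rmax = 3.0, beta = 2.0
ca = calcium_nM(R=1.0, Rmin=0.3, Rmax=3.0, beta=2.0)
```

Because only ratios of intensities enter the formula, the result is insensitive to how much dye each cell happens to load, which is exactly the advantage of ratiometric probes described above.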

Quantum dots belong to a new group of extrinsic probes that are rapidly gaining acceptance for biomedical imaging due to a number of very unique characteristics (29–31). Quantum dots are semiconductor nanoparticles in the size range of 2–6 nm. Photon absorption in the semiconductor results in the formation of an exciton (an electron-hole pair). Semiconductor nanoparticles with diameters below the Bohr radius exhibit a strong quantum confinement effect, which results in the quantization of their electronic energy levels. The quantization level is related to particle size: smaller particles have a larger energy gap. The radiative recombination of the exciton results in the emission of a fluorescence photon with energy corresponding to the exciton's quantized energy levels. The lifetime for the recombination of the exciton is long, typically on the order of a few tens of nanoseconds. Quantum dots have been fabricated from II–VI (e.g., CdSe, CdTe, CdS, and ZnSe) and III–V (e.g., InP and InAs) semiconductors. Because of the compounds involved in the formation of these fluorescent labels, toxicity studies have to be performed prior to any experiments. Recent research has been devoted to the better manufacture of these semiconductor crystals, including methods to form a uniform crystalline core and to produce a surface capping layer that enhances the biocompatibility of these compounds, prevents their aggregation, and maximizes their quantum efficiency. Furthermore, coating the surface of quantum dots with convenient functional groups, including common linkages such as silane or biotin, has been accomplished to facilitate linkage to biological molecules. Quantum dots are unique in their broad absorption spectra, very narrow (~15 nm) emission spectra, and extraordinary photostability. In fact, quantum dots have been shown to have photobleaching rates orders of magnitude below those of organic dyes. Quantum dots also have excellent extinction coefficients and quantum yields. While there are significant advantages in using quantum dots, they also have a number of limitations, including their relatively large size compared with organic dyes and their lower fluorescence flux due to their long lifetime. Quantum dots have been applied for single receptor tracking on cell surfaces and for the visualization of tissue structures, such as blood vessels.

Genetic Expressible Probes. The development of genetically expressible probes has been rapid over the last decade (32). The most notable of these genetic probes is green fluorescent protein, GFP (33). GFP was isolated and purified from the bioluminescent jellyfish Aequorea victoria. Fusion proteins can be created by inserting the GFP gene into an expression vector that carries a gene coding for a protein of interest. This provides a completely noninvasive and molecularly specific approach to track the expression, distribution, and trafficking of specific proteins in cells and tissues. In order to better understand protein signaling processes and protein-protein interactions, fluorescent proteins of different colors have been created by random mutagenesis. Today, fluorescent proteins with emission spanning the spectral range from blue to red are readily available. Expressible fluorescent proteins that are sensitive to the cellular biochemical environment, such as pH and calcium, have also been developed. Novel fluorescent proteins with optically controllable fluorescent properties, such as the photoactivatable fluorescent protein PA-GFP, photoswitchable CFP, and pKindling red, have been created and may be used in tracing cell movement or protein transport. Finally, protein-protein interactions have been detected based on a novel fluorescent protein approach in which each of the interacting protein pairs carries one-half of a fluorescent protein structure that is not fluorescent by itself. Upon binding of the protein pairs, the two halves of the fluorescent protein recombine, which results in a fluorescent signal.

ADVANCED FUNCTIONAL IMAGING MODALITIES AND THEIR APPLICATIONS

A number of functional imaging modalities based on fluorescence microscopy have been developed. These techniques are extremely versatile and have found applications ranging from single-molecule studies to tissue level experiments. The implementation of the most common imaging modalities will be discussed with representative examples from the literature.

Intensity Measurements

The most basic application of fluorescence microscopy consists of mapping the fluorophore distribution based on emission intensity as a function of position. This map need not be static: measuring the intensity distribution as a function of time allows one to follow the evolution of biological processes. The fastest wide-field detectors can reach frame rates in the tens of kilohertz range, unfortunately at the expense of sensitivity and spatial resolution; they are used to study extremely fast dynamics, such as membrane potential imaging in neurons. For 3D imaging, point scanning techniques are typically slower than wide-field imaging, but can reach video rate using multifoci illumination.

Dynamic intensity imaging has been used at the tissue level to follow cancer cells as they flow through blood vessels and extravasate to form metastases, or in embryos to track the expression of a regulatory protein at different developmental stages. One commonly used technique to follow the movements of proteins in cellular systems is fluorescence recovery after photobleaching (FRAP). In FRAP studies, a small area of a cell expressing a fluorescently labeled protein is subjected to an intense illumination that photobleaches the dye and leads to a drastic drop in fluorescence intensity. The rate at which the intensity recovers provides a measure of the mobility of the protein of interest.
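A minimal FRAP analysis can be sketched as follows, assuming a single-exponential recovery model (a common simplification, not a method specified in the article). The recovery curve rises from the post-bleach intensity toward a plateau, and the fraction of molecules that are mobile follows from the pre-bleach, post-bleach, and plateau levels:

```python
import math

def frap_model(t, f_post, f_plateau, tau):
    """Fluorescence at time t after bleaching (single-exponential recovery)."""
    return f_plateau - (f_plateau - f_post) * math.exp(-t / tau)

def mobile_fraction(f_pre, f_post, f_plateau):
    """Fraction of fluorophores free to diffuse back into the bleached spot."""
    return (f_plateau - f_post) / (f_pre - f_post)

# Hypothetical normalized intensities: pre-bleach 1.0, post-bleach 0.2,
# recovery plateau 0.8 -> 75% of the protein pool is mobile.
m = mobile_fraction(f_pre=1.0, f_post=0.2, f_plateau=0.8)
f_start = frap_model(0.0, f_post=0.2, f_plateau=0.8, tau=5.0)
```

The recovery time constant tau reports on protein mobility (diffusion and binding), while a plateau below the pre-bleach level reveals an immobile fraction.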

An important aspect of fluorescence microscopy also lies in the image analysis. Particle tracking experiments are an excellent example. Zhuang and co-workers (34) studied the infection pathway of influenza virus labeled with a lipophilic dye in CHO cells. Each frame of the recorded movie was analyzed to extract the position of the virus particles with 40 nm accuracy. Three different stages of transport after endocytosis of the virus particle were separated, each involving a different transport mechanism transduced by a different protein, as shown in Fig. 12. The first stage is dependent on actin and results in an average transport distance of 2 μm from the initial site of binding at the cell periphery. The second stage is characterized by a sudden directed displacement that brings the virus close to the nucleus at a speed of 1–4 μm·s⁻¹, consistent with the velocity of dynein motors on microtubules. The last stage consists of back and forth motion in the perinuclear region. This is followed by the fusion of the endosome with the virus and the liberation of the genetic material, an event identified by a sudden increase in the fluorescence intensity due to the dequenching of the fluorescent tags on the virus.

Spectral Measurements

An extremely important feature of fluorescent microscopy is the ability to image many different fluorescent species based on their distinct emission spectra. Dichroic bandpass filters optimized for the dyes used in the experiment can discriminate efficiently between up to four or five different fluorophores.


Figure 12. Particle tracking of virus infecting a cell. (a) Trajectory of the virus. The color of the trajectory codes time from 0 s (black) to 500 s (yellow). The star indicates the fusion site of the virus membrane with the vesicle. (b) Time trajectories of the velocity (black) and fluorescence (blue) of the virus particle (34). Please see online version for color figure.

In a study of connexin trafficking, Ellisman and co-workers (35) used a dual labeling scheme to highlight the dynamics of these proteins. Using a recombinant protein fused to a tetracysteine receptor domain, the connexin was stably labeled with a biarsenical derivative of fluorescein or resorufin (a red fluorophore). The cells expressing these modified proteins were first stained with the green fluorophore and incubated 4–8 h. The proteins produced during this incubation period were fluorescently tagged in a second staining step with the red fluorophore. The two-color images highlight the dynamics of connexin turnover at the gap junction. As shown in Fig. 13, the older proteins are found in the center and are surrounded by the newer proteins.

For wide-field fluorescence imaging using a CCD camera, spectral information is collected sequentially while position information is collected at once. Bandpass filters can be inserted to select the emission wavelength between image frames. This procedure is relatively slow and can result in image misregistration due to slight misalignment of the filters. This problem can be overcome by the use of electronically tunable filters, of which two types are available, based either on liquid-crystal technology or on electrooptical crystals. Liquid-crystal tunable filters are made of stacks of birefringent liquid-crystal layers sandwiched between polarizing filters. Polarized light is incident upon the device, and the application of a voltage on a liquid-crystal layer produces a wavelength dependent rotation of the polarization of the transmitted light. After cumulative rotations through the multiple layers, only light in a specific spectral range is at the correct polarization to pass through the final polarizer without attenuation. The second type is the acoustooptic tunable filter (AOTF). An AOTF works by setting up an acoustic vibration at radio frequency (rf) in an electrooptical crystal to create a diffraction grating that singles out the appropriate wavelength with a bandwidth of a few nanometers. The main advantage of the AOTF is that wavelength selection is realized by tuning the acoustic wave frequency, which can be done in a fraction of a millisecond, while liquid-crystal tunable filters operate with a time constant of hundreds of milliseconds. The latter, however, have a larger clear aperture and a selectable bandwidth ranging from a fraction of a nanometer up to tens of nanometers. Liquid-crystal filters are more often used for emission filtering, while acoustooptic filters are more commonly used for excitation wavelength selection.

Typical emission spectra of molecular fluorophores have a sharp edge at the blue end of the spectrum, but a long tail extending far into the red due to electronic relaxation from the excited state into a vibrationally excited ground state. When imaging with a few color channels, where each channel represents a single chromophore, one has to take into account the spectral bleedthrough of each dye into the neighboring channels. Collecting signal in a larger number of channels allows the use of a linear unmixing technique that accounts for the real shape of the emission spectrum of each dye and quantifies its contribution in each pixel of the image more precisely. This technique can be implemented using tunable filters with a narrow bandwidth and CCD camera detectors. It has also been shown that an interferometer can be used to encode the spectral information in the image on the CCD camera: an image is recorded for each step of the interferometer, and a Fourier transform analysis recovers the spectral information. Although it requires more advanced postprocessing of the image data, this approach offers a large spectral range and a variable spectral resolution unmatched by the tunable filters.
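Linear unmixing treats each pixel's measured spectrum as a weighted sum of known reference spectra and solves for the weights by least squares. The sketch below does this for two dyes in five channels by solving the 2×2 normal equations directly; the reference spectra are made-up numbers chosen only to illustrate the idea of overlapping emissions:

```python
def unmix_two(spec1, spec2, pixel):
    """Least-squares abundances a, b such that pixel ~ a*spec1 + b*spec2."""
    s11 = sum(x * x for x in spec1)
    s22 = sum(y * y for y in spec2)
    s12 = sum(x * y for x, y in zip(spec1, spec2))
    b1 = sum(x * p for x, p in zip(spec1, pixel))
    b2 = sum(y * p for y, p in zip(spec2, pixel))
    det = s11 * s22 - s12 * s12          # normal-equation determinant
    a = (s22 * b1 - s12 * b2) / det
    b = (s11 * b2 - s12 * b1) / det
    return a, b

# Hypothetical overlapping emission spectra sampled in 5 channels.
dye_green = [0.1, 0.8, 0.6, 0.2, 0.05]
dye_yellow = [0.05, 0.5, 0.9, 0.4, 0.1]
pixel = [2 * g + 3 * y for g, y in zip(dye_green, dye_yellow)]
a, b = unmix_two(dye_green, dye_yellow, pixel)
```

Even though the two spectra overlap heavily in every channel, the fit recovers the true abundances (2 and 3 here), which a simple two-bandpass measurement could not.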

Figure 13. Connexin trafficking at a gap junction. The newly produced proteins are labeled in red 4 h (a and b) or 8 h (c and d) after the first staining step with green. The older proteins occupy the center of the gap junction, while the new ones are localized at its periphery (35). Please see online version for color figure.


In scanning systems, such as confocal microscopes, dichroic beamsplitters can readily be used to resolve two or three spectral channels in parallel at each scan position. If more spectral channels are desired for spectral decomposition measurements, the emission can be resolved on a multichannel detector using a grating or a prism to separate the different wavelength components. This has been used to separate the contributions of dyes with very similar emission spectra, like GFP and fluorescein, or to resolve the different intrinsic fluorophores contained in the skin, where many fluorophores with overlapping spectra are present.

A particularly promising class of probes for spectral imaging is the quantum dots. As discussed previously, the emission spectra of quantum dots are very narrow and can be tuned by changing their size. Further, all quantum dots have a very broad excitation spectrum, and a single excitation wavelength with photon energy larger than the band gap can efficiently excite many different colored quantum dots simultaneously. In their report, Simon and co-workers (36) used these particles to track metastatic cells injected in the tail of a mouse as they extravasated into lung tissue. Using spectrally resolved measurements, they demonstrated the ability to recognize at least five different cell populations, each labeled with different quantum dots. Figure 14 shows an image of cells labeled with different quantum dots and the emission spectra from each of these particles. The differences in emission spectra allow an easy identification of each cell population.

Lifetime Resolved Microscopy

Measurement of the fluorescence lifetime in a microscope provides another contrast mechanism and can be used to discriminate dyes emitting in the same wavelength range. It is also commonly used to monitor changes in the local environment of a probe, measuring the pH or the concentration of cations in situ. The fluorescence lifetime can be shortened by interaction of the probe with a quencher, such as oxygen. Another type of quenching is induced by the presence of the transition dipoles of other dyes in close vicinity, and lifetime measurements can be used to quantify energy-transfer processes (discussed further in a later section).

There are two methods to measure the fluorescence lifetime in a microscope: one operates in the time domain and the other in the frequency domain. In the time domain, a light pulse of short duration excites the sample and the decay of the emission is timed. The resulting intensity distribution is a convolution between the instrument response G and the exponential decay of the fluorophore:

I(t) = I0 ∫0t G(t − T) exp(−T/τ) dT (19)

In the frequency domain, the excitation light is modulated at frequency v. The intrinsic response time of the fluorescence acts as a low pass filter and the emitted signal is phase shifted and demodulated. Both the demodulation and the phase shift can be linked to the fluorescence lifetime.

\Delta\phi = \arctan(\omega\tau) \qquad (20)

M = \frac{1}{\sqrt{1 + \omega^2\tau^2}} \qquad (21)

In order to obtain a measurable phase shift and modulation, the frequency has to be on the order of the inverse of the lifetime (i.e., 10^8 Hz). However, it is difficult to measure these two parameters directly at such high frequencies. Therefore, one typically uses heterodyne detection to lower the frequency to the kilohertz range by modulating the detector at a frequency close to the excitation frequency.
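A short numerical sketch of Eqs. 20 and 21 (lifetime and modulation frequency assumed for illustration): the phase shift and demodulation are computed for a known lifetime, and each observable is inverted back to a lifetime estimate.

```python
import numpy as np

tau = 4e-9                  # assumed lifetime: 4 ns
f = 80e6                    # assumed modulation frequency: 80 MHz
omega = 2 * np.pi * f

phase = np.arctan(omega * tau)                      # Eq. 20
modulation = 1 / np.sqrt(1 + (omega * tau) ** 2)    # Eq. 21

# Each observable independently yields a lifetime estimate; for a
# single-exponential decay the two estimates agree.
tau_from_phase = np.tan(phase) / omega
tau_from_mod = np.sqrt(1 / modulation**2 - 1) / omega
```

For multiexponential decays the phase and modulation lifetimes differ, which is itself used as a diagnostic of decay heterogeneity.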

For wide-field microscopy, an image intensifier placed in front of the CCD camera modulates the gain of the detection. In the time domain, a short time gate is generated to collect the emission at various delays after the excitation. In the frequency domain, the image intensifier is modulated at high frequency and a series of images at different phases is acquired. In laser scanning confocal and multiphoton microscopes, time-correlated single-photon counting is the method of choice for lifetime measurements in the time domain because it offers an excellent signal/noise ratio at low light levels. Every time a photon is detected, the time elapsed since the excitation of the sample is measured. A histogram of all the arrival times yields a decay curve of the fluorescence in each pixel of the image. For brighter samples, a frequency-domain approach using modulated detectors can also be used to measure the lifetime.

Figure 14. Spectral imaging of cells labeled with quantum dots. Cells were labeled with five different quantum dots and imaged in a multiphoton microscope. Each symbol represents a different quantum dot; the symbols on the image match the emission spectra shown on the graph. The spectral imaging setup allows the different cell populations to be discriminated (36).

Figure 15. Quantification of skin pH by lifetime imaging. (a) Intensity, (b) modulation, (c) lifetime, and (d) pH maps of mouse skin at different depths. The lifetime measurements allow a determination of pH independent of the intensity changes recorded at different imaging depths (37).
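The per-pixel TCSPC procedure (histogram the photon arrival times, then fit the decay) can be sketched with synthetic photons; the lifetime and photon number below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each detected photon contributes one arrival time drawn from the
# exponential decay; here we synthesize 50,000 photons with tau = 2.5 ns.
tau = 2.5
arrival_times = rng.exponential(tau, size=50_000)

# Histogram the arrival times into time bins, as done for each pixel.
counts, edges = np.histogram(arrival_times, bins=100, range=(0, 20))
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit log(counts) vs time over well-populated bins: the slope is -1/tau.
mask = counts > 50
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)
tau_est = -1 / slope  # close to the true 2.5 ns
```

A real analysis would account for the instrument response and use Poisson-weighted (maximum likelihood) fitting, which matters at the low counts typical of a single pixel.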

To measure the pH in the skin of a mouse, Clegg and coworkers (37) used a modified fluorescein probe whose lifetime varies from 2.75 ns at pH 4.5 to 3.9 ns at pH 8.5. Imaging deeper into the skin, they observed that the average pH increases from 6.4 at the surface to >7 at a depth of 15 µm. The extracellular space is mostly acidic (pH ~6), while the intracellular space is at neutral pH. Typically, pH is measured in solution by correlating fluorescence intensities with specific pH levels. This approach is not suitable for tissues such as skin, where the dye is unevenly distributed (Fig. 15) due to differential partitioning. A measurement of pH based on the fluorescence lifetime does not depend on probe concentration, so the pH can be measured in the intra- and extracellular space at various depths in the skin.

Polarization Microscopy

Polarization microscopy is a technique that provides information about the orientation or the rotation of fluorophores. Linearly polarized excitation light preferentially excites molecules whose transition dipole is aligned along the polarization. If the molecule is in a rigid environment, the emitted fluorescence mostly retains a polarization parallel to the excitation light. However, if the molecule has time to rotate before it emits a photon, the emission polarization is randomized. The anisotropy r is a ratio calculated from the intensities parallel (I_\parallel) and perpendicular (I_\perp) to the incident polarization and is a measure of the ability of the molecule to rotate.

r = \frac{I_\parallel - I_\perp}{I_\parallel + 2 I_\perp} \qquad (22)

This ratio is governed mostly by two factors: the fluorescence lifetime τ and the rotational correlation time θ.

r = \frac{r_0}{1 + \tau/\theta} \qquad (23)

where r_0 is the fundamental anisotropy. Molecules with a short fluorescence lifetime and a long rotational correlation time (τ < θ) have a high anisotropy. In the opposite case, where molecules can rotate freely during the time they reside in the excited state, the anisotropy is low. An approximate measurement of the mobility of a molecule can be obtained by exciting the sample at different polarization angles. A proper measurement of the anisotropy requires both a linearly polarized excitation source and detection of the parallel and perpendicular components of the fluorescence using a polarizer. This technique has been used to measure viscosity and membrane fluidity in vivo. It has also been applied to quantify enzyme kinetics, relying on the fact that cleavage of a fluorescently labeled substrate leads to faster tumbling and thus a decrease in anisotropy.
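Eqs. 22 and 23 can be illustrated with a short sketch (all values assumed): the anisotropy computed from the two polarized intensity components, and the Perrin relation evaluated in the two limiting cases of a slowly and a rapidly tumbling molecule.

```python
def anisotropy(i_par, i_perp):
    """r = (I_par - I_perp) / (I_par + 2*I_perp), Eq. 22."""
    return (i_par - i_perp) / (i_par + 2 * i_perp)

def perrin(r0, tau, theta):
    """r = r0 / (1 + tau/theta), Eq. 23."""
    return r0 / (1 + tau / theta)

r0 = 0.4    # fundamental anisotropy (a typical upper value for one-photon excitation)
tau = 4.0   # assumed fluorescence lifetime, ns

# Slowly tumbling molecule (theta >> tau): anisotropy stays near r0.
r_rigid = perrin(r0, tau, theta=400.0)

# Freely rotating molecule (theta << tau): anisotropy collapses toward zero.
r_mobile = perrin(r0, tau, theta=0.4)
```

This is the quantitative basis of the enzyme-kinetics assay mentioned above: cleavage shortens θ, which drives r down.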

Goldstein and co-workers (38) used polarization microscopy at the single-molecule level to study the orientation of the kinesin motor on a microtubule. A thiol-reactive rhodamine dye was attached to cysteines on the motor protein. Microtubules decorated with the modified kinesin were imaged under different polarization angles. In the presence of AMP–PNP [5′-adenylyl imidodiphosphate, a nonhydrolyzable analogue of adenosine triphosphate (ATP)], the fluorescence intensity depends strongly on the angle of polarization of the excitation light (Fig. 16), proving that the kinesin maintains a fixed orientation. In the presence of adenosine diphosphate (ADP), however, the anisotropy is lower (no dependence on excitation polarization angle),


Figure 16. Mobility of single kinesin motors on microtubules probed by polarization microscopy. (a) Image of microtubules sparsely decorated with kinesin motors in the presence of AMP–PNP and ADP. (b) Time course of the fluorescence intensity recorded from single molecules excited with linearly polarized light at four different angles. The large fluctuations of the fluorescence intensity as a function of the excitation polarization in the AMP–PNP case demonstrate the rigidity of the kinesin motor on the microtubule (38).

leading to the conclusion that the kinesin is very mobile, while still attached to the microtubule.

Fluorescence Resonance Energy Transfer

Förster resonance energy transfer (FRET) is a technique used to monitor the interaction between two fluorophores on the nanometer scale. When a dye is promoted to its excited state, it can transfer this electronic excitation by dipole–dipole interaction to a nearby molecule. From the nature of this interaction, Förster predicted a dependence of the FRET efficiency on the sixth power of the distance, which was demonstrated experimentally by Stryer with linear polypeptides of varying length (39). The efficiency E of the process varies as a function of the distance R between the two molecules as

E = \frac{R_0^6}{R_0^6 + R^6} \qquad (24)

where R_0 is called the Förster distance, which depends on Avogadro's number N_A, the refractive index of the medium n, the quantum yield of the donor molecule Q_D, the orientation factor κ, and the overlap integral J.

R_0^6 = \frac{9000\,\ln(10)\,\kappa^2\,Q_D}{128\,\pi^5\,N_A\,n^4}\, J \qquad (25)

κ² accounts for the relative orientation of the transition dipoles of the donor and acceptor molecules. In most cases, random orientation is presumed and κ² is set to 2/3. The overlap integral J represents the spectral overlap between the emission of the donor and the absorption of the acceptor. For well-matched fluorophore pairs, R_0 is on the order of 4–7 nm.
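A minimal sketch of Eq. 24 with an assumed Förster distance of R_0 = 5 nm, including the inversion that turns a measured efficiency into a distance (the "spectroscopic ruler"):

```python
R0 = 5.0  # assumed Forster distance, nm

def fret_efficiency(r):
    """E = R0^6 / (R0^6 + r^6), Eq. 24."""
    return R0**6 / (R0**6 + r**6)

def distance(e):
    """Invert Eq. 24: a measured efficiency yields the donor-acceptor distance."""
    return R0 * (1 / e - 1) ** (1 / 6)

e_at_r0 = fret_efficiency(5.0)   # exactly 0.5 at r = R0
r_high = distance(0.9)           # high efficiency -> distance well below R0
```

The steep sixth-power dependence means E swings from ~0.98 to ~0.02 as r goes from R_0/2 to 2R_0, which is why FRET is so sensitive over exactly the 2–10 nm range of protein dimensions.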

Most FRET experiments are based on measuring the intensities of the donor and the acceptor, because FRET is characterized by a decrease in donor emission and an increase in acceptor signal. Thus, in principle, a two-color-channel microscope is sufficient to follow these changes. In practice, however, experimental artifacts such as concentration fluctuations and spectral bleed-through complicate the analysis of these images, and many different correction algorithms have been developed.

FRET measurements have been used in molecular studies to measure distances and to observe dynamic conformational changes in proteins and ribonucleic acid (RNA). In cellular studies, FRET is often used to map protein interactions: by labeling one protein with a donor dye and its ligand with an acceptor dye, energy transfer occurs only when the two proteins are bound, bringing the dyes into close proximity.

The addition of fluorescence lifetime imaging provides the further capability of retrieving the proportion of fluorophores undergoing FRET in each pixel of an image, independently of concentration variations. This is possible because the fluorescence lifetime of a FRET construct is shorter than the natural decay of the dye. Thus, for a mixture of interacting and free protein, fitting a double exponential to the fluorescence decay retrieves the proportion of interacting protein. This approach has been applied by Bastiaens and co-workers (40) to study the phosphorylation of the EGF receptor ErbB1. The transmembrane receptor is fused to GFP, and its phosphorylation is sensed by an antibody labeled with a red dye (Cy3). When ErbB1 is phosphorylated, the antibody binds to the receptor and FRET occurs because of the short distance between the antibody and the GFP. The ErbB1 receptors can be stimulated with EGF-coated beads, leading to phosphorylation and FRET. The time course of the stimulation was followed for each cell, and the fraction of phosphorylated receptors at various time intervals is shown in Fig. 17. After 30 s, the FRET events are localized at discrete locations, but after 1 min the whole periphery of the cell displays high FRET, demonstrating lateral signaling of the receptor after activation at discrete locations by the EGF-coated beads.
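The double-exponential analysis can be sketched as follows; the two lifetimes and the interacting fraction are assumed for illustration (not values from Ref. 40). With the lifetimes fixed, the amplitudes, and hence the FRET fraction, follow from linear least squares.

```python
import numpy as np

# Assumed lifetimes: free donor 2.5 ns, FRET-shortened donor 0.8 ns.
tau_free, tau_fret = 2.5, 0.8
t = np.linspace(0, 12, 200)

# Synthesize a "measured" per-pixel decay with 30% of donors undergoing FRET.
frac_fret = 0.3
decay = (1 - frac_fret) * np.exp(-t / tau_free) + frac_fret * np.exp(-t / tau_fret)

# With the two lifetimes fixed, the amplitudes are a linear problem.
basis = np.stack([np.exp(-t / tau_free), np.exp(-t / tau_fret)], axis=1)
amps, *_ = np.linalg.lstsq(basis, decay, rcond=None)
fraction = amps[1] / amps.sum()  # recovers the 30% interacting fraction
```

In practice the free-donor lifetime is measured in a control region or cell, and the fit is repeated pixel by pixel to produce the fraction maps of Fig. 17.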

Figure 17. Time course of the phosphorylation of the EGF receptor ErbB1 after stimulation by EGF-coated beads, observed by FRET between a GFP-modified EGF receptor and a phosphorylation-specific antibody labeled with Cy3. While the GFP intensity remains relatively constant, the concentration of the Cy3-tagged antibody clearly increases after stimulation, leading to an increased FRET signal as a function of time (40).