- •Contents
- •1.1. Introduction to the Eye
- •1.2. The Anatomy of the Human Visual System
- •1.3. Neurons
- •1.4. Synapses
- •1.5. Vision — Sensory Transduction
- •1.6. Retinal Processing
- •1.7. Visual Processing in the Brain
- •1.8. Biological Vision and Computer Vision Algorithms
- •References
- •2.1. Introduction to Computational Methods for Feature Detection
- •2.2. Preprocessing Methods for Retinal Images
- •2.2.1. Illumination Effect Reduction
- •2.2.1.1. Non-linear brightness transform
- •2.2.2. Image Normalization and Enhancement
- •2.2.2.1. Color channel transformations
- •2.2.2.3. Local adaptive contrast enhancement
- •2.2.2.4. Histogram transformations
- •2.3. Segmentation Methods for Retinal Anatomy Detection and Localization
- •2.3.1. Boundary Detection Methods
- •2.3.1.1. First-order difference operators
- •2.3.1.2. Second-order boundary detection
- •2.3.1.3. Canny edge detection
- •2.3.2. Edge Linkage Methods for Boundary Detection
- •2.3.2.1. Local neighborhood gradient thresholding
- •2.3.2.2. Morphological operations for edge link enhancement
- •2.3.2.3. Hough transform for edge linking
- •2.3.3. Thresholding for Image Segmentation
- •2.3.3.1. Segmentation with a single threshold
- •2.3.3.2. Multi-level thresholding
- •2.3.3.3. Windowed thresholding
- •2.3.4. Region-Based Methods for Image Segmentation
- •2.3.4.1. Region growing
- •2.3.4.2. Watershed segmentation
- •2.4.1. Statistical Features
- •2.4.1.1. Geometric descriptors
- •2.4.1.2. Texture features
- •2.4.1.3. Invariant moments
- •2.4.2. Data Transformations
- •2.4.2.1. Fourier descriptors
- •2.4.2.2. Principal component analysis (PCA)
- •2.4.3. Multiscale Features
- •2.4.3.1. Wavelet transform
- •2.4.3.2. Scale-space methods for feature extraction
- •2.5. Summary
- •References
- •3.1.1. EBM Process
- •3.1.2. Evidence-Based Medical Issues
- •3.1.3. Value-Based Evidence
- •3.2.1. Economic Evaluation
- •3.2.2. Decision Analysis Method
- •3.2.3. Advantages of Decision Analysis
- •3.2.4. Perspective in Decision Analysis
- •3.2.5. Decision Tree in Decision Analysis
- •3.3. Use of Information Technologies for Diagnosis in Ophthalmology
- •3.3.1. Data Mining in Ophthalmology
- •3.3.2. Graphical User Interface
- •3.4. Role of Computational System in Curing Disease of an Eye
- •3.4.1. Computational Decision Support System: Diabetic Retinopathy
- •3.4.1.1. Wavelet-based neural network
- •3.4.1.2. Content-based image retrieval
- •3.4.2. Computational Decision Support System: Cataracts
- •3.4.2.2. K nearest neighbors
- •3.4.2.3. GUI of the system
- •3.4.3. Computational Decision Support System: Glaucoma
- •3.4.3.1. Using fuzzy logic
- •3.4.4. Computational Decision Support System: Blepharitis, Rosacea, Sjögren, and Dry Eyes
- •3.4.4.1. Utility of bleb imaging with anterior segment OCT in clinical decision making
- •3.4.4.2. Computational decision support system: RD
- •3.4.4.3. Role of computational system
- •3.4.5. Computational Decision Support System: Amblyopia
- •3.4.5.1. Role of computational decision support system in amblyopia
- •3.5. Conclusion
- •References
- •4.1. Introduction to Oxygen in the Retina
- •4.1.1. Microelectrode Methods
- •4.1.2. Phosphorescence Dye Method
- •4.1.3. Spectrographic Method
- •4.1.6. HSI Method
- •4.2. Experiment One
- •4.2.1. Methods and Materials
- •4.2.1.1. Animals
- •4.2.1.2. Systemic oxygen saturation
- •4.2.1.3. Intraocular pressure
- •4.2.1.4. Fundus camera
- •4.2.1.5. Hyperspectral imaging
- •4.2.1.6. Extraction of spectral curves
- •4.2.1.7. Mapping relative oxygen saturation
- •4.2.1.8. Relative saturation indices (RSIs)
- •4.2.2. Results
- •4.2.2.1. Spectral signatures
- •4.2.2.2. Oxygen breathing
- •4.2.2.3. Intraocular pressure
- •4.2.2.4. Responses to oxygen breathing
- •4.2.2.5. Responses to high IOP
- •4.2.3. Discussion
- •4.2.3.1. Pure oxygen breathing experiment
- •4.2.3.2. IOP perturbation experiment
- •4.2.3.3. Hyperspectral imaging
- •4.3. Experiment Two
- •4.3.1. Methods and Materials
- •4.3.1.1. Animals, anesthesia, blood pressure, and IOP perturbation
- •4.3.1.3. Spectral determinant of percentage oxygen saturation
- •4.3.1.5. Preparation and calibration of red blood cell suspensions
- •4.3.2. Results
- •4.3.2.2. Oxygen saturation of the ONH
- •4.3.3. Discussion
- •4.3.4. Conclusions
- •4.4. Experiment Three
- •4.4.1. Methods and Materials
- •4.4.1.1. Compliance testing
- •4.4.1.2. Hyperspectral imaging
- •4.4.1.3. Selection of ONH structures
- •4.4.1.4. Statistical methods
- •4.4.2. Results
- •4.4.2.1. Compliance testing
- •4.4.2.2. Blood spectra from ONH structures
- •4.4.2.3. Oxygen saturation of ONH structures
- •4.4.2.4. Oxygen saturation maps
- •4.4.3. Discussion
- •4.5. Experiment Four
- •4.5.1. Methods and Materials
- •4.5.2. Results
- •4.5.3. Discussion
- •4.6. Experiment Five
- •4.6.1. Methods and Materials
- •4.6.1.3. Automatic control point detection
- •4.6.1.4. Fused image optimization
- •4.7. Conclusion
- •References
- •5.1. Introduction to Thermography
- •5.2. Data Acquisition
- •5.3. Methods
- •5.3.1. Snake and GVF
- •5.3.2. Target Tracing Function and Genetic Algorithm
- •5.3.3. Locating Cornea
- •5.4. Results
- •5.5. Discussion
- •5.6. Conclusion
- •References
- •6.1. Introduction to Glaucoma
- •6.1.1. Glaucoma Types
- •6.1.1.1. Primary open-angle glaucoma
- •6.1.1.2. Angle-closure glaucoma
- •6.1.2. Diagnosis of Glaucoma
- •6.2. Materials and Methods
- •6.2.1. c/d Ratio
- •6.2.2. Measuring the Area of Blood Vessels
- •6.2.3. Measuring the ISNT Ratio
- •6.3. Results
- •6.4. Discussion
- •6.5. Conclusion
- •References
- •7.1. Introduction to Temperature Distribution
- •7.3. Mathematical Model
- •7.3.1. The Human Eye
- •7.3.2. The Eye Tumor
- •7.3.3. Governing Equations
- •7.3.4. Boundary Conditions
- •7.4. Material Properties
- •7.5. Numerical Scheme
- •7.5.1. Integro-Differential Equations
- •7.6. Results
- •7.6.1. Numerical Model
- •7.6.2. Case 1
- •7.6.3. Case 2
- •7.6.4. Discussion
- •7.7. Parametric Optimization
- •7.7.1. Analysis of Variance
- •7.7.2. Taguchi Method
- •7.7.3. Discussion
- •7.8. Concluding Remarks
- •References
- •8.1. Introduction to IR Thermography
- •8.2. Infrared Thermography and the Measured OST
- •8.3. The Acquisition of OST
- •8.3.1. Manual Measures
- •8.3.2. Semi-Automated and Fully Automated
- •8.4. Applications to Ocular Studies
- •8.4.1. On Ocular Physiologies
- •8.4.2. On Ocular Diseases and Surgery
- •8.5. Discussion
- •References
- •9.1. Introduction
- •9.1.1. Preprocessing
- •9.1.1.1. Shade correction
- •9.1.1.2. Hough transform
- •9.1.1.3. Top-hat transform
- •9.1.2. Image Segmentation
- •9.1.2.1. The region approach
- •9.1.2.2. The gradient-based method
- •9.1.2.3. Edge detection
- •9.1.2.3.2. The second-order derivative methods
- •9.1.2.3.3. The optimal edge detector
- •9.2. Image Registration
- •9.4. Automated, Integrated Image Analysis Systems
- •9.5. Conclusion
- •References
- •10.1. Introduction to Diabetic Retinopathy
- •10.2. Data Acquisition
- •10.3. Feature Extraction
- •10.3.1. Blood Vessel Detection
- •10.3.2. Exudates Detection
- •10.3.3. Hemorrhages Detection
- •10.3.4. Contrast
- •10.4.1. Backpropagation Algorithm
- •10.5. Results
- •10.6. Discussion
- •10.7. Conclusion
- •References
- •11.1. Related Studies
- •11.2.1. Encryption
- •11.3. Compression Technique
- •11.3.1. Huffman Coding
- •11.4. Error Control Coding
- •11.4.1. Hamming Codes
- •11.4.2. BCH Codes
- •11.4.3. Convolutional Codes
- •11.4.4. RS Codes
- •11.4.5. Turbo Codes
- •11.5. Results
- •11.5.1. Using Turbo Codes for Transmission of Retinal Fundus Image
- •11.6. Discussion
- •11.7. Conclusion
- •References
- •12.1. Introduction to Laser-Thermokeratoplasty (LTKP)
- •12.2. Characteristics of LTKP
- •12.3. Pulsed Laser
- •12.4. Continuous-Wave Laser
- •12.5. Mathematical Model
- •12.5.1. Model Description
- •12.5.2. Governing Equations
- •12.5.3. Initial-Boundary Conditions
- •12.6. Numerical Scheme
- •12.6.1. Integro-Differential Equation
- •12.7. Results
- •12.7.1. Pulsed Laser
- •12.7.2. Continuous-Wave Laser
- •12.7.3. Thermal Damage Assessment
- •12.8. Discussion
- •12.9. Concluding Remarks
- •References
- •13.1. Introduction to Optical Eye Modeling
- •13.1.1. Ocular Measurements for Optical Eye Modeling
- •13.1.1.1. Curvature, dimension, thickness, or distance parameters of ocular elements
- •13.1.1.2. Three-dimensional (3D) corneal topography
- •13.1.1.3. Crystalline lens parameters
- •13.1.1.4. Refractive index
- •13.1.1.5. Wavefront aberration
- •13.1.2. Eye Modeling Using Contemporary Optical Design Software
- •13.1.3. Optical Optimization and Merit Function
- •13.2. Personalized and Population-Based Eye Modeling
- •13.2.1. Customized Eye Modeling
- •13.2.1.1. Optimization to the refractive error
- •13.2.1.2. Optimization to the wavefront measurement
- •13.2.1.3. Tolerance analysis
- •13.2.2. Population-Based Eye Modeling
- •13.2.2.1. Accommodative eye modeling
- •13.2.2.2. Ametropic eye modeling
- •13.2.2.3. Modeling with consideration of ocular growth and aging
- •13.2.2.4. Modeling for disease development
- •13.2.3. Validation of Eye Models
- •13.2.3.1. Point spread function and modulation transfer function
- •13.2.3.2. Letter chart simulation
- •13.2.3.3. Night/day vision simulation
- •13.3. Other Modeling Considerations
- •13.3.1. Stiles Crawford Effect (SCE)
- •13.3.1.2. Other retinal properties
- •13.3.1.4. Optical opacity
- •13.4. Examples of Ophthalmic Simulations
- •13.4.1. Simulation of Retinoscopy Measurements with Eye Models
- •13.4.2. Simulation of PR
- •13.5. Conclusion
- •References
- •14.1. Network Infrastructure
- •14.1.1. System Requirements
- •14.1.2. Network Architecture Design
- •14.1.4. GUI Design
- •14.1.5. Performance Evaluation of the Network
- •14.2. Image Analysis
- •14.2.1. Vascular Tree Segmentation
- •14.2.2. Quality Assessment
- •14.2.3. ON Detection
- •14.2.4. Macula Localization
- •14.2.5. Lesion Segmentation
- •14.2.7. Patient Demographics and Statistical Outcomes
- •14.2.8. Disease State Assessment
- •14.2.9. Image QA
- •Acknowledgments
- •References
- •Index
Computational Methods for Feature Detection in Optical Images
filling in the outermost areas. The kernel must sum to zero to ensure that areas of constant gray value return a zero edge response. The edge response of a typical LoG kernel is given in Fig. 2.12.
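The zero-sum requirement can be illustrated with a short NumPy sketch (the kernel radius and σ below are arbitrary illustrative choices, not values from the text): a LoG kernel is built from the analytic expression, its small truncation residual is subtracted so it sums to exactly zero, and a constant-gray patch is shown to yield a zero response.

```python
import numpy as np

def log_kernel(sigma=1.0, radius=4):
    """Laplacian-of-Gaussian kernel, adjusted so its entries sum to zero."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x**2 + y**2
    k = ((r2 - 2 * sigma**2) / sigma**4) * np.exp(-r2 / (2 * sigma**2))
    k -= k.mean()  # truncating the infinite support leaves a tiny residual; remove it
    return k

k = log_kernel()
flat_patch = np.full(k.shape, 7.0)   # a region of constant gray values
response = (k * flat_patch).sum()    # one step of a convolution with the kernel
print(abs(response) < 1e-8)          # constant regions give a zero edge value
```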
2.3.1.3. Canny edge detection
As with the LoG method, Canny edge detection begins by smoothing the image with a Gaussian function to reduce false edge detections caused by noise. The method then applies edge detection kernels (Prewitt or Sobel) in four directions (0°, 45°, 90°, and 135°, for example) and computes the gradient magnitude and direction using the method explained in Sec. 2.3.1.1. A non-maximal suppression step is then employed to determine whether each gradient magnitude is a local maximum in the gradient direction. A pair of high and low thresholds is then applied to reject large gradients caused by noise: the high threshold ensures that the detected edges are genuine, and, using directional information from the gradients, the edges are traced throughout the image while the lower threshold is applied. This method produces a binary image of edge locations with typically thin edges, due to the non-maximal suppression step (Fig. 2.12).
2.3.2. Edge Linkage Methods for Boundary Detection
The boundary detection methods described in Secs. 2.3.1.1 through 2.3.1.3 provide useful initial edge information, but are not, on their own, robust enough to segment retinal features successfully. Edge pixels are typically corrupted by image noise, illumination effects, and occlusion discontinuities, so additional methods are required both to discard false edge data and to link true feature edges, improving segmentation results. Edge linking methods can also provide the initial steps for segmentation: by autonomously labeling linked edges, they separate edges from one another so that their structures can be compared to candidate features. We will discuss several methods that use edge detection output to enhance edge linkage and thereby increase the accuracy of retinal feature segmentation.
2.3.2.1. Local neighborhood gradient thresholding
A simple method to link similar gradients into a single edge unit is to compare magnitude and orientation values. With the domain
Michael Dessauer and Sumeet Dua
knowledge that retinal region boundary intensity is locally constant and that the boundary shape can be considered piecewise linear, we can use simple thresholding of the magnitude and orientation in a local neighborhood to connect like gradients to one another. We will use the gradient magnitude threshold equation
|∇f(x, y) − ∇f(x0, y0)| ≤ E,    (2.22)
where (x, y) and (x0, y0) are located within some defined local neighborhood, N, and E is a nonnegative threshold value. The gradient angle, α, found using the equation presented in Sec. 2.3.1, is also compared within the neighborhood:
|α(x, y) − α(x0, y0)| < A,    (2.23)
where A is a nonnegative angle threshold. Before applying the gradient similarity thresholds above, we assume that an initial edge threshold similar to those used in Secs. 2.3.1.1 through 2.3.1.3 has already been applied. The procedure is repeated recursively at every pixel location in the image, with an indexing step used to keep track of edge labels. Each new edge linkage is indexed by an integer, which can then be used to find the overall size of each edge. A final minimum/maximum size threshold can be used to erase very small or extremely large edges.
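The thresholds of Eqs. (2.22) and (2.23), the recursive labeling, and the final size filter can be sketched as follows (NumPy assumed; an iterative flood fill stands in for the recursion, and the default threshold values E, A, and min_size are illustrative, not from the text):

```python
import numpy as np

def link_edges(mag, ang, edge_mask, E=0.2, A=15.0, min_size=5):
    """Link 8-neighbor edge pixels whose magnitude difference is <= E and
    whose angle difference is < A (Eqs. (2.22)-(2.23)), label each linked
    edge with an integer, and erase edges smaller than min_size pixels."""
    labels = np.zeros(edge_mask.shape, dtype=int)
    next_label = 1
    H, W = edge_mask.shape
    for i in range(H):
        for j in range(W):
            if not edge_mask[i, j] or labels[i, j]:
                continue
            # Grow a new linked edge from this seed pixel.
            stack, members = [(i, j)], [(i, j)]
            labels[i, j] = next_label
            while stack:
                y, x = stack.pop()
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and edge_mask[ny, nx] and not labels[ny, nx]
                                and abs(mag[ny, nx] - mag[y, x]) <= E
                                and abs(ang[ny, nx] - ang[y, x]) < A):
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                            members.append((ny, nx))
            if len(members) < min_size:   # size threshold: erase tiny edges
                for y, x in members:
                    labels[y, x] = 0
            else:
                next_label += 1
    return labels
```

On an edge mask containing an 8-pixel line of constant gradient and one isolated noise pixel, the line receives a single label while the isolated pixel is erased by the size threshold.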
Figure 2.13 displays the steps of the local neighborhood edge linkage algorithm.
As seen in Fig. 2.13, some edges remain unlinked and some non-edge artifacts remain linked. Additional linking methods can provide further enhancement for more accurate segmentation.
Fig. 2.13. (a) Original grayscale image, (b) image smoothed by Gaussian kernel, (c) edge response using Sobel edge detector, and (d) linked and labeled edges using local neighborhood gradient thresholding.
2.3.2.2. Morphological operations for edge link enhancement
Morphological operations, which artificially dilate and erode edges using structuring elements defined by the user, are commonly used to strengthen edge linkages in retinal image feature segmentation.14 A binary image created by an edge detection method will contain broken edges due to soft edges, noise, illumination, or occlusion. A broken edge location will not have gradient values similar to those of its edge members, so methods such as neighborhood gradient thresholding will fail to link the edge. We can use a dilation operation, followed by an erosion step, to first link and then thin edges, refining the edge structures.
A dilation operation can be defined in set theory as

A ⊕ B = {z | [(B̂)z ∩ A] ⊆ A},    (2.24)

where A is the set of edge locations and B is a structuring element. The dilation of A by B is the set of all displacements, z, such that B̂ and A overlap by at least one element. Although this operation is similar to convolution, dilations are based on set operations, as opposed to the arithmetic operations of a convolution mask. We choose a structuring element intuitively, using domain knowledge of the edge structure we are attempting to link, so as to close gaps while not adding too many false edges. We show the results of edge dilation in Fig. 2.14 using a simple disk structuring element.
The counterpart of dilation, erosion, is used to reduce edge elements that are created from noise and illumination effects or from a dilation operation.
The definition of erosion that we will use is

A ⊖ B = {z | (B)z ⊆ A},    (2.25)
which indicates that the erosion of A by B is the set of all points z such that B, translated by z, is contained in A. Erosion is typically used in tandem with dilation operations, either to initially reduce edge artifacts or to thin edges. Performing an erosion followed by a dilation is referred to as an opening operation, which tends to smooth contours, break narrow edges, and erase thin artifacts. A closing operation is a dilation followed by an erosion, which tends to smooth edge contours and fuse narrow edges together. It is also common to perform several erosion and dilation steps using multiple structuring elements to enhance true edges while reducing false edge artifacts. An example of each operation can be seen in Fig. 2.14.

Fig. 2.14. Top: input binary edge image; left: erosion operation; left-center: dilation operation; right-center: opening operation; and right: closing operation.
More complex set-theoretic methods can yield better edge linkage and refinement results.8 Because edges can be described as groups of connected components, morphological operations provide useful enhancements for edge linkage, although the need to tune structuring-element parameters prevents these methods from providing fully robust solutions to retinal image feature segmentation. Opening and closing operations have been used for both vasculature and microaneurysm segmentation.15,16
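The dilation, erosion, opening, and closing operations can be sketched with NumPy set-style shifts. A symmetric structuring element is assumed (so B and its reflection B̂ coincide), and the broken-edge example below is illustrative, not taken from the chapter's figures:

```python
import numpy as np

def dilate(A, B):
    """A dilated by B: union of copies of A shifted by every on-pixel of B."""
    ph, pw = B.shape[0] // 2, B.shape[1] // 2
    P = np.pad(A, ((ph, ph), (pw, pw)))
    out = np.zeros(A.shape, dtype=bool)
    for dy, dx in zip(*np.nonzero(B)):
        out |= P[dy:dy + A.shape[0], dx:dx + A.shape[1]].astype(bool)
    return out

def erode(A, B):
    """A eroded by B: points z where B translated by z fits entirely in A."""
    ph, pw = B.shape[0] // 2, B.shape[1] // 2
    P = np.pad(A, ((ph, ph), (pw, pw)))
    out = np.ones(A.shape, dtype=bool)
    for dy, dx in zip(*np.nonzero(B)):
        out &= P[dy:dy + A.shape[0], dx:dx + A.shape[1]].astype(bool)
    return out

def opening(A, B):   # erosion then dilation: erases thin artifacts
    return dilate(erode(A, B), B)

def closing(A, B):   # dilation then erosion: fuses narrow edge breaks
    return erode(dilate(A, B), B)

# A broken horizontal edge with a one-pixel gap at column 4.
edge = np.zeros((7, 9), dtype=bool)
edge[3, 1:4] = True
edge[3, 5:8] = True
disk = np.ones((3, 3), dtype=int)   # crude 3x3 stand-in for a disk element
print(closing(edge, disk)[3, 4])    # True: the closing fuses the gap
```

Note the design trade-off the text describes: the same 3×3 element that closes the one-pixel gap would also fuse any two distinct edges passing within one pixel of each other, which is why structuring elements must be chosen with domain knowledge.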
2.3.2.3. Hough transform for edge linking
Unlike the previous methods, which use local neighborhoods to increase true edge linkage and reduce false edge artifacts, the Hough transform uses global processing to link points. As the name implies, the points are transformed from spatial coordinates into a shape space. The predetermined shape should resemble a feature boundary, such as a circle or an ellipse for the optic disk.17 The method can provide boundaries (as shapes in the transform space) that are robust to noise and to breaks in edges. In the case of the line transform, we use the normal representation of the line
x cos θ + y sin θ = ρ,    (2.26)
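Voting with the normal parameterization of Eq. (2.26) can be sketched as an accumulator over (ρ, θ) cells (NumPy assumed; the angular resolution of 180 bins is an arbitrary choice):

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Each edge pixel (x, y) votes for every (rho, theta) pair satisfying
    x cos(theta) + y sin(theta) = rho."""
    H, W = edge_mask.shape
    diag = int(np.ceil(np.hypot(H, W)))   # largest possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # offset makes rho an index
    return acc, thetas, diag

# A deliberately broken vertical line x = 5: every on-pixel still lies on
# the same line, so all votes collect in one accumulator cell.
mask = np.zeros((20, 20), dtype=bool)
mask[::2, 5] = True
acc, thetas, diag = hough_lines(mask)
print(acc[diag + 5, 0] == mask.sum())   # True: peak at (rho=5, theta=0)
```

This is the property the text highlights: because voting is global, gaps in the edge do not prevent the line from appearing as a single strong peak in the transform space.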