- •Contents
- •1.1. Introduction to the Eye
- •1.2. The Anatomy of the Human Visual System
- •1.3. Neurons
- •1.4. Synapses
- •1.5. Vision — Sensory Transduction
- •1.6. Retinal Processing
- •1.7. Visual Processing in the Brain
- •1.8. Biological Vision and Computer Vision Algorithms
- •References
- •2.1. Introduction to Computational Methods for Feature Detection
- •2.2. Preprocessing Methods for Retinal Images
- •2.2.1. Illumination Effect Reduction
- •2.2.1.1. Non-linear brightness transform
- •2.2.2. Image Normalization and Enhancement
- •2.2.2.1. Color channel transformations
- •2.2.2.3. Local adaptive contrast enhancement
- •2.2.2.4. Histogram transformations
- •2.3. Segmentation Methods for Retinal Anatomy Detection and Localization
- •2.3.1. Boundary Detection Methods
- •2.3.1.1. First-order difference operators
- •2.3.1.2. Second-order boundary detection
- •2.3.1.3. Canny edge detection
- •2.3.2. Edge Linkage Methods for Boundary Detection
- •2.3.2.1. Local neighborhood gradient thresholding
- •2.3.2.2. Morphological operations for edge link enhancement
- •2.3.2.3. Hough transform for edge linking
- •2.3.3. Thresholding for Image Segmentation
- •2.3.3.1. Segmentation with a single threshold
- •2.3.3.2. Multi-level thresholding
- •2.3.3.3. Windowed thresholding
- •2.3.4. Region-Based Methods for Image Segmentation
- •2.3.4.1. Region growing
- •2.3.4.2. Watershed segmentation
- •2.4.1. Statistical Features
- •2.4.1.1. Geometric descriptors
- •2.4.1.2. Texture features
- •2.4.1.3. Invariant moments
- •2.4.2. Data Transformations
- •2.4.2.1. Fourier descriptors
- •2.4.2.2. Principal component analysis (PCA)
- •2.4.3. Multiscale Features
- •2.4.3.1. Wavelet transform
- •2.4.3.2. Scale-space methods for feature extraction
- •2.5. Summary
- •References
- •3.1.1. EBM Process
- •3.1.2. Evidence-Based Medical Issues
- •3.1.3. Value-Based Evidence
- •3.2.1. Economic Evaluation
- •3.2.2. Decision Analysis Method
- •3.2.3. Advantages of Decision Analysis
- •3.2.4. Perspective in Decision Analysis
- •3.2.5. Decision Tree in Decision Analysis
- •3.3. Use of Information Technologies for Diagnosis in Ophthalmology
- •3.3.1. Data Mining in Ophthalmology
- •3.3.2. Graphical User Interface
- •3.4. Role of Computational System in Curing Disease of an Eye
- •3.4.1. Computational Decision Support System: Diabetic Retinopathy
- •3.4.1.1. Wavelet-based neural network
- •3.4.1.2. Content-based image retrieval
- •3.4.2. Computational Decision Support System: Cataracts
- •3.4.2.2. K nearest neighbors
- •3.4.2.3. GUI of the system
- •3.4.3. Computational Decision Support System: Glaucoma
- •3.4.3.1. Using fuzzy logic
- •3.4.4. Computational Decision Support System: Blepharitis, Rosacea, Sjögren, and Dry Eyes
- •3.4.4.1. Utility of bleb imaging with anterior segment OCT in clinical decision making
- •3.4.4.2. Computational decision support system: RD
- •3.4.4.3. Role of computational system
- •3.4.5. Computational Decision Support System: Amblyopia
- •3.4.5.1. Role of computational decision support system in amblyopia
- •3.5. Conclusion
- •References
- •4.1. Introduction to Oxygen in the Retina
- •4.1.1. Microelectrode Methods
- •4.1.2. Phosphorescence Dye Method
- •4.1.3. Spectrographic Method
- •4.1.6. HSI Method
- •4.2. Experiment One
- •4.2.1. Methods and Materials
- •4.2.1.1. Animals
- •4.2.1.2. Systemic oxygen saturation
- •4.2.1.3. Intraocular pressure
- •4.2.1.4. Fundus camera
- •4.2.1.5. Hyperspectral imaging
- •4.2.1.6. Extraction of spectral curves
- •4.2.1.7. Mapping relative oxygen saturation
- •4.2.1.8. Relative saturation indices (RSIs)
- •4.2.2. Results
- •4.2.2.1. Spectral signatures
- •4.2.2.2. Oxygen breathing
- •4.2.2.3. Intraocular pressure
- •4.2.2.4. Responses to oxygen breathing
- •4.2.2.5. Responses to high IOP
- •4.2.3. Discussion
- •4.2.3.1. Pure oxygen breathing experiment
- •4.2.3.2. IOP perturbation experiment
- •4.2.3.3. Hyperspectral imaging
- •4.3. Experiment Two
- •4.3.1. Methods and Materials
- •4.3.1.1. Animals, anesthesia, blood pressure, and IOP perturbation
- •4.3.1.3. Spectral determinant of percentage oxygen saturation
- •4.3.1.5. Preparation and calibration of red blood cell suspensions
- •4.3.2. Results
- •4.3.2.2. Oxygen saturation of the ONH
- •4.3.3. Discussion
- •4.3.4. Conclusions
- •4.4. Experiment Three
- •4.4.1. Methods and Materials
- •4.4.1.1. Compliance testing
- •4.4.1.2. Hyperspectral imaging
- •4.4.1.3. Selection of ONH structures
- •4.4.1.4. Statistical methods
- •4.4.2. Results
- •4.4.2.1. Compliance testing
- •4.4.2.2. Blood spectra from ONH structures
- •4.4.2.3. Oxygen saturation of ONH structures
- •4.4.2.4. Oxygen saturation maps
- •4.4.3. Discussion
- •4.5. Experiment Four
- •4.5.1. Methods and Materials
- •4.5.2. Results
- •4.5.3. Discussion
- •4.6. Experiment Five
- •4.6.1. Methods and Materials
- •4.6.1.3. Automatic control point detection
- •4.6.1.4. Fused image optimization
- •4.7. Conclusion
- •References
- •5.1. Introduction to Thermography
- •5.2. Data Acquisition
- •5.3. Methods
- •5.3.1. Snake and GVF
- •5.3.2. Target Tracing Function and Genetic Algorithm
- •5.3.3. Locating Cornea
- •5.4. Results
- •5.5. Discussion
- •5.6. Conclusion
- •References
- •6.1. Introduction to Glaucoma
- •6.1.1. Glaucoma Types
- •6.1.1.1. Primary open-angle glaucoma
- •6.1.1.2. Angle-closure glaucoma
- •6.1.2. Diagnosis of Glaucoma
- •6.2. Materials and Methods
- •6.2.1. c/d Ratio
- •6.2.2. Measuring the Area of Blood Vessels
- •6.2.3. Measuring the ISNT Ratio
- •6.3. Results
- •6.4. Discussion
- •6.5. Conclusion
- •References
- •7.1. Introduction to Temperature Distribution
- •7.3. Mathematical Model
- •7.3.1. The Human Eye
- •7.3.2. The Eye Tumor
- •7.3.3. Governing Equations
- •7.3.4. Boundary Conditions
- •7.4. Material Properties
- •7.5. Numerical Scheme
- •7.5.1. Integro-Differential Equations
- •7.6. Results
- •7.6.1. Numerical Model
- •7.6.2. Case 1
- •7.6.3. Case 2
- •7.6.4. Discussion
- •7.7. Parametric Optimization
- •7.7.1. Analysis of Variance
- •7.7.2. Taguchi Method
- •7.7.3. Discussion
- •7.8. Concluding Remarks
- •References
- •8.1. Introduction to IR Thermography
- •8.2. Infrared Thermography and the Measured OST
- •8.3. The Acquisition of OST
- •8.3.1. Manual Measures
- •8.3.2. Semi-Automated and Fully Automated
- •8.4. Applications to Ocular Studies
- •8.4.1. On Ocular Physiologies
- •8.4.2. On Ocular Diseases and Surgery
- •8.5. Discussion
- •References
- •9.1. Introduction
- •9.1.1. Preprocessing
- •9.1.1.1. Shade correction
- •9.1.1.2. Hough transform
- •9.1.1.3. Top-hat transform
- •9.1.2. Image Segmentation
- •9.1.2.1. The region approach
- •9.1.2.2. The gradient-based method
- •9.1.2.3. Edge detection
- •9.1.2.3.2. The second-order derivative methods
- •9.1.2.3.3. The optimal edge detector
- •9.2. Image Registration
- •9.4. Automated, Integrated Image Analysis Systems
- •9.5. Conclusion
- •References
- •10.1. Introduction to Diabetic Retinopathy
- •10.2. Data Acquisition
- •10.3. Feature Extraction
- •10.3.1. Blood Vessel Detection
- •10.3.2. Exudates Detection
- •10.3.3. Hemorrhages Detection
- •10.3.4. Contrast
- •10.4.1. Backpropagation Algorithm
- •10.5. Results
- •10.6. Discussion
- •10.7. Conclusion
- •References
- •11.1. Related Studies
- •11.2.1. Encryption
- •11.3. Compression Technique
- •11.3.1. Huffman Coding
- •11.4. Error Control Coding
- •11.4.1. Hamming Codes
- •11.4.2. BCH Codes
- •11.4.3. Convolutional Codes
- •11.4.4. RS Codes
- •11.4.5. Turbo Codes
- •11.5. Results
- •11.5.1. Using Turbo Codes for Transmission of Retinal Fundus Image
- •11.6. Discussion
- •11.7. Conclusion
- •References
- •12.1. Introduction to Laser-Thermokeratoplasty (LTKP)
- •12.2. Characteristics of LTKP
- •12.3. Pulsed Laser
- •12.4. Continuous-Wave Laser
- •12.5. Mathematical Model
- •12.5.1. Model Description
- •12.5.2. Governing Equations
- •12.5.3. Initial-Boundary Conditions
- •12.6. Numerical Scheme
- •12.6.1. Integro-Differential Equation
- •12.7. Results
- •12.7.1. Pulsed Laser
- •12.7.2. Continuous-Wave Laser
- •12.7.3. Thermal Damage Assessment
- •12.8. Discussion
- •12.9. Concluding Remarks
- •References
- •13.1. Introduction to Optical Eye Modeling
- •13.1.1. Ocular Measurements for Optical Eye Modeling
- •13.1.1.1. Curvature, dimension, thickness, or distance parameters of ocular elements
- •13.1.1.2. Three-dimensional (3D) corneal topography
- •13.1.1.3. Crystalline lens parameters
- •13.1.1.4. Refractive index
- •13.1.1.5. Wavefront aberration
- •13.1.2. Eye Modeling Using Contemporary Optical Design Software
- •13.1.3. Optical Optimization and Merit Function
- •13.2. Personalized and Population-Based Eye Modeling
- •13.2.1. Customized Eye Modeling
- •13.2.1.1. Optimization to the refractive error
- •13.2.1.2. Optimization to the wavefront measurement
- •13.2.1.3. Tolerance analysis
- •13.2.2. Population-Based Eye Modeling
- •13.2.2.1. Accommodative eye modeling
- •13.2.2.2. Ametropic eye modeling
- •13.2.2.3. Modeling with consideration of ocular growth and aging
- •13.2.2.4. Modeling for disease development
- •13.2.3. Validation of Eye Models
- •13.2.3.1. Point spread function and modulation transfer function
- •13.2.3.2. Letter chart simulation
- •13.2.3.3. Night/day vision simulation
- •13.3. Other Modeling Considerations
- •13.3.1. Stiles Crawford Effect (SCE)
- •13.3.1.2. Other retinal properties
- •13.3.1.4. Optical opacity
- •13.4. Examples of Ophthalmic Simulations
- •13.4.1. Simulation of Retinoscopy Measurements with Eye Models
- •13.4.2. Simulation of PR
- •13.5. Conclusion
- •References
- •14.1. Network Infrastructure
- •14.1.1. System Requirements
- •14.1.2. Network Architecture Design
- •14.1.4. GUI Design
- •14.1.5. Performance Evaluation of the Network
- •14.2. Image Analysis
- •14.2.1. Vascular Tree Segmentation
- •14.2.2. Quality Assessment
- •14.2.3. ON Detection
- •14.2.4. Macula Localization
- •14.2.5. Lesion Segmentation
- •14.2.7. Patient Demographics and Statistical Outcomes
- •14.2.8. Disease State Assessment
- •14.2.9. Image QA
- •Acknowledgments
- •References
- •Index
Prerna Sethi and Hilary W. Thompson
background to grow and when they converge, the points of contact define the boundary.
9.1.2.3. Edge detection
Edge detection is a feature-extraction technique that locates the regions of an image where the intensity changes sharply or discontinuities occur. If a pixel lies on the boundary of an object, its neighborhood will show a region of transition. The two main characteristics of edge detection operators are slope and direction. Points lying on an edge can be detected by: (1) detecting local maxima or minima of the first derivative, or (2) detecting the zero crossings of the second derivative. Edge detection methods fall broadly into three categories: first-order derivative (gradient), second-order derivative, and optimal edge detection.
9.1.2.3.1. The first-order derivative (gradient) methods
The first-order derivative methods have kernel operators that calculate the strength of the slope in the vertical or horizontal direction. Consequently, the edge strength is calculated as an aggregate of the different components of the slope. The Prewitt,25 Roberts,26 and Sobel27 operators are classified as the first-order derivative methods.
The Prewitt edge operator measures the horizontal edge component using kernel Kx and the vertical edge component using kernel Ky. The gradient intensity of the current pixel is calculated as |Kx| + |Ky|. The Prewitt operator is easy to implement and is less computationally intensive than other methods; however, one of its major drawbacks is sensitivity to noise.24
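As an illustrative sketch (not from the chapter), the Prewitt kernels and the |Kx| + |Ky| aggregation described above can be coded as follows; the step-edge test image and helper names are hypothetical:

```python
import numpy as np

# Prewitt kernels for the horizontal (Kx) and vertical (Ky) edge components
Kx = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
Ky = np.array([[-1, -1, -1],
               [ 0,  0,  0],
               [ 1,  1,  1]], dtype=float)

def correlate2d(img, k):
    """Valid-mode 2D cross-correlation via shifted slices (no SciPy needed)."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + H - kh + 1, j:j + W - kw + 1]
    return out

def prewitt_magnitude(img):
    # Edge strength aggregated as |Kx| + |Ky|, as in the text
    return np.abs(correlate2d(img, Kx)) + np.abs(correlate2d(img, Ky))

# A vertical step edge: the interior columns straddling the step respond
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = prewitt_magnitude(img)
```

On this synthetic step the response is zero in flat regions and peaks where the window straddles the intensity jump.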
The Roberts edge operator is a local differential operator used for detecting edges. The mathematical formulation is given as:
g(x, y) = [ (I(r, c) − I(r + 1, c + 1))^2 + (I(r + 1, c) − I(r, c + 1))^2 ]^{1/2},   (9.7)
where I(r, c) is the input image with pixel coordinates (r, c). The Roberts edge operator marks the edge points in an image but provides no
information about the orientation of the edge. Results have indicated that this method works best with binary images.28
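Eq. (9.7) maps directly onto array slicing; a minimal sketch (the binary step image is an illustrative choice):

```python
import numpy as np

def roberts(I):
    """Roberts cross operator, Eq. (9.7): diagonal differences over 2x2 windows."""
    d1 = I[:-1, :-1] - I[1:, 1:]   # I(r, c) - I(r+1, c+1)
    d2 = I[1:, :-1] - I[:-1, 1:]   # I(r+1, c) - I(r, c+1)
    return np.sqrt(d1**2 + d2**2)

# Binary test image with a vertical step edge
I = np.zeros((4, 5))
I[:, 2:] = 1.0
g = roberts(I)
```

On a binary step the magnitude is sqrt(2) exactly at the edge column and zero elsewhere, consistent with the operator working best on binary images.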
The Sobel edge operator involves two convolution kernels denoted by g(x) and g(y), where
           [ −1  0  1 ]               [ −1  −2  −1 ]
g(x) =     [ −2  0  2 ]    and g(y) = [  0   0   0 ] .   (9.8)
           [ −1  0  1 ]               [  1   2   1 ]
The convolution kernels smooth the image and, hence, are less prone to noise. However, the edge localization is poor since it produces thicker edges.27
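A minimal sketch of the Sobel kernels of Eq. (9.8); the test image and names are illustrative. Note that two output columns respond to a single step, which illustrates the thicker edges mentioned above:

```python
import numpy as np

# Sobel kernels from Eq. (9.8); the [1, 2, 1] weights provide the smoothing
Gx_k = np.array([[-1, 0, 1],
                 [-2, 0, 2],
                 [-1, 0, 1]], dtype=float)
Gy_k = Gx_k.T  # transpose gives the g(y) kernel of Eq. (9.8)

def sobel_magnitude(img):
    """Valid-mode gradient magnitude sqrt(gx^2 + gy^2) via shifted slices."""
    H, W = img.shape
    gx = np.zeros((H - 2, W - 2))
    gy = np.zeros_like(gx)
    for i in range(3):
        for j in range(3):
            patch = img[i:i + H - 2, j:j + W - 2]
            gx += Gx_k[i, j] * patch
            gy += Gy_k[i, j] * patch
    return np.sqrt(gx**2 + gy**2)

img = np.zeros((6, 6))
img[:, 3:] = 1.0          # vertical step edge
mag = sobel_magnitude(img)
```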
9.1.2.3.2. The second-order derivative methods
The second-order derivative methods search for zero crossings to find edges in an image, which are computed from the second-order derivative expression.29,30 The Laplacian, Laplacian of Gaussian, and difference of Gaussian (DoG) methods are classified as second-order derivative methods.
The Laplacian of an input image I, denoted f(x, y), is defined as,

∇^2 f(x, y) = ∂^2 f(x, y)/∂x^2 + ∂^2 f(x, y)/∂y^2.   (9.9)
The Laplacian operator is usually susceptible to noise and requires filtering. The Laplacian of Gaussian method is also known as the Marr-Hildreth edge detector. It is defined as,
LoG(x, y) = −(1/(πσ^4)) [1 − (x^2 + y^2)/(2σ^2)] e^{−(x^2 + y^2)/(2σ^2)}.   (9.10)
The value of σ dictates the width of the Gaussian filter: the broader the Gaussian, the more smoothing is performed. However, the LoG is computationally intensive, and it can be approximated by the difference of two Gaussians (DoG). The DoG, also called the Mexican hat operator, is defined as,
DoG(x, y) = (1/(2πσ1^2)) e^{−(x^2 + y^2)/(2σ1^2)} − (1/(2πσ2^2)) e^{−(x^2 + y^2)/(2σ2^2)}.   (9.11)
Here, the width of the edge can be adjusted by changing the values of σ1 and σ2.
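The DoG kernel of Eq. (9.11) can be sampled directly to see the Mexican-hat shape; a small sketch, assuming σ1 = 1 and σ2 = 2 purely for illustration:

```python
import numpy as np

def dog(x, y, s1, s2):
    """Difference of Gaussians, Eq. (9.11)."""
    g1 = np.exp(-(x**2 + y**2) / (2 * s1**2)) / (2 * np.pi * s1**2)
    g2 = np.exp(-(x**2 + y**2) / (2 * s2**2)) / (2 * np.pi * s2**2)
    return g1 - g2

# Sample the kernel on an 11x11 grid; sigma1 < sigma2 gives the Mexican-hat profile:
# positive peak at the center, negative surround, near-zero total mass
ax = np.arange(-5, 6, dtype=float)
X, Y = np.meshgrid(ax, ax)
k = dog(X, Y, 1.0, 2.0)
```

The near-zero sum means the kernel responds to intensity changes rather than to uniform regions, which is what makes it an edge (zero-crossing) detector.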
9.1.2.3.3. The optimal edge detector
The Canny edge detection31 is one of the most popular edge detection techniques. The algorithm utilizes an optimal edge detector based on a set of criteria to achieve the following optimization constraints:
•Achieve a good localization to mark the edges as closely as possible to the actual edges,
- •Mark an edge only once when a single edge exists, to minimize multiple responses to a single edge (so that nonedges are not falsely marked), and
•Maximize the signal-to-noise ratio in detecting the true positives.
According to Canny, the optimal filter that meets the above three criteria can be approximated by the first derivative of the Gaussian function, defined as,
G(x, y) = (1/(2πσ^2)) e^{−(x^2 + y^2)/(2σ^2)},   (9.12)

∂G(x, y)/∂x = αx e^{−(x^2 + y^2)/(2σ^2)}   and   ∂G(x, y)/∂y = αy e^{−(x^2 + y^2)/(2σ^2)},   (9.13)

where α = −1/(2πσ^4).
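The derivative-of-Gaussian filter of Eqs. (9.12) and (9.13) can be evaluated on a grid; a brief sketch (the grid size and σ are illustrative), confirming the filter is odd in x, as a first derivative must be:

```python
import numpy as np

def gaussian(x, y, sigma):
    """2D Gaussian of Eq. (9.12)."""
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def dG_dx(x, y, sigma):
    """x-derivative of the Gaussian, Eq. (9.13), with alpha = -1/(2*pi*sigma**4)."""
    alpha = -1.0 / (2 * np.pi * sigma**4)
    return alpha * x * np.exp(-(x**2 + y**2) / (2 * sigma**2))

# Sample the filter on a 9x9 grid centered at the origin
ax = np.arange(-4, 5, dtype=float)
X, Y = np.meshgrid(ax, ax)
K = dG_dx(X, Y, 1.0)
```

Because the kernel is antisymmetric in x and has zero total mass, correlating it with an image responds only to horizontal intensity changes, not to uniform regions.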
Hafez et al.32 applied the Canny and Sobel edge detectors to find MAs in fluorescein angiograms of the ocular fundus. They first detected the edges in the image using the Canny edge detector and then subtracted the segments that represented vessels. For the remaining objects, they calculated an edge threshold by computing the Sobel edge operator at each point in the object. The results outperformed the Hough transform method in computational time, as well as in the number of false MAs detected.
We obtained a set of fluorescein angiographic images from the Louisiana State University Health Science Center, New Orleans, and performed a three-step preprocessing algorithm (Fig. 9.3). First, we map low-intensity pixels to high intensity values. The nonzero pixels of the resultant image correspond to the exudates or optic disc, and they are replaced by pixels with an average intensity value; this replacement helps dismiss false alarms. Second, we smooth the image for noise