filter response at each location (x, y). Although matched filters are computationally expensive, the method can be parallelized to reduce computation time.
We have detailed several methods for retinal image anatomy segmentation that have proven successful when coupled with proper pre- and postprocessing steps. The choice of segmentation method should depend on specific domain knowledge, the level of accuracy required, and the computational resources available; no single method stands out as producing the best results in every situation. The retinal anatomy creates unique challenges for successful segmentation, due to “distracters” that include blood vessel occlusion, spatially varying albedo, the presence of pathologies such as lesions and cotton wool spots, and residual non-uniform illumination effects.1
2.4. Feature Representation Methods for Classification
Along with segmentation, discriminative feature extraction is a prerequisite for accurate pathology classification. In this section, the term feature describes a quantitative representation of the detected, localized, or segmented regions of the image, which are often specific parts of the retinal anatomy. We choose the features that best classify a retinal image for the specific problem, whether the labels are binary, such as “diseased” or “nondiseased,” or graded by the severity of a specific pathology. Most of these methods provide input into pattern recognition algorithms that use machine-learning methods for classification. Segmentation and image registration problems can also use feature extraction to localize or detect parts of the retinal anatomy. We will discuss several feature extraction methods that convert low-level information (e.g. pixel intensities) into robust numerical descriptors that can classify retinal pathologies and/or segment anatomy with high specificity.
2.4.1. Statistical Features
After an image has been successfully segmented into its constituent parts, the size, shape, color, and texture of the resulting regions of interest can provide simple yet powerful values for discriminative classification. These descriptors are scale and rotation invariant, which increases the robustness and accuracy of classifying non-registered or orientation-corrected images. Microaneurysm detection and classification using these types of features has been used to grade the degree of diabetic retinopathy.23 We will provide the details on how to compute the three most common types of statistical descriptors.
2.4.1.1. Geometric descriptors
First, we will describe geometric descriptors. Assuming that a region of connected components has been established through segmentation, characteristics of its geometry can provide discriminative classification specificity. These values take into account only the overall shape formed by linked boundary pixels, providing illumination-invariant feature sets. The scalar descriptors listed below also make entry into a classification algorithm straightforward; a short code sketch for computing them follows the list.
Area is a simple sum of pixels contained within a bounded region. Although not overly useful alone, it is typically needed for finding more interesting shape statistics.
Centroid is the center of mass or the geometric center of a region, which provides insight on region localization at a single point (x, y). This location can be used as an axis for orientation adjustment for registration or feature extraction that is not rotationally invariant.
Eccentricity measures how circular a region’s shape is. Enclosing the region in an ellipse, the eccentricity is the scalar ratio of the distance from the center to a focus of the ellipse, c, to its semi-major axis length, a:

$$\text{Eccentricity} = \frac{c}{a}.$$

This value will be zero if the region is a circle and one if the region degenerates to a line segment. This shape descriptor is useful when classifying hemorrhages.22
Euler number is a scalar value, E, computed as the number of holes, H, in a region subtracted from the number of objects, O: E = O − H. Knowing the number of holes in a region provides insight that can be used to decide how to extract descriptors.
Extent is the scalar ratio of the region area to the area of its bounding box. This value gives an idea of the overall density of the region.
Axis lengths are the major and minor axis lengths, in pixels, of the elliptical representation of the region, used in computing orientation.
Orientation is the angle between the x-axis and the major axis.

Perimeter gives the length of the boundary of the region.

Compactness is the scalar ratio of the square of the perimeter, p, to the area, a: compactness = $p^2/a$, which attains its minimal value for circular regions.
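As a minimal sketch of how these scalars can be computed in practice, the following assumes scikit-image is available and the segmentation is given as a binary mask; the helper name geometric_descriptors is an illustrative choice, not part of the methods described above:

```python
from skimage.measure import label, regionprops

def geometric_descriptors(mask):
    """Scalar shape descriptors for each connected region in a binary mask."""
    feats = []
    for r in regionprops(label(mask)):
        p = r.perimeter
        feats.append({
            "area": r.area,                        # pixel count
            "centroid": r.centroid,                # center of mass (row, col)
            "eccentricity": r.eccentricity,        # c/a; 0 = circle, near 1 = line
            "euler_number": r.euler_number,        # objects minus holes
            "extent": r.extent,                    # area / bounding-box area
            "axis_lengths": (r.major_axis_length,  # elliptical representation
                             r.minor_axis_length),
            "orientation": r.orientation,          # major-axis angle, radians
            "perimeter": p,                        # boundary length
            "compactness": p ** 2 / r.area,        # minimal for circles
        })
    return feats
```

Each resulting dictionary can be flattened into a numeric feature vector and passed directly to a classifier.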
We use the shape of boundary segments to create a boundary histogram, p(v_i), by representing the curve of a boundary as a 1D function, s(x), connecting its end-points and rotating it to a horizontal axis (Fig. 2.23). The amplitude at each boundary point is the distance from the x-axis to the boundary. We select an arbitrary number, A, of amplitude bins for the histogram and use p(v_i) as the estimate of the probability of amplitude bin value v_i occurring in the boundary segment. The nth moment of v about its mean is8:
$$\mu_n(v) = \sum_{i=0}^{A-1} (v_i - m)^n\, p(v_i), \tag{2.41}$$

where

$$m = \sum_{i=0}^{A-1} v_i\, p(v_i), \tag{2.42}$$
showing that m is the mean value of v and that the second moment gives the variance. These rotation-invariant shape descriptors retain physical boundary information, requiring only the first few moments to represent unique shape characteristics. Figure 2.23 shows the creation of a 1D boundary histogram.
Fig. 2.23. (a) Segmented region, (b) top and bottom boundaries aligned on the x-axis, and (c) boundary magnitude histogram.
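A minimal NumPy sketch of Eqs. (2.41) and (2.42) follows, assuming the rotated boundary amplitudes (Fig. 2.23(b)) are already available as a 1D array; the bin count and function name are illustrative choices:

```python
import numpy as np

def boundary_moments(amplitudes, n_bins=32, max_order=4):
    """Central moments of a 1D boundary signature, Eqs. (2.41)-(2.42)."""
    # Histogram of the boundary amplitudes gives the estimate p(v_i)
    counts, edges = np.histogram(amplitudes, bins=n_bins)
    p = counts / counts.sum()
    v = 0.5 * (edges[:-1] + edges[1:])     # bin-center values v_i
    m = np.sum(v * p)                      # mean, Eq. (2.42)
    # Moments of v about its mean, Eq. (2.41); mu_2 is the variance
    mu = {n: np.sum((v - m) ** n * p) for n in range(2, max_order + 1)}
    return m, mu
```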
Although most of these geometrical descriptors are straightforward to calculate, they provide insight for refining choices about which regions are suitable for further analysis and for use as classification features. These descriptors are not the only methods that can characterize regional information of boundaries and shapes, but they do provide a good introduction to how scalar values can represent complex shapes.
2.4.1.2. Texture features
Another important statistical description of a segmented region of interest is the intra-region intensity distribution, which is loosely referred to as the texture. The texture of an image region can have multiple values, depending on how the internal pixels are represented. We will discuss two sets of texture feature descriptors that provide discrimination for retinal pathology classification: histogram-based texture features and spatial textures through co-occurrence matrices.
Using the gray-level histogram (explained in Sec. 2.2.2.3), we can calculate nth-order moments, each of which provides insight into the characteristics of the intensity distribution and is often a discriminative descriptor for classification. First, let z be a random variable with an ROI histogram p(z_i), i = 0, 1, 2, . . . , L−1. We use the moment equation to find the nth moment,8
$$\mu_n(z) = \sum_{i=0}^{L-1} (z_i - m)^n\, p(z_i), \tag{2.43}$$

where m is the mean value of z:

$$m = \sum_{i=0}^{L-1} z_i\, p(z_i). \tag{2.44}$$
The most straightforward moment is the second moment, which is the variance (the square of the standard deviation). This value measures the intensity contrast, characterizing the relative smoothness of the ROI. The third moment is a measure of skewness, and the fourth moment describes the relative flatness. Other histogram-based texture descriptors include “uniformity,” which is formulated as
$$U = \sum_{i=0}^{L-1} p^2(z_i), \tag{2.45}$$
which attains its maximum value when all pixels have the same intensity (i.e., the region is “uniform”). Entropy, on the other hand, measures variability and is zero for a constant intensity; it is given as
$$e = -\sum_{i=0}^{L-1} p(z_i) \log_2 p(z_i). \tag{2.46}$$
Histogram-based texture features are useful due to their rotational invariance, but they lack discrimination in instances where the spatial texture (the positions of pixel intensities) provides characteristic class information.
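For reference, Eqs. (2.43)–(2.46) reduce to a few lines of NumPy. The sketch below assumes an 8-bit gray-level ROI; the function and key names are illustrative:

```python
import numpy as np

def histogram_texture_features(roi, levels=256):
    """Histogram-based texture descriptors of a gray-level ROI, Eqs. (2.43)-(2.46)."""
    counts, _ = np.histogram(roi, bins=levels, range=(0, levels))
    p = counts / counts.sum()              # histogram estimate of p(z_i)
    z = np.arange(levels)
    m = np.sum(z * p)                      # mean intensity, Eq. (2.44)
    mu = {n: np.sum((z - m) ** n * p) for n in (2, 3, 4)}  # Eq. (2.43)
    uniformity = np.sum(p ** 2)            # Eq. (2.45)
    nonzero = p[p > 0]                     # skip empty bins to avoid log2(0)
    entropy = -np.sum(nonzero * np.log2(nonzero))          # Eq. (2.46)
    return {"mean": m, "variance": mu[2], "skewness": mu[3],
            "flatness": mu[4], "uniformity": uniformity, "entropy": entropy}
```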
Representing a segmented region as pixels with location and intensity I(x, y), we create a set of co-occurrence matrices that tabulate how often a pair of pixels with similar intensity values “co-occur” in a specific orientation with respect to one another. The user can define an orientation of 0° (pixels I(x, y) and I(x + 1, y)), 45° (pixels I(x, y) and I(x + 1, y − 1)), 90° (pixels I(x, y) and I(x, y − 1)), or −45° (pixels I(x, y) and I(x + 1, y + 1)), producing a square matrix, G, with k intensity bins along each dimension. The value at each location G_{i,j} is the total number of times two pixels in the chosen orientation with the corresponding intensity values occur within the region. After constructing co-occurrence matrices in the principal directions (typically 0°, 45°, 90°, and −45°), we normalize them to give a joint probability of occurrence of pixel pairs with the corresponding orientation and intensity range. From these we can compute a set of spatially dependent texture descriptors termed Haralick features24:
$$\text{Contrast:}\quad \sum_{i,j} |i - j|^2\, p(i, j),$$

$$\text{Correlation:}\quad \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j)\, p(i, j)}{\sigma_i\, \sigma_j},$$

$$\text{Energy:}\quad \sum_{i,j} p(i, j)^2, \quad\text{and}$$

$$\text{Homogeneity:}\quad \sum_{i,j} \frac{p(i, j)}{1 + |i - j|}.$$
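As a sketch of this construction, scikit-image's graycomatrix and graycoprops compute normalized co-occurrence matrices and these four descriptors directly; the quantization level and helper name below are illustrative choices, and note that skimage measures angles counterclockwise, so 135° plays the role of the −45° pairing described above:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def cooccurrence_features(roi, levels=16):
    """Haralick-style features from normalized co-occurrence matrices."""
    # Quantize intensities into `levels` bins so G stays small
    img = np.round(roi / roi.max() * (levels - 1)).astype(np.uint8)
    # Principal directions: 0, 45, 90, and 135 degrees
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    G = graycomatrix(img, distances=[1], angles=angles,
                     levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(G, prop).ravel()   # one value per direction
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
```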
Contrast measures intensity differences between neighboring pixels over the image, whereas correlation returns a measure of how correlated a pixel