- •Contents
- •1.1. Introduction to the Eye
- •1.2. The Anatomy of the Human Visual System
- •1.3. Neurons
- •1.4. Synapses
- •1.5. Vision — Sensory Transduction
- •1.6. Retinal Processing
- •1.7. Visual Processing in the Brain
- •1.8. Biological Vision and Computer Vision Algorithms
- •References
- •2.1. Introduction to Computational Methods for Feature Detection
- •2.2. Preprocessing Methods for Retinal Images
- •2.2.1. Illumination Effect Reduction
- •2.2.1.1. Non-linear brightness transform
- •2.2.2. Image Normalization and Enhancement
- •2.2.2.1. Color channel transformations
- •2.2.2.3. Local adaptive contrast enhancement
- •2.2.2.4. Histogram transformations
- •2.3. Segmentation Methods for Retinal Anatomy Detection and Localization
- •2.3.1. Boundary Detection Methods
- •2.3.1.1. First-order difference operators
- •2.3.1.2. Second-order boundary detection
- •2.3.1.3. Canny edge detection
- •2.3.2. Edge Linkage Methods for Boundary Detection
- •2.3.2.1. Local neighborhood gradient thresholding
- •2.3.2.2. Morphological operations for edge link enhancement
- •2.3.2.3. Hough transform for edge linking
- •2.3.3. Thresholding for Image Segmentation
- •2.3.3.1. Segmentation with a single threshold
- •2.3.3.2. Multi-level thresholding
- •2.3.3.3. Windowed thresholding
- •2.3.4. Region-Based Methods for Image Segmentation
- •2.3.4.1. Region growing
- •2.3.4.2. Watershed segmentation
- •2.4.1. Statistical Features
- •2.4.1.1. Geometric descriptors
- •2.4.1.2. Texture features
- •2.4.1.3. Invariant moments
- •2.4.2. Data Transformations
- •2.4.2.1. Fourier descriptors
- •2.4.2.2. Principal component analysis (PCA)
- •2.4.3. Multiscale Features
- •2.4.3.1. Wavelet transform
- •2.4.3.2. Scale-space methods for feature extraction
- •2.5. Summary
- •References
- •3.1.1. EBM Process
- •3.1.2. Evidence-Based Medical Issues
- •3.1.3. Value-Based Evidence
- •3.2.1. Economic Evaluation
- •3.2.2. Decision Analysis Method
- •3.2.3. Advantages of Decision Analysis
- •3.2.4. Perspective in Decision Analysis
- •3.2.5. Decision Tree in Decision Analysis
- •3.3. Use of Information Technologies for Diagnosis in Ophthalmology
- •3.3.1. Data Mining in Ophthalmology
- •3.3.2. Graphical User Interface
- •3.4. Role of Computational System in Curing Disease of an Eye
- •3.4.1. Computational Decision Support System: Diabetic Retinopathy
- •3.4.1.1. Wavelet-based neural network
- •3.4.1.2. Content-based image retrieval
- •3.4.2. Computational Decision Support System: Cataracts
- •3.4.2.2. K nearest neighbors
- •3.4.2.3. GUI of the system
- •3.4.3. Computational Decision Support System: Glaucoma
- •3.4.3.1. Using fuzzy logic
- •3.4.4. Computational Decision Support System: Blepharitis, Rosacea, Sjögren, and Dry Eyes
- •3.4.4.1. Utility of bleb imaging with anterior segment OCT in clinical decision making
- •3.4.4.2. Computational decision support system: RD
- •3.4.4.3. Role of computational system
- •3.4.5. Computational Decision Support System: Amblyopia
- •3.4.5.1. Role of computational decision support system in amblyopia
- •3.5. Conclusion
- •References
- •4.1. Introduction to Oxygen in the Retina
- •4.1.1. Microelectrode Methods
- •4.1.2. Phosphorescence Dye Method
- •4.1.3. Spectrographic Method
- •4.1.6. HSI Method
- •4.2. Experiment One
- •4.2.1. Methods and Materials
- •4.2.1.1. Animals
- •4.2.1.2. Systemic oxygen saturation
- •4.2.1.3. Intraocular pressure
- •4.2.1.4. Fundus camera
- •4.2.1.5. Hyperspectral imaging
- •4.2.1.6. Extraction of spectral curves
- •4.2.1.7. Mapping relative oxygen saturation
- •4.2.1.8. Relative saturation indices (RSIs)
- •4.2.2. Results
- •4.2.2.1. Spectral signatures
- •4.2.2.2. Oxygen breathing
- •4.2.2.3. Intraocular pressure
- •4.2.2.4. Responses to oxygen breathing
- •4.2.2.5. Responses to high IOP
- •4.2.3. Discussion
- •4.2.3.1. Pure oxygen breathing experiment
- •4.2.3.2. IOP perturbation experiment
- •4.2.3.3. Hyperspectral imaging
- •4.3. Experiment Two
- •4.3.1. Methods and Materials
- •4.3.1.1. Animals, anesthesia, blood pressure, and IOP perturbation
- •4.3.1.3. Spectral determinant of percentage oxygen saturation
- •4.3.1.5. Preparation and calibration of red blood cell suspensions
- •4.3.2. Results
- •4.3.2.2. Oxygen saturation of the ONH
- •4.3.3. Discussion
- •4.3.4. Conclusions
- •4.4. Experiment Three
- •4.4.1. Methods and Materials
- •4.4.1.1. Compliance testing
- •4.4.1.2. Hyperspectral imaging
- •4.4.1.3. Selection of ONH structures
- •4.4.1.4. Statistical methods
- •4.4.2. Results
- •4.4.2.1. Compliance testing
- •4.4.2.2. Blood spectra from ONH structures
- •4.4.2.3. Oxygen saturation of ONH structures
- •4.4.2.4. Oxygen saturation maps
- •4.4.3. Discussion
- •4.5. Experiment Four
- •4.5.1. Methods and Materials
- •4.5.2. Results
- •4.5.3. Discussion
- •4.6. Experiment Five
- •4.6.1. Methods and Materials
- •4.6.1.3. Automatic control point detection
- •4.6.1.4. Fused image optimization
- •4.7. Conclusion
- •References
- •5.1. Introduction to Thermography
- •5.2. Data Acquisition
- •5.3. Methods
- •5.3.1. Snake and GVF
- •5.3.2. Target Tracing Function and Genetic Algorithm
- •5.3.3. Locating Cornea
- •5.4. Results
- •5.5. Discussion
- •5.6. Conclusion
- •References
- •6.1. Introduction to Glaucoma
- •6.1.1. Glaucoma Types
- •6.1.1.1. Primary open-angle glaucoma
- •6.1.1.2. Angle-closure glaucoma
- •6.1.2. Diagnosis of Glaucoma
- •6.2. Materials and Methods
- •6.2.1. c/d Ratio
- •6.2.2. Measuring the Area of Blood Vessels
- •6.2.3. Measuring the ISNT Ratio
- •6.3. Results
- •6.4. Discussion
- •6.5. Conclusion
- •References
- •7.1. Introduction to Temperature Distribution
- •7.3. Mathematical Model
- •7.3.1. The Human Eye
- •7.3.2. The Eye Tumor
- •7.3.3. Governing Equations
- •7.3.4. Boundary Conditions
- •7.4. Material Properties
- •7.5. Numerical Scheme
- •7.5.1. Integro-Differential Equations
- •7.6. Results
- •7.6.1. Numerical Model
- •7.6.2. Case 1
- •7.6.3. Case 2
- •7.6.4. Discussion
- •7.7. Parametric Optimization
- •7.7.1. Analysis of Variance
- •7.7.2. Taguchi Method
- •7.7.3. Discussion
- •7.8. Concluding Remarks
- •References
- •8.1. Introduction to IR Thermography
- •8.2. Infrared Thermography and the Measured OST
- •8.3. The Acquisition of OST
- •8.3.1. Manual Measures
- •8.3.2. Semi-Automated and Fully Automated
- •8.4. Applications to Ocular Studies
- •8.4.1. On Ocular Physiologies
- •8.4.2. On Ocular Diseases and Surgery
- •8.5. Discussion
- •References
- •9.1. Introduction
- •9.1.1. Preprocessing
- •9.1.1.1. Shade correction
- •9.1.1.2. Hough transform
- •9.1.1.3. Top-hat transform
- •9.1.2. Image Segmentation
- •9.1.2.1. The region approach
- •9.1.2.2. The gradient-based method
- •9.1.2.3. Edge detection
- •9.1.2.3.2. The second-order derivative methods
- •9.1.2.3.3. The optimal edge detector
- •9.2. Image Registration
- •9.4. Automated, Integrated Image Analysis Systems
- •9.5. Conclusion
- •References
- •10.1. Introduction to Diabetic Retinopathy
- •10.2. Data Acquisition
- •10.3. Feature Extraction
- •10.3.1. Blood Vessel Detection
- •10.3.2. Exudates Detection
- •10.3.3. Hemorrhages Detection
- •10.3.4. Contrast
- •10.4.1. Backpropagation Algorithm
- •10.5. Results
- •10.6. Discussion
- •10.7. Conclusion
- •References
- •11.1. Related Studies
- •11.2.1. Encryption
- •11.3. Compression Technique
- •11.3.1. Huffman Coding
- •11.4. Error Control Coding
- •11.4.1. Hamming Codes
- •11.4.2. BCH Codes
- •11.4.3. Convolutional Codes
- •11.4.4. RS Codes
- •11.4.5. Turbo Codes
- •11.5. Results
- •11.5.1. Using Turbo Codes for Transmission of Retinal Fundus Image
- •11.6. Discussion
- •11.7. Conclusion
- •References
- •12.1. Introduction to Laser-Thermokeratoplasty (LTKP)
- •12.2. Characteristics of LTKP
- •12.3. Pulsed Laser
- •12.4. Continuous-Wave Laser
- •12.5. Mathematical Model
- •12.5.1. Model Description
- •12.5.2. Governing Equations
- •12.5.3. Initial-Boundary Conditions
- •12.6. Numerical Scheme
- •12.6.1. Integro-Differential Equation
- •12.7. Results
- •12.7.1. Pulsed Laser
- •12.7.2. Continuous-Wave Laser
- •12.7.3. Thermal Damage Assessment
- •12.8. Discussion
- •12.9. Concluding Remarks
- •References
- •13.1. Introduction to Optical Eye Modeling
- •13.1.1. Ocular Measurements for Optical Eye Modeling
- •13.1.1.1. Curvature, dimension, thickness, or distance parameters of ocular elements
- •13.1.1.2. Three-dimensional (3D) corneal topography
- •13.1.1.3. Crystalline lens parameters
- •13.1.1.4. Refractive index
- •13.1.1.5. Wavefront aberration
- •13.1.2. Eye Modeling Using Contemporary Optical Design Software
- •13.1.3. Optical Optimization and Merit Function
- •13.2. Personalized and Population-Based Eye Modeling
- •13.2.1. Customized Eye Modeling
- •13.2.1.1. Optimization to the refractive error
- •13.2.1.2. Optimization to the wavefront measurement
- •13.2.1.3. Tolerance analysis
- •13.2.2. Population-Based Eye Modeling
- •13.2.2.1. Accommodative eye modeling
- •13.2.2.2. Ametropic eye modeling
- •13.2.2.3. Modeling with consideration of ocular growth and aging
- •13.2.2.4. Modeling for disease development
- •13.2.3. Validation of Eye Models
- •13.2.3.1. Point spread function and modulation transfer function
- •13.2.3.2. Letter chart simulation
- •13.2.3.3. Night/day vision simulation
- •13.3. Other Modeling Considerations
- •13.3.1. Stiles-Crawford Effect (SCE)
- •13.3.1.2. Other retinal properties
- •13.3.1.4. Optical opacity
- •13.4. Examples of Ophthalmic Simulations
- •13.4.1. Simulation of Retinoscopy Measurements with Eye Models
- •13.4.2. Simulation of PR
- •13.5. Conclusion
- •References
- •14.1. Network Infrastructure
- •14.1.1. System Requirements
- •14.1.2. Network Architecture Design
- •14.1.4. GUI Design
- •14.1.5. Performance Evaluation of the Network
- •14.2. Image Analysis
- •14.2.1. Vascular Tree Segmentation
- •14.2.2. Quality Assessment
- •14.2.3. ON Detection
- •14.2.4. Macula Localization
- •14.2.5. Lesion Segmentation
- •14.2.7. Patient Demographics and Statistical Outcomes
- •14.2.8. Disease State Assessment
- •14.2.9. Image QA
- •Acknowledgments
- •References
- •Index
Fig. 2.26. (a) Three optic disk images smoothed with Gaussian filters of increasing scale and (b) first component from PCA transform, containing the largest variability in the image.
2.4.3. Multiscale Features
We will next discuss representing a 2D image at multiple scales through computational operations that reduce the resolution yet retain several levels of detail that can be used as feature descriptors. The texture or shape of an image region at its native resolution can contain an excessive amount of unnecessary detail or noise artifacts, which can reduce classification accuracy. In addition, discriminative characteristics of an image region may appear only at certain reduced scales and would otherwise go unnoticed. We will describe two methods that reduce an image’s dimensionality at multiple levels (scales), representing the image as a 3D pyramid of successively decreasing resolutions from which discriminative features can be extracted more easily. We will give examples of how such methods can be used to extract features from retinal images.
2.4.3.1. Wavelet transform
We will discuss an image transformation that creates a multi-scale pyramid of coefficients, each of which can represent the ROI at a different resolution. Unlike the Fourier transform, wavelet transformations retain spatial and texture information, which can then be used as input to other feature extraction methods, such as those described above. Wavelets retain this information because their basis functions (or wavelets) are localized in the image, whereas a Fourier basis function spans the entire image. One such method is the discrete wavelet transform.
Although a full explanation of wavelet theory will not be provided here, we will give the mathematical formulation of the 2D discrete wavelet transform,29 which is the wavelet transform typically used with images. We first need to choose basis functions, which include a wavelet, ψ(x, y), and a scaling function, ϕ(x, y). Both functions are separable, so that their 2D combinations give information along the horizontal (H), vertical (V), and diagonal (D) directions:
\psi^{H}(x, y) = \psi(x)\,\phi(y), \qquad (2.58)

\psi^{V}(x, y) = \phi(x)\,\psi(y), \qquad (2.59)

\psi^{D}(x, y) = \psi(x)\,\psi(y). \qquad (2.60)
Each of these functions describes the variations at a particular point along the specified direction. The product of the 1D scaling functions gives the 2D scaling function:
ϕ(x, y) = ϕ(x)ϕ(y).
We apply a scale index j and translation indices m and n to the functions above to position them correctly before applying them to an image of size M × N:
\phi_{j,m,n}(x, y) = 2^{j/2}\,\phi(2^{j}x - m,\, 2^{j}y - n), \qquad (2.61)

\psi^{i}_{j,m,n}(x, y) = 2^{j/2}\,\psi^{i}(2^{j}x - m,\, 2^{j}y - n), \qquad i = \{H, V, D\}.
The 2D discrete wavelet transform of the function f(x, y) of an image of size M × N is
W_{\phi}(j_0, m, n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\,\phi_{j_0,m,n}(x, y), \qquad (2.62)

W^{i}_{\psi}(j, m, n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\,\psi^{i}_{j,m,n}(x, y), \qquad i = \{H, V, D\}.
j0 is the starting scale, which is typically set to j0 = 0. We select N = M = 2^J so that j = 0, 1, 2, . . . , J − 1 and m, n = 0, 1, 2, . . . , 2^j − 1.
Fig. 2.27. (a) Original ROI, (b) 2D discrete wavelet-transform image pyramid, and (c) third-level reconstruction from approximation coefficients.
We can then perform the inverse discrete wavelet transform to recover f(x, y) using
f(x, y) = \frac{1}{\sqrt{MN}} \sum_{m} \sum_{n} W_{\phi}(j_0, m, n)\,\phi_{j_0,m,n}(x, y) \qquad (2.63)

\quad + \frac{1}{\sqrt{MN}} \sum_{i = H,V,D} \sum_{j=j_0}^{\infty} \sum_{m} \sum_{n} W^{i}_{\psi}(j, m, n)\,\psi^{i}_{j,m,n}(x, y). \qquad (2.64)
We use the simplest wavelet function, the Haar wavelet (Fig. 2.27), to construct a wavelet decomposition pyramid containing details in the horizontal, vertical, and diagonal directions at multiple scales. We can use these details to extract shape, texture, and moment features at multiple scales, thus increasing the number of available ways to describe a ROI. The approximation of the image at varying resolutions also provides a compact representation of a region, reducing dimensionality in cases where fine details are not necessary.
A method similar to that illustrated in Fig. 2.26 has been used to localize the optic disc by performing four- and five-level Haar wavelet decompositions, with the optic disc reduced to a small cluster of coefficients.30
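To make the Haar decomposition concrete, the sketch below builds a multi-level 2D pyramid with the PyWavelets package and summarizes each subband by its mean energy. The library call, the number of levels, and the energy summary are illustrative assumptions; they are not the specific implementation or features used in the work cited above.

```python
# Sketch: multi-level 2D Haar wavelet decomposition of a retinal ROI,
# assuming a grayscale image stored as a 2D NumPy array.
import numpy as np
import pywt

def haar_pyramid_features(roi: np.ndarray, levels: int = 4) -> np.ndarray:
    """Decompose a ROI with the 2D Haar DWT and summarize each subband.

    Returns a vector of subband energies: one for the final approximation,
    plus one per detail orientation (H, V, D) per level.
    """
    # wavedec2 returns [cA_L, (cH_L, cV_L, cD_L), ..., (cH_1, cV_1, cD_1)]
    coeffs = pywt.wavedec2(roi.astype(float), wavelet="haar", level=levels)
    features = [np.mean(np.square(coeffs[0]))]            # approximation energy
    for cH, cV, cD in coeffs[1:]:                          # detail subbands
        features.extend(np.mean(np.square(c)) for c in (cH, cV, cD))
    return np.asarray(features)

# Example usage with a synthetic 64 x 64 ROI:
roi = np.random.rand(64, 64)
print(haar_pyramid_features(roi, levels=4).shape)          # (1 + 3*4,) = (13,)
```

Subband energies of this kind are only one choice; the coefficients at each level can equally feed the shape, texture, and moment descriptors discussed earlier in the chapter.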
2.4.3.2. Scale-space methods for feature extraction
As discussed in the previous section, we are interested in finding discriminative characteristics of a ROI, many of which occur over varying spatial scales. Several methods other than wavelet decomposition can provide a scale-space representation by convolving an image with kernels of varying size, each typically suppressing details below a certain scale. We will discuss two such scale-space representations that can be used to extract multi-scale features from the retinal anatomy: differences of Gaussians and Hessian determinants.
As discussed in Sec. 2.2.2.2, Gaussian kernels smooth an image by varying amounts depending on the size and magnitude of the discrete 2D Gaussian kernel. We can use successively smoothed images, L(x, y; t), produced by convolution with kernels of increasing scale (t = σ^2), to create a set of difference images, written as
G(x, y; t_1, t_2) = \frac{1}{2\pi t_1}\, e^{-\frac{x^2 + y^2}{2 t_1}} - \frac{1}{2\pi t_2}\, e^{-\frac{x^2 + y^2}{2 t_2}}, \qquad (2.65)
which reveal scale-specific features of the ROI (Fig. 2.28). This method can also be used to approximate the Laplacian of the Gaussian (LoG, as discussed in Sec. 2.3.1.3) for edge detection. We can then apply shape and texture feature extraction operations to the new set of multi-scale images to find scale-specific features.
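A minimal sketch of this construction, assuming SciPy's Gaussian filter and an illustrative set of scales, is shown below; it simply subtracts successively smoothed versions of the image to form the DoG stack of Eq. (2.65).

```python
# Sketch: a difference-of-Gaussians (DoG) scale space for a grayscale
# fundus image held in a 2D NumPy array. The sigma values are
# illustrative, not those of any particular published method.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_scale_space(image: np.ndarray, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Return DoG images between successive Gaussian-smoothed versions."""
    smoothed = [gaussian_filter(image.astype(float), sigma=s) for s in sigmas]
    # Each difference image highlights structure between two adjacent scales
    return [smoothed[k + 1] - smoothed[k] for k in range(len(smoothed) - 1)]

image = np.random.rand(128, 128)
dogs = dog_scale_space(image)
print(len(dogs), dogs[0].shape)   # 3 (128, 128)
```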
We can derive a set of 2D matrices that find corners for each smoothed image, L(x, y; t) by first calculating the Hessian matrix, which is the square
Fig. 2.28. Top-left: Gaussian smoothed image with increasing t; top-right: difference of Gaussian images (DoG); bottom-left: determinant of Hessian matrices with increasing t; and bottom-right: feature location (markers) and scale (radius of circle) from Hessian scale space.
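The Hessian-based scale space shown in Fig. 2.28 can be sketched in the same framework: the second-order Gaussian derivatives of L(x, y; t) are estimated with SciPy filters and combined into a scale-normalized determinant at each scale. The scale values and the t^2 normalization below follow common blob-detection practice and are assumptions, not parameters taken from this chapter.

```python
# Sketch: scale-normalized determinant of the Hessian of L(x, y; t),
# computed from Gaussian-derivative filters (standard blob detection).
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_determinant(image: np.ndarray, t: float) -> np.ndarray:
    """det(Hessian) of the image smoothed at scale t = sigma**2."""
    sigma = np.sqrt(t)
    img = image.astype(float)
    # Second-order partial derivatives of the Gaussian-smoothed image
    Lxx = gaussian_filter(img, sigma, order=(0, 2))   # d^2/dx^2
    Lyy = gaussian_filter(img, sigma, order=(2, 0))   # d^2/dy^2
    Lxy = gaussian_filter(img, sigma, order=(1, 1))   # d^2/(dx dy)
    return (t ** 2) * (Lxx * Lyy - Lxy ** 2)          # scale-normalized

image = np.random.rand(128, 128)
det_stack = [hessian_determinant(image, t) for t in (2.0, 4.0, 8.0, 16.0)]
```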