- •Contents
- •1.1. Introduction to the Eye
- •1.2. The Anatomy of the Human Visual System
- •1.3. Neurons
- •1.4. Synapses
- •1.5. Vision — Sensory Transduction
- •1.6. Retinal Processing
- •1.7. Visual Processing in the Brain
- •1.8. Biological Vision and Computer Vision Algorithms
- •References
- •2.1. Introduction to Computational Methods for Feature Detection
- •2.2. Preprocessing Methods for Retinal Images
- •2.2.1. Illumination Effect Reduction
- •2.2.1.1. Non-linear brightness transform
- •2.2.2. Image Normalization and Enhancement
- •2.2.2.1. Color channel transformations
- •2.2.2.3. Local adaptive contrast enhancement
- •2.2.2.4. Histogram transformations
- •2.3. Segmentation Methods for Retinal Anatomy Detection and Localization
- •2.3.1. Boundary Detection Methods
- •2.3.1.1. First-order difference operators
- •2.3.1.2. Second-order boundary detection
- •2.3.1.3. Canny edge detection
- •2.3.2. Edge Linkage Methods for Boundary Detection
- •2.3.2.1. Local neighborhood gradient thresholding
- •2.3.2.2. Morphological operations for edge link enhancement
- •2.3.2.3. Hough transform for edge linking
- •2.3.3. Thresholding for Image Segmentation
- •2.3.3.1. Segmentation with a single threshold
- •2.3.3.2. Multi-level thresholding
- •2.3.3.3. Windowed thresholding
- •2.3.4. Region-Based Methods for Image Segmentation
- •2.3.4.1. Region growing
- •2.3.4.2. Watershed segmentation
- •2.4.1. Statistical Features
- •2.4.1.1. Geometric descriptors
- •2.4.1.2. Texture features
- •2.4.1.3. Invariant moments
- •2.4.2. Data Transformations
- •2.4.2.1. Fourier descriptors
- •2.4.2.2. Principal component analysis (PCA)
- •2.4.3. Multiscale Features
- •2.4.3.1. Wavelet transform
- •2.4.3.2. Scale-space methods for feature extraction
- •2.5. Summary
- •References
- •3.1.1. EBM Process
- •3.1.2. Evidence-Based Medical Issues
- •3.1.3. Value-Based Evidence
- •3.2.1. Economic Evaluation
- •3.2.2. Decision Analysis Method
- •3.2.3. Advantages of Decision Analysis
- •3.2.4. Perspective in Decision Analysis
- •3.2.5. Decision Tree in Decision Analysis
- •3.3. Use of Information Technologies for Diagnosis in Ophthalmology
- •3.3.1. Data Mining in Ophthalmology
- •3.3.2. Graphical User Interface
- •3.4. Role of Computational System in Curing Disease of an Eye
- •3.4.1. Computational Decision Support System: Diabetic Retinopathy
- •3.4.1.1. Wavelet-based neural network
- •3.4.1.2. Content-based image retrieval
- •3.4.2. Computational Decision Support System: Cataracts
- •3.4.2.2. K nearest neighbors
- •3.4.2.3. GUI of the system
- •3.4.3. Computational Decision Support System: Glaucoma
- •3.4.3.1. Using fuzzy logic
- •3.4.4. Computational Decision Support System: Blepharitis, Rosacea, Sjögren, and Dry Eyes
- •3.4.4.1. Utility of bleb imaging with anterior segment OCT in clinical decision making
- •3.4.4.2. Computational decision support system: RD
- •3.4.4.3. Role of computational system
- •3.4.5. Computational Decision Support System: Amblyopia
- •3.4.5.1. Role of computational decision support system in amblyopia
- •3.5. Conclusion
- •References
- •4.1. Introduction to Oxygen in the Retina
- •4.1.1. Microelectrode Methods
- •4.1.2. Phosphorescence Dye Method
- •4.1.3. Spectrographic Method
- •4.1.6. HSI Method
- •4.2. Experiment One
- •4.2.1. Methods and Materials
- •4.2.1.1. Animals
- •4.2.1.2. Systemic oxygen saturation
- •4.2.1.3. Intraocular pressure
- •4.2.1.4. Fundus camera
- •4.2.1.5. Hyperspectral imaging
- •4.2.1.6. Extraction of spectral curves
- •4.2.1.7. Mapping relative oxygen saturation
- •4.2.1.8. Relative saturation indices (RSIs)
- •4.2.2. Results
- •4.2.2.1. Spectral signatures
- •4.2.2.2. Oxygen breathing
- •4.2.2.3. Intraocular pressure
- •4.2.2.4. Responses to oxygen breathing
- •4.2.2.5. Responses to high IOP
- •4.2.3. Discussion
- •4.2.3.1. Pure oxygen breathing experiment
- •4.2.3.2. IOP perturbation experiment
- •4.2.3.3. Hyperspectral imaging
- •4.3. Experiment Two
- •4.3.1. Methods and Materials
- •4.3.1.1. Animals, anesthesia, blood pressure, and IOP perturbation
- •4.3.1.3. Spectral determinant of percentage oxygen saturation
- •4.3.1.5. Preparation and calibration of red blood cell suspensions
- •4.3.2. Results
- •4.3.2.2. Oxygen saturation of the ONH
- •4.3.3. Discussion
- •4.3.4. Conclusions
- •4.4. Experiment Three
- •4.4.1. Methods and Materials
- •4.4.1.1. Compliance testing
- •4.4.1.2. Hyperspectral imaging
- •4.4.1.3. Selection of ONH structures
- •4.4.1.4. Statistical methods
- •4.4.2. Results
- •4.4.2.1. Compliance testing
- •4.4.2.2. Blood spectra from ONH structures
- •4.4.2.3. Oxygen saturation of ONH structures
- •4.4.2.4. Oxygen saturation maps
- •4.4.3. Discussion
- •4.5. Experiment Four
- •4.5.1. Methods and Materials
- •4.5.2. Results
- •4.5.3. Discussion
- •4.6. Experiment Five
- •4.6.1. Methods and Materials
- •4.6.1.3. Automatic control point detection
- •4.6.1.4. Fused image optimization
- •4.7. Conclusion
- •References
- •5.1. Introduction to Thermography
- •5.2. Data Acquisition
- •5.3. Methods
- •5.3.1. Snake and GVF
- •5.3.2. Target Tracing Function and Genetic Algorithm
- •5.3.3. Locating Cornea
- •5.4. Results
- •5.5. Discussion
- •5.6. Conclusion
- •References
- •6.1. Introduction to Glaucoma
- •6.1.1. Glaucoma Types
- •6.1.1.1. Primary open-angle glaucoma
- •6.1.1.2. Angle-closure glaucoma
- •6.1.2. Diagnosis of Glaucoma
- •6.2. Materials and Methods
- •6.2.1. c/d Ratio
- •6.2.2. Measuring the Area of Blood Vessels
- •6.2.3. Measuring the ISNT Ratio
- •6.3. Results
- •6.4. Discussion
- •6.5. Conclusion
- •References
- •7.1. Introduction to Temperature Distribution
- •7.3. Mathematical Model
- •7.3.1. The Human Eye
- •7.3.2. The Eye Tumor
- •7.3.3. Governing Equations
- •7.3.4. Boundary Conditions
- •7.4. Material Properties
- •7.5. Numerical Scheme
- •7.5.1. Integro-Differential Equations
- •7.6. Results
- •7.6.1. Numerical Model
- •7.6.2. Case 1
- •7.6.3. Case 2
- •7.6.4. Discussion
- •7.7. Parametric Optimization
- •7.7.1. Analysis of Variance
- •7.7.2. Taguchi Method
- •7.7.3. Discussion
- •7.8. Concluding Remarks
- •References
- •8.1. Introduction to IR Thermography
- •8.2. Infrared Thermography and the Measured OST
- •8.3. The Acquisition of OST
- •8.3.1. Manual Measures
- •8.3.2. Semi-Automated and Fully Automated
- •8.4. Applications to Ocular Studies
- •8.4.1. On Ocular Physiologies
- •8.4.2. On Ocular Diseases and Surgery
- •8.5. Discussion
- •References
- •9.1. Introduction
- •9.1.1. Preprocessing
- •9.1.1.1. Shade correction
- •9.1.1.2. Hough transform
- •9.1.1.3. Top-hat transform
- •9.1.2. Image Segmentation
- •9.1.2.1. The region approach
- •9.1.2.2. The gradient-based method
- •9.1.2.3. Edge detection
- •9.1.2.3.2. The second-order derivative methods
- •9.1.2.3.3. The optimal edge detector
- •9.2. Image Registration
- •9.4. Automated, Integrated Image Analysis Systems
- •9.5. Conclusion
- •References
- •10.1. Introduction to Diabetic Retinopathy
- •10.2. Data Acquisition
- •10.3. Feature Extraction
- •10.3.1. Blood Vessel Detection
- •10.3.2. Exudates Detection
- •10.3.3. Hemorrhages Detection
- •10.3.4. Contrast
- •10.4.1. Backpropagation Algorithm
- •10.5. Results
- •10.6. Discussion
- •10.7. Conclusion
- •References
- •11.1. Related Studies
- •11.2.1. Encryption
- •11.3. Compression Technique
- •11.3.1. Huffman Coding
- •11.4. Error Control Coding
- •11.4.1. Hamming Codes
- •11.4.2. BCH Codes
- •11.4.3. Convolutional Codes
- •11.4.4. RS Codes
- •11.4.5. Turbo Codes
- •11.5. Results
- •11.5.1. Using Turbo Codes for Transmission of Retinal Fundus Image
- •11.6. Discussion
- •11.7. Conclusion
- •References
- •12.1. Introduction to Laser-Thermokeratoplasty (LTKP)
- •12.2. Characteristics of LTKP
- •12.3. Pulsed Laser
- •12.4. Continuous-Wave Laser
- •12.5. Mathematical Model
- •12.5.1. Model Description
- •12.5.2. Governing Equations
- •12.5.3. Initial-Boundary Conditions
- •12.6. Numerical Scheme
- •12.6.1. Integro-Differential Equation
- •12.7. Results
- •12.7.1. Pulsed Laser
- •12.7.2. Continuous-Wave Laser
- •12.7.3. Thermal Damage Assessment
- •12.8. Discussion
- •12.9. Concluding Remarks
- •References
- •13.1. Introduction to Optical Eye Modeling
- •13.1.1. Ocular Measurements for Optical Eye Modeling
- •13.1.1.1. Curvature, dimension, thickness, or distance parameters of ocular elements
- •13.1.1.2. Three-dimensional (3D) corneal topography
- •13.1.1.3. Crystalline lens parameters
- •13.1.1.4. Refractive index
- •13.1.1.5. Wavefront aberration
- •13.1.2. Eye Modeling Using Contemporary Optical Design Software
- •13.1.3. Optical Optimization and Merit Function
- •13.2. Personalized and Population-Based Eye Modeling
- •13.2.1. Customized Eye Modeling
- •13.2.1.1. Optimization to the refractive error
- •13.2.1.2. Optimization to the wavefront measurement
- •13.2.1.3. Tolerance analysis
- •13.2.2. Population-Based Eye Modeling
- •13.2.2.1. Accommodative eye modeling
- •13.2.2.2. Ametropic eye modeling
- •13.2.2.3. Modeling with consideration of ocular growth and aging
- •13.2.2.4. Modeling for disease development
- •13.2.3. Validation of Eye Models
- •13.2.3.1. Point spread function and modulation transfer function
- •13.2.3.2. Letter chart simulation
- •13.2.3.3. Night/day vision simulation
- •13.3. Other Modeling Considerations
- •13.3.1. Stiles Crawford Effect (SCE)
- •13.3.1.2. Other retinal properties
- •13.3.1.4. Optical opacity
- •13.4. Examples of Ophthalmic Simulations
- •13.4.1. Simulation of Retinoscopy Measurements with Eye Models
- •13.4.2. Simulation of PR
- •13.5. Conclusion
- •References
- •14.1. Network Infrastructure
- •14.1.1. System Requirements
- •14.1.2. Network Architecture Design
- •14.1.4. GUI Design
- •14.1.5. Performance Evaluation of the Network
- •14.2. Image Analysis
- •14.2.1. Vascular Tree Segmentation
- •14.2.2. Quality Assessment
- •14.2.3. ON Detection
- •14.2.4. Macula Localization
- •14.2.5. Lesion Segmentation
- •14.2.7. Patient Demographics and Statistical Outcomes
- •14.2.8. Disease State Assessment
- •14.2.9. Image QA
- •Acknowledgments
- •References
- •Index
Michael Dessauer and Sumeet Dua
µ30 = M30 − 3x̄M20 + 2x̄²M10, and
µ03 = M03 − 3ȳM02 + 2ȳ²M01.
From these moments, ηij can be made invariant to translation and scale by dividing by a power of the zeroth central moment, µ00, written as:

ηij = µij / µ00^{1+(i+j)/2}.  (2.49)
Finally, the seven Hu moments, ϕi, are defined as:
ϕ1 = η20 + η02,
ϕ2 = (η20 − η02)² + (2η11)²,
ϕ3 = (η30 − 3η12)² + (3η21 − η03)²,
ϕ4 = (η30 + η12)² + (η21 + η03)²,
ϕ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²]
  + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²],
ϕ6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²]
  + 4η11(η30 + η12)(η21 + η03), and
ϕ7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²]
  − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²].
In Fig. 2.24, we show that Hu moments produce almost identical values for rotated, scaled, and translated versions of a segmented region. Invariant moments work well in situations in which the orientation of a segment is unreliable but the characteristics of the region's spatial intensity distribution provide important information for classification.
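To illustrate this invariance, the first Hu moment can be computed with a short numpy sketch. The helper names and the toy binary region below are our own illustrative choices, not from the text:

```python
import numpy as np

def eta(img, i, j):
    """Normalized central moment eta_ij = mu_ij / mu_00^(1+(i+j)/2) (cf. Eq. 2.49)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    mu = (((x - xbar) ** i) * ((y - ybar) ** j) * img).sum()
    return mu / m00 ** (1 + (i + j) / 2)

def hu_phi1(img):
    """First Hu moment: phi_1 = eta_20 + eta_02."""
    return eta(img, 2, 0) + eta(img, 0, 2)

# Toy binary "segmented region": phi_1 is unchanged by rotation and translation.
region = np.zeros((32, 32))
region[8:20, 10:25] = 1.0
print(hu_phi1(region))
print(hu_phi1(np.rot90(region)))                       # rotated copy
print(hu_phi1(np.roll(region, (4, -3), axis=(0, 1))))  # translated copy
```

All three printed values agree, because the central moments remove translation and the µ00 normalization removes scale, while ϕ1 sums the two second-order terms that a rotation merely exchanges.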
2.4.2. Data Transformations
These next feature descriptor sets use methods to transform the data from the spatial domain, or Euclidean space, to coordinate systems that provide different types of characterization for an ROI that is not obvious or directly calculable from the image matrix. We will briefly describe and discuss
Computational Methods for Feature Detection in Optical Images
Fig. 2.24. (Top: left–right): an original segmented optic disk, scaled 2x, rotated 45◦, rotated 45◦ and translated, (bottom) table of Hu moments for each image.
two methods that can both contribute useful discriminative information to a classification algorithm and reduce data dimensionality by retaining only data of interest. As you will see, computational methods for image transformations use classic linear algebraic operations to produce refined sets of data that can better represent image regions in a reduced, descriptive domain.
2.4.2.1. Fourier descriptors
The frequency domain describes an image not by its intensities at specific (x, y) locations on a 2D matrix, but as magnitudes of periodic functions of varying wavelengths. Although the functions’ coefficients no longer retain any spatial information, they are divided into low, middle, and high frequencies with low frequencies providing overall structural information and high frequencies containing the detail. Feature descriptors can be localized from this set of Fourier coefficients, with low frequency coefficients retaining region structure (Fig. 2.25). To obtain the 2D discrete Fourier transform coefficients of a region f(x, y) of size M × N, we use the equation26
F(u, v) = (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux/M + vy/N)},  (2.50)

with F(u, v) being calculated for u = 0, 1, 2, . . . , M − 1 and v = 0, 1, 2, . . . , N − 1. These values of F(u, v) can be directly used as
Fig. 2.25. (a) Original ROI, (b) visualization of a center-shifted 2D discrete Fourier transform,
(c) reduced set of Fourier coefficients, and (d) 2D inverse transform of the reduced coefficients.
descriptors, or we can set values of F to zero and transform back to Euclidean space using the inverse transform equation:
f(x, y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) e^{j2π(ux/M + vy/N)},  (2.51)
for x = 0, 1, 2, . . . , M − 1 and y = 0, 1, 2, . . . , N − 1. In Fig. 2.25, we demonstrate how a Fourier transform can retain the structural information of a 166 × 133 region using a reduced set of 13 × 13 Fourier descriptors. A subset of frequency-domain descriptors has previously been used to successfully extract blood vessel details from retinal images.27 Although these descriptors are not rotation, scale, or translation invariant, a linear operation with constant values in the frequency domain can correct for identified spatial changes.8
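The low-frequency reconstruction illustrated in Fig. 2.25 can be sketched with numpy's FFT routines. The random region and the retained block half-width k below are arbitrary stand-ins, not values from the text:

```python
import numpy as np

# Keep only a small block of low-frequency Fourier coefficients and
# reconstruct: overall structure survives while fine detail is lost.
rng = np.random.default_rng(0)
region = rng.random((64, 64))             # stand-in for an ROI f(x, y)

F = np.fft.fftshift(np.fft.fft2(region))  # shift low frequencies to the center
mask = np.zeros_like(F)
c = F.shape[0] // 2
k = 6                                     # half-width of the retained block
mask[c - k:c + k + 1, c - k:c + k + 1] = 1
F_reduced = F * mask                      # zero out the high frequencies

recon = np.real(np.fft.ifft2(np.fft.ifftshift(F_reduced)))
print(recon.shape)                        # same size as the input region
```

Keeping the full coefficient set and inverting recovers the original region exactly; zeroing coefficients before inverting is the descriptor-reduction step described above.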
2.4.2.2. Principal component analysis (PCA)
Another linear transformation that creates a useful feature set is principal component analysis (PCA). This method is defined as an orthogonal linear transformation that projects the data onto a new coordinate system in which the greatest variance of any projection of the data lies along the first coordinate (the principal component), the second greatest variance along the second coordinate, and so on.28 This method reduces a set of variables by calculating the eigenvalues of the covariance matrix after mean-normalizing each attribute.
We show how to use PCA with image regions by first vectorizing an M × N image region into a vector, v, with MN dimensions. We then take a set of K vectorized images, which creates MN sets of 1D vectors, x,
of size K:

x = [x1, x2, . . . , xK]ᵀ,  i = 1, 2, . . . , MN.  (2.52)
We then construct a vector of mean values, mx, calculated as

mx = (1/K) Σ_{k=1}^{K} xk,  (2.53)
which we can then use to form a covariance matrix, Cx, written as

Cx = (1/K) Σ_{k=1}^{K} xk xkᵀ − mx mxᵀ.  (2.54)
Next, let A be a matrix whose rows are the eigenvectors of Cx, sorted so that the rows correspond to eigenvalues in descending order. We use A to map the x values into vectors, y, using the following:
y = A(x − mx).  (2.55)
Using matrix algebra, we can find the covariance matrix Cy by

Cy = A Cx Aᵀ,  (2.56)
which has diagonal terms that are the eigenvalues of Cx. We can then approximately recover any x from its corresponding y without using the entire matrix A, by keeping only the k eigenvectors with the largest eigenvalues in a transform matrix, Ak, of size k × n:

x̂ = Akᵀ y + mx.  (2.57)
This method is used in optic disk localization by normalizing and registering a set of well-defined M × N optic disk regions, then vectorizing each image to form a set of 1D MN-dimensional vectors. The PCA transform is performed first on a set of training image vectors, and the eigenvectors corresponding to the six largest eigenvalues are kept as features to represent the training set. Tested ROIs are projected onto these eigenvectors, and a Euclidean distance metric is used to measure optic disk “likeness.” In Fig. 2.26, we provide an example of how these lower-dimensional eigenvectors reconstruct an optic disk using a training image convolved with differing Gaussian filters.
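The PCA pipeline of Eqs. (2.52)–(2.57) can be sketched in numpy. The random "regions," the arrangement of the K vectorized regions as rows, and the choice of six retained components are our own illustrative assumptions standing in for the training data described above:

```python
import numpy as np

# Vectorize K regions, form the covariance matrix, keep the leading
# eigenvectors, and reconstruct a region from its low-dimensional projection.
rng = np.random.default_rng(1)
M, N, K = 8, 8, 40
X = rng.random((K, M * N))            # K vectorized M x N regions, one per row

mx = X.mean(axis=0)                   # mean vector (cf. Eq. 2.53)
Xc = X - mx                           # mean-normalize each attribute
Cx = Xc.T @ Xc / K                    # covariance matrix (cf. Eq. 2.54)
evals, evecs = np.linalg.eigh(Cx)     # eigh: Cx is symmetric
order = np.argsort(evals)[::-1]       # sort eigenvalues in descending order
Ak = evecs[:, order[:6]].T            # rows = top-6 eigenvectors (truncated A)

y = Ak @ (X[0] - mx)                  # project one region (cf. Eq. 2.55)
x_hat = Ak.T @ y + mx                 # approximate recovery (cf. Eq. 2.57)
print(x_hat.shape)                    # same dimensionality as the input vector
```

Because Ak has orthonormal rows, x̂ is the closest point to x within the span of the retained eigenvectors, which is why a handful of components can summarize a registered set of optic disk regions.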
