- •Contents
- •1.1. Introduction to the Eye
- •1.2. The Anatomy of the Human Visual System
- •1.3. Neurons
- •1.4. Synapses
- •1.5. Vision — Sensory Transduction
- •1.6. Retinal Processing
- •1.7. Visual Processing in the Brain
- •1.8. Biological Vision and Computer Vision Algorithms
- •References
- •2.1. Introduction to Computational Methods for Feature Detection
- •2.2. Preprocessing Methods for Retinal Images
- •2.2.1. Illumination Effect Reduction
- •2.2.1.1. Non-linear brightness transform
- •2.2.2. Image Normalization and Enhancement
- •2.2.2.1. Color channel transformations
- •2.2.2.3. Local adaptive contrast enhancement
- •2.2.2.4. Histogram transformations
- •2.3. Segmentation Methods for Retinal Anatomy Detection and Localization
- •2.3.1. Boundary Detection Methods
- •2.3.1.1. First-order difference operators
- •2.3.1.2. Second-order boundary detection
- •2.3.1.3. Canny edge detection
- •2.3.2. Edge Linkage Methods for Boundary Detection
- •2.3.2.1. Local neighborhood gradient thresholding
- •2.3.2.2. Morphological operations for edge link enhancement
- •2.3.2.3. Hough transform for edge linking
- •2.3.3. Thresholding for Image Segmentation
- •2.3.3.1. Segmentation with a single threshold
- •2.3.3.2. Multi-level thresholding
- •2.3.3.3. Windowed thresholding
- •2.3.4. Region-Based Methods for Image Segmentation
- •2.3.4.1. Region growing
- •2.3.4.2. Watershed segmentation
- •2.4.1. Statistical Features
- •2.4.1.1. Geometric descriptors
- •2.4.1.2. Texture features
- •2.4.1.3. Invariant moments
- •2.4.2. Data Transformations
- •2.4.2.1. Fourier descriptors
- •2.4.2.2. Principal component analysis (PCA)
- •2.4.3. Multiscale Features
- •2.4.3.1. Wavelet transform
- •2.4.3.2. Scale-space methods for feature extraction
- •2.5. Summary
- •References
- •3.1.1. EBM Process
- •3.1.2. Evidence-Based Medical Issues
- •3.1.3. Value-Based Evidence
- •3.2.1. Economic Evaluation
- •3.2.2. Decision Analysis Method
- •3.2.3. Advantages of Decision Analysis
- •3.2.4. Perspective in Decision Analysis
- •3.2.5. Decision Tree in Decision Analysis
- •3.3. Use of Information Technologies for Diagnosis in Ophthalmology
- •3.3.1. Data Mining in Ophthalmology
- •3.3.2. Graphical User Interface
- •3.4. Role of Computational System in Curing Disease of an Eye
- •3.4.1. Computational Decision Support System: Diabetic Retinopathy
- •3.4.1.1. Wavelet-based neural network
- •3.4.1.2. Content-based image retrieval
- •3.4.2. Computational Decision Support System: Cataracts
- •3.4.2.2. K nearest neighbors
- •3.4.2.3. GUI of the system
- •3.4.3. Computational Decision Support System: Glaucoma
- •3.4.3.1. Using fuzzy logic
- •3.4.4. Computational Decision Support System: Blepharitis, Rosacea, Sjögren, and Dry Eyes
- •3.4.4.1. Utility of bleb imaging with anterior segment OCT in clinical decision making
- •3.4.4.2. Computational decision support system: RD
- •3.4.4.3. Role of computational system
- •3.4.5. Computational Decision Support System: Amblyopia
- •3.4.5.1. Role of computational decision support system in amblyopia
- •3.5. Conclusion
- •References
- •4.1. Introduction to Oxygen in the Retina
- •4.1.1. Microelectrode Methods
- •4.1.2. Phosphorescence Dye Method
- •4.1.3. Spectrographic Method
- •4.1.6. HSI Method
- •4.2. Experiment One
- •4.2.1. Methods and Materials
- •4.2.1.1. Animals
- •4.2.1.2. Systemic oxygen saturation
- •4.2.1.3. Intraocular pressure
- •4.2.1.4. Fundus camera
- •4.2.1.5. Hyperspectral imaging
- •4.2.1.6. Extraction of spectral curves
- •4.2.1.7. Mapping relative oxygen saturation
- •4.2.1.8. Relative saturation indices (RSIs)
- •4.2.2. Results
- •4.2.2.1. Spectral signatures
- •4.2.2.2. Oxygen breathing
- •4.2.2.3. Intraocular pressure
- •4.2.2.4. Responses to oxygen breathing
- •4.2.2.5. Responses to high IOP
- •4.2.3. Discussion
- •4.2.3.1. Pure oxygen breathing experiment
- •4.2.3.2. IOP perturbation experiment
- •4.2.3.3. Hyperspectral imaging
- •4.3. Experiment Two
- •4.3.1. Methods and Materials
- •4.3.1.1. Animals, anesthesia, blood pressure, and IOP perturbation
- •4.3.1.3. Spectral determinant of percentage oxygen saturation
- •4.3.1.5. Preparation and calibration of red blood cell suspensions
- •4.3.2. Results
- •4.3.2.2. Oxygen saturation of the ONH
- •4.3.3. Discussion
- •4.3.4. Conclusions
- •4.4. Experiment Three
- •4.4.1. Methods and Materials
- •4.4.1.1. Compliance testing
- •4.4.1.2. Hyperspectral imaging
- •4.4.1.3. Selection of ONH structures
- •4.4.1.4. Statistical methods
- •4.4.2. Results
- •4.4.2.1. Compliance testing
- •4.4.2.2. Blood spectra from ONH structures
- •4.4.2.3. Oxygen saturation of ONH structures
- •4.4.2.4. Oxygen saturation maps
- •4.4.3. Discussion
- •4.5. Experiment Four
- •4.5.1. Methods and Materials
- •4.5.2. Results
- •4.5.3. Discussion
- •4.6. Experiment Five
- •4.6.1. Methods and Materials
- •4.6.1.3. Automatic control point detection
- •4.6.1.4. Fused image optimization
- •4.7. Conclusion
- •References
- •5.1. Introduction to Thermography
- •5.2. Data Acquisition
- •5.3. Methods
- •5.3.1. Snake and GVF
- •5.3.2. Target Tracing Function and Genetic Algorithm
- •5.3.3. Locating Cornea
- •5.4. Results
- •5.5. Discussion
- •5.6. Conclusion
- •References
- •6.1. Introduction to Glaucoma
- •6.1.1. Glaucoma Types
- •6.1.1.1. Primary open-angle glaucoma
- •6.1.1.2. Angle-closure glaucoma
- •6.1.2. Diagnosis of Glaucoma
- •6.2. Materials and Methods
- •6.2.1. c/d Ratio
- •6.2.2. Measuring the Area of Blood Vessels
- •6.2.3. Measuring the ISNT Ratio
- •6.3. Results
- •6.4. Discussion
- •6.5. Conclusion
- •References
- •7.1. Introduction to Temperature Distribution
- •7.3. Mathematical Model
- •7.3.1. The Human Eye
- •7.3.2. The Eye Tumor
- •7.3.3. Governing Equations
- •7.3.4. Boundary Conditions
- •7.4. Material Properties
- •7.5. Numerical Scheme
- •7.5.1. Integro-Differential Equations
- •7.6. Results
- •7.6.1. Numerical Model
- •7.6.2. Case 1
- •7.6.3. Case 2
- •7.6.4. Discussion
- •7.7. Parametric Optimization
- •7.7.1. Analysis of Variance
- •7.7.2. Taguchi Method
- •7.7.3. Discussion
- •7.8. Concluding Remarks
- •References
- •8.1. Introduction to IR Thermography
- •8.2. Infrared Thermography and the Measured OST
- •8.3. The Acquisition of OST
- •8.3.1. Manual Measures
- •8.3.2. Semi-Automated and Fully Automated
- •8.4. Applications to Ocular Studies
- •8.4.1. On Ocular Physiologies
- •8.4.2. On Ocular Diseases and Surgery
- •8.5. Discussion
- •References
- •9.1. Introduction
- •9.1.1. Preprocessing
- •9.1.1.1. Shade correction
- •9.1.1.2. Hough transform
- •9.1.1.3. Top-hat transform
- •9.1.2. Image Segmentation
- •9.1.2.1. The region approach
- •9.1.2.2. The gradient-based method
- •9.1.2.3. Edge detection
- •9.1.2.3.2. The second-order derivative methods
- •9.1.2.3.3. The optimal edge detector
- •9.2. Image Registration
- •9.4. Automated, Integrated Image Analysis Systems
- •9.5. Conclusion
- •References
- •10.1. Introduction to Diabetic Retinopathy
- •10.2. Data Acquisition
- •10.3. Feature Extraction
- •10.3.1. Blood Vessel Detection
- •10.3.2. Exudates Detection
- •10.3.3. Hemorrhages Detection
- •10.3.4. Contrast
- •10.4.1. Backpropagation Algorithm
- •10.5. Results
- •10.6. Discussion
- •10.7. Conclusion
- •References
- •11.1. Related Studies
- •11.2.1. Encryption
- •11.3. Compression Technique
- •11.3.1. Huffman Coding
- •11.4. Error Control Coding
- •11.4.1. Hamming Codes
- •11.4.2. BCH Codes
- •11.4.3. Convolutional Codes
- •11.4.4. RS Codes
- •11.4.5. Turbo Codes
- •11.5. Results
- •11.5.1. Using Turbo Codes for Transmission of Retinal Fundus Image
- •11.6. Discussion
- •11.7. Conclusion
- •References
- •12.1. Introduction to Laser-Thermokeratoplasty (LTKP)
- •12.2. Characteristics of LTKP
- •12.3. Pulsed Laser
- •12.4. Continuous-Wave Laser
- •12.5. Mathematical Model
- •12.5.1. Model Description
- •12.5.2. Governing Equations
- •12.5.3. Initial-Boundary Conditions
- •12.6. Numerical Scheme
- •12.6.1. Integro-Differential Equation
- •12.7. Results
- •12.7.1. Pulsed Laser
- •12.7.2. Continuous-Wave Laser
- •12.7.3. Thermal Damage Assessment
- •12.8. Discussion
- •12.9. Concluding Remarks
- •References
- •13.1. Introduction to Optical Eye Modeling
- •13.1.1. Ocular Measurements for Optical Eye Modeling
- •13.1.1.1. Curvature, dimension, thickness, or distance parameters of ocular elements
- •13.1.1.2. Three-dimensional (3D) corneal topography
- •13.1.1.3. Crystalline lens parameters
- •13.1.1.4. Refractive index
- •13.1.1.5. Wavefront aberration
- •13.1.2. Eye Modeling Using Contemporary Optical Design Software
- •13.1.3. Optical Optimization and Merit Function
- •13.2. Personalized and Population-Based Eye Modeling
- •13.2.1. Customized Eye Modeling
- •13.2.1.1. Optimization to the refractive error
- •13.2.1.2. Optimization to the wavefront measurement
- •13.2.1.3. Tolerance analysis
- •13.2.2. Population-Based Eye Modeling
- •13.2.2.1. Accommodative eye modeling
- •13.2.2.2. Ametropic eye modeling
- •13.2.2.3. Modeling with consideration of ocular growth and aging
- •13.2.2.4. Modeling for disease development
- •13.2.3. Validation of Eye Models
- •13.2.3.1. Point spread function and modulation transfer function
- •13.2.3.2. Letter chart simulation
- •13.2.3.3. Night/day vision simulation
- •13.3. Other Modeling Considerations
- •13.3.1. Stiles Crawford Effect (SCE)
- •13.3.1.2. Other retinal properties
- •13.3.1.4. Optical opacity
- •13.4. Examples of Ophthalmic Simulations
- •13.4.1. Simulation of Retinoscopy Measurements with Eye Models
- •13.4.2. Simulation of PR
- •13.5. Conclusion
- •References
- •14.1. Network Infrastructure
- •14.1.1. System Requirements
- •14.1.2. Network Architecture Design
- •14.1.4. GUI Design
- •14.1.5. Performance Evaluation of the Network
- •14.2. Image Analysis
- •14.2.1. Vascular Tree Segmentation
- •14.2.2. Quality Assessment
- •14.2.3. ON Detection
- •14.2.4. Macula Localization
- •14.2.5. Lesion Segmentation
- •14.2.7. Patient Demographics and Statistical Outcomes
- •14.2.8. Disease State Assessment
- •14.2.9. Image QA
- •Acknowledgments
- •References
- •Index
Thomas P. Karnowski et al.
Fig. 14.7. Example of retinal vessel segmentation.
method of choice. The study found the method of Ref. [35] to be virtually identical in performance to the second observer, although several other methods scored almost as well. In our work, we have implemented one of the studied methods.36,37 In this method, linear elements are identified and enhanced using morphological reconstruction techniques,38 followed by filtering and noise suppression. For our current application, we are less interested in using the vascular tree to identify disease states than in using it as a measure of image quality and as an aid in localizing the ON and fovea. Our implementation in the telemedical network is fast, with a mean run time of less than one second on a 1.66-GHz Intel-based PC server. An example of the segmentation is shown in Fig. 14.7.
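As a rough illustration of this style of processing — a minimal sketch of the general idea, not the implementation of Refs. [36–38] — a grayscale opening with line-shaped structuring elements preserves elongated structures such as vessels while suppressing small isolated features:

```python
import numpy as np

def line_opening(img, length, axis):
    """Grayscale opening with a 1-D line structuring element:
    a moving minimum (erosion) followed by a moving maximum (dilation).
    np.roll wraps at the image border, which is acceptable for a sketch."""
    def sliding(arr, op):
        out = arr.copy()
        for s in range(1, length):
            out = op(out, np.roll(arr, s, axis=axis))
        return out
    eroded = sliding(img, np.minimum)
    return sliding(eroded, np.maximum)

def enhance_linear_elements(img, length=7):
    """Keep structures that survive an opening along at least one
    orientation (only horizontal and vertical here, for brevity)."""
    return np.maximum(line_opening(img, length, axis=0),
                      line_opening(img, length, axis=1))
```

Because an isolated bright pixel cannot survive an opening with a line longer than itself, this kind of filter suppresses point noise while leaving vessel-like ridges largely intact.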
14.2.2. Quality Assessment
QA of fundus images is another topic of interest in the literature, and various methods have been proposed.39−42 In our application, we found that most existing methods were either too computationally expensive or did not perform as well as desired. The QA method we use was created specifically for a telemedicine network, and its speed makes it well suited to that setting: it allows quick feedback to the camera operator and requires no more computational power than is available on the ordinary personal computer commonly found in a fundus camera platform. We note, however, that although the method can run on such a platform, in our current implementation the quality check runs on the server for development purposes. We summarize the method here; more detail is available in Ref. [43]. An overview of the method is shown in Fig. 14.8. First, as discussed above, a segmentation of the vascular tree is
Automating the Diagnosis, Stratification, and Management of DR
Fig. 14.8. Image QA processing.44
conducted. The image of the vascular tree is divided into annular and wedge-shaped regions to estimate the coverage of the vascular tree accurately. Poor-quality images do not achieve adequate coverage in some of these regions, which allows a pattern recognition engine to identify such an image as an outlier (and, thus, of insufficient quality). The color of the fundus is also measured using a histogram of the RGB values. These measurements are combined into a feature vector describing the aspects of the image that our engineering efforts have indicated are key to quality. During development, a training set of images was manually labeled as good quality or poor quality, and a pattern recognition engine
was trained to distinguish the two classes based on the measured numeric values. After the classifier was created, an additional set of unseen images was classified as a testing set, and a reviewing ophthalmologist (E. Chaum) set a threshold. Note that this initial threshold is set to provide images of sufficient quality to be diagnosed by a human; this quality level may not be sufficient for automated diagnosis, and we show that improved results can be obtained when the submitted images are of better quality. Examples of images of different quality from our telemedical network are shown in Fig. 14.9.
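The feature vector described above can be sketched as ring/wedge vessel-coverage fractions plus a coarse per-channel color histogram. The region and bin counts below are illustrative choices, not those of Ref. [43]:

```python
import numpy as np

def qa_feature_vector(vessel_mask, rgb, n_rings=3, n_wedges=8, n_bins=8):
    """Hypothetical QA features: vessel coverage in each annular/wedge
    region, concatenated with normalized RGB histograms."""
    h, w = vessel_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    # Ring index 0..n_rings-1 by radius; wedge index 0..n_wedges-1 by angle.
    ring = np.minimum((r / (r.max() / n_rings)).astype(int), n_rings - 1)
    ang = np.arctan2(yy - cy, xx - cx) + np.pi           # 0 .. 2*pi
    wedge = (ang / (2 * np.pi) * n_wedges).astype(int) % n_wedges
    coverage = np.array([vessel_mask[(ring == i) & (wedge == j)].mean()
                         for i in range(n_rings) for j in range(n_wedges)])
    # Fraction of pixels per intensity bin for each color channel.
    hists = [np.histogram(rgb[..., c], bins=n_bins,
                          range=(0.0, 1.0))[0] / rgb[..., c].size
             for c in range(3)]
    return np.concatenate([coverage] + hists)
```

A feature vector of this form can then be fed to any standard two-class pattern recognition engine trained on the good/poor labels.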
One important requirement in our implementation is a standard image-capturing protocol on the fundus camera. Our imaging protocol uses single-field, macula-centered images, so the vessel tree always has a distinctive shape that changes only slightly with retinal physiology and with the positioning of the macula or eye movement during imaging.
Fig. 14.9. Examples of acquired images of different qualities. A threshold of 0.4 was established by empirical methods.
In addition, images submitted to the telemedical network are already labeled as right or left eye, and our method, by virtue of its training examples, is sensitive to this distinction. Thus, we run two QA instances, one for the left eye and one for the right. Another issue is the field of view of the fundus photography: our original intention was to work with a 45-degree field of view, and in some of our installations this is the case, but others have begun to use a 30-degree field of view, because it is easier to obtain consistent images in that setting. This variation is easily accommodated with multiple classifiers, and it is simple to identify the different conditions from the metadata of the submitted images.
We also note that the telemedical network offers the operator a “best-we-can-do” (BWCD) setting. In this mode, an image that fails the quality check may still be submitted for physician review. This option allows the system to (1) tolerate potential failures in the quality estimation process, since the chosen threshold is not foolproof, (2) handle cases where additional fundus images cannot be obtained, either because the patient has left or is otherwise unavailable, and (3) accommodate patients from whom fundus images are intrinsically difficult to obtain. In these cases, the image quality may still be sufficient for manual review. We save the vascular tree segmentation, which can then be used in the ON detection method without being recomputed. Results from the initial network are shown in Fig. 14.10, which plots the quality levels of the images collected during the first eight months of operation at our first clinic. These results show that the QA module is working to improve the quality of submitted images.
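Taken together, the per-eye/per-field-of-view model selection and the BWCD behavior amount to a simple dispatch rule. The following sketch is illustrative only — the function, metadata keys, and the 0.4 threshold (cf. Fig. 14.9) are assumptions, not the network's actual API:

```python
def route_image(models, metadata, image, threshold=0.4, bwcd=False):
    """Select the QA classifier matching the submission's eye and field
    of view, score the image, and route it: automated analysis if the
    quality passes, a retake otherwise -- or physician review anyway
    when BWCD mode is on. All names here are hypothetical."""
    key = (metadata["eye"], metadata["fov_degrees"])
    quality = models[key](image)   # classifier returns a quality score
    if quality >= threshold:
        return "automated_analysis"
    return "physician_review" if bwcd else "retake"
```

The point of the sketch is that the metadata alone determines which classifier instance runs, and the BWCD flag only changes what happens to failing images.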
14.2.3. ON Detection
ON detection is a mature field, and many different ON detection methods have been applied to a wide variety of data sets (see, for example, Refs. [15, 35, 45, 46], which use the publicly available STARE database). In our work, we have studied and implemented two methods. The first relies on features extracted from the vessel segmentation.47 Some authors note that vessel segmentation can be a “weak link” in estimating the ON location. In our implementation, however, this is not an issue, because images with poor vessel segmentation will
Fig. 14.10. Quality levels over an eight-month period. The overall quality of the image capture has improved after a roughly 2–3 month initial learning phase.
result in a QA failure and bypass automated analysis. We summarize the processing portion of this method here.
The segmented vascular tree of the image is used, and a set of four features is generated at each pixel by a neighborhood operation. The first feature is the brightness of the region. The remaining three features are measured on the vascular tree. The second feature is the thickness of the vessels, measured by taking the vessels in an area of interest and performing a thinning operation, which reduces all vessels to a single-pixel thickness. The difference between the thinned result and the original segmentation is measured perpendicular to the vessel direction and averaged within the neighborhood of the target pixel to estimate the thickness of the tree in that region. The third feature is the orientation of the vessels, estimated by measuring the vessel directions, scaled so that vertical vessels are emphasized over horizontal ones. Finally, the fourth feature is the density of the vascular tree in each neighborhood, obtained by simply summing the number of estimated vessel pixels in the region. These operations are performed on a training set of images with hand-labeled ON centers. Feature values within the ON radius are taken as examples of “ON,” and all others as examples of “non-ON.” These values are then used to estimate
the parameters of 4D Gaussian distributions describing the ON regions and the non-ON regions. We also use the hand-labeled training set to estimate the ON center probability density function (PDF). The use of this model is justified because, as mentioned earlier, our imaging protocol produces macula-centered images. The Gaussian parameters and the PDF are used to compute a likelihood ratio function for maximum a posteriori (MAP) estimation. Results on a pair of data sets are given in Ref. [47], but those data sets are more difficult than those seen in a true telemedical network. In our implementation, the results of preliminary testing are more accurate; they are shown in Fig. 14.11.
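The scoring step can be sketched as a per-pixel log-likelihood ratio between the two Gaussian class models, plus the log of the spatial prior. This is a minimal illustration of the MAP idea, not the implementation of Ref. [47]:

```python
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log-density of a multivariate Gaussian; x has shape (..., d)."""
    d = mean.shape[0]
    diff = x - mean
    maha = np.einsum('...i,ij,...j->...', diff, np.linalg.inv(cov), diff)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (maha + logdet + d * np.log(2.0 * np.pi))

def map_on_score(features, on_mean, on_cov, bg_mean, bg_cov, log_prior):
    """Per-pixel score: log-likelihood ratio (ON vs. non-ON) plus the
    log of the spatial ON-center prior. The pixel with the highest
    score is the MAP estimate of the ON center."""
    return (gaussian_logpdf(features, on_mean, on_cov)
            - gaussian_logpdf(features, bg_mean, bg_cov)
            + log_prior)
```

In the method summarized above the feature vector is 4-D (brightness, thickness, orientation, density) and the prior is the hand-labeled ON-center PDF; the sketch works for any dimensionality.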
Because one of our goals is a level of physician oversight, we have also studied a complementary method of ON detection that does not rely on vascular tree segmentation. This method, which we identify as the PCA-LDA method, is based on the model-based method of Ref. [18], which uses principal component analysis (PCA) to capture the information content of a training set of ON images. In that work, “candidate regions” of a retinal image are projected into PCA space and then used to reconstruct the regions; the pixel with the smallest residual reconstruction error is chosen as the ON location. In Ref. [48], we extended the method to include labeled
Fig. 14.11. Histogram of ON location estimates for a test set from the network. A normalized distance of 0.5 or smaller indicates the ON location estimate.
