- •Contents
- •1.1. Introduction to the Eye
- •1.2. The Anatomy of the Human Visual System
- •1.3. Neurons
- •1.4. Synapses
- •1.5. Vision — Sensory Transduction
- •1.6. Retinal Processing
- •1.7. Visual Processing in the Brain
- •1.8. Biological Vision and Computer Vision Algorithms
- •References
- •2.1. Introduction to Computational Methods for Feature Detection
- •2.2. Preprocessing Methods for Retinal Images
- •2.2.1. Illumination Effect Reduction
- •2.2.1.1. Non-linear brightness transform
- •2.2.2. Image Normalization and Enhancement
- •2.2.2.1. Color channel transformations
- •2.2.2.3. Local adaptive contrast enhancement
- •2.2.2.4. Histogram transformations
- •2.3. Segmentation Methods for Retinal Anatomy Detection and Localization
- •2.3.1. Boundary Detection Methods
- •2.3.1.1. First-order difference operators
- •2.3.1.2. Second-order boundary detection
- •2.3.1.3. Canny edge detection
- •2.3.2. Edge Linkage Methods for Boundary Detection
- •2.3.2.1. Local neighborhood gradient thresholding
- •2.3.2.2. Morphological operations for edge link enhancement
- •2.3.2.3. Hough transform for edge linking
- •2.3.3. Thresholding for Image Segmentation
- •2.3.3.1. Segmentation with a single threshold
- •2.3.3.2. Multi-level thresholding
- •2.3.3.3. Windowed thresholding
- •2.3.4. Region-Based Methods for Image Segmentation
- •2.3.4.1. Region growing
- •2.3.4.2. Watershed segmentation
- •2.4.1. Statistical Features
- •2.4.1.1. Geometric descriptors
- •2.4.1.2. Texture features
- •2.4.1.3. Invariant moments
- •2.4.2. Data Transformations
- •2.4.2.1. Fourier descriptors
- •2.4.2.2. Principal component analysis (PCA)
- •2.4.3. Multiscale Features
- •2.4.3.1. Wavelet transform
- •2.4.3.2. Scale-space methods for feature extraction
- •2.5. Summary
- •References
- •3.1.1. EBM Process
- •3.1.2. Evidence-Based Medical Issues
- •3.1.3. Value-Based Evidence
- •3.2.1. Economic Evaluation
- •3.2.2. Decision Analysis Method
- •3.2.3. Advantages of Decision Analysis
- •3.2.4. Perspective in Decision Analysis
- •3.2.5. Decision Tree in Decision Analysis
- •3.3. Use of Information Technologies for Diagnosis in Ophthalmology
- •3.3.1. Data Mining in Ophthalmology
- •3.3.2. Graphical User Interface
- •3.4. Role of Computational System in Curing Disease of an Eye
- •3.4.1. Computational Decision Support System: Diabetic Retinopathy
- •3.4.1.1. Wavelet-based neural network
- •3.4.1.2. Content-based image retrieval
- •3.4.2. Computational Decision Support System: Cataracts
- •3.4.2.2. K nearest neighbors
- •3.4.2.3. GUI of the system
- •3.4.3. Computational Decision Support System: Glaucoma
- •3.4.3.1. Using fuzzy logic
- •3.4.4. Computational Decision Support System: Blepharitis, Rosacea, Sjögren, and Dry Eyes
- •3.4.4.1. Utility of bleb imaging with anterior segment OCT in clinical decision making
- •3.4.4.2. Computational decision support system: RD
- •3.4.4.3. Role of computational system
- •3.4.5. Computational Decision Support System: Amblyopia
- •3.4.5.1. Role of computational decision support system in amblyopia
- •3.5. Conclusion
- •References
- •4.1. Introduction to Oxygen in the Retina
- •4.1.1. Microelectrode Methods
- •4.1.2. Phosphorescence Dye Method
- •4.1.3. Spectrographic Method
- •4.1.6. HSI Method
- •4.2. Experiment One
- •4.2.1. Methods and Materials
- •4.2.1.1. Animals
- •4.2.1.2. Systemic oxygen saturation
- •4.2.1.3. Intraocular pressure
- •4.2.1.4. Fundus camera
- •4.2.1.5. Hyperspectral imaging
- •4.2.1.6. Extraction of spectral curves
- •4.2.1.7. Mapping relative oxygen saturation
- •4.2.1.8. Relative saturation indices (RSIs)
- •4.2.2. Results
- •4.2.2.1. Spectral signatures
- •4.2.2.2. Oxygen breathing
- •4.2.2.3. Intraocular pressure
- •4.2.2.4. Responses to oxygen breathing
- •4.2.2.5. Responses to high IOP
- •4.2.3. Discussion
- •4.2.3.1. Pure oxygen breathing experiment
- •4.2.3.2. IOP perturbation experiment
- •4.2.3.3. Hyperspectral imaging
- •4.3. Experiment Two
- •4.3.1. Methods and Materials
- •4.3.1.1. Animals, anesthesia, blood pressure, and IOP perturbation
- •4.3.1.3. Spectral determinant of percentage oxygen saturation
- •4.3.1.5. Preparation and calibration of red blood cell suspensions
- •4.3.2. Results
- •4.3.2.2. Oxygen saturation of the ONH
- •4.3.3. Discussion
- •4.3.4. Conclusions
- •4.4. Experiment Three
- •4.4.1. Methods and Materials
- •4.4.1.1. Compliance testing
- •4.4.1.2. Hyperspectral imaging
- •4.4.1.3. Selection of ONH structures
- •4.4.1.4. Statistical methods
- •4.4.2. Results
- •4.4.2.1. Compliance testing
- •4.4.2.2. Blood spectra from ONH structures
- •4.4.2.3. Oxygen saturation of ONH structures
- •4.4.2.4. Oxygen saturation maps
- •4.4.3. Discussion
- •4.5. Experiment Four
- •4.5.1. Methods and Materials
- •4.5.2. Results
- •4.5.3. Discussion
- •4.6. Experiment Five
- •4.6.1. Methods and Materials
- •4.6.1.3. Automatic control point detection
- •4.6.1.4. Fused image optimization
- •4.7. Conclusion
- •References
- •5.1. Introduction to Thermography
- •5.2. Data Acquisition
- •5.3. Methods
- •5.3.1. Snake and GVF
- •5.3.2. Target Tracing Function and Genetic Algorithm
- •5.3.3. Locating Cornea
- •5.4. Results
- •5.5. Discussion
- •5.6. Conclusion
- •References
- •6.1. Introduction to Glaucoma
- •6.1.1. Glaucoma Types
- •6.1.1.1. Primary open-angle glaucoma
- •6.1.1.2. Angle-closure glaucoma
- •6.1.2. Diagnosis of Glaucoma
- •6.2. Materials and Methods
- •6.2.1. c/d Ratio
- •6.2.2. Measuring the Area of Blood Vessels
- •6.2.3. Measuring the ISNT Ratio
- •6.3. Results
- •6.4. Discussion
- •6.5. Conclusion
- •References
- •7.1. Introduction to Temperature Distribution
- •7.3. Mathematical Model
- •7.3.1. The Human Eye
- •7.3.2. The Eye Tumor
- •7.3.3. Governing Equations
- •7.3.4. Boundary Conditions
- •7.4. Material Properties
- •7.5. Numerical Scheme
- •7.5.1. Integro-Differential Equations
- •7.6. Results
- •7.6.1. Numerical Model
- •7.6.2. Case 1
- •7.6.3. Case 2
- •7.6.4. Discussion
- •7.7. Parametric Optimization
- •7.7.1. Analysis of Variance
- •7.7.2. Taguchi Method
- •7.7.3. Discussion
- •7.8. Concluding Remarks
- •References
- •8.1. Introduction to IR Thermography
- •8.2. Infrared Thermography and the Measured OST
- •8.3. The Acquisition of OST
- •8.3.1. Manual Measures
- •8.3.2. Semi-Automated and Fully Automated
- •8.4. Applications to Ocular Studies
- •8.4.1. On Ocular Physiologies
- •8.4.2. On Ocular Diseases and Surgery
- •8.5. Discussion
- •References
- •9.1. Introduction
- •9.1.1. Preprocessing
- •9.1.1.1. Shade correction
- •9.1.1.2. Hough transform
- •9.1.1.3. Top-hat transform
- •9.1.2. Image Segmentation
- •9.1.2.1. The region approach
- •9.1.2.2. The gradient-based method
- •9.1.2.3. Edge detection
- •9.1.2.3.2. The second-order derivative methods
- •9.1.2.3.3. The optimal edge detector
- •9.2. Image Registration
- •9.4. Automated, Integrated Image Analysis Systems
- •9.5. Conclusion
- •References
- •10.1. Introduction to Diabetic Retinopathy
- •10.2. Data Acquisition
- •10.3. Feature Extraction
- •10.3.1. Blood Vessel Detection
- •10.3.2. Exudates Detection
- •10.3.3. Hemorrhages Detection
- •10.3.4. Contrast
- •10.4.1. Backpropagation Algorithm
- •10.5. Results
- •10.6. Discussion
- •10.7. Conclusion
- •References
- •11.1. Related Studies
- •11.2.1. Encryption
- •11.3. Compression Technique
- •11.3.1. Huffman Coding
- •11.4. Error Control Coding
- •11.4.1. Hamming Codes
- •11.4.2. BCH Codes
- •11.4.3. Convolutional Codes
- •11.4.4. RS Codes
- •11.4.5. Turbo Codes
- •11.5. Results
- •11.5.1. Using Turbo Codes for Transmission of Retinal Fundus Image
- •11.6. Discussion
- •11.7. Conclusion
- •References
- •12.1. Introduction to Laser-Thermokeratoplasty (LTKP)
- •12.2. Characteristics of LTKP
- •12.3. Pulsed Laser
- •12.4. Continuous-Wave Laser
- •12.5. Mathematical Model
- •12.5.1. Model Description
- •12.5.2. Governing Equations
- •12.5.3. Initial-Boundary Conditions
- •12.6. Numerical Scheme
- •12.6.1. Integro-Differential Equation
- •12.7. Results
- •12.7.1. Pulsed Laser
- •12.7.2. Continuous-Wave Laser
- •12.7.3. Thermal Damage Assessment
- •12.8. Discussion
- •12.9. Concluding Remarks
- •References
- •13.1. Introduction to Optical Eye Modeling
- •13.1.1. Ocular Measurements for Optical Eye Modeling
- •13.1.1.1. Curvature, dimension, thickness, or distance parameters of ocular elements
- •13.1.1.2. Three-dimensional (3D) corneal topography
- •13.1.1.3. Crystalline lens parameters
- •13.1.1.4. Refractive index
- •13.1.1.5. Wavefront aberration
- •13.1.2. Eye Modeling Using Contemporary Optical Design Software
- •13.1.3. Optical Optimization and Merit Function
- •13.2. Personalized and Population-Based Eye Modeling
- •13.2.1. Customized Eye Modeling
- •13.2.1.1. Optimization to the refractive error
- •13.2.1.2. Optimization to the wavefront measurement
- •13.2.1.3. Tolerance analysis
- •13.2.2. Population-Based Eye Modeling
- •13.2.2.1. Accommodative eye modeling
- •13.2.2.2. Ametropic eye modeling
- •13.2.2.3. Modeling with consideration of ocular growth and aging
- •13.2.2.4. Modeling for disease development
- •13.2.3. Validation of Eye Models
- •13.2.3.1. Point spread function and modulation transfer function
- •13.2.3.2. Letter chart simulation
- •13.2.3.3. Night/day vision simulation
- •13.3. Other Modeling Considerations
- •13.3.1. Stiles Crawford Effect (SCE)
- •13.3.1.2. Other retinal properties
- •13.3.1.4. Optical opacity
- •13.4. Examples of Ophthalmic Simulations
- •13.4.1. Simulation of Retinoscopy Measurements with Eye Models
- •13.4.2. Simulation of PR
- •13.5. Conclusion
- •References
- •14.1. Network Infrastructure
- •14.1.1. System Requirements
- •14.1.2. Network Architecture Design
- •14.1.4. GUI Design
- •14.1.5. Performance Evaluation of the Network
- •14.2. Image Analysis
- •14.2.1. Vascular Tree Segmentation
- •14.2.2. Quality Assessment
- •14.2.3. ON Detection
- •14.2.4. Macula Localization
- •14.2.5. Lesion Segmentation
- •14.2.7. Patient Demographics and Statistical Outcomes
- •14.2.8. Disease State Assessment
- •14.2.9. Image QA
- •Acknowledgments
- •References
- •Index
Chapter 10
Computer-Aided Diagnosis of Diabetic Retinopathy Stages Using Digital Fundus Images
Rajendra Acharya U., Oliver Faust, Sumeet Dua,† Seah Jia Hong, Tan Swee Yang, Pui San Lai and Kityee Choo
10.1. Introduction to Diabetic Retinopathy
Diabetes is a condition in which an individual’s blood sugar level exceeds the normal range. Prolonged diabetes damages the small blood vessels in the retina, resulting in diabetic retinopathy (DR). Early-stage DR can go undetected, because the disease often progresses through its first stage without causing debilitating symptoms; noticeable damage occurs only in the later stages. Hence, regular, cost-effective eye screening for diabetic subjects is beneficial. This chapter documents a system that can automatically screen large numbers of fundus images from patients at different DR stages. In this work, 120 digital fundus images were analyzed and classified into three groups: images showing normal retinas, images showing nonproliferative DR (NPDR) (mild, moderate, and severe DR), and images showing proliferative DR (PDR). Features such as hemorrhages, exudates, blood vessels, and textures were extracted from the retinal images and fed to an artificial neural network (ANN) for automated classification. The proposed system delivers an average classification accuracy of 96.67%, sensitivity of 100%, and specificity of 95%.
Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore.
†Department of Computer Science, College of Engineering and Science, Louisiana Tech University, Louisiana, USA.
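The accuracy, sensitivity, and specificity figures quoted above follow directly from a binary confusion matrix. The short Python sketch below shows the arithmetic; the counts used are illustrative stand-ins chosen only to reproduce the reported rates over 120 images, since the chapter's actual confusion matrix is not reproduced here.

```python
def screening_metrics(tp, fn, tn, fp):
    """Compute accuracy, sensitivity, and specificity for a binary screen."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of diseased eyes correctly flagged
    specificity = tn / (tn + fp)   # fraction of healthy eyes correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts (not the chapter's data): 40 diseased, 80 normal images.
acc, sens, spec = screening_metrics(tp=40, fn=0, tn=76, fp=4)
print(f"accuracy={acc:.4f} sensitivity={sens:.4f} specificity={spec:.4f}")
```

With these placeholder counts the formulas yield the same 96.67% / 100% / 95% figures reported in the abstract, which makes the trade-off visible: a screening system is tuned so that sensitivity (missing no diseased eyes) is prioritized over specificity.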
DR is one of the most commonly diagnosed causes of vision loss around the world.1 More specifically, DR represents an end-organ response to a systemic disease, and is a significant risk factor for vision loss and blindness. Individuals with diabetes are 25 times more likely to become blind than individuals without diabetes.2 Because the retina is the innermost layer of the eye, the earliest disease-related changes can be seen in it.
The retina is a thin layer of tissue that covers the inner section of the eye. The tissue is made of photoreceptor cells, which come in two common types: rods and cones. Rods function in dim light, providing black-and-white vision; cones aid in daytime vision and color perception. A third, much rarer type of photoreceptor, the photosensitive ganglion cell, contributes to responses to bright daylight. These photoreceptors are not equally dense across the retinal area; this uneven density can be regarded as sampling with an unequal (spatial) sampling interval. The result of the detection process is an electrical signal, which is sent to the brain via the optic nerve. Figure 10.1 shows the structure of the retina.
The optic disc is the area of the eye where the optic nerve fibers leave the eye on their way to the brain; therefore, there are no photoreceptor cells in this area. This lack of photoreceptors causes a break in the visual field, called the “blind spot.” The macula is an oval spot located in the center of the retina and is responsible for the acuteness of vision. The fovea, located in the center of the macula, helps to discriminate colors and has the highest density of photoreceptor cells.
Experts differentiate four DR stages. The following list provides a brief description of these stages.3 In the mild NPDR stage, at least one
Fig. 10.1. Structure of a normal retina (labeled features: blood vessels, optic disc, fovea).
microaneurysm (MA) is present. The MAs that are present may or may not be accompanied by hard exudates, cotton wool spots, or hemorrhages. In moderate NPDR, more MAs and retinal hemorrhages exist in the retina; cotton wool spots and slight venous beading may also be present. In severe NPDR, still more MAs, retinal hemorrhages, and hard exudates appear. In PDR, large areas of the retina are starved of nourishment: because of the vascular damage caused by diabetes, the blood vessel tree is unable to supply the retina adequately. The signals the starving retina sends out pave the way for the growth of new, fragile blood vessels, which may leak. Such leakage can lead to severe vision loss and blindness.
DR is a common cause of vision loss among working-age people in developed countries.4 Laser photocoagulation may slow the disease’s progression if it is detected early. Digital fundus images of diabetic subjects should be taken every year and examined, so that the right treatment can be administered at the right time to halt progression.5 With improved technology, the use of digital image-processing and data-mining techniques for automatic DR stage detection in screening has increased.6
Computer-aided diagnosis systems assist physicians in detecting abnormalities in retinal fundus images.7 Hayashi et al. proposed a method that can identify blood vessel intersections and, at the same time, detect abnormal blood vessel widths.7 The method was tested on 450 fundus images.
Z. Xiaohui and O. Chutatape presented a three-step approach for the detection and classification of bright lesions in fundus images.8 The steps are local contrast enhancement (preprocessing), improved fuzzy C-means clustering, and hierarchical support vector machine (SVM) classification. Their results show that it is possible to distinguish bright nonlesion areas, exudates, and cotton-wool spots.
A neural network was used to detect diabetic features, namely vessels, exudates, and hemorrhages, in fundus images, and the network’s performance was compared to ophthalmologist screening.9 Blood vessels, exudates, and hemorrhages were detected with accuracy rates of 91.7%, 93.1%, and 73.8%, respectively. Such a system can aid healthcare professionals during screenings of diabetic patients.
Niemeijer et al. presented a method to detect red lesions based on a hybrid approach.10 They were able to detect the red lesions with a sensitivity
of 100% at a specificity of 87%. This method detects red lesions better than several other automatic systems and as well as human experts.
Li et al. proposed a new method to measure the severity of retinal arteriolar narrowing using the arteriolar-to-venular diameter ratio.11 The blood vessels were detected using a combination of Kalman and Gaussian filters. Their results indicate 97.1% accuracy for identifying vessel starting points and 99.2% accuracy for tracking retinal vessels.
DR may advance from mild to severe NPDR. Vallabha et al. proposed a novel method to automatically detect and classify vascular abnormalities in DR.12 The method detects vascular abnormalities using scale- and orientation-selective Gabor filter banks, and classifies fundus images into mild and severe classes.
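The idea behind scale- and orientation-selective filtering can be illustrated with a small Gabor filter bank. The NumPy sketch below is a generic construction, not the implementation of Vallabha et al.; kernel size, scales, and orientations are placeholder values.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel oriented at angle `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame so the sinusoid runs along direction theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + psi)
    return envelope * carrier

# A small bank: 2 scales x 4 orientations. Convolving an image with each
# kernel gives responses selective for vessel-like structures at that
# scale and orientation.
bank = [gabor_kernel(15, sigma=s, theta=t, wavelength=2.5 * s)
        for s in (2.0, 4.0)
        for t in np.arange(4) * np.pi / 4]
print(len(bank), bank[0].shape)
```

In vessel analysis, the maximum response over orientations at each pixel is often taken, since vessels may run in any direction across the retina.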
In our work, we extract four features using image-processing techniques and feed them to a neural network classifier for automatic classification. Figure 10.2 shows an overview of the proposed scheme. The layout of the chapter is as follows: in Sec. 10.2, we address both the data acquisition process and the preprocessing of the raw images; in Sec. 10.3, we present feature extraction using image processing; and in Sec. 10.4, we
Fig. 10.2. Proposed system: input image → image-processing techniques → feature extraction (1. blood vessels, 2. exudates, 3. hemorrhages, 4. texture) → ANN classifier → normal / NPDR / PDR.
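The overall pipeline of Fig. 10.2 can be sketched in a few lines. This is a minimal illustration only: it assumes a 4-element feature vector per image (vessel area, exudate area, hemorrhage area, texture measure) and uses random stand-in weights in a one-hidden-layer network; a real system would use weights learned by training, as described in Sec. 10.4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five hypothetical images, each reduced to a 4-element feature vector:
# [vessel area, exudate area, hemorrhage area, texture contrast].
features = rng.random((5, 4))

# One hidden layer of 8 units; these weights are random placeholders
# standing in for values learned by backpropagation.
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 3)), np.zeros(3)

def classify(x):
    """Forward pass: 4 features -> softmax scores over (normal, NPDR, PDR)."""
    h = np.tanh(x @ W1 + b1)                       # hidden-layer activation
    scores = h @ W2 + b2
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)       # class probabilities

probs = classify(features)
labels = np.array(["normal", "NPDR", "PDR"])[probs.argmax(axis=1)]
print(labels)
```

The point of the sketch is the shape of the computation, not the numbers: the image-processing stages compress each fundus image into a short feature vector, and the classifier operates only on that vector.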
