- •Contents
- •1.1. Introduction to the Eye
- •1.2. The Anatomy of the Human Visual System
- •1.3. Neurons
- •1.4. Synapses
- •1.5. Vision — Sensory Transduction
- •1.6. Retinal Processing
- •1.7. Visual Processing in the Brain
- •1.8. Biological Vision and Computer Vision Algorithms
- •References
- •2.1. Introduction to Computational Methods for Feature Detection
- •2.2. Preprocessing Methods for Retinal Images
- •2.2.1. Illumination Effect Reduction
- •2.2.1.1. Non-linear brightness transform
- •2.2.2. Image Normalization and Enhancement
- •2.2.2.1. Color channel transformations
- •2.2.2.3. Local adaptive contrast enhancement
- •2.2.2.4. Histogram transformations
- •2.3. Segmentation Methods for Retinal Anatomy Detection and Localization
- •2.3.1. Boundary Detection Methods
- •2.3.1.1. First-order difference operators
- •2.3.1.2. Second-order boundary detection
- •2.3.1.3. Canny edge detection
- •2.3.2. Edge Linkage Methods for Boundary Detection
- •2.3.2.1. Local neighborhood gradient thresholding
- •2.3.2.2. Morphological operations for edge link enhancement
- •2.3.2.3. Hough transform for edge linking
- •2.3.3. Thresholding for Image Segmentation
- •2.3.3.1. Segmentation with a single threshold
- •2.3.3.2. Multi-level thresholding
- •2.3.3.3. Windowed thresholding
- •2.3.4. Region-Based Methods for Image Segmentation
- •2.3.4.1. Region growing
- •2.3.4.2. Watershed segmentation
- •2.4.1. Statistical Features
- •2.4.1.1. Geometric descriptors
- •2.4.1.2. Texture features
- •2.4.1.3. Invariant moments
- •2.4.2. Data Transformations
- •2.4.2.1. Fourier descriptors
- •2.4.2.2. Principal component analysis (PCA)
- •2.4.3. Multiscale Features
- •2.4.3.1. Wavelet transform
- •2.4.3.2. Scale-space methods for feature extraction
- •2.5. Summary
- •References
- •3.1.1. EBM Process
- •3.1.2. Evidence-Based Medical Issues
- •3.1.3. Value-Based Evidence
- •3.2.1. Economic Evaluation
- •3.2.2. Decision Analysis Method
- •3.2.3. Advantages of Decision Analysis
- •3.2.4. Perspective in Decision Analysis
- •3.2.5. Decision Tree in Decision Analysis
- •3.3. Use of Information Technologies for Diagnosis in Ophthalmology
- •3.3.1. Data Mining in Ophthalmology
- •3.3.2. Graphical User Interface
- •3.4. Role of Computational System in Curing Disease of an Eye
- •3.4.1. Computational Decision Support System: Diabetic Retinopathy
- •3.4.1.1. Wavelet-based neural network
- •3.4.1.2. Content-based image retrieval
- •3.4.2. Computational Decision Support System: Cataracts
- •3.4.2.2. K nearest neighbors
- •3.4.2.3. GUI of the system
- •3.4.3. Computational Decision Support System: Glaucoma
- •3.4.3.1. Using fuzzy logic
- •3.4.4. Computational Decision Support System: Blepharitis, Rosacea, Sjögren, and Dry Eyes
- •3.4.4.1. Utility of bleb imaging with anterior segment OCT in clinical decision making
- •3.4.4.2. Computational decision support system: RD
- •3.4.4.3. Role of computational system
- •3.4.5. Computational Decision Support System: Amblyopia
- •3.4.5.1. Role of computational decision support system in amblyopia
- •3.5. Conclusion
- •References
- •4.1. Introduction to Oxygen in the Retina
- •4.1.1. Microelectrode Methods
- •4.1.2. Phosphorescence Dye Method
- •4.1.3. Spectrographic Method
- •4.1.6. HSI Method
- •4.2. Experiment One
- •4.2.1. Methods and Materials
- •4.2.1.1. Animals
- •4.2.1.2. Systemic oxygen saturation
- •4.2.1.3. Intraocular pressure
- •4.2.1.4. Fundus camera
- •4.2.1.5. Hyperspectral imaging
- •4.2.1.6. Extraction of spectral curves
- •4.2.1.7. Mapping relative oxygen saturation
- •4.2.1.8. Relative saturation indices (RSIs)
- •4.2.2. Results
- •4.2.2.1. Spectral signatures
- •4.2.2.2. Oxygen breathing
- •4.2.2.3. Intraocular pressure
- •4.2.2.4. Responses to oxygen breathing
- •4.2.2.5. Responses to high IOP
- •4.2.3. Discussion
- •4.2.3.1. Pure oxygen breathing experiment
- •4.2.3.2. IOP perturbation experiment
- •4.2.3.3. Hyperspectral imaging
- •4.3. Experiment Two
- •4.3.1. Methods and Materials
- •4.3.1.1. Animals, anesthesia, blood pressure, and IOP perturbation
- •4.3.1.3. Spectral determinant of percentage oxygen saturation
- •4.3.1.5. Preparation and calibration of red blood cell suspensions
- •4.3.2. Results
- •4.3.2.2. Oxygen saturation of the ONH
- •4.3.3. Discussion
- •4.3.4. Conclusions
- •4.4. Experiment Three
- •4.4.1. Methods and Materials
- •4.4.1.1. Compliance testing
- •4.4.1.2. Hyperspectral imaging
- •4.4.1.3. Selection of ONH structures
- •4.4.1.4. Statistical methods
- •4.4.2. Results
- •4.4.2.1. Compliance testing
- •4.4.2.2. Blood spectra from ONH structures
- •4.4.2.3. Oxygen saturation of ONH structures
- •4.4.2.4. Oxygen saturation maps
- •4.4.3. Discussion
- •4.5. Experiment Four
- •4.5.1. Methods and Materials
- •4.5.2. Results
- •4.5.3. Discussion
- •4.6. Experiment Five
- •4.6.1. Methods and Materials
- •4.6.1.3. Automatic control point detection
- •4.6.1.4. Fused image optimization
- •4.7. Conclusion
- •References
- •5.1. Introduction to Thermography
- •5.2. Data Acquisition
- •5.3. Methods
- •5.3.1. Snake and GVF
- •5.3.2. Target Tracing Function and Genetic Algorithm
- •5.3.3. Locating Cornea
- •5.4. Results
- •5.5. Discussion
- •5.6. Conclusion
- •References
- •6.1. Introduction to Glaucoma
- •6.1.1. Glaucoma Types
- •6.1.1.1. Primary open-angle glaucoma
- •6.1.1.2. Angle-closure glaucoma
- •6.1.2. Diagnosis of Glaucoma
- •6.2. Materials and Methods
- •6.2.1. c/d Ratio
- •6.2.2. Measuring the Area of Blood Vessels
- •6.2.3. Measuring the ISNT Ratio
- •6.3. Results
- •6.4. Discussion
- •6.5. Conclusion
- •References
- •7.1. Introduction to Temperature Distribution
- •7.3. Mathematical Model
- •7.3.1. The Human Eye
- •7.3.2. The Eye Tumor
- •7.3.3. Governing Equations
- •7.3.4. Boundary Conditions
- •7.4. Material Properties
- •7.5. Numerical Scheme
- •7.5.1. Integro-Differential Equations
- •7.6. Results
- •7.6.1. Numerical Model
- •7.6.2. Case 1
- •7.6.3. Case 2
- •7.6.4. Discussion
- •7.7. Parametric Optimization
- •7.7.1. Analysis of Variance
- •7.7.2. Taguchi Method
- •7.7.3. Discussion
- •7.8. Concluding Remarks
- •References
- •8.1. Introduction to IR Thermography
- •8.2. Infrared Thermography and the Measured OST
- •8.3. The Acquisition of OST
- •8.3.1. Manual Measures
- •8.3.2. Semi-Automated and Fully Automated
- •8.4. Applications to Ocular Studies
- •8.4.1. On Ocular Physiologies
- •8.4.2. On Ocular Diseases and Surgery
- •8.5. Discussion
- •References
- •9.1. Introduction
- •9.1.1. Preprocessing
- •9.1.1.1. Shade correction
- •9.1.1.2. Hough transform
- •9.1.1.3. Top-hat transform
- •9.1.2. Image Segmentation
- •9.1.2.1. The region approach
- •9.1.2.2. The gradient-based method
- •9.1.2.3. Edge detection
- •9.1.2.3.2. The second-order derivative methods
- •9.1.2.3.3. The optimal edge detector
- •9.2. Image Registration
- •9.4. Automated, Integrated Image Analysis Systems
- •9.5. Conclusion
- •References
- •10.1. Introduction to Diabetic Retinopathy
- •10.2. Data Acquisition
- •10.3. Feature Extraction
- •10.3.1. Blood Vessel Detection
- •10.3.2. Exudates Detection
- •10.3.3. Hemorrhages Detection
- •10.3.4. Contrast
- •10.4.1. Backpropagation Algorithm
- •10.5. Results
- •10.6. Discussion
- •10.7. Conclusion
- •References
- •11.1. Related Studies
- •11.2.1. Encryption
- •11.3. Compression Technique
- •11.3.1. Huffman Coding
- •11.4. Error Control Coding
- •11.4.1. Hamming Codes
- •11.4.2. BCH Codes
- •11.4.3. Convolutional Codes
- •11.4.4. RS Codes
- •11.4.5. Turbo Codes
- •11.5. Results
- •11.5.1. Using Turbo Codes for Transmission of Retinal Fundus Image
- •11.6. Discussion
- •11.7. Conclusion
- •References
- •12.1. Introduction to Laser-Thermokeratoplasty (LTKP)
- •12.2. Characteristics of LTKP
- •12.3. Pulsed Laser
- •12.4. Continuous-Wave Laser
- •12.5. Mathematical Model
- •12.5.1. Model Description
- •12.5.2. Governing Equations
- •12.5.3. Initial-Boundary Conditions
- •12.6. Numerical Scheme
- •12.6.1. Integro-Differential Equation
- •12.7. Results
- •12.7.1. Pulsed Laser
- •12.7.2. Continuous-Wave Laser
- •12.7.3. Thermal Damage Assessment
- •12.8. Discussion
- •12.9. Concluding Remarks
- •References
- •13.1. Introduction to Optical Eye Modeling
- •13.1.1. Ocular Measurements for Optical Eye Modeling
- •13.1.1.1. Curvature, dimension, thickness, or distance parameters of ocular elements
- •13.1.1.2. Three-dimensional (3D) corneal topography
- •13.1.1.3. Crystalline lens parameters
- •13.1.1.4. Refractive index
- •13.1.1.5. Wavefront aberration
- •13.1.2. Eye Modeling Using Contemporary Optical Design Software
- •13.1.3. Optical Optimization and Merit Function
- •13.2. Personalized and Population-Based Eye Modeling
- •13.2.1. Customized Eye Modeling
- •13.2.1.1. Optimization to the refractive error
- •13.2.1.2. Optimization to the wavefront measurement
- •13.2.1.3. Tolerance analysis
- •13.2.2. Population-Based Eye Modeling
- •13.2.2.1. Accommodative eye modeling
- •13.2.2.2. Ametropic eye modeling
- •13.2.2.3. Modeling with consideration of ocular growth and aging
- •13.2.2.4. Modeling for disease development
- •13.2.3. Validation of Eye Models
- •13.2.3.1. Point spread function and modulation transfer function
- •13.2.3.2. Letter chart simulation
- •13.2.3.3. Night/day vision simulation
- •13.3. Other Modeling Considerations
- •13.3.1. Stiles Crawford Effect (SCE)
- •13.3.1.2. Other retinal properties
- •13.3.1.4. Optical opacity
- •13.4. Examples of Ophthalmic Simulations
- •13.4.1. Simulation of Retinoscopy Measurements with Eye Models
- •13.4.2. Simulation of PR
- •13.5. Conclusion
- •References
- •14.1. Network Infrastructure
- •14.1.1. System Requirements
- •14.1.2. Network Architecture Design
- •14.1.4. GUI Design
- •14.1.5. Performance Evaluation of the Network
- •14.2. Image Analysis
- •14.2.1. Vascular Tree Segmentation
- •14.2.2. Quality Assessment
- •14.2.3. ON Detection
- •14.2.4. Macula Localization
- •14.2.5. Lesion Segmentation
- •14.2.7. Patient Demographics and Statistical Outcomes
- •14.2.8. Disease State Assessment
- •14.2.9. Image QA
- •Acknowledgments
- •References
- •Index
Rajendra Acharya, U. et al.
classes at the output. However, the network was trained to identify only three classes (normal, NPDR, and PDR) given by the decoded binary outputs of [00, 10, 01], respectively. Values of the blood vessels, exudates, hemorrhages, and contrast of the image were computed and fed as input to the classifier.
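The binary output decoding described above can be sketched as follows. The weights, hidden-layer size, and feature values below are made-up placeholders for illustration, not the authors' trained classifier; only the [00, 10, 01] decoding scheme comes from the text.

```python
import numpy as np

# Decoding table from the text: two output nodes, thresholded to bits,
# mapped as normal=[0,0], NPDR=[1,0], PDR=[0,1].
CLASS_CODES = {(0, 0): "normal", (1, 0): "NPDR", (0, 1): "PDR"}

def decode_output(y, threshold=0.5):
    """Threshold the two output activations and look up the class."""
    bits = tuple(int(v > threshold) for v in y)
    return CLASS_CODES.get(bits, "unknown")

def forward(x, W1, b1, W2, b2):
    """One sigmoid hidden layer feeding two sigmoid output nodes."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

# Hypothetical normalized features (blood vessel area, exudate area,
# hemorrhage area, contrast) and random untrained weights.
rng = np.random.default_rng(0)
x = np.array([0.85, 0.02, 0.01, 0.09])
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)
y = forward(x, W1, b1, W2, b2)
print(decode_output([0.1, 0.9]))  # bits (0, 1) decode to "PDR"
```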
10.5. Results
Table 10.2 shows the analysis of variance (ANOVA) results of features extracted for normal, NPDR, and PDR images. These results show that the blood vessel area is more for NPDR and less for PDR. Exudates are absent for normal and present for NPDR. The hemorrhage area is larger for PDR, and the contrast is high for normal fundus images. All features are clinically significant (p < 0.0001).
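The one-way ANOVA behind the p-values in Table 10.2 can be sketched as below. The samples are synthetic, drawn only to mimic the reported contrast means and standard deviations; they are not the study data, and the group size is arbitrary.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the contrast feature (mean ± SD from Table 10.2).
rng = np.random.default_rng(1)
normal = rng.normal(0.0867, 0.0136, 60)  # normal fundus images
npdr   = rng.normal(0.0691, 0.0182, 60)  # NPDR images
pdr    = rng.normal(0.0698, 0.0212, 60)  # PDR images

# One-way ANOVA: does at least one group mean differ significantly?
f_stat, p_value = stats.f_oneway(normal, npdr, pdr)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

With well-separated group means, as in the table, the p-value comes out far below the 0.05 significance level, which is what "clinically significant (p < 0.0001)" asserts for the real data.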
Table 10.3 shows the classification results of the ANN classifier. We have used 90 images for training and 30 for testing. Our proposed system
Table 10.2. Results of area of blood vessels, exudates, hemorrhages, and contrast.

| Features | PDR | NPDR | Normal | p-value |
| --- | --- | --- | --- | --- |
| Blood vessel area | 51,171 ± 20,652 | 343,975 ± 14,890 | 325,000 ± 20,278 | p < 0.0001 |
| Exudate area | 8,148 ± 3,149 | 8,776 ± 2,106 | — | p < 0.0001 |
| Hemorrhage area | 5,103 ± 4,329 | 2,213 ± 2,107 | — | p < 0.0001 |
| Contrast | 0.0698 ± 0.0212 | 0.0691 ± 0.0182 | 0.0867 ± 0.0136 | p < 0.0001 |
Table 10.3. Results of automatic classification.

| Class | No. of data used for training | No. of data used for testing | Overall percentage of success |
| --- | --- | --- | --- |
| Normal | 30 | 10 | 100.00 |
| PDR | 30 | 10 | 90.00 |
| Non-PDR | 30 | 10 | 100.00 |
| Average | | | 96.67 |
Computer-Aided Diagnosis of Diabetic Retinopathy Stages
classifies all normal and NPDR images correctly, and PDR images are classified correctly with an accuracy of 90%. The average classification accuracy is 96.67%. Table 10.4 shows the results of sensitivity, specificity, and positive predictive accuracy for the proposed system. The table shows
Table 10.4. Results of sensitivity, specificity, and positive predictive accuracy for the proposed system.

| True positive | True negative | False positive | False negative | Positive predictive value | Sensitivity | Specificity |
| --- | --- | --- | --- | --- | --- | --- |
| 10 | 19 | 1 | 0 | 90.91% | 100% | 95% |
Fig. 10.8. Results of (a) blood vessel detection and (b) exudate detection for normal, non-proliferative, and proliferative images using image processing.
that sensitivity and specificity of the proposed system are 100% and 95%, respectively.
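These figures follow directly from the confusion counts in Table 10.4 (TP = 10, TN = 19, FP = 1, FN = 0); a minimal sketch of the arithmetic:

```python
# Standard diagnostic metrics from confusion counts (Table 10.4).
def diagnostic_metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)            # fraction of diseased images caught
    specificity = tn / (tn + fp)            # fraction of normal images passed
    ppv = tp / (tp + fp)                    # positive predictive value
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, ppv, accuracy

sens, spec, ppv, acc = diagnostic_metrics(tp=10, tn=19, fp=1, fn=0)
print(f"sensitivity={sens:.0%}  specificity={spec:.0%}  PPV={ppv:.2%}")
# sensitivity=100%  specificity=95%  PPV=90.91%
```

Note that the same counts give an overall accuracy of 29/30, about 96.67%, consistent with the average success rate in Table 10.3.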
Figures 10.8(a)–(c) show the results of blood vessel detection, exudate detection, and hemorrhage detection for normal, NPDR, and PDR images. These figures show fewer blood vessels in normal images and more in the NPDR and PDR stages. Exudates and hemorrhages are absent in normal images but present in the NPDR and PDR stages.
10.6. Discussion
A computer diagnostic system was developed to detect three early lesion types (hemorrhages and microaneurysms (MAs), hard exudates, and cotton-wool spots) and to classify NPDR based on these lesions using 361 images.19 The rates of correct diagnosis between the computer system and the reading center for these lesion types ranged from 82.6% to 88.3%. The results from the proposed classification system were comparable to those provided by human experts, and the system can be used as a clinical aid to physicians for screening, diagnosing, and detecting NPDR.
A decision support system for the early diagnosis of DR by detecting the presence of MAs was developed by Kahai et al.20 Their results show that their support system was able to achieve sensitivity and specificity of 100% and 67%, respectively.
Four retinal conditions (normal retina, moderate NPDR, severe NPDR, and PDR) were automatically classified using the area and perimeter of blood vessels in the red, green, and blue layers of the images and a neural network classifier.21 Their proposed method demonstrated an accuracy of more than 80%, and sensitivity and specificity of more than 90% and 100%, respectively.
Using blood vessels, exudates, and texture parameters together with a feedforward neural network, three classes (normal, NPDR, and PDR) were automatically classified.15 They identified the unknown class with an accuracy of 93%, and a sensitivity and specificity of 90% and 100%, respectively.
A low-cost screening method to identify normal and abnormal fundus images, based on exudates and lesions, was proposed.22 The classification was performed with a statistical classifier and a local-window-based
verification strategy. Their method identified all retinal images with exudates with 100% accuracy and normal images as normal with 70% accuracy.
The normal, mild NPDR, moderate NPDR, severe NPDR, and PDR classes were automatically classified using higher-order spectra features and an SVM classifier.23 Acharya et al. demonstrated sensitivity and specificity of 82% and 88%, respectively, using 300 digital fundus images.
Features, namely blood vessels, exudates, MAs, and hemorrhages, were extracted and assembled into an input vector before being fed to an SVM classifier, which had to identify five groups: normal retina, mild NPDR, moderate NPDR, severe NPDR, and PDR.16 They identified the unknown class with sensitivity and specificity of 82% and 86%, respectively.
In our present work, four features, namely blood vessels, exudates, hemorrhages, and texture, were used. These parameters were presented to a neural network for automated classification. Our results show an accuracy of 96.67%, a sensitivity of 100%, and a specificity of 95%. The classification accuracy can be further improved by using better features, more diverse fundus images taken under good lighting conditions, and better classifiers.
10.7. Conclusion
DR is a progressive eye disease caused by prolonged diabetes. It can result in vision loss if not detected at an early stage. In this work, we have proposed an automated system to identify normal, NPDR, and PDR fundus images using image-processing and data-mining techniques. The proposed system identifies normal, NPDR, and PDR images with an accuracy of 96.67%, a sensitivity of 100%, and a specificity of 95%. Our results show that the system can help ophthalmologists automatically identify early-stage DR, and ophthalmologists can use it as an adjunct tool for screening.
References
1. Klein, R., Klein, B.E., and Moss, S.E. Visual impairment in diabetes. Ophthalmology 91:1–9, 1984.
2. Eye Disease: http://visioneyedoctor.com/eye-disease/page-4.htm.
3. Frank, R.N. Diabetic retinopathy. Prog Retin Eye Res 14:361–392, 1995.
4. Ong, G.L., Ripley, L.G., Newsom, R.S., Cooper, M., and Casswell, A.G. Screening for sight-threatening diabetic retinopathy: comparison of fundus photography with automated color contrast threshold test. Am J Ophthalmol 137:445–452, 2004.
5. Fong, D.S., Aiello, L., Gardner, T.W., King, G.L., Blankenship, G., Cavallerano, J.D., Ferris, F.L., and Klein, R. Diabetic retinopathy. Diabetes Care 26:226–229, 2003.
6. http://reseau-ophdiat.aphp.fr/Document/Doc/confliverpool.pdf.
7. Hayashi, J., Kunieda, T., Cole, J., Soga, R., Hatanaka, Y., Lu, M., Hara, T., and Fujita, H. A development of computer-aided diagnosis system using fundus images. In: Proceedings of the Seventh International Conference on Virtual Systems and MultiMedia (VSMM 2001), pp. 429–438, 2001.
8. Xiaohui, Z. and Chutatape, O. Detection and classification of bright lesions in colour fundus images. Int Conf Image Process 1:139–142, 2004.
9. Gardner, G., Keating, D., Williamson, T., and Elliott, A. Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool. Br J Ophthalmol 80:940–944, 1996.
10. Niemeijer, M., van Ginneken, B., Staal, J., Suttorp-Schulten, M., and Abramoff, M. Automatic detection of red lesions in digital color fundus photographs. IEEE Trans Med Imaging 24:584–592, 2005.
11. Li, H., Hsu, W., Lee, M.L., and Wong, T.Y. Automated grading of retinal vessel caliber. IEEE Trans Biomed Eng 52:1352–1355, 2005.
12. Vallabha, D., Dorairaj, R., Namuduri, K.R., and Thompson, H. Automated detection and classification of vascular abnormalities in diabetic retinopathy. Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, 2004.
13. Gonzalez, R.C. and Woods, R.E. Digital Image Processing, Second Edition, Prentice Hall, New Jersey, 2001.
14. Acharya, U.R., Lim, C.M., Ng, E.Y.K., Chee, C., and Tamura, T. Computer based detection of diabetes retinopathy stages using digital fundus images. J Eng Med 223(H5):545–553, 2009.
15. Nayak, J., Bhat, P.S., Acharya, U.R., Lim, C.M., and Kagathi, M. Automated identification of different stages of diabetic retinopathy using digital fundus images. J Med Sys 32:107–115, 2008.
16. Sinthanayothin, C., Boyce, J., and Williamson, C.T. Automated localization of the optic disk, fovea, and retinal blood vessels from digital colour fundus images. Br J Ophthalmol 38:902–910, 1999.
17. Randle, V. and Engler, O. Introduction to Texture Analysis: Macrotexture, Microtexture and Orientation Mapping, CRC Press, 2000.
18. Yegnanarayana, B. Artificial Neural Networks, Prentice-Hall of India, New Delhi, 1999.
19. Samuel, C.L., Elisa, T.L., Yiming, W., Ronald, K., Ronald, M.K., and Ann, W. Computer classification of nonproliferative diabetic retinopathy. Arch Ophthalmol 123:759–764, 2005.
20. Kahai, P., Namuduri, K.R., and Thompson, H. A decision support framework for automated screening of diabetic retinopathy. Int J Biomed Imaging:1–8, 2006.
21. Wong, L.Y., Acharya, U.R., Venkatesh, Y.V., Chee, C., Lim, C.M., and Ng, E.Y.K. Identification of different stages of diabetic retinopathy using retinal optical images. Inf Sci 178, 2008.
22. Wang, H., Hsu, W., Goh, K., and Lee, M. An effective approach to detect lesions in colour retinal images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 181–187, 2000.
23. Acharya, U.R., Chua, K.C., Ng, E.Y.K., Wei, W., and Chee, C. Application of higher order spectra for the identification of diabetes retinopathy stages. J Med Sys 32:481–488, 2008.
Chapter 11
Reliable Transmission of Retinal Fundus Images with Patient Information Using Encryption, Watermarking, and Error Control Codes
Myagmarbayar Nergui, Sripati Acharya, U., Rajendra Acharya, U.†, Wenwei Yu‡ and Sumeet Dua§
Today, digital media are important to many aspects of entertainment, business, and medicine. Many service and sales providers, broadcast companies, wireless cellular phone service providers, and entertainment electronics companies use digital signal processing (DSP) techniques to improve their quality of service (QoS). Ophthalmologists and other eye care clinicians can leverage this trend, especially with eye images, which can be transmitted along with patient information. Patient diagnoses and image information are processed and stored digitally. Presently, patient diagnosis (text) and retinal fundus image information compose two forms of data and are read as different files. Hence, a patient diagnosis may be matched to the wrong information. This interchanging of patient information can be prevented by embedding patient diagnosis and retinal fundus image
Department of Electronics & Communication, National Institute of Technology Karnataka, Surathkal, India.
†Department of ECE, Ngee Ann Polytechnic, Singapore.
‡Graduate School of Medical System Engineering, Chiba University, Japan.
§Department of Computer Science, Louisiana Tech University, Ruston, LA 71272, USA.
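One simple way to realize the embedding idea from the abstract is least-significant-bit (LSB) insertion, sketched below. This is illustrative only: the chapter's actual pipeline additionally applies encryption, Huffman compression, and error control coding, and the image and patient string here are made up.

```python
import numpy as np

def embed_text(image, text):
    """Overwrite pixel LSBs with the bits of `text` (length-prefixed)."""
    data = len(text).to_bytes(4, "big") + text.encode("utf-8")
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = image.flatten()                      # copy; original left intact
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_text(image):
    """Recover the length prefix, then decode that many payload bytes."""
    lsb = image.flatten() & 1
    n = int.from_bytes(np.packbits(lsb[:32]).tobytes(), "big")
    return np.packbits(lsb[32 : 32 + 8 * n]).tobytes().decode("utf-8")

fundus = np.zeros((64, 64), dtype=np.uint8)     # stand-in grayscale image
stego = embed_text(fundus, "ID:1234 dx:NPDR")
print(extract_text(stego))                      # -> ID:1234 dx:NPDR
```

Because the diagnosis travels inside the pixel data itself, the text file can no longer be paired with the wrong image; the ±1 intensity change per pixel is visually negligible.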