- •Foreword
- •Preface
- •Contents
- •1.1 Introduction
- •1.2 Method
- •1.2.1 Databases
- •1.2.2 Dates
- •1.2.3 Keywords
- •1.2.4 Criteria for Inclusion
- •1.2.5 Criteria for Exclusion
- •1.2.6 Selection of Papers
- •1.3 Results
- •1.3.1 Subspecialty
- •1.3.2 Type of Telemedicine
- •1.3.3 Study Design
- •1.3.4 Final Conclusions of Papers
- •1.4 Discussion
- •References
- •2.1 Introduction
- •2.2 The Need for Diabetic Retinopathy Screening Programs
- •2.4 Guidelines for Referring Patients
- •2.7 Program Models for Diabetic Retinopathy Screening
- •2.9 Program Personnel and Operations
- •2.9.1 Primary Care Providers
- •2.9.2 Photographers
- •2.9.3 Clinical Consultants
- •2.9.4 Administrators
- •2.9.5 A Note to CEOs, Operations Directors, and Clinic Managers
- •2.10 Policies and Procedures
- •2.10.1 Sample Protocol 1
- •2.10.1.1 Diabetic Retinopathy Screening Services
- •Policy
- •Background
- •Procedure
- •2.10.2 Sample Protocol 2
- •2.10.2.1 Pupil Dilation Before Diabetic Retinopathy Photography
- •Policy
- •Background
- •Procedure
- •2.10.3 Sample Protocol 3
- •2.10.3.1 Diabetic Retinopathy Photography Review
- •Policy
- •Background
- •Procedure
- •2.11 Technical Requirements
- •2.11.1 Connectivity
- •2.11.2 Resolution
- •2.11.3 Color
- •2.11.4 Stereopsis
- •2.11.5 Compression
- •2.11.6 Enhancement
- •2.11.7 Pupil Dilation
- •2.11.8 Early California Telemedicine Initiatives Diabetic Retinopathy Screening
- •2.11.9 The American Indian Diabetes Teleophthalmology Grant Program
- •2.11.10 Central Valley EyePACS Diabetic Retinopathy Screening Project
- •2.12.1 Diabetic Retinopathy
- •2.12.1.1 ADA Guidelines Terms
- •2.12.1.2 Vitrectomy
- •References
- •3: Stereopsis and Teleophthalmology
- •3.1 Introduction
- •3.2 History of Stereopsis and Stereopsis in Ophthalmology
- •3.3 Technology and Photography
- •3.3.3 Imaging Fields
- •3.3.4 Image Viewing Techniques
- •3.3.5 Image Compression
- •3.4 Stereoscopic Teleophthalmology Systems
- •3.4.1 University of Alberta
- •3.4.4 Joslin Vision Network
- •3.5 Conclusion
- •References
- •4.1 Introduction
- •4.2 Methods
- •4.2.1 Main Outcome Measures
- •4.3 Results
- •4.3.1 Retinal Video Recording Versus Retinal Still Photography
- •4.3.2 Video Compression Analysis
- •4.4 Discussion
- •References
- •5.1 Introduction
- •5.1.1 Automated, Remote Image Analysis of Retinal Diseases
- •5.1.2 Telehealth
- •5.2 Design Requirements
- •5.2.1 Telehealth Network Architecture
- •5.2.2 Work Flow
- •5.2.3 Performance Evaluation of the Network
- •5.3 Automated Image Analysis Overview
- •5.3.1 Quality Assessment Module
- •5.3.2 Vascular Tree Segmentation
- •5.3.3 Quality Evaluation
- •5.4 Anatomic Structure Segmentation
- •5.4.1 Optic Nerve Detection
- •5.4.2 Macula
- •5.4.3 Lesion Segmentation
- •5.4.4 Lesion Population Description
- •5.4.5 Image Query
- •5.5 Summary
- •References
- •6.1 Introduction
- •6.3 Optical Coherence Tomography to Detect Leakage
- •References
- •7.1 Introduction
- •7.2 Patients and Methods
- •7.2.1 Participants
- •7.2.2 Methods
- •7.2.3 Statistics
- •7.3 Results
- •7.3.1 Reliability of Image Evaluation
- •7.3.2 Prevalence of Glaucomatous Optic Nerve Atrophy
- •7.4 Discussion
- •7.5 Perspectives
- •References
- •8.1 Introduction
- •8.1.2 Homology Between Retinal and Systemic Microvasculature
- •8.1.3 Need for More Precise CVD Risk Prediction
- •8.2.1 Retinal Microvascular Signs
- •8.2.2 Retinal Vessel Biometry
- •8.2.3 Newer Retinal Imaging for Morphologic Features of Retinal Vasculature
- •8.3 Associations of Retinal Imaging and CVD Risk
- •8.3.1.1 Risk of Pre-clinical CVD
- •8.3.1.2 Risk of Stroke
- •8.3.1.3 Risk of Coronary Heart Disease
- •8.3.2.1 Risk of Hypertension
- •8.3.2.2 Risk of Stroke
- •8.3.2.3 Risk of Coronary Heart Disease
- •8.3.2.4 Risk of Peripheral Artery Disease
- •8.3.3 Newer Morphologic Features of Retinal Vasculature
- •8.4 Retinal Imaging and Its Potential as a Tool for CVD Risk Prediction
- •References
- •9.1 Alzheimer’s Disease
- •9.2 Treatments
- •9.3 Diagnosis
- •9.6 Conclusions
- •References
- •10.1 Introduction
- •10.1.1 Stroke
- •10.1.2 Heart Disease
- •10.1.3 Arteriovenous Ratio
- •10.2 Purpose
- •10.3 Method
- •10.3.1 Medical Approach
- •10.3.2 Technical Approach
- •10.3.3 Output of Medical Data
- •10.4 Patients
- •10.5 Results
- •10.5.1 Medical History
- •10.5.2 Telemedical Evaluation of Retinal Vessels
- •10.5.2.1 Prevalence of Retinal Microangiopathy
- •10.5.2.2 Arteriovenous Ratio
- •10.5.2.3 PROCAM-Index
- •10.6 Discussion and Perspective
- •10.6.1 Estimation of “Stroke Risk” Estimated by the Stage of Retinal Microangiopathy
- •References
- •11.1 Introduction
- •11.2 System Requirements
- •11.3 Fundus Camera
- •11.4 Imaging Procedure
- •11.4.1 Reading Center Procedure
- •11.5 Detection of Macular Edema
- •11.6 Implementation
- •11.7 Unreadable Images
- •11.7.1 Impact on Overall Diabetic Retinopathy Assessment Rates
- •11.7.2 Compliance with Recommendations
- •11.7.3 Challenges
- •11.7.4 Summary
- •References
- •12.1 Screening
- •12.2 Background
- •12.3 Historical Perspective in England
- •12.4 Methodology
- •12.4.1 The Aim of the Programme
- •12.5 Systematic DR Screening
- •12.6 Cameras for Use in the English Screening Programme
- •12.7 Software for Use in the English Screening Programme
- •12.9 Implementation in England
- •12.11 Quality Assurance
- •12.12 The Development of External Quality Assurance in the English Screening Programme
- •12.13 Information Technology (IT) Developments for the English Screening Programme
- •12.14 Dataset Development
- •12.15 The Development of External Quality Assurance Test Set for the English Screening Programme
- •12.16 Failsafe
- •12.17 The Epidemic of Diabetes
- •References
- •13.1 Introduction
- •13.2 Burden of Diabetes and Diabetic Retinopathy in India
- •13.3 Diabetic Retinopathy Screening Models
- •13.4 Need for Telescreening
- •13.5 Guidelines for Telescreening
- •13.6 ATA Categories of DR Telescreening Validation
- •13.7 Yield of Diabetic Retinopathy in a Telescreening Model
- •13.8 How Are Images Transferred
- •13.10 How Many Fields Are Enough for Diabetic Retinopathy Screening
- •13.11 Is Mydriasis Needed While Using Nonmydriatic Camera?
- •13.12 Validation Studies on Telescreening
- •13.12.1 Accuracy of Telescreening
- •13.12.2 Patient Satisfaction in Telescreening
- •13.12.3 Cost Effectivity
- •13.12.4 Telescreening for Diabetic Retinopathy: Our Experience
- •13.13 Future of Diabetic Retinopathy Screening
- •References
- •14.1 Introduction
- •14.2 Methods
- •14.3 Discussion
- •14.4 Conclusion
- •References
- •15.1 Introduction
- •15.1.1 Description of the EADRSI
- •15.5 State Support of Screening in the Safety Net
- •15.7 Screening Economics for Providers
- •15.8 Patient Sensitivity to Fees
- •15.9 Conclusion
- •References
- •16.1 Introduction
- •16.2 Setting Up the New Screening Model
- •16.2.1 Phase 1: Training
- •16.2.2 Phase 2: Evaluation of Agreement
- •16.2.3 Phase 3: Implementation of the Screening Model
- •16.3 Technologic Requirements
- •16.3.1 Data Management
- •16.3.2 Data Models
- •16.3.2.1 Data Scheme for Patient-Related Information
- •16.3.2.2 Data Scheme for Images
- •Fundus Camera VISUCAM Pro NM
- •PACS Server
- •ClearCanvas DICOM Visualizer
- •16.4 Results
- •16.4.1 Phase 2: Agreement Evaluation
- •16.4.2 Phase 3: Implementation of the Screening Model
- •16.5 Discussion
- •16.5.1 Evaluation of the Screening Model
- •16.5.2 Prevalence of DR
- •16.5.3 Quality Evaluation
- •16.6 Conclusion
- •References
- •17.1.3 Examination and Treatment
- •17.1.4 Limitations of Current Care
- •17.2 Telemedicine and ROP
- •17.2.2 Accuracy and Reliability of Telemedicine for ROP Diagnosis
- •17.2.3 Operational ROP Telemedicine Systems
- •17.2.4 Potential Barriers
- •17.3 Closing Remarks
- •17.3.1 Future Directions
- •References
- •18.1 Introduction
- •18.2 Neonatal Stress and Pain
- •18.3 ROP Screening Technique
- •18.4 Effect of Different Examination Techniques on Stress
- •18.5 Future of Retinal Imaging in Babies
- •References
- •19.1 Introduction
- •19.2 History of the Program
- •19.3 Telehealth Technologies
- •19.4 Impact of the Program
- •Selected References
- •Preamble
- •Introduction
- •Background
- •The Diabetic Retinopathy Study (DRS)
- •Mission
- •Vision
- •Goals
- •Guiding Principles
- •Ethics
- •Clinical Validation
- •Category 1
- •Category 2
- •Category 3
- •Category 4
- •Communication
- •Medical Care Supervision
- •Patient Care Coordinator
- •Image Acquisition
- •Image Review and Evaluation
- •Information Systems
- •Interoperability
- •Image Acquisition
- •Compression
- •Data Communication and Transmission
- •Computer Display
- •Archiving and Retrieval
- •Security
- •Reliability and Redundancy
- •Documentation
- •Image Analysis
- •Legal Requirements
- •Facility Accreditation
- •Privileging and Credentialing
- •Stark Act and Self-referrals
- •State Medical Practice Acts/Licensure
- •Tort Liability
- •Duty
- •Standards of Care
- •Consent
- •Quality Control
- •Operations
- •Customer Support
- •Originating Site
- •Transmission
- •Distant Site
- •Financial Factors
- •Reimbursement
- •Grants
- •Federal Programs
- •Other Financial Factors
- •Equipment Cost
- •Summary
- •Abbreviations
- •Appendices
- •Appendix A: Interoperability
- •Appendix B: DICOM Metadata
- •Appendix C: Computer-Aided Detection
- •Appendix D: Health Insurance Portability and Accountability Act (HIPAA)
- •Appendix F: Quality Control
- •Appendix H: Customer Support
- •Level 1
- •Level 2
- •Level 3
- •Appendix I: Reimbursement
- •Medicare
- •Medicaid
- •Commercial Insurance Carrier Reimbursement
- •Other Financial Factors
- •Disease Prevention
- •Resource Utilization
- •American Telemedicine Association’s Telehealth Practice Recommendations for Diabetic Retinopathy
- •Conclusion
- •References
- •Contributors
- •Second Edition
- •First Edition
- •Index
5 Automated Image Analysis and the Application of Diagnostic Algorithms in an Ocular Telehealth Network 51
network were manually labeled as good or poor quality. The feature vectors for these images were used as training data for a supervised learning method, and an additional set of images was then submitted to the trained classifier. The resulting quality measurement (from 0.0 to 1.0) was reviewed by an ophthalmologist (E. Chaum), and a threshold was set to separate images of acceptable and unacceptable quality.
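The train-then-threshold step can be sketched as follows. This is a minimal illustration, not the deployed classifier: the feature vectors, labels, and the logistic-regression model are synthetic stand-ins for the vessel-based features and supervised learner described in the text, and the 0.5 threshold is a placeholder for the clinically chosen one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 4-D feature vectors for "good" (1) and "poor" (0) images.
good = rng.normal(loc=1.0, scale=0.3, size=(100, 4))
poor = rng.normal(loc=-1.0, scale=0.3, size=(100, 4))
X = np.vstack([good, poor])
y = np.concatenate([np.ones(100), np.zeros(100)])

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression quality model by batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted quality in [0, 1]
        w -= lr * (X.T @ (p - y) / len(y))
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_logistic(X, y)

def quality_score(feature_vec, w, b):
    """Quality measurement from 0.0 to 1.0 for one image's feature vector."""
    return 1.0 / (1.0 + np.exp(-(feature_vec @ w + b)))

# A threshold (set after expert review in practice) separates acceptable
# from unacceptable images.
THRESHOLD = 0.5
score = quality_score(np.full(4, 1.0), w, b)
acceptable = score >= THRESHOLD
```

Any monotonic classifier score in [0, 1] works here; the key design point from the text is that the threshold is chosen by the reviewing ophthalmologist, not learned.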
In our practical setting, we note that our imaging protocol uses single-field, macula-centered images. Our quality assessment therefore takes advantage of the fact that the vessel tree has a distinct shape that varies only slightly from patient to patient due to retinal physiology. We also do not require assignment of "right eye" and "left eye," since the images submitted to the telemedical network are already labeled as such by the fundus camera platform. Because our method is sensitive to laterality, we use separate quality assessment instances for the right and left eyes.
Another practical concern is clinician oversight. Images which fail the quality assessment can still be submitted to the telemedicine network but are labeled by the clinician as "Best We Can Do" to indicate that higher-quality images were not obtainable. This label also mitigates possible limitations in the quality assessment itself, improving tolerance to classification errors.
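The routing logic above can be sketched as a small decision function. The function name, labels, and threshold are illustrative placeholders, not identifiers from the deployed system:

```python
# Hypothetical sketch of the submission decision: images that fail automated
# quality assessment may still be submitted when the clinician attaches a
# "Best We Can Do" label asserting no better image was obtainable.

QUALITY_THRESHOLD = 0.5  # placeholder value

def submission_label(quality_score, clinician_override):
    """Return a routing label for an acquired image."""
    if quality_score >= QUALITY_THRESHOLD:
        return "ACCEPTED"
    if clinician_override:
        # Clinician asserts that higher-quality images were not obtainable.
        return "BEST WE CAN DO"
    return "REJECTED - RETAKE"
```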
5.4 Anatomic Structure Segmentation
Vessel segmentation is technically an anatomic structure segmentation method, but in our functional description we group it with quality assessment. Our main anatomic structure elements are thus optic nerve detection, which utilizes the vessel segmentation, and macula localization, which uses both the optic nerve detection and the vessel segmentation.
5.4.1 Optic Nerve Detection
As in vessel segmentation, optic nerve detection has been the subject of much research due to its use as a landmark in the retina and as a tool for diagnosing diseases which manifest in the optic nerve (such as glaucoma). Some examples include [29–33, 34]. Our work uses two main methods. The first, shown in Fig. 5.4, uses characteristics of the vessel segmentation [35]. Some optic nerve detection methods emphasize robust detection in the face of uncertain vessel segmentation, but in our system we do not regard this as an issue, because images with poor vessel segmentation will fail quality assessment and thus will either not be submitted or will be passed directly to the reviewing ophthalmologist.
In our method, four features are generated at each pixel: three are derived from the segmented vessel image, and one from the image itself. For all features, a window around the target pixel is used. The first vessel feature is a measure of vessel thickness, obtained by thinning the vessels in the window and measuring the distance between the thinned result and the original segmentation perpendicular to the vessel direction. The second feature is the orientation of the vessels, measured with a directional filter and scaled to emphasize vertical vessels. The third feature is the density of the vascular tree. The final feature is the brightness of the windowed region. A training set of images with hand-labeled optic nerve (ON) centers is used. The feature values within the ON radius are used to estimate the parameters of a four-dimensional Gaussian distribution describing ON regions. Feature values exterior to the ON region are similarly used to estimate another Gaussian distribution for the non-ON area. We also use the hand-segmented training set to estimate the ON center probability density function (PDF), which is informative because, in our imaging protocol, images are macula centered. A likelihood ratio is computed, and the best ON location is chosen as the maximum of the likelihood ratio. In [35], results of the algorithm on two difficult data sets are shown; our evaluated performance on our network images has been even better (over 99% accuracy).
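The likelihood-ratio decision can be illustrated with a small sketch. Each candidate pixel carries a 4-D feature vector (thickness, orientation, density, brightness); ON and non-ON regions are each modeled by a Gaussian fitted from labeled data, and a spatial prior can be added in log space. All means, covariances, and candidate values below are synthetic placeholders, not trained parameters:

```python
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log density of a multivariate Gaussian."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + len(x) * np.log(2 * np.pi))

# Placeholder class-conditional models (would be estimated from labeled windows).
on_mean, on_cov = np.array([3.0, 0.8, 0.4, 0.9]), np.eye(4) * 0.1
bg_mean, bg_cov = np.array([1.0, 0.2, 0.1, 0.4]), np.eye(4) * 0.1

def on_log_likelihood_ratio(features, log_spatial_prior=0.0):
    """Log likelihood ratio of ON vs. non-ON, plus a spatial prior term
    encoding where the ON tends to sit in macula-centered images."""
    return (gaussian_logpdf(features, on_mean, on_cov)
            - gaussian_logpdf(features, bg_mean, bg_cov)
            + log_spatial_prior)

# The best ON location is the pixel maximizing the ratio.
candidates = {
    (120, 40): np.array([2.9, 0.75, 0.38, 0.88]),  # ON-like features
    (60, 60): np.array([1.1, 0.25, 0.12, 0.45]),   # background-like features
}
best = max(candidates, key=lambda p: on_log_likelihood_ratio(candidates[p]))
```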
While we do not see vascular segmentation as an issue, we have also studied a complementary method which we regard as key to providing
T.P. Karnowski et al.
Fig. 5.4 Anatomic localization. The original image (a) undergoes vascular segmentation during the quality estimation process (b). The resulting vascular tree and original image are processed to produce four pseudoimages that represent spatially varying estimates of the brightness, vessel density, thickness, and angle (c). These are processed
using a pattern recognition system to produce an estimate of the optic nerve location (d). The optic nerve location and vessel tree are used to produce a parabolic model (e) which is used to estimate the macula position (f) based on the angle of tilt from the parabolic model and known average distances between the macula and optic nerve
physician oversight. This method is based on the model-based approach of [36], which uses principal component analysis (PCA) on a set of manually labeled optic nerves. We extended the method in [37] to include label information using linear discriminant analysis (LDA). The performance of the PCA-LDA method was shown to be superior to that of PCA alone. More importantly, we can use the two complementary optic nerve location methods to estimate the accuracy of the measurement. We have shown that the distance between the two estimates serves as a good indicator of optic nerve location confidence. In practice, images for which this distance exceeds a threshold are referred to the reviewing ophthalmologist [38].

5.4.2 Macula

Our macula location algorithm is described fully in [35] and summarized here. The method uses the vascular tree segmentation and fits a parabolic model to the tree, as shown in Fig. 5.4. Some "noise" is removed by deleting the thinner vessel branches, since the main "trunk" of the vessels allows a better least-squares fit to the parabolic model. The pixel coordinates of the main trunk are thinned and fit to a parabola using the optic nerve estimate as the locus. A nonlinear least-squares algorithm [39] is used; this is similar to the work in [29], but our problem is simpler because we solve only for the orientation and curvature parameters of the parabola. The resulting orientation is used to estimate the fovea position by applying the mean optic nerve-to-macula distance from a training set of images.

5.4.3 Lesion Segmentation

There are many approaches to lesion segmentation in the literature (see reviews in [14, 40, 50, 51]). A notable ongoing project which uses a publicly
Fig. 5.5 Microaneurysm detection using the Radon Cliff operator (top row). The Radon transform is taken in multiple windows of the input image. Regions where a microaneurysm is present have a distinctive, cliff-like shape in the Radon transform as shown. Exudate detection
(bottom row). Retinal image (a); image without background (b); multidirectional edges detected (Kirsch method) (c); and likelihood of exudate for each lesion cluster (d). In the images with pseudocolors, the blue corresponds to 0 and red to 1
available database and evaluation method for algorithm comparison is the Retinopathy Online Challenge [41]. In our work, our main driver is diabetic retinopathy, and consequently our initial focus is on the main indicators of this disease. Microaneurysms are focal dilatations of retinal capillaries, 10 to 100 μm in diameter, that appear as small red dots in a fundus image. Exudates are yellowish, sharp, bright structures caused by fluid leakage. We note that other lesion types (such as hemorrhages and drusen) are also important in assessing the retinal disease state, and they are the subject of future research in this area.
Our main algorithm for the segmentation of microaneurysms [42] uses the "Radon Cliff" operator. After a background removal process, the Radon transform is computed over sliding circular windows through the image. Microaneurysms have a Gaussian-like circular profile, which creates a distinctive "cliff-like" shape in the Radon transform. This method has several advantages over existing microaneurysm detectors: the size of the lesions can be unknown, it automatically distinguishes lesions from the vasculature, and it provides fair microaneurysm localization even without post-processing the candidates with machine-learning techniques. The latter property allows for simpler training phases, although supervised learning can still reduce the number of false positives. An example is shown in Fig. 5.5.
In our work, we have developed one of the few exudate detection algorithms that work without any prior training. First, the natural pigmentation of the retina (background) is estimated using a large median filter and adapted to the original
image via a morphological reconstruction operation. After its removal, the lesion candidates are selected via blob analysis of the structures that appear brighter than the retinal pigmentation estimate. Finally, the likelihood of being an exudate is estimated for each candidate by normalizing the edge strength of the original image that overlays the given blob. We assessed the algorithm's performance using a dataset of 169 fundus images collected from the telemedicine network with a diverse ethnic background (59% African-American, 28% Caucasian, 10% Hispanic, and 3% other). The algorithm detects on average 58% of the exudates per image and detects lesions in 100% of the images with retinal lesions. As a final note, we have also developed an exudate segmentation algorithm which explicitly addresses the problem of reflective artifacts due to the nerve fiber layer (NFL), the structure of which is often accentuated by the camera's illumination light in young patients with darkly pigmented retinas. Details are covered in [43].
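The background-removal and candidate-selection steps can be sketched as follows. This is a simplified, hedged illustration: the window size, brightness margin, and synthetic image are placeholders, and the published pipeline additionally uses morphological reconstruction, blob analysis, and a per-candidate edge-strength likelihood.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def estimate_background(img, k=11):
    """Large median filter as a smooth background (pigmentation) estimate."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(-2, -1))

def exudate_candidates(img, k=11, margin=0.1):
    """Mask of pixels clearly brighter than the local background estimate."""
    return (img - estimate_background(img, k)) > margin

# Synthetic fundus-like image: a slowly varying background plus one small,
# sharp, bright exudate-like blob.
ys, xs = np.mgrid[0:64, 0:64].astype(float)
img = 0.3 + 0.002 * xs        # smooth pigmentation gradient
img[30:33, 40:43] += 0.5      # bright lesion

mask = exudate_candidates(img)
```

The median filter tracks the smooth pigmentation but not the small bright blob, so subtraction leaves only the lesion above the margin; no training data is required for this step, which is the property the text emphasizes.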
5.4.4 Lesion Population Description

The detected lesions are used to create an overall fundus descriptor or "lesion population" metric. Currently, we create a set of 170 features which describe the distribution of lesions, including the sharpness of the lesion edges, their intensity, and shape properties. This high-dimensional vector is then reduced to a lower dimension using labels of the different disease states, which are ground-truthed by the oversight physician in the process of building the archive. In our work, we have used linear discriminant analysis (LDA) as our dimensionality reduction technique. The resulting projected vector set creates an index which is used to perform image queries, as detailed in the next section.

5.4.5 Image Query

Image retrieval is performed using the lesion detection and population description algorithms. The reduced feature space is then used for a rapid search by computing a similarity measure between the query image feature vector and the reduced feature vector image set. In our initial stage of development, we have skipped the rapid search because the database size has been sufficiently small; however, as the CBIR archive increases in volume, fast and efficient methods for searching must be used. Our methodology has been described in detail in [44–46].

The developed CBIR method uses the retrieval response to our query image to estimate the posterior probability of each defined disease state. The retrieval process is similar to a k-nearest neighbor (k-NN) method [47]: nearest neighbor classifiers function by locating the population of labeled data points nearest to an unknown point in index space for a specified number of neighbors, k. In our case, we create the posterior probability using a weighted summation of similarities. As in k-NN classifiers, the estimate approaches a nearly optimal posterior estimate as the number of records in the system increases, meaning the diagnostic performance of the archive will theoretically improve as the archive population grows [47]. We have also incorporated a confidence value using Poisson statistics [45, 46], which are applicable to phenomena of a discrete nature (such as the rate of disease occurrence in patients).
We validated the method in [45, 46] using two independent image sets: an archive of 1,355 macula-centered images obtained from a DR screening program in the Netherlands [48, 49] and a second set of 98 images from a Native American population [10]. We used a statistical hold-one-out (HOO) procedure to determine the expected performance of the system, achieving a sensitivity of 90% and a positive predictive value of 95%. Since HOO performance often presents slightly higher expected results than truly independent data, we used the Native American data set (courtesy of Dr. Matthew Tennant) for comparison. With a quality metric threshold of 0.5 and a Poisson confidence level of 3σ, we achieved a sensitivity and positive predictive value of 82% and 89%, respectively. These results show a level of robustness to data collection methods and image sets.
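The similarity-weighted k-NN posterior described above can be sketched in a few lines. The index vectors, labels, similarity function, and k below are synthetic placeholders standing in for the LDA-projected archive:

```python
import numpy as np

def knn_posterior(query, index_vectors, labels, n_states, k=5):
    """Similarity-weighted k-NN estimate of the posterior over disease states."""
    dists = np.linalg.norm(index_vectors - query, axis=1)
    nearest = np.argsort(dists)[:k]          # k most similar archive entries
    sims = 1.0 / (1.0 + dists[nearest])      # turn distance into similarity
    posterior = np.zeros(n_states)
    for idx, s in zip(nearest, sims):
        posterior[labels[idx]] += s          # weighted vote per disease state
    return posterior / posterior.sum()       # normalize to probabilities

rng = np.random.default_rng(1)
# Two disease states clustered in a 3-D index space (toy archive).
state0 = rng.normal(0.0, 0.2, size=(50, 3))
state1 = rng.normal(2.0, 0.2, size=(50, 3))
index_vectors = np.vstack([state0, state1])
labels = np.array([0] * 50 + [1] * 50)

posterior = knn_posterior(np.full(3, 2.0), index_vectors, labels, n_states=2)
```

As the archive grows, the retrieved neighborhood shrinks around the query and the weighted vote approaches the true posterior, which is the asymptotic k-NN property the text invokes.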
