
5 Automated Image Analysis and the Application of Diagnostic Algorithms in an Ocular Telehealth Network 51

network were manually labeled as good quality and poor quality. The feature vectors for these images were then used as training data for a supervised learning method, and an additional set of images was then submitted to the trained classifier. The resulting quality measurement (from 0.0 to 1.0) was reviewed by an ophthalmologist (E. Chaum), and a threshold was set to separate acceptable from unacceptable quality images.
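The train-then-threshold step described above can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the feature values, the choice of logistic regression as the supervised learner, and the threshold value are all placeholders.

```python
# Sketch of the quality-classification step: a supervised model is trained on
# feature vectors from images hand-labeled good/poor quality, and a threshold
# is applied to its continuous [0.0, 1.0] output score. All numbers below are
# synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in feature vectors: rows are images, columns are quality features
# (e.g., vessel-density statistics); labels 1 = good quality, 0 = poor.
X_train = rng.normal(size=(200, 8))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

def quality_score(features):
    """Return a continuous quality measurement in [0.0, 1.0]."""
    return clf.predict_proba(features.reshape(1, -1))[0, 1]

# Threshold separating acceptable from unacceptable images; in the chapter's
# workflow this value is set after review by the ophthalmologist.
QUALITY_THRESHOLD = 0.5

def is_acceptable(features):
    return quality_score(features) >= QUALITY_THRESHOLD
```

In practice the threshold trades off the rate of rejected-but-usable images against the rate of poor images passed onward, which is why it is set under physician review rather than optimized automatically.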

In our practical setting, we note that our imaging protocol uses single-field, macula-centered images. Our quality assessment therefore takes advantage of the fact that the vessel tree will have a distinct shape that varies only slightly from patient to patient due to retinal physiology. We also do not require automated assignment of “right eye” and “left eye,” since the images submitted to the telemedicine network are already labeled as such by the fundus camera platform. Because our method is sensitive to eye laterality, we use two quality-assessment instances, one for each eye.

Another practical concern is the oversight of the clinician. Images that fail the quality assessment can still be submitted to the telemedicine network but are labeled by the clinician as “Best We Can Do” to indicate that higher-quality images were not obtainable. This label is also intended to compensate for possible limitations of the quality assessment itself, improving tolerance to classification errors.

5.4 Anatomic Structure Segmentation

Vessel segmentation is technically an anatomic structure segmentation method, but in our functional description, we group it as part of the quality assessment. Our main anatomic structure elements are thus optic nerve detection, which utilizes the vessel segmentation, and macula localization, which uses both the optic nerve detection and the vessel segmentation.

5.4.1 Optic Nerve Detection

As in vessel segmentation, optic nerve detection has been the subject of much research due to its use as a landmark in the retina and as a tool for diagnosis of diseases which manifest in the optic nerve (such as glaucoma). Some examples include [29–34]. Our work uses two main methods. The first method, shown in Fig. 5.4, uses characteristics of the vessel segmentation [35]. Some optic nerve detection methods try to emphasize successful detection in the face of uncertain vessel segmentation, but in our system we do not regard this as an issue because images with poor vessel segmentation will fail quality assessment and thus will either not be submitted or will be passed directly to the reviewing ophthalmologist.

In our method, four features are generated at each pixel: three are derived from the segmented vessel image, and one from the image itself. For all features, a window around the target pixel is utilized. The first feature extracted from the vessels is a measure of vessel thickness, obtained by thinning the vessels in the window and measuring the distance from the thinned result to the original segmentation, perpendicular to the vessel direction. The second feature is the orientation of the vessels, which is measured with a directional filter and scaled to emphasize vertical vessels. The third feature is the density of the vasculature tree. The final feature is the brightness of the windowed region.

A training set of images with hand-labeled optic nerve (ON) centers is used. The feature values within the ON radius are used to estimate the parameters of a four-dimensional Gaussian distribution describing ON regions. Feature values exterior to the ON region are similarly used to estimate the non-ON area with another Gaussian distribution. We also use the hand-segmented training set to estimate the ON center probability density function (PDF), which is applicable because, in our imaging protocol, images are macula centered. A likelihood ratio is computed, and the best ON location is chosen as the maximum of the likelihood ratio. In [35], results of the algorithm on two difficult data sets are shown; our evaluation on our own network images has yielded even better performance (over 99% accuracy).
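The likelihood-ratio decision can be sketched as follows. The Gaussian parameters and feature values below are synthetic placeholders, not those estimated from our training data, and the ON-center spatial prior is omitted for brevity.

```python
# Sketch of the likelihood-ratio optic nerve (ON) localization: four per-pixel
# features are modeled with one multivariate Gaussian for ON regions and one
# for non-ON regions; the best ON location maximizes the likelihood ratio.
import numpy as np
from scipy.stats import multivariate_normal

# Placeholder parameters standing in for those estimated from hand-labeled
# training data (feature order: thickness, orientation, density, brightness).
on_dist = multivariate_normal(mean=[0.8, 0.9, 0.7, 0.85], cov=np.eye(4) * 0.05)
non_on_dist = multivariate_normal(mean=[0.2, 0.3, 0.3, 0.4], cov=np.eye(4) * 0.1)

def likelihood_ratio_map(features):
    """features: (H, W, 4) array of per-pixel feature vectors."""
    flat = features.reshape(-1, 4)
    ratio = on_dist.pdf(flat) / (non_on_dist.pdf(flat) + 1e-12)
    return ratio.reshape(features.shape[:2])

def best_on_location(features):
    """Pixel (row, col) maximizing the likelihood ratio."""
    lr = likelihood_ratio_map(features)
    return np.unravel_index(np.argmax(lr), lr.shape)
```

A full implementation would multiply the ratio by the ON-center PDF estimated from the macula-centered training images before taking the maximum.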

While we do not see vascular segmentation as an issue, we have also studied a complementary method which we regard as key to providing

T.P. Karnowski et al.


Fig. 5.4 Anatomic localization. The original image (a) undergoes vascular segmentation during the quality estimation process (b). The resulting vascular tree and original image are processed to produce four pseudoimages that represent spatially varying estimates of the brightness, vessel density, thickness, and angle (c). These are processed using a pattern recognition system to produce an estimate of the optic nerve location (d). The optic nerve location and vessel tree are used to produce a parabolic model (e), which is used to estimate the macula position (f) based on the angle of tilt from the parabolic model and known average distances between the macula and optic nerve.

physician oversight. This method is based on the model-based method of [36], which uses principal component analysis (PCA) on a set of manually labeled optic nerves. We extended the method in [37] to include label information using linear discriminant analysis (LDA). The performance of the PCA-LDA method was shown to be superior to that of PCA alone. More importantly, we can use the two complementary optic nerve location methods to estimate the accuracy of the measurement. We have shown that the distance between the two estimates serves as a good indicator of optic nerve location confidence. In practice, images for which this distance exceeds a threshold are referred to the reviewing ophthalmologist [38].
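The disagreement check between the two complementary estimates can be sketched as follows. The threshold value is a placeholder; in our workflow it would be set from validation data.

```python
# Sketch of the confidence check: two independent optic-nerve location
# estimates are compared, and a large disagreement flags the image for
# physician review rather than automated processing.
import math

DISAGREEMENT_THRESHOLD = 40.0  # pixels; placeholder value

def on_location_disagreement(estimate_a, estimate_b):
    """Euclidean distance between two (row, col) optic nerve estimates."""
    return math.dist(estimate_a, estimate_b)

def needs_physician_review(estimate_a, estimate_b):
    """True when the two detectors disagree too strongly to trust either."""
    return on_location_disagreement(estimate_a, estimate_b) > DISAGREEMENT_THRESHOLD
```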

5.4.2 Macula

Our macula location algorithm is described fully in [35] and summarized here. The method uses the successful vascular tree segmentation and fits a parabolic model to the tree, as shown in Fig. 5.4. Some “noise” is removed by deleting vessel branches that are smaller in thickness, since the main “trunk” of the vessels allows a better least-squares fit to the parabolic model. The pixel coordinates of the main trunk are thinned and fit to a parabola using the optic nerve estimate as the locus. A nonlinear least-squares algorithm [39] is utilized; this is similar to the work in [29], but our problem is simpler because we only solve for the orientation and curvature parameters of the parabola. The resulting orientation is used to estimate the fovea position by applying the mean of the optic nerve-to-macula distances from an image training set.

5.4.3 Lesion Segmentation

There are many approaches to lesion segmentation in the literature (see reviews in [14, 40, 50, 51]). A notable ongoing project which uses a publicly



Fig. 5.5 Microaneurysm detection using the Radon Cliff operator (top row). The Radon transform is taken in multiple windows of the input image. Regions where a microaneurysm is present have a distinctive, cliff-like shape in the Radon transform as shown. Exudate detection (bottom row): retinal image (a); image without background (b); multidirectional edges detected (Kirsch method) (c); and likelihood of exudate for each lesion cluster (d). In the images with pseudocolors, blue corresponds to 0 and red to 1.

available database and evaluation method for algorithm comparison is the Retinopathy Online Challenge [41]. In our work, our main driver is diabetic retinopathy, and consequently our initial focus is on the main indicators of this disease. Microaneurysms are focal dilatations of retinal capillaries, from 10 to 100 μm in diameter, that appear as small red dots in a fundus image. Exudates are yellowish, sharp, bright structures caused by fluid leakage. We note that other lesion types (such as hemorrhages and drusen) are also important in assessing the retinal disease state, and they are the subject of future research in this area.

Our main algorithm for the segmentation of microaneurysms [42] uses the “Radon Cliff” operator. After a background removal step, the Radon transform is computed on sliding circular windows through the image.

Microaneurysms have a Gaussian-like circular structure, and these structures create a “cliff-like” structure in the Radon transform. This method has several advantages over existing microaneurysm detectors: the size of the lesions need not be known in advance, it automatically distinguishes lesions from the vasculature in general, and it provides fair microaneurysm localization even without postprocessing the candidates with machine learning techniques. The latter property allows for simpler training phases, although it is recognized that using supervised learning can reduce the number of false positives. An example is shown in Fig. 5.5.
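The windowed-Radon idea can be illustrated with a toy example. This is a deliberately simplified sketch, not the Radon Cliff operator of [42]: the Radon transform is approximated by rotate-and-sum, and the "cliff" score is a crude center-versus-edge contrast.

```python
# Toy illustration of the windowed Radon transform: a Gaussian-like circular
# blob produces a projection that peaks at the window center in EVERY
# direction and drops off sharply ("cliff"), unlike flat background or a
# vessel, whose projection is strong in only some directions.
import numpy as np
from scipy.ndimage import rotate

def window_radon(window, angles):
    """Minimal Radon transform: rotate the window and sum along columns."""
    return np.stack(
        [rotate(window, a, reshape=False, order=1).sum(axis=0) for a in angles]
    )

def cliff_strength(window, angles=(0, 45, 90, 135)):
    """Crude 'cliff' score: the minimum over directions of the
    center-versus-edge projection contrast."""
    sinogram = window_radon(window, angles)
    center = sinogram[:, sinogram.shape[1] // 2]
    edges = (sinogram[:, 0] + sinogram[:, -1]) / 2.0
    return float(np.min(center - edges))

# A synthetic microaneurysm-like blob in the center of a 21x21 window
# (sign convention aside; real microaneurysms are dark in the green channel).
yy, xx = np.mgrid[-10:11, -10:11]
blob = np.exp(-(xx**2 + yy**2) / 8.0)
```

The minimum over directions is what separates blobs from vessels: a vessel segment gives a strong projection only when the direction is parallel to it, so its minimum contrast stays low.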

In our work, we have developed one of the few exudate detection algorithms which work without any previous training. First, the natural pigmentation of the retina (background) is estimated using a large median filter and adapted to the original image via a morphological reconstruction operation. After its removal, the lesion candidates are selected via blob analysis of the structures that appear brighter than the retina pigmentation estimate. Finally, the likelihood of being an exudate is estimated for each candidate by normalizing the edge strength of the original image that overlays the given blob. We assessed the algorithm performance using a dataset of 169 fundus images collected from the telemedicine network with a diverse ethnic background (59% African-American, 28% Caucasian, 10% Hispanic, and 3% other). The algorithm detects on average 58% of the exudates per image and detects lesions on 100% of the images with retinal lesions. As a final note, we have also developed an exudate segmentation algorithm which explicitly addresses the problem of reflection artifacts due to the nerve fiber layer (NFL), the structure of which is often accentuated by the illumination light of the camera in young patients with darkly pigmented retinas. Details are covered in [43].
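The untrained candidate-selection pipeline can be sketched as follows. The morphological-reconstruction refinement and the exact filter sizes and margins of our method are omitted; the values below are placeholders.

```python
# Sketch of the training-free exudate candidate pipeline: estimate the retinal
# background with a large median filter, take structures brighter than the
# background as candidate blobs, and score each blob by its mean edge strength.
import numpy as np
from scipy import ndimage

def exudate_candidates(green_channel, bg_size=31, bright_margin=0.1):
    bg = ndimage.median_filter(green_channel, size=bg_size)  # background estimate
    bright = green_channel - bg > bright_margin              # brighter than background
    labels, n = ndimage.label(bright)                        # blob analysis

    # Edge strength from the gradient magnitude of the original image.
    gy, gx = np.gradient(green_channel)
    edge = np.hypot(gx, gy)

    # Per-blob likelihood proxy: mean edge strength over each candidate blob.
    scores = ndimage.mean(edge, labels, index=range(1, n + 1)) if n else []
    return labels, np.asarray(scores)
```

Exudates have sharp borders, so their mean edge strength is high, while soft bright artifacts (such as diffuse reflections) score low under the same measure.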

5.4.4 Lesion Population Description

The detected lesions are used to create an overall fundus descriptor or “lesion population” metric. Currently, we create a set of 170 features which describe the distribution of lesions, including the sharpness of the lesion edges, their intensity, and shape properties. This high-dimensional vector is then reduced to a lower dimension using labels of the different disease states, which are ground-truthed by the oversight physician in the process of building the archive. In our work, we have used linear discriminant analysis (LDA) as our dimensionality reduction technique. The resulting projected vector set creates an index which is used to perform image queries as detailed in the next section.

5.4.5 Image Query

Image retrieval is performed using the lesion detection and population description algorithms. The reduced feature space is then used for a rapid search by computing a similarity measure between the query image feature vector and the reduced feature vector image set. In our initial stage of development, we have skipped the rapid search because the database size has been sufficiently small; however, as the CBIR archive increases in volume, fast and efficient methods for searching must be used. Our methodology has been described in detail in [44–46].

The developed CBIR method uses the retrieval response to our query image to estimate the posterior probability of each defined disease state. The retrieval process is similar to a k-nearest neighbor (k-NN) method [47], as nearest neighbor classifiers function by locating the population of labeled data points nearest to an unknown point in index space for a specified number of neighbors, k. In our case, we create the posterior probability using a weighted summation of similarities. As in k-NN classifiers, the estimate approaches a nearly optimal posterior estimate as the number of records in the system increases, meaning the diagnostic performance of the archive will theoretically improve as the archive population increases [47]. We have also incorporated a confidence value using Poisson statistics [45, 46], which are applicable to phenomena of a discrete nature (such as the rate of disease occurrence in patients).
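The weighted-similarity posterior can be sketched as follows. The inverse-distance similarity and the value of k below are placeholders for the measures detailed in [44–46].

```python
# Sketch of the retrieval-based posterior: similarities between the query's
# reduced feature vector and the archived vectors weight a k-NN-style vote
# over disease-state labels.
import numpy as np

def disease_posterior(query, archive_vectors, archive_labels, n_states, k=5):
    """Weighted-similarity posterior probability over disease states."""
    dists = np.linalg.norm(archive_vectors - query, axis=1)
    nearest = np.argsort(dists)[:k]          # k most similar archive records
    sims = 1.0 / (1.0 + dists[nearest])      # placeholder similarity measure

    posterior = np.zeros(n_states)
    for idx, sim in zip(nearest, sims):
        posterior[archive_labels[idx]] += sim  # weighted summation of similarities
    return posterior / posterior.sum()
```

Because the vote is taken over the labeled archive, the estimate sharpens as the archive grows, which is the k-NN convergence property referenced above.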

We validated the method in [45, 46] by using two independent sets of image data: an image archive of 1,355 macula-centered images obtained from a DR screening program in the Netherlands [48, 49] and a second image set of 98 images from a Native American population [10]. We used a statistical hold-one-out (HOO) procedure to determine the expected performance of the system, achieving a sensitivity of 90% and a positive predictive value of 95%. Since HOO performance often presents slightly higher expected results than is generally noted from truly independent data, we used the Native American population data set (courtesy of Dr. Matthew Tennant) for comparison. With a quality metric threshold of 0.5 and a Poisson confidence level of 3σ, we achieved a sensitivity and positive predictive value of 82% and 89%, respectively. These results show a level of robustness to data collection methods and image sets.
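The hold-one-out procedure can be sketched as follows. The 1-NN decision rule here is a stand-in for the full CBIR pipeline; only the evaluation loop and the sensitivity/PPV tallies mirror the protocol described above.

```python
# Sketch of hold-one-out (HOO) evaluation: each image is classified against an
# archive built from all the other images, and sensitivity and positive
# predictive value (PPV) are tallied over the whole set.
import numpy as np

def hold_one_out_eval(vectors, labels):
    tp = fp = fn = 0
    for i in range(len(vectors)):
        others = np.delete(vectors, i, axis=0)          # archive without image i
        other_labels = np.delete(labels, i)
        nearest = np.argmin(np.linalg.norm(others - vectors[i], axis=1))
        pred, truth = other_labels[nearest], labels[i]  # 1 = disease, 0 = normal
        tp += pred == 1 and truth == 1
        fp += pred == 1 and truth == 0
        fn += pred == 0 and truth == 1
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv
```

As noted above, HOO reuses nearly the entire archive for every decision, which is why its figures tend to run slightly optimistic compared to a truly independent test set.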