Source: Computational Analysis of the Human Eye with Applications, Dua, Acharya, Ng, 2011.

Thomas P. Karnowski et al.

Fig. 14.7. Example of retinal vessel segmentation.

method of choice. The study found the method of Ref. [35] to be virtually identical to that of the second observer, although several other methods scored almost as well. In our work, we have implemented one of the studied methods.36,37 In this method, linear elements are identified and enhanced using morphological reconstruction techniques,38 followed by filtering and noise suppression. For our current application, we are less interested in using the vascular tree to identify disease states and more interested in its use for measuring image quality and localizing the ON and fovea. Our implementation in the telemedical network is fast, with a mean running time of less than one second on a 1.66-GHz Intel-based PC server. An example of the segmentation is shown in Fig. 14.7.
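As an illustration only, and not the reconstruction-based implementation of Refs. [36,37], a minimal sketch of linear-element enhancement can be built from directional morphological openings; the structuring-element length, the two orientations, and the threshold below are assumptions:

```python
import numpy as np
from scipy import ndimage

def line_strength(green, length=7):
    """Enhance linear (vessel-like) structures with directional openings.

    Vessels appear dark in the green channel, so the image is inverted
    first. A full implementation would use many orientations and the
    reconstruction step described in the text; this sketch uses only
    horizontal and vertical line structuring elements.
    """
    inv = green.max() - green.astype(float)
    horiz = np.ones((1, length), dtype=bool)
    vert = np.ones((length, 1), dtype=bool)
    responses = [ndimage.grey_opening(inv, footprint=fp) for fp in (horiz, vert)]
    opened = np.max(responses, axis=0)
    # Top-hat: keep only the line-like excess over the local background.
    background = ndimage.grey_opening(inv, size=(length, length))
    return opened - background

def segment_vessels(green, thresh=10.0):
    """Binary vessel map from the line-strength response."""
    return line_strength(green) > thresh
```

On a synthetic image with a single dark horizontal stroke, the opening with the matching orientation preserves the stroke while the square-opening background estimate suppresses it elsewhere, so only stroke pixels survive the threshold.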

14.2.2. Quality Assessment

QA of fundus images is another topic of interest in the literature, and various methods have been proposed.39–42 In our application, we found that most methods were either computationally too expensive or did not perform as well as desired. The QA method we use was designed specifically for a telemedicine network and is well suited to one because of its speed: it allows quick feedback to the camera operator and requires no more computational power than is available on the ordinary personal computer commonly used in a fundus camera platform. We note, however, that while the method can run on such a platform, in our current implementation the quality check runs on the server for development purposes. We summarize the method here; more detail is available in Ref. [43]. An overview of the method is shown in Fig. 14.8. First, as discussed above, a segmentation of the vascular tree is


Automating the Diagnosis, Stratification, and Management of DR

Fig. 14.8. Image QA processing.44

conducted. The image of the vascular tree is divided into annular and wedge-shaped regions so that the coverage of the vascular tree can be estimated accurately. Poor-quality images will not achieve adequate vessel coverage in some regions and will thus allow a pattern recognition engine to identify the image as an outlier (and, therefore, of insufficient quality). The color of the fundus is also measured using a histogram of the RGB values. These measurements are combined into a complete feature vector describing the aspects of the image that our engineering efforts have indicated are key to quality. During development, a training set of images was manually labeled as good quality or poor quality, then a pattern recognition engine


was trained to distinguish the two classes based on the measured numeric values. After creating the classifier, an additional set of unseen images was classified as a testing set, and a reviewing ophthalmologist (E. Chaum) set a threshold. Note that this initial threshold is set to provide images of sufficient quality to be diagnosed by a human; this quality level may not be sufficient for automated diagnosis, and we show that improved results can be obtained when the submitted images are of better quality. Examples of images of different quality from our telemedical network are shown in Fig. 14.9.
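The annular/wedge coverage portion of the feature vector described above can be sketched as follows; the numbers of rings and wedges are assumptions, and the RGB-histogram features are omitted:

```python
import numpy as np

def coverage_features(vessel_mask, n_rings=3, n_wedges=8):
    """Fraction of vessel pixels in each annular/wedge region.

    The binary vessel map is partitioned into concentric rings and
    angular wedges around the image center, and the vessel coverage of
    each region becomes one element of the quality feature vector.
    """
    h, w = vessel_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    theta = np.arctan2(yy - cy, xx - cx)          # angle in [-pi, pi]
    ring = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int),
                      n_rings - 1)
    wedge = ((theta + np.pi) / (2 * np.pi) * n_wedges).astype(int) % n_wedges
    feats = np.zeros(n_rings * n_wedges)
    for i in range(n_rings):
        for j in range(n_wedges):
            region = (ring == i) & (wedge == j)
            feats[i * n_wedges + j] = vessel_mask[region].mean()
    return feats
```

A well-focused, well-illuminated image yields non-trivial coverage in most regions; an outlier (blurred or poorly aimed) image leaves several regions near zero, which is what the pattern recognition engine keys on.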

One important requirement in our implementation is standard image-capturing methods as given by the fundus camera. Our imaging protocol uses single-field, macula-centered images, so the vessel tree always has a distinctive shape that varies only slightly with retinal physiology, macula positioning, and eye movement during imaging.

Fig. 14.9. Examples of acquired images of different qualities. A threshold of 0.4 was established by empirical methods.


In addition, the images submitted to the telemedical network are already labeled as right or left eye, and our method, by virtue of its training examples, is sensitive to this fact. Thus, we run two QA instances, one for the left eye and one for the right eye. Another issue is the field of view of the fundus photography: our original intention was to work with a 45-degree field of view, and in some of our installations this is the case, but others have begun to use a 30-degree field of view because it is easier to obtain consistent images in that setting. This variation is easily handled by maintaining multiple classifiers as well, and it is very simple to identify the different conditions from the metadata of the submitted images.
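The metadata-driven selection among per-eye, per-field-of-view classifiers might be organized as a simple registry; the function names, the metadata keys, and the classifier interface here are all hypothetical:

```python
# Hypothetical registry mapping acquisition conditions to QA classifiers.
classifiers = {}

def register(eye, fov_degrees, clf):
    """Associate a trained QA classifier with one (eye, field-of-view)
    combination, e.g. ("left", 45)."""
    classifiers[(eye, fov_degrees)] = clf

def assess(image, metadata):
    """Route an image to the classifier trained for its acquisition
    conditions, as read from the submission metadata."""
    key = (metadata["eye"], metadata["fov_degrees"])
    return classifiers[key](image)
```

The design keeps each classifier specialized to one imaging condition, matching the observation in the text that the trained model is sensitive to eye laterality and field of view.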

We also note that the telemedical network allows the operator to use a “best-we-can-do” (BWCD) setting. In this mode, if an image fails the quality check, it may still be submitted for physician review. This option allows the system to account for (1) potential failures in the quality estimation process, since the chosen threshold is not foolproof, (2) the possibility that additional fundus images cannot be obtained, either because the patient has left or is otherwise unavailable, and (3) intrinsic difficulty in obtaining fundus images from some patients. In these cases, the image quality may still be sufficient for manual review. We save the vascular tree segmentation, which can then be used in the ON detection method without being recomputed. Results from the initial network are shown in Fig. 14.10, where the quality levels of the images collected during the first eight months of operation of our first clinic are plotted. These results show that the QA module is working to improve the quality of submitted images.
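The gating logic just described reduces to a small decision function; the return labels are hypothetical, and the direction of the quality score (higher is better) and the 0.4 threshold are assumed from Fig. 14.9:

```python
def submit(quality_score, threshold=0.4, best_we_can_do=False):
    """Gate an image on its QA score.

    The BWCD override still forwards a failing image for physician
    review instead of rejecting it outright, covering threshold
    misfires and patients from whom no better image can be obtained.
    """
    if quality_score >= threshold:
        return "accepted"
    return "forwarded_for_review" if best_we_can_do else "rejected"
```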

14.2.3. ON Detection

ON detection is a mature field, and many ON detection methods have been applied to a wide variety of data sets (see, for example, Refs. [15, 35, 45, and 46], which use the publicly available STARE database). In our work, we have studied and implemented two methods. The first relies on features extracted from the vessel segmentation.47 Some authors note that vessel segmentation can be a “weak link” in estimating the ON location. In our implementation, however, this is not an issue, because images with poor vessel segmentation will


Fig. 14.10. Quality levels over an eight-month period. The overall quality of the image capture has improved after a roughly 2–3 month initial learning phase.

result in a failure in the QA and bypass automatic analysis. We summarize the processing portion of this method here.

The segmented vascular tree of the image is used, and a set of four features is generated at each pixel by a neighborhood operation. The first feature is the brightness of the region. The remaining three features are measured on the vascular tree. The second feature is the thickness of the vessels: vessels in an area of interest undergo a thinning operation, which reduces all vessels to a single-pixel thickness; the difference between the thinned result and the original segmentation is measured perpendicular to the vessel direction and averaged within the neighborhood of the target pixel to estimate the thickness of the tree in that region. The third feature is the orientation of the vessels, estimated by measuring the vessel directions, scaled so that vertical vessels are emphasized over horizontal ones. Finally, the density of the vascular tree is measured in each neighborhood by simply summing the number of estimated vessel pixels in the region. These operations are performed on a training set of images with hand-labeled ON centers. Feature values within the ON radius are chosen as examples of “ON,” and all others as examples of “non-ON.” These values are then used to estimate


the parameters of a 4D Gaussian distribution describing the ON regions and the non-ON regions. We also use the hand-labeled training set to estimate the probability density function (PDF) of the ON center location. The use of this model is justified because, as mentioned earlier, our imaging protocol uses macula-centered images. The Gaussian parameters and the PDF are used to compute a likelihood ratio function using maximum a posteriori (MAP) estimation. Results on a pair of data sets are reported in Ref. [47], but those data sets are more difficult than the images seen in a true telemedical network. In our implementation, the results of preliminary testing are more accurate; they are shown in Fig. 14.11.
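Two of the four per-pixel neighborhood features described above (brightness and vessel density) can be sketched with simple box filters; the thickness and orientation features, which need a thinning operation and vessel-direction estimates, are omitted here, and the neighborhood size is an assumption:

```python
import numpy as np
from scipy import ndimage

def pixel_features(green, vessel_mask, size=41):
    """Per-pixel brightness and vessel-density features.

    Brightness is the mean intensity in the neighborhood; density is
    the fraction of vessel pixels in the same neighborhood. A full
    implementation would add the thickness and orientation features.
    """
    brightness = ndimage.uniform_filter(green.astype(float), size=size)
    density = ndimage.uniform_filter(vessel_mask.astype(float), size=size)
    return np.stack([brightness, density], axis=-1)
```

Near the ON, the brightness feature is high (the disc is bright) and the density feature is high (vessels converge there), which is what separates the “ON” and “non-ON” classes in training.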

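A numerical sketch of the MAP combination follows: per-class Gaussians are fit to the training features, and the per-pixel score is the feature log-likelihood ratio plus the log of the spatial ON-center prior. The function names are hypothetical, and `log_prior` stands in for the learned PDF:

```python
import numpy as np

def fit_gaussian(X):
    """Maximum-likelihood mean and covariance for one class (rows of X
    are feature vectors); a small ridge keeps the covariance invertible."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_gauss(x, mu, cov):
    """Log density of a multivariate Gaussian at x."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    quad = d @ np.linalg.inv(cov) @ d
    return -0.5 * (quad + logdet + len(mu) * np.log(2 * np.pi))

def map_score(feat, xy, on_model, bg_model, log_prior):
    """MAP score at one pixel: the 4D-feature likelihood ratio between
    the ON and non-ON models plus the log spatial prior at location xy."""
    return (log_gauss(feat, *on_model) - log_gauss(feat, *bg_model)
            + log_prior(xy))
```

The pixel maximizing this score over the image is taken as the ON center estimate; with a flat prior the score reduces to a plain likelihood-ratio test on the features.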
Because one of our goals in our implementation is a level of physician oversight, we have also studied a complementary method of ON detection that does not rely on vascular tree segmentation. This method, which we will identify as the PCA-LDA method, is based on the model-based method of Ref. [18], which uses principal component analysis (PCA) to capture the information content of a training set of ON images. In their work, “candidate regions” of a retina image are projected to PCA space and then used to reconstruct the regions. The pixel with the smallest residual error is chosen as the ON location. In Ref. [48], we extended the method to include labeled

Fig. 14.11. Histogram of ON location estimates for a test set from the network. A normalized distance of 0.5 or smaller indicates a successful ON location estimate.
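The PCA reconstruction-residual idea behind the PCA-LDA method can be sketched as follows; this is a generic PCA residual computation under assumed patch sizes and component counts, not the implementation of Refs. [18, 48]:

```python
import numpy as np

def fit_pca(patches, n_components=8):
    """Learn a PCA basis from a stack of training ON patches.

    Patches are flattened to vectors; the principal directions are the
    top right singular vectors of the centered data matrix.
    """
    X = patches.reshape(len(patches), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def residual(patch, mean, basis):
    """Reconstruction error of one candidate region in PCA space; the
    candidate with the smallest residual is taken as the ON location."""
    x = patch.reshape(-1).astype(float) - mean
    recon = basis.T @ (basis @ x)
    return np.linalg.norm(x - recon)
```

A candidate region that resembles the training ON patches lies close to the learned subspace and reconstructs with little error, while dissimilar regions leave a large residual.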
