Residency / Ophthalmology / English materials / Computational Analysis of the Human Eye with Applications_Dua, Acharya, Ng_2011.pdf

Computational Methods for Feature Detection in Optical Images

Fig. 2.1. (a) Matrix representation of an image, (b) RGB retinal image, (c) green channel of RGB image, and (d) 3D visualization of green channel intensities.

difficult, trial-and-error undertaking, with the best results typically obtained only after multiple attempts. In this chapter, we aim to provide an overview of methods for automatic classification of retinal image pathology, offering a starting point for medical researchers and computational scientists who wish to understand the image processing used in this field.

2.2. Preprocessing Methods for Retinal Images

Image preprocessing is the initial stage of image analysis, in which low-level operations are performed on global or local image areas to reduce noise and enhance contrast. These enhancement steps contribute significant gains in the quality and accuracy of object detection, segmentation, and feature extraction for classification: they remove anomalous image data caused by illumination effects or camera acquisition noise, and they increase intra-image contrast between objects. Inter-image normalization can also improve automated retinal imaging results by compensating image sequences for differences in camera specifications, illumination, camera angle, and retinal pigmentation.1 The following section discusses several methods that provide the preprocessing operations required for successful retinal pathology classification.

2.2.1. Illumination Effect Reduction

Illumination effects due to changes in reflectance cause nonuniform variance in pixel intensities across an image. These changes in pixel intensity


Michael Dessauer and Sumeet Dua

are due to varying retinal reflectivity and background fluorescence from the retinal capillary network.2 Other factors contributing to nonuniform illumination include varying degrees of pupil dilation, involuntary eye movement, and the presence of a disease that changes normal reflectivity properties.3 These effects can hinder segmentation of the ocular anatomy because the illumination causes shading artifacts and vignetting.4 The resulting changes in local image statistics produce characteristic deviations from the normal pixel representation of the ocular anatomy, which leads to misclassification through weak segmentation and feature extraction. We describe several methods that reduce illumination effects on retinal image segmentation and feature extraction, ranging from simple, local operations to complex, global ones.

2.2.1.1. Non-linear brightness transform

A direct, global method to adjust pixel intensity is the application of a brightness transform function to the image. One such function is a nonlinear point transformation that changes only the darker regions of the retinal image, allowing potential features to be detected in subsequent steps5:

y = β x^α,   (2.2)

where x is the original pixel intensity, y is the adjusted pixel intensity, 0 < α < 1, and β = inmax^(1−α), with inmax the upper limit of intensity in the image. By selecting appropriate parameters α and inmax, an image corrected for illumination effects can be created (Fig. 2.2).
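The point transform of Eq. (2.2) can be sketched as follows; the default value of α is an illustrative choice, not one prescribed by the text:

```python
import numpy as np

def brightness_transform(image, alpha=0.75):
    """Nonlinear point transform y = beta * x**alpha (Eq. 2.2).

    beta = inmax**(1 - alpha), so the maximum intensity maps to
    itself; with 0 < alpha < 1 the darker regions are raised the
    most, while bright regions are nearly unchanged.
    """
    x = image.astype(np.float64)
    inmax = x.max()                      # upper intensity limit in the image
    beta = inmax ** (1.0 - alpha)
    return beta * x ** alpha
```

Because β is tied to inmax, the transform preserves the dynamic range of the image while lifting low intensities.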

A drawback of this method is that global point transformations do not account for luminance variations caused by the anatomical regions of the eye, and can thus decrease feature contrast in certain areas.

2.2.1.2. Background identification methods

Several methods in the literature require finding a background intensity image that can be used to correct the original image for illumination effects. These methods include shade correction through median filtering and background luminosity correction through sampling. We will explain both methods.

Fig. 2.2. Nonlinear point transform of a color retinal image to correct for illumination effects.

As discussed above, shading artifacts in an eye image can lead to inaccurate classification. Thus, shade correction methods have been developed in the literature. One such technique smoothes the image with a median filter and treats the result as the background image.

The median filter belongs to the order-statistic family of smoothing spatial filters. These nonlinear filters rank the pixels in a defined local neighborhood and replace the center pixel with the median value. This simple filter is effective in reducing illumination variation in optical images while retaining edge data with minimal blurring. It has been applied in the literature to retinal image illumination reduction by first creating a filtered image with a large-scale median filter and then subtracting it from the original.6 Only anatomical constituents smaller than the filter size remain for further analysis, providing an illumination-invariant description (Fig. 2.3).
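The filter-and-subtract scheme can be sketched in plain NumPy as below; the small default window is illustrative only (Fig. 2.3 uses a 250×250 window on a full-resolution image):

```python
import numpy as np

def shade_correct(image, window=25):
    """Shade correction by median-filter background subtraction.

    A large median filter estimates the slowly varying background;
    subtracting it leaves only structures smaller than the window.
    """
    pad = window // 2
    padded = np.pad(image.astype(np.float64), pad, mode="reflect")
    h, w = image.shape
    background = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            # median of the window centered on pixel (i, j)
            background[i, j] = np.median(padded[i:i + window, j:j + window])
    return image - background, background
```

In practice a library routine such as a dedicated median filter would replace the explicit loops, which are shown here only to make the order-statistic operation visible.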

The second method, background luminosity correction through sampling, uses assumptions based on domain knowledge of retinal fundus imaging and eye geometry to estimate correction variables for recovering the true image without illumination effects. Because the retina is a curved surface with camera illumination occurring near its center, the image appears darker as the distance from the eye center increases. Using a linear model, the relation between the true image U and the observed image I is:

U(x, y) = (I(x, y) − SA(x, y)) / SM(x, y),   (2.3)


Fig. 2.3. (a) Raw green channel retinal image, (b) median filter response with window size [250×250], and (c) difference image from the previous images.

where SA is the contrast degradation and SM is the luminosity degradation. To estimate these values correctly, the background pixels must first be extracted. Several assumptions are made for a pixel in image neighborhood N: SA and SM are constant within N, at least 50% of the pixels in N are background pixels, and background pixels differ in intensity from foreground pixels.7
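Given estimates of SA and SM, Eq. (2.3) can be applied pixel-wise; the small eps guard below is our addition, not part of the model:

```python
import numpy as np

def correct_illumination(observed, s_a, s_m, eps=1e-12):
    """Recover the true image via Eq. (2.3): U = (I - SA) / SM.

    s_a (contrast degradation) and s_m (luminosity degradation) are
    assumed to have been estimated beforehand from background pixels;
    they may be scalars or per-pixel arrays. eps avoids division by
    zero where the luminosity estimate vanishes.
    """
    return (np.asarray(observed, dtype=np.float64) - s_a) / (s_m + eps)
```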

Since eye features have either very high or very low pixel intensities in the green color channel, background pixels (those not contained within retinal regions of interest) are chosen automatically by interpolating the sampling points to obtain the local neighborhood N mean and variance. The Mahalanobis distance D(x, y) is then computed and compared with a threshold t. All pixels below the threshold are determined to be background pixels.
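The thresholding step just described can be sketched as follows; for a single channel the Mahalanobis distance reduces to a normalized absolute difference, and the parameter names and default t are illustrative:

```python
import numpy as np

def background_mask(green, local_mean, local_std, t=1.0, eps=1e-12):
    """Label background pixels by thresholding the Mahalanobis distance.

    For one channel, D(x, y) = |I(x, y) - mu_N(x, y)| / sigma_N(x, y),
    where mu_N and sigma_N are the interpolated local-neighborhood mean
    and standard deviation. Pixels with D below t are taken as background.
    """
    d = np.abs(np.asarray(green, dtype=np.float64) - local_mean) / (local_std + eps)
    return d < t
```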
