
Computational Methods for Feature Detection in Optical Images

background pixels:

    D(x, y) = (I(x, y) − M_N(x, y)) / σ_N(x, y).          (2.4)

Here, σ_N(x, y) and M_N(x, y) are the standard deviation and mean intensity for location (x, y) in neighborhood N, which is deemed part of the background. This method has also recently been implemented using a nonuniform sampling grid [5]. The sampling points are chosen at nonuniform locations in the r and θ directions, with point density increasing with r.
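The local-statistics distance of Eq. (2.4) can be sketched in a few lines. This is a minimal illustration, assuming a square neighborhood and SciPy's `uniform_filter` for the local mean and variance; the nonuniform polar sampling grid mentioned above is not reproduced, and the window `size` is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def background_distance(image, size=25):
    """Distance of each pixel from its local background statistics.

    Assumes the form D(x, y) = (I(x, y) - M_N(x, y)) / sigma_N(x, y),
    with M_N and sigma_N the mean and standard deviation over a
    size x size neighborhood N (a square window stand-in for the
    neighborhood used in the text).
    """
    img = image.astype(float)
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img ** 2, size)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    local_std = np.sqrt(local_var)
    eps = 1e-6  # avoid division by zero in perfectly flat regions
    return (img - local_mean) / (local_std + eps)
```

Pixels far above their local background mean (e.g. bright lesions) receive large positive D values, while flat background regions map to values near zero.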

2.2.2. Image Normalization and Enhancement

Like many other fields of imaging research, most retinal images must undergo transformations of color and/or intensity channels to enhance contrast and increase discrimination in subsequent processing stages. Several methods achieve image enhancement, ranging from simple nonlinear spatial transformations, as discussed in Sec. 2.2.1, to spectral analysis in the Fourier domain [8]. The typical result of image normalization and enhancement operations is a change in the distribution of the image's color/intensity probability density function (PDF) that increases the range of useful pixel values without significantly degrading the image. Several such methods that have been applied to retinal image research are described in detail below.

2.2.2.1. Color channel transformations

Due to the inter- and intra-image variability of retinal color brought on by factors such as age, camera, skin pigmentation, and iris color, color normalization is a necessary step in retinal image studies [1]. A typical retinal fundus image produces three M × N pixel matrices contributing to the red, green, and blue (RGB) color model based on the Cartesian coordinate system (Fig. 2.4).

A more intuitive model for human interpretation, HSI (hue, saturation, and intensity), decouples intensity from the color-carrying information. Transforming retinal images from the RGB model allows normalization of the intensity channel without changing the color information carried in the hue and saturation levels, which has proven effective as a preprocessing step in the localization of several retinal features [9].

Michael Dessauer and Sumeet Dua

Fig. 2.4. RGB color cube used in three-channel retinal fundus imaging.

To convert an RGB image (normalized to the range [0, 1]) to HSI, the following equations are applied at each pixel location [8]:

Hue component:

    H = θ            if B ≤ G,
    H = 360° − θ     if B > G,

with

    θ = cos⁻¹{ (1/2)[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^(1/2) },

Saturation component:

    S = 1 − [3/(R + G + B)] min(R, G, B), and

Intensity component:

    I = (1/3)(R + G + B).

Using these equations, converted retinal intensity (I) images can be further enhanced to provide sharp retinal feature contrast for better segmentation and classification results (Fig. 2.6).
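The RGB-to-HSI conversion above can be written directly as a vectorized function. This is a sketch of the standard conversion (hue in degrees, inputs normalized to [0, 1]); the small `eps` guard for gray and black pixels is an implementation detail not spelled out in the text:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (values in [0, 1], shape (..., 3)) to HSI.

    theta comes from the arccos expression; H = theta if B <= G,
    else 360 - theta. S = 1 - 3*min(R, G, B)/(R + G + B), and
    I = (R + G + B)/3.
    """
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-10  # guard against division by zero for gray/black pixels
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = np.where(B > G, 360.0 - theta, theta)
    I = (R + G + B) / 3.0
    S = 1.0 - np.minimum(np.minimum(R, G), B) / (I + eps)
    return np.stack([H, S, I], axis=-1)
```

Subsequent smoothing and contrast enhancement can then operate on the I channel alone, leaving H and S untouched.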


Fig. 2.5. (a) Example smoothing kernel, (b) image convolution at location I (2,2).

Fig. 2.6. Preprocessing steps of converting RGB to HSI, intensity smoothing, and adaptive local contrast enhancement outputs.

2.2.2.2. Image smoothing through spatial filtering

Spatial filters are used in the preprocessing stages to reduce noise and to help connect edge and object regions by combining neighborhood pixels to transform the central pixel's intensity or color value. The Gaussian kernel g(x, y) is formalized as [10]:

    g(x, y) = [1/(2πσ²)] e^(−(x² + y²)/(2σ²)),          (2.5)

where σ determines the width of the kernel. A smoothing mask or kernel is a matrix of coefficients (usually rectangular or circular) that is convolved with an image. The coefficients increase toward the center of the kernel to give larger weight to the pixel locations closest to the central pixel of the passing image window. An example of a smoothing filter is provided below along with the convolution summation.

    G = w1 I(1, 1) + w2 I(1, 2) + · · · + w9 I(3, 3)          (2.6)

      = Σ_{i=1}^{9} wi Ii.                                    (2.7)

Performing smoothing operations on an image will reduce sharp transitions, which can lead to a loss of edge information. Experimentation with the size and distribution of the smoothing coefficients is required to find the best results for a particular image. A Gaussian smoothing function was implemented in Ref. [8] to reduce noise and ensure that subsequent contrast enhancement does not amplify noisy data (Fig. 2.6).
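The Gaussian kernel of Eq. (2.5) and the convolution of Eqs. (2.6)–(2.7) can be sketched together. This is a minimal illustration using SciPy's `convolve`; the kernel `size`, `sigma`, and the `'nearest'` border mode are illustrative choices, not prescribed by the text:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    """Sample g(x, y) = [1/(2*pi*sigma^2)] * exp(-(x^2 + y^2)/(2*sigma^2))
    on a size x size grid centered at the origin, then normalize so the
    coefficients sum to 1 (keeps overall image brightness unchanged).
    Weights are largest at the center and fall off with distance."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    g /= 2.0 * np.pi * sigma ** 2
    return g / g.sum()

def smooth(image, size=5, sigma=1.0):
    """Convolve the image with the Gaussian mask, i.e. the weighted
    neighborhood sum of Eqs. (2.6)-(2.7) at every pixel location."""
    kernel = gaussian_kernel(size, sigma)
    return convolve(image.astype(float), kernel, mode='nearest')
```

Increasing `sigma` widens the kernel and strengthens the smoothing, at the cost of more blurring of edges, which is exactly the trade-off the experimentation above is meant to balance.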

2.2.2.3. Local adaptive contrast enhancement

In the image processing community, “contrast” describes the amount of variation that occurs within segments of an image. Although noisy images can contain high contrast, images with high levels of contrast between anatomical regions allow for more discrimination in the segmentation stage. Locally adaptive methods can provide enhanced contrast using statistics from a local neighborhood, drawing on information from pixels in similar regions. We describe a locally adaptive contrast enhancement method that has been successfully implemented on retinal images [11]. For an image f, consider a sub-image W of size M × M centered at (i, j). Denote the mean and standard deviation of the intensity image within W by f̄_W and σ_W(f). We will find a point transformation dependent on W, so that the local intensity distribution spreads across the intensity range. W is found to be large
