Source: Computational Analysis of the Human Eye with Applications, Dua, Acharya, Ng (2011).

Computational Methods for Feature Detection in Optical Images

2.3. Segmentation Methods for Retinal Anatomy Detection and Localization

Regions of interest (ROIs) can differ in retinal images depending on the researcher's purpose for classification. The detection of particular anatomical features is required in both the preprocessing and classification steps. We will use the term "localization" when referring to an anatomical feature, such as blood vessels or an optic disk, that is assumed to be present in every image. We will use the term "detection" for finding retinal features that are not assumed to be present in every image, such as pathological indicators like lesions, cotton wool spots, and drusen. We will discuss methods for retinal image segmentation with examples, noting the strengths and weaknesses to weigh when choosing a method to implement. The discussion will begin with initial operations for finding local ROI boundaries. Next, we will discuss more advanced methods for image segmentation and detection, and provide example algorithms for finding the specific anatomical features of the retina.

2.3.1. Boundary Detection Methods

Visually distinguishable region boundaries within an image can be mathematically described as a spatial discontinuity in the image pixel values. The size, length, and variance of this discontinuity depend on the reflectance properties of the region, background, image resolution, illumination effects, and noise. We will use the term gradient to describe these discontinuities. It is important to note that gradients, with both magnitude and direction, have vector qualities. A gradient can be divided into orthogonal components to allow for the combining of discontinuities in multiple dimensions. For example, a 2D image can have a gradient, $\nabla f$, with magnitudes in the x and y directions, with their total magnitude being the norm

\nabla f = \mathrm{mag}(\nabla f) = \left(G_x^2 + G_y^2\right)^{1/2}. \quad (2.16)

The direction of the gradient vector can be calculated from the orthogonal gradient magnitudes as:

\alpha(x, y) = \tan^{-1}\left(\frac{G_y}{G_x}\right), \quad (2.17)


Michael Dessauer and Sumeet Dua

where this angle is measured from the x-axis. We will discuss several classes of algorithms that find and exploit these discontinuities for retinal image anatomy segmentation.
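A minimal NumPy sketch of Eqs. (2.16)-(2.17) follows; the function name is illustrative, not from the chapter, and `np.arctan2` is used in place of a plain arctangent so the $G_x = 0$ case is handled.

```python
import numpy as np

def gradient_magnitude_direction(img):
    """Per-pixel gradient norm (Eq. 2.16) and angle from the x-axis (Eq. 2.17)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)          # central differences along y (rows) and x (cols)
    mag = np.sqrt(gx**2 + gy**2)       # Eq. (2.16)
    angle = np.arctan2(gy, gx)         # Eq. (2.17)
    return mag, angle

# Usage: a vertical step edge has a purely horizontal gradient (angle 0).
step = np.tile([0., 0., 1., 1.], (4, 1))
mag, ang = gradient_magnitude_direction(step)
```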

2.3.1.1. First-order difference operators

We will describe the gradient of an image f at a point (x, y) as the magnitude of the first-order derivatives in both the x and y directions. Matrix kernels representing the derivatives, $\partial f/\partial x$ and $\partial f/\partial y$, are convolved with the image f at each (x, y). These values are then used to find the gradient. The Prewitt and Sobel operators use two 3 × 3 masks with coefficients of opposite sign to calculate gradients in the x and y directions. The gradient magnitude is approximated using the following,

\nabla f \approx |G_x| + |G_y|. \quad (2.18)

A threshold, t, is then used to determine if an edge exists at location (x, y): if $\nabla f > t$, the resulting binary image BW has BW(x, y) = 1 (Fig. 2.10). Sobel differs from Prewitt by using a larger coefficient in the center of the row or column to help smooth the calculation by giving more weight to the four-connected pixel neighbors. These methods can provide useful information at a small computational expense, but, in practice, are typically only a part of a chain of methods used to obtain reliable results.
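A sketch of the Sobel operator and the threshold test of Eq. (2.18), assuming SciPy is available; kernel values are the standard Sobel masks, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel kernels: opposite-signed columns/rows, with weight 2 on the
# four-connected neighbours (Prewitt would use 1 there instead).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(img, t):
    """Binary edge map via Eq. (2.18): BW = (|Gx| + |Gy|) > t."""
    img = img.astype(float)
    gx = convolve(img, SOBEL_X, mode='nearest')
    gy = convolve(img, SOBEL_Y, mode='nearest')
    return (np.abs(gx) + np.abs(gy)) > t

# Usage: edges of a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
bw = sobel_edges(img, t=1.0)
```

The choice of t trades missed edges against noise responses, which is why these operators are usually followed by further processing.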

2.3.1.2. Second-order boundary detection

Finding second-order approximations of the image in spatial directions can be helpful when combined with smoothing functions and first-order derivatives for detecting corners and localizing boundaries. The Laplacian is an isotropic derivative operator that linearly combines the second-order derivative in the x and y directions as shown

\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}. \quad (2.19)

The Laplacian value at (x, y) will have large values for locations with discontinuities in both x and y directions, as well as de-emphasize areas with close-to-linear intensity changes. The discrete approximation of the Laplacian can use either four- or eight-connected neighborhoods (Fig. 2.11).
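The two discrete neighborhoods can be sketched as convolution kernels, assuming SciPy; the sign convention (negative versus positive center) varies between texts, and only the response magnitudes and zero crossings matter here.

```python
import numpy as np
from scipy.ndimage import convolve

# Discrete Laplacian kernels: four-connected and eight-connected neighborhoods.
LAPLACIAN_4 = np.array([[0,  1, 0],
                        [1, -4, 1],
                        [0,  1, 0]], dtype=float)
LAPLACIAN_8 = np.array([[1,  1, 1],
                        [1, -8, 1],
                        [1,  1, 1]], dtype=float)

def laplacian(img, eight_connected=False):
    """Discrete approximation of Eq. (2.19) at every pixel."""
    k = LAPLACIAN_8 if eight_connected else LAPLACIAN_4
    return convolve(img.astype(float), k, mode='nearest')

# A linear intensity ramp is de-emphasized: its interior response is zero.
ramp = np.tile(np.arange(6, dtype=float), (6, 1))
resp = laplacian(ramp)
```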


Fig. 2.10. First-order difference operators for edge detection. Top: original grayscale matrix and image; center: Prewitt kernels and edge response binary images; and bottom: Sobel kernels and edge response binary images.

Fig. 2.11. Matrices of four- and eight-connected Laplacian kernels.

This method does not provide much useful boundary information by itself, but when used with a Gaussian smoothing function or first-order derivatives, can provide useful information with boundary detection (Fig. 2.12).


Fig. 2.12. Top: original grayscale image; bottom left: Laplacian kernel response; bottom center: LoG edge detection; and bottom right: Canny edge detector.

The Laplacian response provides little useful information due to its sensitivity to noise, but when combined with a smoothing function (described in Sec. 2.2.2.2), edges are found at zero-crossing locations. The initial step is to smooth the image with a Gaussian smoothing kernel to reduce noise, formalized as

h(r) = -e^{-\frac{r^2}{2\sigma^2}}, \quad (2.20)

where $r^2 = x^2 + y^2$ and σ is the standard deviation to control the smoothing scale. The convolution of this function blurs an image, thus reducing noise and boundary clarity. The Laplacian of h is then

\nabla^2 h(r) = -\left[\frac{r^2 - 2\sigma^2}{\sigma^4}\right] e^{-\frac{r^2}{2\sigma^2}}. \quad (2.21)

The kernel that approximates the LoG (referred to as the Mexican hat function) has a large positive central term (similar to first- and second-order difference operators), then is surrounded by negative values, with zero values
