Computational Analysis of the Human Eye with Applications, Dua, Acharya, Ng (2011)

Computational Methods for Feature Detection in Optical Images

Fig. 2.19. (a) Multi-threshold segmentation, (b) 100 × 100 pixel windowed segmentation, (c) 50 × 50 pixel windowed segmentation, (d) 10 × 10 pixel windowed segmentation.

Regions with varying intensities can be segmented by adaptively setting thresholds in windowed subregions of the image. The window size should be small enough that nonuniform illumination has minimal effect within any single window covering a given feature. Figure 2.19 demonstrates how window size can alter a simple threshold segmentation of blood vessels in an image with nonuniform illumination.
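The windowed strategy above can be sketched as follows; the window size and the per-window mean cutoff are illustrative choices, not the chapter's exact parameters.

```python
import numpy as np

def windowed_threshold(image, win=50):
    """Segment by thresholding each win x win window independently,
    so each threshold adapts to the local illumination level.
    The per-window cutoff here is simply the window mean; any local
    statistic (e.g., Otsu's method per window) could be substituted."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=bool)
    for r in range(0, h, win):
        for c in range(0, w, win):
            block = image[r:r + win, c:c + win]
            out[r:r + win, c:c + win] = block > block.mean()
    return out
```

Shrinking win (as in Fig. 2.19(b)-(d)) makes each threshold more local, at the cost of noisier segment boundaries.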

2.3.4. Region-Based Methods for Image Segmentation

Although edge detection and linking methods can provide useful information for boundary detection and segmentation, results are not always reliable when boundaries are obscured or noisy. In most cases, region-based segmentation can provide accurate results without depending on linked boundaries to encapsulate an anatomical retinal feature. We will discuss methods that use discriminatory intra-retinal feature statistics to segment regions from within an image.

2.3.4.1. Region growing

This method takes provided seed point locations and groups surrounding pixels (in a four- or eight-connected neighborhood, for example) together based on a predefined statistical similarity. The basic formulation is

∪_{i=1}^{n} Ri = R,    (2.30)

where Ri is a connected region and i = 1, 2, . . . , n; Ri ∩ Rj = ∅ for all i and j with i ≠ j; P(Ri) = TRUE for i = 1, 2, . . . , n; and P(Ri ∪ Rj) = FALSE for adjacent Ri and Rj.


Michael Dessauer and Sumeet Dua

Similarity measurement, P(Ri), and seed point location decisions are based on domain knowledge of the anatomical feature of interest, usually involving intensity, local mean, standard deviation, or higher-level textural statistics. Another necessary parameter is the stopping condition. Although a stopping condition can simply be reached when the similarity measure ceases to find similar pixels, region shape statistics can also be used to improve results when prior feature models are known. A recursive region-growing method has been used for the segmentation of yellow lesions, owing to their homogeneous gray-scale intensity.19 In Fig. 2.20, we show the results of recursive region-growing segmentation on a processed gray-level retinal image when different seed points are chosen. This simple, recursive method can attain powerful segmentation results when anatomical features form continuous regions (as in the image), but fails when regions have large statistical discontinuities caused by occlusion or illumination.
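A minimal sketch of the growing loop, assuming a four-connected neighborhood and an absolute intensity difference from the seed as the similarity predicate P; the tolerance value is illustrative.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10.0):
    """Grow a region from a seed pixel: a neighbor joins the region when
    its intensity is within tol of the seed intensity (the predicate P)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    base = float(image[seed])
    mask[seed] = True
    frontier = deque([seed])
    while frontier:  # stops when P finds no more similar pixels
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - base) <= tol):
                mask[nr, nc] = True
                frontier.append((nr, nc))
    return mask
```

A breadth-first queue replaces the chapter's recursion here to avoid deep call stacks on large regions; the result is the same set of pixels.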

Fig. 2.20. (a) Seed points used in region growing and (b) region growing using seed points and intensity similarity predicate.

2.3.4.2. Watershed segmentation

In this segmentation approach, an intensity image is represented in a 3D space where the intensity at each pixel location (x, y) denotes height (Fig. 2.21). Although this approach combines operations from both edge detection and morphological processing, the watershed uses regional peaks and valleys (regional maxima and minima) that act as catchment basins for liquid. Segments are created from this topographical representation; they are the connected components lying within a regional minimum and surrounded by a connected regional maximum.

Fig. 2.21. (a) Localized optic disk image, (b) 3D visualization with intensity magnitude in z direction, and (c) sample watershed segmentation.

The watershed segmentation algorithm can be conceptualized as follows.20 First, let M1, M2, . . . , MR be the sets of coordinates of the locations in the regional minima of a gradient image g(x, y), calculated from the original image using any of the previous methods. Let C(Mi) be the set of locations of the points in the catchment basin of regional minimum Mi, and let T[n] represent the set of locations (s, t) for which g(s, t) < n:

T[n] = {(s, t) | g(s, t) < n}.    (2.31)

T[n] is thus the set of locations in g(x, y) lying below the plane g(s, t) = n. The topography is then "flooded" in integer increments, from n = min + 1 to n = max + 1, where min and max are the minimum and maximum values of g. Now, Cn(Mi) represents the set of points in the catchment basin of Mi that are below the plane at step n:

Cn(Mi) = C(Mi) ∩ T[n].    (2.32)

This can be viewed as a binary image with a value of one at a location (x, y) that belongs to both C(Mi) and T[n]. Next, we find the union over all basins:

C[n] = ∪_{i=1}^{R} Cn(Mi).    (2.33)

Then, C[max + 1] is the union of all catchment basins:

C[max + 1] = ∪_{i=1}^{R} C(Mi).    (2.34)


As the algorithm begins at C[min + 1] = T[min + 1], watershed segmentation proceeds recursively, with step n occurring only after C[n − 1] has been found. To build C[n] from C[n − 1], let Q[n] be the set of connected components in T[n]. For each connected component q ∈ Q[n], the intersection q ∩ C[n − 1] is either empty, contains a single connected component of C[n − 1], or contains multiple connected components of C[n − 1]. C[n] depends on which of these conditions is satisfied:

(1) the empty set occurs when a new minimum is found, and q is added to C[n − 1] to form C[n];

(2) one connected component means q lies within an existing basin, and q is likewise added to C[n − 1] to form C[n]; and

(3) multiple connected components mean all or part of a ridge separating two or more catchment basins has been found; thus, a "dam" (a one-pixel-thick connecting ridge) is built by dilating q ∩ C[n − 1] with a 3 × 3 structuring element of ones, constraining the dilation to q.

The watershed transform has been used in retinal image analysis to segment the optic disk, using the red channel of the RGB color space.21 A sample watershed result is shown in Fig. 2.21(c), demonstrating how the method uses regional minima and maxima for segmentation.
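The flooding of Eqs. (2.31)-(2.34) can be approximated with a marker-driven priority-queue flood (a Meyer-style variant commonly used in practice, not the chapter's exact recursion): each basin's front advances in order of increasing gradient height, and a pixel reached by fronts from two different basins becomes part of the dam. The label values and geometry below are illustrative.

```python
import heapq
import numpy as np

def flood_watershed(grad, markers):
    """Marker-driven flooding of a gradient image. Unclaimed pixels are
    visited in order of increasing height; where fronts from two distinct
    basins meet, the pixel is labeled -1, forming a one-pixel-thick dam."""
    h, w = grad.shape
    labels = markers.astype(int).copy()
    heap, tick = [], 0
    for r in range(h):
        for c in range(w):
            if labels[r, c] > 0:  # seed the fronts at the regional minima
                heapq.heappush(heap, (grad[r, c], tick, r, c)); tick += 1
    while heap:
        _, _, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and labels[nr, nc] == 0:
                # basins already adjacent to this unclaimed pixel
                near = {labels[nr + a, nc + b]
                        for a, b in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= nr + a < h and 0 <= nc + b < w
                        and labels[nr + a, nc + b] > 0}
                labels[nr, nc] = near.pop() if len(near) == 1 else -1
                if labels[nr, nc] > 0:
                    heapq.heappush(heap, (grad[nr, nc], tick, nr, nc)); tick += 1
    return labels
```

Dammed pixels are never pushed back onto the queue, so fronts cannot flow across a ridge into a neighboring basin.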

2.3.4.3. Matched filter segmentation

In cases where prior models of retinal anatomy are available, we can create kernels (templates) that are convolved with the image, producing maximal responses at locations of strong template match. Blood vessels in retinal images have intensity characteristics that allow successful template modeling, given their piece-wise linear segments and Gaussian intensity profiles.22 Using these assumptions, we can formulate the Gaussian curve as:

h(x, y) = A{1 − k · e^{−d²/2σ²}},    (2.35)

where d is the perpendicular distance between the point (x, y) and a straight line passing through the center of the blood vessel along its length, σ defines the spread of the intensity profile, and A is the local background intensity. One required step that adds to the complexity of matched filtering is that the kernel must be rotated to find objects at various orientations. A rotation matrix is used, given by:

r̄i = [cos θi  −sin θi]
     [sin θi   cos θi],    (2.36)

where θi is the orientation, with a corresponding point in the rotated coordinate system given by:

p̄i = [u v] = p · r̄iᵀ.    (2.37)

If we divide the orientations into 15° increments, 12 kernels are necessary to convolve with the image to find all orientations, with the maximum response chosen for each location (x, y). The Gaussian curve tail is truncated at u = ±3σ, and a neighborhood N is defined as being within u. The weights in the ith kernel are given as:

K̄i(x, y) = −e^{−u²/2σ²}  for pi ∈ N.    (2.38)

A will denote the number of points in N, with the mean value mi of the kernel given as:

mi = (1/A) Σ_{pi ∈ N} K̄i(x, y).    (2.39)

The convolution mask is then given by:

Ki(x, y) = K̄i(x, y) − mi  for pi ∈ N.    (2.40)

In Fig. 2.22, we present an example of matched filter kernels and a segmentation result obtained by applying a simple magnitude threshold to the maximum filter response.

Fig. 2.22. (a) 3D representation of a matched filter used for blood vessel detection, (b) filter bank for 15° orientation increments, and (c) vessel segmentation results using a low-bound threshold.
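Equations (2.35)-(2.40) suggest the following kernel-bank construction; the ±3σ truncation and 15° spacing follow the text, while the numeric defaults for σ and the along-vessel length are illustrative.

```python
import numpy as np

def matched_filter_bank(sigma=2.0, length=9, n_orient=12):
    """Build n_orient matched-filter kernels, 15 degrees apart, each
    modeling a vessel as an inverted Gaussian cross-section (Eq. 2.38),
    truncated at u = +/-3*sigma and made zero-mean (Eqs. 2.39-2.40)."""
    s = int(np.ceil(max(3 * sigma, length / 2)))
    ys, xs = np.mgrid[-s:s + 1, -s:s + 1]
    kernels = []
    for i in range(n_orient):
        t = np.deg2rad(15 * i)
        u = xs * np.cos(t) + ys * np.sin(t)    # distance across the vessel
        v = -xs * np.sin(t) + ys * np.cos(t)   # distance along the vessel
        in_n = (np.abs(u) <= 3 * sigma) & (np.abs(v) <= length / 2)  # neighborhood N
        k = np.where(in_n, -np.exp(-u ** 2 / (2 * sigma ** 2)), 0.0)
        k[in_n] -= k[in_n].mean()              # subtract m_i (Eqs. 2.39-2.40)
        kernels.append(k)
    return kernels
```

Convolving the image with each of the 12 kernels, keeping the per-pixel maximum response, and thresholding yields a vessel map as in Fig. 2.22(c).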
