

Figure 9.12: Fuzzy C-means (FCM) algorithm. Input is an image volume. An observation vector is built. Initially, the current centroid is given by the initial input centroid, and K is the number of classes. With the observation vector the membership function is computed, and with it a new centroid is computed. This new centroid is compared to the current centroid, and if the error is too large, the new centroid is copied into the current centroid and the process repeats. Otherwise, if the error is below the threshold, the membership function is saved, and the result is a classified image.

4. Convergence was checked by computing the error between the previous and current centroids, $\|v^{(p+1)} - v^{(p)}\|$. If the algorithm had converged, an exit was required; otherwise, one would increment p and go to step 2 to compute the fuzzy membership function again. The output of the FCM algorithm was K sets of fuzzy membership functions. We were interested in the membership value at each pixel for each class. Thus, if there were K classes, the algorithm produced K images and K matrices of membership functions to be used in computing the final speed terms.


Figure 9.13: Mathematical expression of the FCM algorithm. Equations for the observation vector, centroid of the class, sum of the membership function, membership computation, centroid computation, and error computation are shown.
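For concreteness, the iteration of Fig. 9.12 with the update equations of Fig. 9.13 can be condensed into a few lines. The following is a minimal sketch, not the chapter's implementation; it assumes a flattened 1-D observation vector, the common fuzzifier m = 2, and random initial centroids, and names such as `fcm`, `tol`, and `max_iter` are our own.

```python
import numpy as np

def fcm(x, K, m=2.0, tol=1e-4, max_iter=100):
    """Fuzzy C-means on a 1-D observation vector x (e.g., a flattened image)."""
    rng = np.random.default_rng(0)
    v = rng.choice(x, size=K)                      # initial centroids
    for _ in range(max_iter):
        # Membership computation: u_ik proportional to |x_i - v_k|^(-2/(m-1)),
        # normalized so that the memberships at each pixel sum to 1.
        d = np.abs(x[:, None] - v[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # New centroid computation: membership-weighted mean of the observations.
        um = u ** m
        v_new = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        # Error computation: ||v(p+1) - v(p)||; stop when below the threshold.
        if np.linalg.norm(v_new - v) < tol:
            v = v_new
            break
        v = v_new                                  # copy new centroid to current
    return u, v                                    # K membership maps and centroids
```

Reshaping each column of `u` back to the image dimensions yields the K membership images described above.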

9.3.3 Graph-Based Segmentation Method

The graph segmentation method (GSM) segments an image by treating it as a graph G = (V, E), where the vertices V are the pixels and the edges E are pairs of pixels. Using a weight function w(e), where e is an edge (vi, vj), the weights of the edges are computed and the edges are sorted by weight in nondecreasing order. Initially, each pixel vi is segmented into its own component Ci.

For each edge (vi, vj) in the list, a decision criterion D is applied and a decision is made whether or not to merge the components Ci and Cj. After this decision is made on each edge in the list, the result is a list of the final components of the segmented image.

The input image is first smoothed by a given smoothing parameter σ. The input constant k determines the size preference of the components by changing the threshold function τ(C) (see Figs. 9.14 and 9.15).

The decision criterion D compares the difference between components Ci and Cj with the minimum internal difference among Ci and Cj. The difference between two components is defined as the minimum weight of the edges that connect the two components:

$$\mathrm{Dif}(C_i, C_j) = \min_{v_i \in C_i,\; v_j \in C_j,\; (v_i, v_j) \in E} w((v_i, v_j)). \tag{9.12}$$


Figure 9.14: Graph segmentation method (GSM). The input image is smoothed given a smoothing parameter. The image is treated as a graph, with each pixel treated as a vertex. An edge is a pair of pixels. Using a weight function w(e), the weights of the edges are computed and the edges are listed by weight in nondecreasing order. Initially, each pixel is segmented into its own component. For each edge in the list, a decision criterion D is applied and the components are merged accordingly. Input constant k determines the size preference of the components by changing the threshold function. The result is a segmented image made up of the final merged components.

The minimum internal difference among two components Ci and Cj is defined as the minimum of the sum of the internal difference and the threshold function of each component:

$$M\mathrm{Int}(C_i, C_j) = \min\bigl(\mathrm{Int}(C_i) + \tau(C_i),\; \mathrm{Int}(C_j) + \tau(C_j)\bigr), \tag{9.13}$$

where the internal difference Int(C) of a component C is defined as the maximum weight in the minimum spanning tree MST(C, E) of the component:

$$\mathrm{Int}(C) = \max_{e \in MST(C, E)} w(e), \tag{9.14}$$


Figure 9.15: Decision criterion D for the graph segmentation method (GSM). After the list of edge weights is sorted and each pixel is segmented into its own component, the decision criterion D is applied to each edge. The constant k is used in determining the threshold function. First the difference between the two components to which the two pixels making up the edge belong is computed. Then the minimum internal difference among those two components is computed. If the difference between the two components is greater than the minimum internal difference among them, then D applied to the two components is true, and the two components are not merged because there is evidence for a boundary between them. Otherwise, if the difference between the two components is less than or equal to the minimum internal difference, then D applied to the two components is false, and the two components are merged into one component that contains both pixels of the edge. This decision criterion is applied to all the edges of the list, and the final result is a segmentation of the pixels into components.

and where the threshold function τ(C) is defined as

$$\tau(C) = \frac{k}{|C|}, \tag{9.15}$$

where k is the input constant and |C| is the size of the component C.


Figure 9.16: Graph segmentation method (GSM) equations. The internal difference of a component is the maximum edge weight of the edges in its minimum spanning tree. The difference between two components is the minimum edge weight of the edges formed by two pixels, one belonging to each component. The threshold function of a component is the constant k divided by the size of that component, where the size of a component is the number of pixels it contains. The minimum internal difference among two components is the minimum value of the sum of the internal difference and the value of the threshold function of each component.

If the difference between the two components is greater than the minimum internal difference among the two components, then the two components are not merged. Otherwise, the two components are merged into one component.
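The merge rule of Eqs. (9.12)–(9.15) maps naturally onto a union-find (disjoint-set) structure that tracks each component's size and internal difference. The sketch below is an illustration of those equations, not the chapter's code; `graph_segment` is a name of our own choosing, and the edge list is assumed precomputed from the smoothed image.

```python
def graph_segment(edges, n, k):
    """Segment n pixels given edges as (weight, vi, vj) tuples and constant k."""
    parent = list(range(n))
    size = [1] * n
    int_diff = [0.0] * n                       # Int(C): max MST edge weight so far

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]      # path compression
            a = parent[a]
        return a

    for w, vi, vj in sorted(edges):            # nondecreasing edge weight
        ci, cj = find(vi), find(vj)
        if ci == cj:
            continue
        # MInt(Ci, Cj) = min(Int(Ci) + k/|Ci|, Int(Cj) + k/|Cj|)   (Eqs. 9.13, 9.15)
        mint = min(int_diff[ci] + k / size[ci], int_diff[cj] + k / size[cj])
        if w <= mint:                          # Dif <= MInt: no boundary, merge
            parent[cj] = ci
            size[ci] += size[cj]
            int_diff[ci] = w                   # w is the largest MST edge so far
        # else: Dif > MInt, evidence for a boundary, do not merge
    return [find(i) for i in range(n)]
```

Because the edges are processed in nondecreasing order, the weight of a merging edge is always the largest weight in the merged component's minimum spanning tree, which is why Int(C) can be updated with a simple assignment.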

9.4 Synthetic System Design and Its Processing

9.4.1 Image Generation Process

The model equation for generation of the observed image is shown in Eq. (9.16):

$$I_{\mathrm{observe}} = I_{\mathrm{original}} + \eta. \tag{9.16}$$


Figure 9.17: Synthetic pipeline with σ² = 500.

This can be expressed for every pixel as

$$I_{\mathrm{observe}}(x, y) = I_{\mathrm{original}}(x, y) + \eta(x, y), \tag{9.17}$$

where η(x, y) ∼ N(0, σ²), σ² is the variance of the noise, and N is the Gaussian distribution. The output synthetic image of the Gaussian image generation process is shown in Fig. 9.17.
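Equation (9.17) amounts to adding an independent Gaussian sample to each pixel. A minimal sketch, assuming the original image is a NumPy array; the function name is illustrative:

```python
import numpy as np

def generate_observed(original, sigma2, seed=0):
    """I_observe(x, y) = I_original(x, y) + eta(x, y), with eta ~ N(0, sigma2)."""
    rng = np.random.default_rng(seed)
    # rng.normal takes the standard deviation, so the variance must be square-rooted.
    noise = rng.normal(0.0, np.sqrt(sigma2), size=original.shape)
    return original + noise
```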

Figure 9.18 shows the eight different directions in which the core class of the lumen can lie with respect to the crescent moon class. The darkest region is the core

Figure 9.18: σ² = 500, all directions, large noise protocol. With respect to the center of the lumen area, the core class is shown in eight different orientations. In the top row, from right to left: east, northeast, north; in the second row: northwest, southeast, south; in the third row: southwest, west, west.


Figure 9.19: Images with 10 different variances using the large noise protocol. The gray scale model is perturbed with variance (σ²) varying from 100 to 1000. In the top row, from right to left: σ² = 100 and 200; in the second row: σ² = 300 and 400; in the third row: σ² = 500 and 600; in the fourth row: σ² = 700 and 800; in the fifth row: σ² = 900 and 1000.

lumen, and the next lightest region that surrounds the core is the crescent moon class. Figure 9.19 shows the core class and the crescent moon class of the lumen with perturbation. The darkest region is the core lumen, and the next lightest region that surrounds the core is the crescent moon class. The variance (σ²) was varied from 100 to 1000.

9.4.2 Lumen Detection and Quantification System

The system pipeline is shown in Fig. 9.20. Step one consists of the synthetic generation process discussed in section 9.4.1. This consists of synthesizing the two lumens corresponding to the left and right sides of the neck. The grayscale image


Figure 9.20: Block diagram of the system. A gray scale image is generated, with parameters being the number of lumens, the locations of the lumens, the image size (rows and columns), the number of classes K, and a Gaussian perturbation with a given mean and variance. The result is a noisy image with multiple lumens. This image is then processed by the lumen detection and quantification system (LDQS). This system includes several steps: classification, binarization, connected component analysis (CCA), boundary detection, overlaying, and error measurement. The final results are the lumen boundary errors and overlays.

generation process takes in the noise parameters (the mean and variance), the locations of the lumens, the number of lumens, and the class intensities of the lumen core, the crescent moon, and the background.

Step two consists of the lumen detection and quantification system (LDQS) (see Fig. 9.20). The major block is the classification system discussed in section 9.3. Next comes the binarization unit, which converts the classified input into a binarized image and also performs region merging. A connected component analysis (CCA) block then labels the detected regions. We also need the region-to-boundary estimator, which yields the boundaries of the left and right lumens. Finally, we have the quantification system (called the Ruler), which is used to measure the boundary error.
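Read as data flow, the LDQS chains the blocks of Fig. 9.20 in sequence. The sketch below is only a structural outline; every stage function named here (`classify`, `merge_and_binarize`, and so on) is a hypothetical placeholder for the corresponding block, not an API from the chapter.

```python
def ldqs(gray_image, K, rois, ground_truth):
    """Structural sketch of the LDQS pipeline; all stage functions are hypothetical."""
    classified = classify(gray_image, K)               # classification system (section 9.3)
    binary = merge_and_binarize(classified, K, rois)   # binarization unit + region merging
    labeled = connected_components(binary)             # CCA: label left/right lumens
    boundaries = region_to_boundary(labeled)           # region-to-boundary estimator
    errors = ruler(boundaries, ground_truth)           # Ruler: boundary error measurement
    return boundaries, errors
```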

The LDQS consists of a lumen detection system and a lumen quantification system. The lumen detection system (LDS) is shown in Fig. 9.21. The detection process is done by the classification system, while the identification is done by the CCA system. We have used three classification systems in our


Figure 9.21: Block diagram of the lumen detection system (LDS). The gray scale image with multiple lumens is first classified by one of the classifiers (Markovian, fuzzy, or Bayesian). The result is a classified image with multiple lumens, with each lumen having multiple class regions. Within each lumen, these multiple regions are merged in the binarization process, given the number of classes K. They are labeled using connected component analysis (CCA). The LDS detects and labels each lumen.

processes (see section 9.3). The parameter used is the number of classes (K), as shown in Fig. 9.21. The CCA block also takes the number of classes K as input.

The lumen detection and identification is further detailed in Fig. 9.22. The detection system inputs the classified image and outputs the binary regions of the lumen. Because of boundary classes and plaque diffusion in the lumen area, multiple classes appear inside the lumen as well. We merge these classes to generate the complete lumen, and the final detection of the lumen takes place as shown in Fig. 9.22. Finally, the system identifies the left and right lumens using CCA analysis.

9.4.3 Region Merging for Lumen Detection

Figure 9.23 shows how the regions with multiple classes are merged. We will discuss the region merging strategy a little differently for the real data analysis,


Figure 9.22: Detection and identification of lumen. The input image is a classified image with multiple classes inside the lumens. Given the number of classes K and the region of interest (ROI) of each region, the appropriate classes are merged and the image is binarized. The detected lumens are then identified using connected component analysis (CCA), and the left lumen and right lumen are identified.

due to the bifurcations in the arteries of the plaqued vessels (see sections 9.6.1 and 9.6.2). Figure 9.23 illustrates the region merging algorithm. The input image has lumens which have one, two, or more classes. If the number of classes in the ROI is one, then that class is selected; if two classes are in the ROI, then the minimum class is selected; and if there are three or more classes in the ROI, then the minimum two classes are selected. The selected classes are merged by assigning all the pixels of the selected classes a single level value. This process results in the binarization of the left and right lumens.
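The class-selection rule can be written directly as a small routine. This is a sketch of the rule as stated above, assuming classes are encoded as integer labels whose numeric order matches the "minimum class" ordering; `merge_lumen_classes` and `roi_mask` are illustrative names.

```python
import numpy as np

def merge_lumen_classes(class_map, roi_mask):
    """Binarize one lumen ROI by the one/two/three-or-more class rule."""
    labels = np.unique(class_map[roi_mask])        # sorted class labels in the ROI
    if labels.size == 1:
        selected = labels                          # one class: select it
    elif labels.size == 2:
        selected = labels[:1]                      # two classes: select the minimum class
    else:
        selected = labels[:2]                      # three or more: select the minimum two
    binary = np.zeros(class_map.shape, dtype=np.uint8)
    binary[roi_mask & np.isin(class_map, selected)] = 1   # one level value
    return binary
```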

The binary region labeling process is shown in Fig. 9.24. The process uses the CCA approach, scanning from top to bottom and from left to right. Input is an image in which the lumen regions are binarized. The CCA first labels the image from top to bottom, and then from left to right. The result is an image labeled from left to right.
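A classical two-pass labeling implements this top-to-bottom, left-to-right scan: the first pass assigns provisional IDs and records label equivalences, and the second pass resolves them. A minimal 4-connected sketch, not the chapter's implementation:

```python
import numpy as np

def label_components(binary):
    """Two-pass connected component labeling (4-connectivity)."""
    labels = np.zeros(binary.shape, dtype=int)
    parent = {}                                    # equivalence table (union-find)

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]          # path compression
            a = parent[a]
        return a

    next_id = 1
    rows, cols = binary.shape
    for r in range(rows):                          # pass 1: provisional IDs
        for c in range(cols):
            if not binary[r, c]:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            neighbors = [l for l in (up, left) if l]
            if not neighbors:
                parent[next_id] = next_id          # new unique ID
                labels[r, c] = next_id
                next_id += 1
            else:
                root = min(find(l) for l in neighbors)
                labels[r, c] = root
                for l in neighbors:                # record equivalence
                    parent[find(l)] = root
    for r in range(rows):                          # pass 2: propagate final labels
        for c in range(cols):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```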

The ID assignment process of the CCA for each pixel is shown in Fig. 9.25. In the CCA, each white pixel in the input binary image is assigned a unique ID. The label propagation process then results in connected components. The propagation of