


Figure 7.14: (a) Result of steps (1)–(4) with grid 3 × 3 × 3. (b) T-surfaces evolution (step 1). (c) Solution for initial grid. (d) Final solution for grid 1 × 1 × 1.

As the T-surfaces algorithm evolves, the extracted geometry becomes closer to that of the target (Fig. 7.14(c)).

However, the topology remains different. The problem in this case is that the grid resolution used is too coarse compared with the separation between the branches of the structure. Thus, the model was not flexible enough to correctly perform the surface reconstruction.

The solution is to increase the resolution, taking the partial result of Fig. 7.14(c) to initialize the model at the finer resolution. In this case, the correct result is obtained only with the finest grid (1 × 1 × 1). Figure 7.14(d) shows the desired result, obtained after nine iterations. We also observe that new portions of the branches were reconstructed, owing to the increase in T-surfaces flexibility afforded by the finer grid. We should emphasize that an advantage of the multiresolution approach is that, at the lower resolution, small background artifacts become less significant relative to the object(s) of interest. Besides, it avoids the computational cost of using a finer grid resolution to get closer to the target (see Section 7.4).
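This coarse-to-fine strategy can be summarized by a short control-flow sketch. Here `evolve` is a hypothetical callback wrapping one full run of steps (1)–(6) of Section 7.5 at a fixed grid resolution; only the reuse of the partial result is the point.

```python
def coarse_to_fine(image, evolve, resolutions=((3, 3, 3), (2, 2, 2), (1, 1, 1))):
    """Run the reconstruction at successively finer grids, seeding each
    level with the surface extracted at the previous (coarser) one.

    evolve(image, grid, init) is a caller-supplied (hypothetical) function
    wrapping one T-surfaces run; init=None means "start from the seeds"."""
    surface = None
    for grid in resolutions:
        surface = evolve(image, grid, init=surface)
    return surface
```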

Figure 7.15: (a) Example showing an incorrect result. (b) Anisotropic diffusion in a preprocessing phase improving the final result.

The T-surfaces parameters are γ = 0.01, k = 1.222, and c = 0.750. The total number of evolution steps is 13. The number of triangular elements is 10 104 at the highest resolution, and the clock time was on the order of 3 min.

Sometimes, even the finest resolution may not be enough to get the correct result. Figure 7.15(a) pictures such an example.

In this case, we segment an artery from a 155 × 170 × 165 image volume obtained from the Visible Human Project. The T-surfaces parameters are c = 0.75, k = 1.12, and γ = 0.3; the grid resolution is 4 × 4 × 4 and the freezing point is set to 10. The result of steps (1)–(6) is pictured in Fig. 7.15(a).

Among the proposals to address this problem (relaxing the threshold, mathematical morphology [59], etc.), we tested anisotropic diffusion [52]. The properties of this method (Section 7.8) enable smoothing within regions in preference to smoothing across boundaries. Figure 7.15(b) shows the correct result obtained when preprocessing the image with anisotropic diffusion and then applying steps (1)–(6).
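A minimal sketch of such a preprocessing step, assuming the classical Perona–Malik formulation of anisotropic diffusion [52] with an exponential edge-stopping function (the variant actually used in Section 7.8 may differ):

```python
import numpy as np

def anisotropic_diffusion_3d(img, n_iter=10, kappa=30.0, dt=0.1):
    """Explicit Perona-Malik diffusion on a 3D volume: smooths within
    regions while the edge-stopping function g = exp(-(|grad|/kappa)^2)
    inhibits smoothing across strong boundaries."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # forward differences along each axis (zero flux at the far border)
        grads = [np.diff(u, axis=a, append=u.take([-1], axis=a)) for a in range(3)]
        # flux = g(|grad|) * grad, applied per axis as in the original
        # 6-neighbor Perona-Malik discretization
        flux = [d * np.exp(-(d / kappa) ** 2) for d in grads]
        # divergence of the flux via backward differences
        div = sum(np.diff(f, axis=a, prepend=f.take([0], axis=a))
                  for a, f in enumerate(flux))
        u += dt * div
    return u
```

The time step dt = 0.1 respects the explicit-scheme stability bound of 1/6 for a 6-neighbor stencil in 3D.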

7.9.3 Out-of-Core Segmentation

In this section, we experimentally demonstrate our out-of-core segmentation technique. We consider three gray-level data sets and a 3D color image (Table 7.1).


Table 7.1: Statistics for preprocessing: number of meta-cells (no. of MC), times for meta-cell generation (MC generation), gradient computation (gradient), interval tree construction (IT), size of each meta-cell (MC size), and size of the interval tree (IT size)

Data set              Artery    Artery2    Kidney     ColorV
Size (MB)               3.37      20.97      4.57      63.08
No. of MC                125       1000      7600        125
MC generation (sec)        3         25         5         58
Gradient (sec)            16         88        24       1740
IT (sec)                 0.5        0.5       0.5        1.5
Total (sec)               20        114        30       1801
MC size (kB)          343.04    285.696    8.2944    2718.72
IT size (kB)           38.62     379.13    938.95     176.01

As already mentioned, T-surfaces relies on auxiliary, memory-intensive data structures. Optimizations are certainly possible; for now, however, we had to use a machine with enough memory to manage the T-surfaces structures. The machine used is a Pentium III, 863 MHz, with 512 MB of RAM and 768 MB of swap space.

There are three main steps to be considered: preprocessing, isosurface generation, and T-surfaces evolution. Preprocessing encompasses the gradient computation and the meta-cell generation. Meta-cell generation is basically divided into two steps: (a) mapping data points into meta-cells and writing the data information to the corresponding meta-cells; and (b) finding meta-intervals and computing the interval tree. As can be seen in Table 7.1, the preprocessing step can be expensive due to the gradient computation. We also observe from this table that the interval tree size (last row) is much smaller than the bound computed in Section 7.7 (8 MB).
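As an illustration of step (a) and of the meta-interval computation, a minimal in-memory sketch follows; a real out-of-core implementation would write each block to its own disk file, and all names here are illustrative:

```python
import numpy as np

def build_meta_cells(volume, cell_shape=(32, 32, 32)):
    """Partition a 3D volume into meta-cells and record, for each cell, its
    intensity meta-interval [min, max]; the intervals feed the interval tree."""
    cells, intervals = {}, {}
    cz, cy, cx = cell_shape
    nz, ny, nx = volume.shape
    for z in range(0, nz, cz):
        for y in range(0, ny, cy):
            for x in range(0, nx, cx):
                block = volume[z:z + cz, y:y + cy, x:x + cx]
                cid = (z // cz, y // cy, x // cx)
                cells[cid] = block          # would be one disk file per cell
                intervals[cid] = (float(block.min()), float(block.max()))
    return cells, intervals
```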

Isosurface generation encompasses steps (1) and (2) of the algorithm in Section 7.7. Table 7.2 reports some performance statistics for this step. In this case, we use a data cache of 15 MB.

It is important to observe that, in general, the smaller the meta-cell size, the faster the isosurface search. This fact is verified in Table 7.2, in which we vary the number of meta-cells used for the kidney data set. For instance, when using 7600 meta-cells, the algorithm can fetch all the sets of active meta-cells from disk. Thus, there are no extra I/O operations during step (1) of the segmentation algorithm.


Table 7.2: Statistics for isosurface generation on the kidney data set. This table reports the number of meta-cells (no. of MC), number of active meta-cells (activeMC), interval tree (IT) information, and total time for isosurface generation (IsoTime). The data cache size used is 15 MB

No. of MC          7600     1000      288      125
ActiveMC           1140      474      256      125
IT size (kB)     938.95   203.56    61.23    21.25
IT time (sec)         1        1        1        1
IsoTime (sec)        13       15       15       20

Also, the meta-cell technique minimizes the effect of the I/O bottleneck by reading from disk only those portions of the data necessary for step (1). Besides, the time for an interval tree query was approximately 1 sec (“IT time” in Table 7.2). As a consequence, compared with the traditional implementation, we observe a performance improvement of 2 sec when using 7600 meta-cells.
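For concreteness, the query performed in step (1) can be sketched as follows; a linear scan over the meta-intervals stands in for the interval tree of Section 7.7, which answers the same stabbing query in logarithmic time:

```python
def active_meta_cells(intervals, t):
    """Ids of meta-cells whose meta-interval contains the threshold t,
    i.e., the only cells the isosurface with isovalue t can intersect."""
    return [cid for cid, (lo, hi) in intervals.items() if lo <= t <= hi]
```

Only the returned cells need to be fetched from disk into the data cache; all other meta-cells are never read during step (1).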

The final step, the T-surfaces evolution, is reported globally in Table 7.3 for the kidney data set, maintaining the same partitions as in Table 7.2. The quantity “no. of I/O” reported in this table counts the number of times the algorithm reads a meta-cell from disk.

Again, the smaller the meta-cell size, the faster the whole process. Despite the high number of I/O operations reported in row 2 of Table 7.3, we must highlight that the total time for T-surfaces evolution without the meta-cell technique was 623 sec, against 600 sec for the worst case reported in Table 7.3. For the best case, we observe a performance improvement of 120 sec, an important result. The final surface (Fig. 7.16(c)) has 34 624 triangular elements.

Table 7.3: T-surfaces in the kidney data set. This table reports the number of meta-cells (no. of MC), the number of I/O operations (no. of I/O), the number of meta-cells that have been cut (CutMC), and the total clock time for evolution (time). The data cache size is 15 MB and the number of iterations is 16

No. of MC      7600    1000     288     125
No. of I/O     1244    4780    1818    1458
CutMC          1074     325     125      70
Time (sec)      503     570     584     600



Figure 7.16: Extracted surfaces for: (a) artery data set; (b) artery2; and (c) kidney data set.

The number of I/O operations is a problem that we must address in future work. If we compare the “no. of I/O” with the number of meta-cells that T-surfaces cuts during evolution (CutMC, row 3), we see that more efficient memory management schemes are needed.
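One candidate scheme is a bounded least-recently-used (LRU) data cache, sketched below. Here `load_fn` is a hypothetical loader that reads one meta-cell file from disk, and `io_count` plays the role of “no. of I/O” in Table 7.3:

```python
from collections import OrderedDict

class MetaCellCache:
    """Bounded LRU cache over meta-cells; evicts the least recently
    used cell when capacity is exceeded."""
    def __init__(self, load_fn, capacity=64):
        self.load_fn, self.capacity = load_fn, capacity
        self.cache = OrderedDict()
        self.io_count = 0                       # disk reads performed so far
    def get(self, cid):
        if cid in self.cache:
            self.cache.move_to_end(cid)         # mark as most recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict the LRU cell
            self.cache[cid] = self.load_fn(cid)
            self.io_count += 1
        return self.cache[cid]
```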

The parameters used in T-surfaces for the above experiments are: grid 4 × 4 × 4, freezing point = 10, γ = 0.01, k = 1.222, c = 0.750. The intensity patterns of the targets are given by the following ranges: [10, 22] for the artery data set, [195, 255] for kidney, and [15, 30] for artery2. Figure 7.16 shows the extracted surfaces.

The data set artery2 is a gray-level version of a volume obtained from the Visible Human Project. The ColorV data set, mentioned in Table 7.1, is the same volume in its original color (RGB). We apply our method to this volume, using one threshold per color channel [7] and the color edge definition given in Section 7.8.

The Visible Human Project encompasses a huge color data set of the human body. For 125 meta-cells, we found R, G, and B interval trees of 64.75 kB, 65.67 kB, and 45.59 kB, respectively, giving the total size of 176.01 kB reported in Table 7.1. The preprocessing time is much higher now (29 min) owing to the number of operations required to compute the gradient.

7.10 Discussion and Perspectives

When considering automatic initialization for deformable models, some aspects must be taken into account. The target topology may be corrupted due to inhomogeneities of the image field or gradient discontinuities. Besides, the obtained curves/surfaces are in general not smooth, presenting defects such as protrusions, concavities, or even holes (for surfaces) due to image irregularities.

These problems can be addressed through an efficient presegmentation. For instance, when reconstructing the geometry of the human cerebral cortex, Prince et al. [76] used a fuzzy segmentation method (Adaptive Fuzzy C-Means) to obtain the following elements: a segmented field providing a fuzzy membership function for each tissue class; the mean intensity of each class; and the inhomogeneity of the image, modeled as a smoothly varying gain field (see [76] and references therein). The result can be used to steer the isosurface extraction process as well as the deformable model, which is initialized by the obtained isosurface. We have used a similar approach, as described in [33].
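For reference, a minimal sketch of plain fuzzy c-means on a 1D intensity sample is given below; the adaptive variant of [76] additionally estimates the smoothly varying gain field, which is omitted here:

```python
import numpy as np

def fuzzy_cmeans(x, n_classes=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on a 1D intensity array x: alternate between
    membership updates and class-mean updates for n_iter sweeps."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, n_classes, replace=False).astype(np.float64)
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # class-to-sample distances
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                              # fuzzy memberships
        um = u ** m
        centers = (um * x[None, :]).sum(axis=1) / um.sum(axis=1)
    return u, centers
```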

Moreover, the image forces may not be strong enough to push the model toward the object boundary. Even the balloon model in Eq. (7.9) cannot deal with such a problem, because it is difficult to predict whether the target is inside or outside the isosurface (see Fig. 7.6), which makes it harder to accurately define the normal force field. The GVF (Section 7.8) can be used to generate an image force field that improves the convergence of the model toward the boundary. GVF is sensitive to noise and artifacts, but good results can be achieved for presegmented images [77, 78].
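A minimal 2D sketch of the GVF computation of Section 7.8, following Xu and Prince's formulation, iterates a diffusion/data-fidelity update on an edge map f; boundary handling is simplified to periodic wrapping:

```python
import numpy as np

def gvf_2d(edge_map, mu=0.2, n_iter=100, dt=0.25):
    """Gradient vector flow of an edge map f: iterate
    u_t = mu * Laplacian(u) - (u - f_x) * (f_x^2 + f_y^2)  (same for v),
    which diffuses the edge gradients into homogeneous regions."""
    f = edge_map.astype(np.float64)
    fy, fx = np.gradient(f)
    mag2 = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()
    for _ in range(n_iter):
        for w, comp in ((u, fx), (v, fy)):
            # 5-point Laplacian with periodic boundaries, for brevity
            lap = (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                   np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4.0 * w)
            w += dt * (mu * lap - (w - comp) * mag2)   # in-place update of u, v
    return u, v
```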

Now we compare our segmentation approach (Section 7.5) with the one proposed in [47]. In that work, a set of small spherical T-snakes is uniformly distributed over the image; these curves progressively expand and merge to recover the geometry of interest. The same can be done in 3D.

Our approach can be considered an improvement of the one described in [47]. Our basic argument is that the threshold should be used to get seeds closer to the objects of interest, which avoids evolving expanding surfaces far from the target geometry. Besides, we have observed an improvement in the performance of the segmentation process compared with the traditional initialization of T-surfaces (an implicitly defined surface inside the object) [49].

Our method is adaptive in the sense that we can increase the T-surfaces grid resolution where necessary. As the T-surfaces grid controls the density of the polygonal surfaces obtained, the number of triangular elements grows inside these regions. That increase in density is due not to boundary details but to the fact that the outer scale corresponding to the separation between the objects is too fine (as in Fig. 7.9). This is a disadvantage of our approach.


Such a problem would be avoided if we could identify significant areas along the surfaces and then apply the refinement only in the regions around them. However, it is difficult to perform this task automatically.

As a consequence, polygonal meshes generated by the T-surfaces method may not be efficient for some further applications. For finite element purposes, for instance, small triangles must be removed, so mesh filtering procedures must be applied to improve the surface. Mesh smoothing and denoising methods, such as those proposed in [68], could also be useful in this postprocessing step.

We tested the precision of our approach by segmenting a sphere immersed in uniform noise spanning the image intensity range [0, 150]. We found a mean error of 1.58 pixels with a standard deviation of 2.49 for a 5 × 5 × 5 grid resolution, which we consider acceptable in this case.

This error is due to the projection of T-surfaces as well as the image noise. Following [49, 50], when T-surfaces stops, we can discard the grid and evolve the model without it, avoiding errors due to the projections. However, for noisy images, the convergence of deformable models to the boundaries is poor due to the nonconvexity of the image energy [31].

Anisotropic diffusion applied to 3D images can improve the result, as already demonstrated in Sections 7.8 and 7.9.1. The gradient vector flow (see Section 7.8) can also be applied when the grid is turned off.

Now, let us consider the following question: would it be possible to implement the reconstruction method through level sets? The relevance of this question will become clear in what follows.

The initialization of the model through expression (7.20) is computationally expensive and inefficient when there is more than one front to initialize [75].

The narrow-band technique is much more appropriate for this case. The key idea of this technique comes from the observation that the front can be moved by updating the level set function at a small set of points in the neighborhood of the zero set instead of updating it at all the points in the domain (see [46, 61] for details).

To implement this scheme, we need to preset a distance d that defines the narrow band. The front can move inside the narrow band until it collides with the narrow-band frontiers. Then, the function G must be reinitialized by treating the current zero set configuration as the initial one.
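A sketch of one narrow-band update follows, under the assumption that `phi` is the level-set function, `speed` a precomputed speed field, and `band` the boolean mask |phi| < d built at the last reinitialization; reinitialization itself is only signaled, not performed:

```python
import numpy as np

def erode(mask):
    """Axis-neighbor binary erosion: keeps points all of whose
    2*ndim neighbors lie inside the mask (periodic borders)."""
    out = mask.copy()
    for a in range(mask.ndim):
        out &= np.roll(mask, 1, axis=a) & np.roll(mask, -1, axis=a)
    return out

def narrow_band_step(phi, speed, band, dt=0.5):
    """One explicit update of phi, restricted to the narrow band."""
    grad = np.gradient(phi)
    grad_mag = np.sqrt(sum(g ** 2 for g in grad))
    phi_new = phi.copy()
    phi_new[band] -= dt * speed[band] * grad_mag[band]
    # the front touches the band frontier when a sign change appears on
    # the band's outer ring; that is the cue to rebuild the distance
    # function (and the band) around the current zero set
    outer_ring = band & ~erode(band)
    needs_reinit = bool(np.any(np.sign(phi_new[outer_ring]) !=
                               np.sign(phi[outer_ring])))
    return phi_new, needs_reinit
```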


Also, this method can be made cheaper by observing that the grid points that do not belong to the narrow band can be treated as sign holders [46], following the same idea as the characteristic function of T-surfaces (Section 7.2.3). Thus, the result of steps (1)–(5) in Section 7.5 can be used to initialize the level sets model if the narrow-band extension technique is applied.

The proposed out-of-core method for segmentation must be weighed against the usual procedures for dealing with memory limitations when segmenting a 3D image.

General-purpose methods, such as the streaming facility of the Visualization Toolkit and the virtual memory of operating systems, have proved less efficient than the meta-cell technique for scientific visualization applications [24]. The results in Section 7.9 show that the same holds for 3D image segmentation.

Among the special-purpose methods, the simplest strategy would be to subdivide the region of interest into a set of subregions and then segment the structure in each one at a time. Besides being a very tedious procedure, additional work must be done to put the extracted components together in order to complete the reconstruction.

Another possibility would be to segment 2D slices, extract the corresponding contours, and then reconstruct the surface from the obtained curves. This is particularly difficult for artery segmentation, a case of interest, because of the tree structure and branching characteristics of arteries.

On the other hand, once the volume has been segmented slice by slice, each 2D image could be binarized (1 inside the extracted contours and 0 otherwise). The resulting binary field might fit in main memory, and the reconstruction could then be performed through an isosurface extraction method. However, the obtained polygonal mesh may not be smooth, and applying mesh smoothing procedures [68] may not be efficient if the data set information is not taken into account; but if the data set does not fit into the computer memory, we return to the original problem.

The preprocessing step is very simple for the meta-cell technique applied to image volumes because the data set is regular. The algorithm presented at graphics.cs.ucdavis.edu/research/Slicer.html has a longer preprocessing step. New experiments must be performed to compare the two approaches.

Potentially, the most common deformable surface models (energy-minimizing, dynamic deformable surfaces, and balloons [48]) can be made out-of-core by using the meta-cell technique. Basically, this can be done by maintaining the traditional algorithms, choosing explicit methods to solve the evolution equations (e.g., expression (7.14)), and using the processing list to guarantee locality during evolution.

Other interesting perspectives for our work are out-of-core implementations of other techniques such as region growing (for segmentation) and level sets (for surface reconstruction).

To show this, let us consider a simple region growing algorithm, which takes a seed point p and finds the connected set {q ∈ Image : |I(q) − I(p)| ≤ ε}. At run time, we traverse the interval tree and find the active meta-cells. Then, we fill the data cache and perform the usual region growing operations [40], but calling insert neighbors for each point p incorporated into the region.
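An in-memory sketch of this procedure is given below, with "insert neighbors" appearing as an inline loop; in the out-of-core variant, each `volume` access would go through the meta-cell data cache instead of a fully loaded array:

```python
from collections import deque
import numpy as np

def region_growing(volume, seed, eps):
    """Collect the connected set {q : |I(q) - I(seed)| <= eps} by
    breadth-first growth from the seed point (a 3-tuple of indices)."""
    ref = float(volume[seed])
    visited = np.zeros(volume.shape, dtype=bool)
    visited[seed] = True
    region, queue = [], deque([seed])
    while queue:
        p = queue.popleft()
        if abs(float(volume[p]) - ref) > eps:
            continue                        # outside the intensity range
        region.append(p)
        # "insert neighbors": enqueue the 6-connected neighbors of p
        for axis in range(3):
            for step in (-1, 1):
                q = list(p)
                q[axis] += step
                if 0 <= q[axis] < volume.shape[axis]:
                    q = tuple(q)
                    if not visited[q]:
                        visited[q] = True
                        queue.append(q)
    return region
```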

Besides, level sets can be made out-of-core by using the narrow-band technique described above. In this case, it is just a matter of observing that the level sets algorithm only needs the image information inside the narrow band; hence, an out-of-core implementation can be provided.

7.11 Conclusions

Deformable models offer an attractive approach for geometry recovery and tracking because they are able to represent complex and broad shape variability, which is particularly valuable in the context of medical imaging.

Despite these capabilities, traditional deformable models suffer from strong sensitivity to the initial contour position and from topological limitations.

Among the possibilities to address these problems, we follow the research line that uses a two-step approach: first, a rough approximation of the boundary is obtained; second, the obtained geometry is improved by a topologically adaptable deformable model. The reconstruction method presented in Section 7.5 is a result of our research in this direction.

We have used the T-surfaces model, but we pointed out that level sets could also be used. When T-surfaces stops, we can discard the grid and evolve the model without it to avoid errors due to the projections; GVF can then be useful to improve the convergence toward the boundary.

Also, when using deformable surfaces, memory limitations can lower the performance of segmentation applications for large 3D images. Little work has been done to address this problem.


We show that the meta-cell technique is the most suitable data structure for out-of-core implementations of segmentation methods. We then take advantage of the meta-cell method to present an out-of-core implementation of the segmentation approach of Section 7.5.

The experimental results presented in Section 7.9 demonstrate the potential of the segmentation approach of Section 7.5 when augmented with diffusion and out-of-core techniques. This is reinforced by the discussion and perspectives in Section 7.10.

Questions

1. The static formulation of the original snake model is given by the minimization of the energy functional

$$E : A_d \to \mathbb{R}, \qquad E(c) = E_1(c(s)) + E_2(c(s)),$$

defined in Section 7.2.1. Supposing that $c \in C^4$, show that the Euler–Lagrange equations become:

$$-w_1\, c''(s) + w_2\, c''''(s) + \nabla P(c(s)) = 0.$$
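As a hint, here is a sketch of the variational computation, assuming the standard internal and external energies $E_1(c) = \frac{1}{2}\int_0^1 \big(w_1 |c'(s)|^2 + w_2 |c''(s)|^2\big)\,ds$ and $E_2(c) = \int_0^1 P(c(s))\,ds$ (the precise constants in Section 7.2.1 may differ):

```latex
% First variation of E at c in the direction h, with h and h'
% vanishing at the endpoints:
\frac{d}{d\epsilon}\Big|_{\epsilon=0} E(c+\epsilon h)
  = \int_0^1 \big( w_1\, c'\cdot h' + w_2\, c''\cdot h''
                   + \nabla P(c)\cdot h \big)\, ds
  = \int_0^1 \big( -w_1\, c'' + w_2\, c'''' + \nabla P(c) \big)\cdot h \, ds ,
% integrating by parts once (first term) and twice (second term),
% which is legitimate since c \in C^4. Since h is arbitrary, the
% fundamental lemma of the calculus of variations gives
-\,w_1\, c''(s) + w_2\, c''''(s) + \nabla P(c(s)) = 0 .
```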

2. Discuss the effect of the parameters $w_1$ and $w_2$ on the original snake model of exercise 1 by using the following equations for a curve $c$:

$$\frac{dc}{d\alpha} = \vec{T}, \qquad \frac{d^2 c}{d\alpha^2} = K\,\vec{N},$$

where $\alpha$ is the arc length, $K$ is the curvature, and $\vec{T}$ and $\vec{N}$ are the unit tangent and normal vectors, respectively.

3. Show that the original snake model is not invariant under affine transformations given by the general form:

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}.$$

4. Discuss the role of the characteristic function for the T-surfaces model.