weighted finite sums with as many terms as there are different kinds of cliques. In [26], Geman and Geman proposed to optimize these sums using Gibbs sampling (a form of Markov chain Monte Carlo algorithm) and simulated annealing. This was first used for image restoration, but it can readily be applied to segmentation as well. The approach has been very successful because it is very flexible: markers and texture terms can be added, and many algorithmic improvements have been proposed over the years. However, it remains a relatively costly and slow approach. Even though Geman and Geman showed that their simulated annealing strategy converges, it only does so under conditions that make the algorithm extremely slow, so in practice only a non-converged, approximate result is used. More recently, it was realized that graph-cut (GC) methods can optimize some MRF energies very efficiently; we give more details in the corresponding section.
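As a concrete illustration of this optimization strategy, the sketch below runs annealed Gibbs sampling on a two-label Potts MRF for binary segmentation. It is a minimal, assumption-laden example, not the formulation of [26]: the Gaussian data term, the smoothness weight beta, and the geometric cooling schedule are all illustrative choices.

```python
# Minimal sketch (illustrative assumptions, not the formulation of [26]):
# simulated annealing with Gibbs sampling on a two-label Potts MRF.
import numpy as np

def mrf_segment(image, mu=(0.25, 0.75), sigma=0.1, beta=1.5,
                n_sweeps=50, t0=4.0, cooling=0.95, rng=None):
    """Binary segmentation of an image with values in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    labels = (image > 0.5).astype(int)                  # crude initialization
    # Data term: Gaussian likelihood cost of assigning each of the two labels.
    data = np.stack([(image - m) ** 2 / (2 * sigma ** 2) for m in mu], axis=-1)
    T = t0
    for _ in range(n_sweeps):
        for i in range(1, image.shape[0] - 1):
            for j in range(1, image.shape[1] - 1):
                nbrs = [labels[i - 1, j], labels[i + 1, j],
                        labels[i, j - 1], labels[i, j + 1]]
                # Clique (Potts) term: penalize disagreement with 4-neighbours.
                energy = np.array([data[i, j, k] + beta * sum(n != k for n in nbrs)
                                   for k in (0, 1)])
                # Gibbs update: sample the label from its conditional distribution.
                p = np.exp(-(energy - energy.min()) / T)
                labels[i, j] = rng.choice(2, p=p / p.sum())
        T *= cooling                                     # annealing schedule
    return labels
```

Holding the temperature fixed at a low value makes the update essentially greedy, which is faster but converges to poorer local minima; this is the speed/quality trade-off mentioned above.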
MRFs belong to the larger class of Bayesian methods. Information-theoretic perspectives and formulations, such as those following the Minimum Description Length principle, also exist. These frameworks are likewise very flexible, allowing, for example, region competition [69]. However, the corresponding models can be complicated both to understand and to run, and they sometimes possess many parameters that are difficult to tune. Well-designed methods are guaranteed to converge to at least a local minimum.
In general, when dealing with regions that have complex content (for instance, textures or multispectral data), statistical methods can be a very good choice, although they cannot be recommended for general use, since simpler and faster methods are often sufficient.
3.2.4 Continuous Optimization Methods
In the late 1980s, it was realized that contour tracking methods were too limited for practical use. In particular, closed contours around objects were difficult to obtain by contour tracking, which meant that detecting whole objects was difficult except in the simplest cases.
3.2.4.1 Active Contours
Researchers therefore proposed to start from already-closed loops and to make them evolve in such a way that they converge towards the true contours of the image. Thus were introduced active contours, or snakes [39]. The formulation of snakes takes the following continuous-domain shape:
$$E_{\mathrm{snake}} = \int_0^1 \left\{ E_{\mathrm{internal}}(v(s)) + E_{\mathrm{data}}(v(s)) + E_{\mathrm{constraints}}(v(s)) \right\} \mathrm{d}s, \qquad (3.1)$$
where v(s) is a parametric representation of the contour.
This model is very flexible. It contains internal terms, image data terms, and constraint terms (see Chapter 4 for more details):
•The first term, the internal energy, contains a curvature term and a “rubber band” energy. The former tends to smooth the resulting contour like a thin plate, while the latter tends to make it shrink around features of interest. Other terms, such as a kinetic energy, can be added as well, which makes it possible for the snake to avoid noisy zones and flat areas.
•The second term, the data energy, attracts the active contours towards points of interest in the image: typically, image contours (zones of high gradient), lines or termination points.
•The last term, the constraint term, is optional, but allows interaction with the snake by defining zones of attraction and repulsion.
To solve this equation, the Euler–Lagrange equation of (3.1) is worked out (typically in closed form) and a gradient descent algorithm is used. All the terms enter a linear combination, allowing them to be balanced according to the needs of the user. Thanks to its flexibility, the active contour model has been very popular in the literature as well as in applications. It fits very well into the interactive segmentation paradigm because constraints can be added very easily, and it can be quite fast because it uses a so-called Lagrangian framework: the contour itself is discretized at regularly spaced points and evolves according to (3.1). Convergence towards a local minimum of the energy is guaranteed, but may require many iterations.
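To make the preceding discretization concrete, the following sketch implements one common variant of the snake iteration: a semi-implicit update with a circulant internal-energy matrix and an external force derived from the image-gradient magnitude. It is an illustrative reading of the scheme, not the chapter's own code; the parameters alpha, beta, gamma, and sigma are assumed values.

```python
# Minimal sketch (assumed parameters): gradient descent on a discretized snake
# energy, with an implicit internal term and an explicit external (data) force.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def snake_step_matrix(n, alpha=0.1, beta=0.05, gamma=1.0):
    """Inverse semi-implicit update matrix (A + gamma*I)^-1 for a closed contour of n points."""
    a, b = alpha, beta
    row = np.zeros(n)
    # Pentadiagonal circulant stencil from the elasticity (alpha) and rigidity (beta) terms.
    row[[0, 1, 2, -2, -1]] = 2 * a + 6 * b, -(a + 4 * b), b, b, -(a + 4 * b)
    A = np.stack([np.roll(row, k) for k in range(n)])
    return np.linalg.inv(A + gamma * np.eye(n))

def evolve_snake(image, xy, iters=200, gamma=1.0, sigma=2.0):
    """Evolve contour points xy (an (n, 2) array of x, y coordinates) towards strong edges."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    # Edge map: squared gradient magnitude; its gradient is the external force.
    edge = sobel(smoothed, axis=0) ** 2 + sobel(smoothed, axis=1) ** 2
    fy, fx = np.gradient(edge)
    inv = snake_step_matrix(len(xy), gamma=gamma)
    for _ in range(iters):
        ix = np.clip(np.round(xy[:, 0]).astype(int), 0, image.shape[1] - 1)
        iy = np.clip(np.round(xy[:, 1]).astype(int), 0, image.shape[0] - 1)
        force = np.stack([fx[iy, ix], fy[iy, ix]], axis=1)
        # Implicit internal energy, explicit external force.
        xy = inv @ (gamma * xy + force)
    return xy
```

The inverse of the internal matrix is computed once and reused at every iteration, which is part of what makes the Lagrangian formulation fast; resampling of the contour points, discussed next, is omitted here.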
In practice, there are some difficulties: the snake energy is flexible but difficult to tune. Because of the contour evolution, points along the contour tend to spread out or bunch up, requiring regular and frequent resampling. There can also be topological difficulties, for instance causing the snake to self-intersect. The snake is also sensitive to its parametrization and to initialization. Finally, even though a local optimum is guaranteed, in practice, it may not be of good quality due to noise sensitivity.
A further difficulty is extending snakes to 3D: this can be done via triangulated surfaces, but such extensions can be complicated, and the topological problems that plague snakes in 2D are usually even harder to avoid in 3D. Nevertheless, 3D active surfaces are still widely used, because they make it easy to improve or regularize a triangulated surface obtained by other means; for instance, the brain segmentation software FreeSurfer includes such a method. To distinguish them from the other models introduced next, snake-like active contours or surfaces are sometimes called parametric deformable models.
3.2.4.2 Level Sets
One way to avoid altogether some of the problems brought about by the way parametric deformable models are discretized is to embed the contour into a higher-dimensional manifold. This idea gave rise to level sets, proposed by Osher and Sethian in 1988 [53], remarkably at around the same time as active contours. However, level sets were initially proposed for computational fluid dynamics and numerical simulations; they were applied to imaging somewhat later [43, 62]. A contour is represented on the surface S of an evolving regular function ψ by its zero level set, which is simply the set where ψ equals zero. By using sufficiently regular embedding functions ψ, namely signed distance transforms from an initial contour, it was possible to derive effective evolution equations solving problems similar to those addressed by Lagrangian active contours.
Fig. 3.1 (a, b) Embedding and evolving a curve as a level set of a higher-dimensional function. The zero level of the function ψ is shown in color, representing a 2D contour. To evolve the contour, the whole function evolves. Note that topology changes can occur in the contour, while the embedding surface shows no such effect.
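In practice, the embedding is typically initialized as a signed distance function computed from a binary mask of the initial region. The snippet below is a minimal sketch of that step; the sign convention (negative inside, positive outside) is an assumption, as both conventions appear in the literature.

```python
# Minimal sketch: build a signed distance embedding psi from a binary initial
# region, so that the zero level set of psi is the initial contour.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance: negative inside the region, positive outside, zero on the contour."""
    mask = np.asarray(mask, dtype=bool)
    return distance_transform_edt(~mask) - distance_transform_edt(mask)
```

Initializing with, say, a rectangular mask drawn around several objects gives a single ψ whose zero level set can later split into one contour per object, which is the topology-change behaviour discussed next.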
The main advantages of the level-set method were that contour resampling was no longer necessary and that contour self-intersection (shock solutions) was avoided, because level sets can change topology easily (see Fig. 3.1b). In practical terms, it was possible, at least in theory, to initialize a segmentation by drawing a box around a series of objects of interest, and the level set could find a contour around each of them. This was seen as a major benefit by the vision community. The Eulerian formulation of level sets (where the whole space is discretized) is thought to offer better theoretical guarantees than the Lagrangian framework of earlier non-embedded formulations, and the simulation of function evolution is a well-researched topic with many usable and interesting results. Finally, the formulation is dimension independent: level sets work virtually unchanged in 3D or more, which is a major benefit.
There are also a number of drawbacks. First, the level-set formulation is more expensive than earlier active contour formulations: it requires iteratively solving PDEs over the whole space. In practice, the computation can be limited to a narrow band around the contour, but this is still more costly than restricting it to the contour itself, and it reintroduces the resampling that was supposed to be avoided. The surface S of the function ψ is implicitly represented by the function itself, but this requires more storage than the contour alone. In 3D or more,
this may be prohibitive. Some contour motions are not representable (e.g., contour rotation), but this is a minor problem. More importantly, the fact that level sets can undergo topology changes is actually a problem in image analysis, where it is useful to know that a contour initialized somewhere will converge to a single simple closed contour. In some cases, a contour can split or even disappear completely, leading to undesirable results.
Nonetheless, level-set formulations are even more flexible than active contours, and very complex energies solving equally complex problems have been proposed in the literature. Problems involving texture, motion, competing surfaces, and so on are relatively easy to formulate in this context [55, 56]. For this reason, level sets were and remain popular. Complex level-set formulations tend to be sensitive to noise and can converge to a poor locally optimal solution; on the other hand, more robust, closer-to-convex formulations can now be solved via other means. An example of a relatively simple PDE that can be solved by level sets is the following:
$$\psi_t + F\,|\nabla\psi| = 0, \qquad (3.2)$$
where F is the so-called speed function. Malladi and Sethian proposed the following for F:
$$F = \frac{1 - \varepsilon\,\kappa}{1 + |\nabla I|} + \beta\left(\frac{\nabla\psi}{|\nabla\psi|}\cdot\nabla I\right). \qquad (3.3)$$
The first part of the equation is a term driving the embedding function ψ towards contours of the image, with some regularity and smoothing controlled by the curvature κ; the amount of smoothing is set by the parameter ε. The second term is a “balloon” force that tends to expand the contour. The contour is expected to be placed initially inside the object of interest, and this balloon force, weighted by the parameter β, should be reduced or eliminated after some iterations. We see that even though this model is relatively simple for a level-set formulation, it already has a few parameters that are not obvious to set or optimize.
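For illustration, here is a minimal explicit update of (3.2) with the speed function (3.3) as reconstructed above. It is a sketch, not a faithful implementation of [43]: upwind differencing, narrow-band restriction, and periodic reinitialization of ψ, all of which a robust solver needs, are omitted, and dt, eps, and beta are assumed values.

```python
# Minimal sketch: one explicit Euler step of psi_t + F|grad(psi)| = 0 with the
# curvature/edge-stopping speed (3.3). Upwinding and reinitialization omitted.
import numpy as np

def level_set_step(psi, image, dt=0.1, eps=0.05, beta=0.5):
    gy, gx = np.gradient(psi)
    grad_psi = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    # Mean curvature kappa = div(grad(psi) / |grad(psi)|).
    kappa = np.gradient(gx / grad_psi, axis=1) + np.gradient(gy / grad_psi, axis=0)
    iy, ix = np.gradient(image.astype(float))
    grad_img = np.sqrt(ix ** 2 + iy ** 2)
    # Speed (3.3): edge-stopping, curvature-smoothed term plus the beta-weighted term.
    F = (1.0 - eps * kappa) / (1.0 + grad_img) + beta * (gx * ix + gy * iy) / grad_psi
    # Explicit Euler step of (3.2).
    return psi - dt * F * grad_psi
```

Starting from psi = signed_distance(initial_mask) and repeating level_set_step until the zero level set stabilizes gives the basic evolution loop; a narrow-band variant would only update pixels where |psi| is below a small threshold.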
3.2.4.3 Geodesic Active Contours
An interesting attempt to solve some of the problems posed by overly general level sets was to go back and simplify the problem, arguing for consistency and a geometric interpretation of the contour obtained. The result was the geodesic active contour (GAC), proposed by Caselles et al. in 1997 [10]. The level set formulation is the following:
$$\psi_t = |\nabla\psi|\;\mathrm{div}\!\left(g(I)\,\frac{\nabla\psi}{|\nabla\psi|}\right). \qquad (3.4)$$
This equation is virtually parameter-free, with only a g function required. This function is a metric and has a simple interpretation: it defines at point x the cost of a contour going through x. This metric is expected to be positive definite, and in
