3.1 The Need for Seed-Driven Segmentation
Segmentation is a fundamental operation in computer vision and image analysis. It consists of identifying regions of interest in images that are semantically consistent. Practically, this may mean finding individual white blood cells amongst red blood cells, identifying tumors in the lungs, computing the 4D hyper-surface of a beating heart, and so on.
Applications of segmentation methods are numerous. Being able to reliably and readily characterize organs and objects allows practitioners to measure, count and identify them. Many image analysis problems begin with a segmentation step, and so this step conditions the quality of the end result. Speed and ease of use are essential in clinical practice.
This has been known for quite some time, and so numerous segmentation methods have been proposed in the literature [57]. However, segmentation is a difficult problem. It usually requires high-level knowledge about the objects under study. In fact, semantically consistent, high-quality segmentation in general is a problem indistinguishable from strong Artificial Intelligence and probably has no exact, or even generally agreed-upon, solution. In medical imaging, experts often disagree amongst themselves on the placement of the 2D contours of normal organs, not to mention lesions. In 3D, obtaining expert opinion is typically difficult, and almost impossible if the object under study is thin, noisy and convoluted, as in the case of vascular systems. At any rate, segmentation is, even for humans, a difficult, time-consuming and error-prone procedure.
3.1.1 Image Analysis and Computer Vision
Segmentation can be studied from many angles. In computer vision, the segmentation task is often seen as a low-level operation, which consists of separating an arbitrary scene into reasonably alike components (such as regions that are consistent in terms of color, texture and so on). The task of grouping such components into semantic objects is considered a different task altogether. In contrast, in image analysis, segmentation is a high-level task that embeds high-level knowledge about the object.
This methodological difference stems from the application field. In computer vision, the objective of segmentation (and grouping) is to recognize objects in an arbitrary scene, such as persons, walls, doors, sky, etc. This is obviously extremely difficult for a computer, because of the generality of the context, although humans generally manage it quite well. In contrast, in image analysis, the task is often to precisely delineate objects sought in a particular setting known in advance, for instance finding the contours of the lungs in an X-ray radiograph.
The segmentation task in image analysis is still a difficult problem, but not to the same extent as in the general vision case. In contrast to the vision case, experts might agree that a lesion is present on a person's skin, but may disagree on its exact contours [45]. Here, the problem is that the boundary between normal skin and lesion might be objectively difficult to specify. In addition, sometimes there does exist an object with a definite physical contour (such as the inner volume of the left ventricle of the heart); however, imaging modalities may be corrupted by noise and partial volume effects to such an extent that delineating the precise contours of this physical object in an image is also objectively difficult.
3.1.2 Objects Are Semantically Consistent
However, in spite of these difficulties, we may assume that, up to some level of ambiguity, an object (organ, lesion, etc.) may still be specified somehow. This means that, semantically, an object possesses some consistency. When we point at a particular area in an image, we expect to be, again with some fuzziness, either inside or outside the object.
This leads us to realize that there must exist some mathematical indicator function that denotes, with high probability, whether we are inside or outside the object. This indicator function can be regarded as a set of constraints, or labels. These are sometimes called seeds or markers, as they provide starting points for segmentation procedures and mark where objects are and are not.
In addition, a metric that expresses the consistency of the object is likely to exist. A gradient on this metric may therefore provide object contour information. Contours may be weak in places where there is some uncertainty, but we assume they are not weak everywhere (otherwise we have an ambiguity problem, and our segmentation cannot be precise). The metric may simply be the image intensity or color, but it may express other information, such as consistency of texture. Even though this metric may contain many descriptive elements (for instance, a vector of descriptors), we assume that we are still able to compute a gradient on it [61].
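To make these notions concrete, here is a minimal sketch of our own (not part of the original text) in Python with NumPy and scikit-image: the indicator function is realized as a sparse label image of seeds, and the metric is simply the image intensity, whose Sobel gradient magnitude highlights candidate contour locations. The file name and seed coordinates are placeholders.

```python
import numpy as np
from skimage import io, filters, img_as_float

# Load a grayscale image (the file name is a placeholder).
image = img_as_float(io.imread("slice.png", as_gray=True))

# Indicator function realized as seed labels:
# 0 = unlabeled, 1 = inside the object, 2 = outside (background).
seeds = np.zeros(image.shape, dtype=np.uint8)
seeds[120:125, 130:135] = 1   # a few pixels assumed to lie inside the object
seeds[5:10, 5:10] = 2         # a few pixels assumed to lie outside

# Metric: here simply the image intensity; its gradient magnitude
# (Sobel) is large where the object contour is likely to pass.
contour_strength = filters.sobel(image)
```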
This is the reason why many segmentation methods focus on contours, which are essentially discontinuities in the metric. Those that focus on regions do so by defining and utilizing some consistency metric, which is the same problem expressed differently.
The next and final step for segmentation is the actual contour placement, which is equivalent to object delineation. This step can be considered as an optimization problem, and it is the step on which segmentation methods in the literature focus the most. We will say more about this in Sect. 3.2, which lists some categories of image segmentation methods.
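As a hedged illustration (the notation is ours and only anticipates the discrete framework discussed later in this chapter), contour placement can be phrased as finding a labeling x over the pixel graph that respects the seeds while cutting preferentially across strong metric gradients:

\[
\min_{x} \; \sum_{(i,j)\in E} w_{ij}\,\lvert x_i - x_j \rvert^{q}
\quad \text{subject to} \quad x_i = 1 \ \text{for object seeds}, \qquad x_i = 0 \ \text{for background seeds},
\]

where the weights \(w_{ij}\) are derived from the metric (large within consistent regions, small across strong gradients) and thresholding the optimal \(x\) at \(1/2\) yields the segmentation. Different choices of the exponent recover well-known seeded algorithms.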
3.1.3 A Separation of Powers
In summary, to achieve segmentation in the analysis framework, we need three ingredients: (1) an indicator function that denotes whether we are inside or outside of the object of interest; (2) a metric from which we may derive contour information, and (3) an optimization method for placing the contour accurately.
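Below is a minimal end-to-end sketch of this three-part separation, assuming scikit-image is available and using its random walker as one possible choice of optimization operator; the choice of operator, the file name and the seed coordinates are ours, purely for illustration.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import random_walker

# (1) Indicator function: seeds provided "by some means"
#     (manual clicks, a crude detector, a registered atlas, ...).
image = img_as_float(io.imread("lung_xray.png", as_gray=True))  # placeholder file
seeds = np.zeros(image.shape, dtype=np.uint8)
seeds[200:205, 180:185] = 1   # assumed inside the object of interest
seeds[10:15, 10:15] = 2       # assumed background

# (2) Metric: here the image intensity itself; the random walker turns
#     intensity differences into graph edge weights internally.
# (3) Optimization: propagate the seed labels over the pixel graph by
#     minimizing a Dirichlet-type energy constrained by the seeds.
labels = random_walker(image, seeds, beta=130)
object_mask = labels == 1
```

Whether a random walker, graph cut or watershed makes the better operator is exactly the kind of comparison the rest of this chapter addresses; the point here is only that seeds, metric and optimizer are three separable choices.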
To achieve accuracy, we need flexibility and robustness. Some have argued that it is useful to treat these three steps separately. This was first described in [47] as the morphological method, but others also call it interactive or seeded segmentation [31]. In this context, this does not mean that user interaction is required, only that object identification is provided by some means, and contour extraction is performed separately by a segmentation operator.
The first ingredient, the object identification or indicator function, is of course essential, and it is frustrating to be obliged to write here only "by some means". Accurate content identification can greatly simplify the requirements on the segmentation operator. Unfortunately, the means in question for content identification are problem-dependent and sometimes difficult to publish, because they are often seen as ad hoc and of limited interest beyond their immediate use in the problem at hand. Fortunately, some journals accept such publications, such as the Journal of Image Analysis and Stereology and application-oriented journals (e.g., the Journal of Microscopy, materials journals, etc.). There are also a few recent books on the matter [23, 52]. Software libraries are also important, but not many are freely available for training, although the situation is improving.
Also, whereas in computer vision a fully automated solution is required, in medical imaging a semi-automated method might be sufficient. In biomedical imaging, a large number of objects (such as cells, organelles, etc.) are typically measured, and a fully automated method is often desirable. In medical imaging, however, a relatively small number of patients is typically being monitored, treated or surveyed, and so human-guided segmentation can be sufficient. The objective of the segmentation method in this context is to provide reasonable contours quickly, which can be adjusted easily by an operator.
Given this variety of contexts, is it possible to define the segmentation problem precisely? The answer is probably no, at least at this stage of image analysis research. However, it is possible to provide formulations of the problem. While this may sound strange or even suspicious, the reason is that there exists a real need for automated or semi-automated segmentation procedures in both image analysis and computer vision, and so solutions have been proposed. They can still be explained, compared and evaluated.
3.1.4 Desirable Properties of Seeded Segmentation Methods
We come to the first conclusion that to provide reliable and accurate results, we must rely on a segmentation procedure and not just an operator. Object identification
