
- •Biological and Medical Physics, Biomedical Engineering
- •Medical Image Processing
- •Preface
- •Contents
- •Contributors
- •1.1 Medical Image Processing
- •1.2 Techniques
- •1.3 Applications
- •1.4 The Contribution of This Book
- •References
- •2.1 Introduction
- •2.2 MATLAB and DIPimage
- •2.2.1 The Basics
- •2.2.2 Interactive Examination of an Image
- •2.2.3 Filtering and Measuring
- •2.2.4 Scripting
- •2.3 Cervical Cancer and the Pap Smear
- •2.4 An Interactive, Partial History of Automated Cervical Cytology
- •2.5 The Future of Automated Cytology
- •2.6 Conclusions
- •References
- •3.1 The Need for Seed-Driven Segmentation
- •3.1.1 Image Analysis and Computer Vision
- •3.1.2 Objects Are Semantically Consistent
- •3.1.3 A Separation of Powers
- •3.1.4 Desirable Properties of Seeded Segmentation Methods
- •3.2 A Review of Segmentation Techniques
- •3.2.1 Pixel Selection
- •3.2.2 Contour Tracking
- •3.2.3 Statistical Methods
- •3.2.4 Continuous Optimization Methods
- •3.2.4.1 Active Contours
- •3.2.4.2 Level Sets
- •3.2.4.3 Geodesic Active Contours
- •3.2.5 Graph-Based Methods
- •3.2.5.1 Graph Cuts
- •3.2.5.2 Random Walkers
- •3.2.5.3 Watershed
- •3.2.6 Generic Models for Segmentation
- •3.2.6.1 Continuous Models
- •3.2.6.2 Hierarchical Models
- •3.2.6.3 Combinations
- •3.3 A Unifying Framework for Discrete Seeded Segmentation
- •3.3.1 Discrete Optimization
- •3.3.2 A Unifying Framework
- •3.3.3 Power Watershed
- •3.4 Globally Optimum Continuous Segmentation Methods
- •3.4.1 Dealing with Noise and Artifacts
- •3.4.2 Globally Optimal Geodesic Active Contour
- •3.4.3 Maximal Continuous Flows and Total Variation
- •3.5 Comparison and Discussion
- •3.6 Conclusion and Future Work
- •References
- •4.1 Introduction
- •4.2 Deformable Models
- •4.2.1 Point-Based Snake
- •4.2.1.1 User Constraint Energy
- •4.2.1.2 Snake Optimization Method
- •4.2.2 Parametric Deformable Models
- •4.2.3 Geometric Deformable Models (Active Contours)
- •4.2.3.1 Curve Evolution
- •4.2.3.2 Level Set Concept
- •4.2.3.3 Geodesic Active Contour
- •4.2.3.4 Chan–Vese Deformable Model
- •4.3 Comparison of Deformable Models
- •4.4 Applications
- •4.4.1 Bone Surface Extraction from Ultrasound
- •4.4.2 Spinal Cord Segmentation
- •4.4.2.1 Spinal Cord Measurements
- •4.4.2.2 Segmentation Using Geodesic Active Contour
- •4.5 Conclusion
- •References
- •5.1 Introduction
- •5.2 Imaging Body Fat
- •5.3 Image Artifacts and Their Impact on Segmentation
- •5.3.1 Partial Volume Effect
- •5.3.2 Intensity Inhomogeneities
- •5.4 Overview of Segmentation Techniques Used to Isolate Fat
- •5.4.1 Thresholding
- •5.4.2 Selecting the Optimum Threshold
- •5.4.3 Gaussian Mixture Model
- •5.4.4 Region Growing
- •5.4.5 Adaptive Thresholding
- •5.4.6 Segmentation Using Overlapping Mosaics
- •5.6 Conclusions
- •References
- •6.1 Introduction
- •6.2 Clinical Context
- •6.3 Vessel Segmentation
- •6.3.1 Survey of Vessel Segmentation Methods
- •6.3.1.1 General Overview
- •6.3.1.2 Region-Growing Methods
- •6.3.1.3 Differential Analysis
- •6.3.1.4 Model-Based Filtering
- •6.3.1.5 Deformable Models
- •6.3.1.6 Statistical Approaches
- •6.3.1.7 Path Finding
- •6.3.1.8 Tracking Methods
- •6.3.1.9 Mathematical Morphology Methods
- •6.3.1.10 Hybrid Methods
- •6.4 Vessel Modeling
- •6.4.1 Motivation
- •6.4.1.1 Context
- •6.4.1.2 Usefulness
- •6.4.2 Deterministic Atlases
- •6.4.2.1 Pioneering Works
- •6.4.2.2 Graph-Based and Geometric Atlases
- •6.4.3 Statistical Atlases
- •6.4.3.1 Anatomical Variability Handling
- •6.4.3.2 Recent Works
- •References
- •7.1 Introduction
- •7.2 Linear Structure Detection Methods
- •7.3.1 CCM for Imaging Diabetic Peripheral Neuropathy
- •7.3.2 CCM Image Characteristics and Noise Artifacts
- •7.4.1 Foreground and Background Adaptive Models
- •7.4.2 Local Orientation and Parameter Estimation
- •7.4.3 Separation of Nerve Fiber and Background Responses
- •7.4.4 Postprocessing the Enhanced-Contrast Image
- •7.5 Quantitative Analysis and Evaluation of Linear Structure Detection Methods
- •7.5.1 Methodology of Evaluation
- •7.5.2 Database and Experiment Setup
- •7.5.3 Nerve Fiber Detection Comparison Results
- •7.5.4 Evaluation of Clinical Utility
- •7.6 Conclusion
- •References
- •8.1 Introduction
- •8.2 Methods
- •8.2.1 Linear Feature Detection by MDNMS
- •8.2.2 Check Intensities Within 1D Window
- •8.2.3 Finding Features Next to Each Other
- •8.2.4 Gap Linking for Linear Features
- •8.2.5 Quantifying Branching Structures
- •8.3 Linear Feature Detection on GPUs
- •8.3.1 Overview of GPUs and Execution Models
- •8.3.2 Linear Feature Detection Performance Analysis
- •8.3.3 Parallel MDNMS on GPUs
- •8.3.5 Results for GPU Linear Feature Detection
- •8.4.1 Architecture and Implementation
- •8.4.2 HCA-Vision Features
- •8.4.3 Linear Feature Detection and Analysis Results
- •8.5 Selected Applications
- •8.5.1 Neurite Tracing for Drug Discovery and Functional Genomics
- •8.5.2 Using Linear Features to Quantify Astrocyte Morphology
- •8.5.3 Separating Adjacent Bacteria Under Phase Contrast Microscopy
- •8.6 Perspectives and Conclusions
- •References
- •9.1 Introduction
- •9.2 Bone Imaging Modalities
- •9.2.1 X-Ray Projection Imaging
- •9.2.2 Computed Tomography
- •9.2.3 Magnetic Resonance Imaging
- •9.2.4 Ultrasound Imaging
- •9.3 Quantifying the Microarchitecture of Trabecular Bone
- •9.3.1 Bone Morphometric Quantities
- •9.3.2 Texture Analysis
- •9.3.3 Frequency-Domain Methods
- •9.3.4 Use of Fractal Dimension Estimators for Texture Analysis
- •9.3.4.1 Frequency-Domain Estimation of the Fractal Dimension
- •9.3.4.2 Lacunarity
- •9.3.4.3 Lacunarity Parameters
- •9.3.5 Computer Modeling of Biomechanical Properties
- •9.4 Trends in Imaging of Bone
- •References
- •10.1 Introduction
- •10.1.1 Adolescent Idiopathic Scoliosis
- •10.2 Imaging Modalities Used for Spinal Deformity Assessment
- •10.2.1 Current Clinical Practice: The Cobb Angle
- •10.2.2 An Alternative: The Ferguson Angle
- •10.3 Image Processing Methods
- •10.3.1 Previous Studies
- •10.3.2 Discrete and Continuum Functions for Spinal Curvature
- •10.3.3 Tortuosity
- •10.4 Assessment of Image Processing Methods
- •10.4.1 Patient Dataset and Image Processing
- •10.4.2 Results and Discussion
- •10.5 Summary
- •References
- •11.1 Introduction
- •11.2 Retinal Imaging
- •11.2.1 Features of a Retinal Image
- •11.2.2 The Reason for Automated Retinal Analysis
- •11.2.3 Acquisition of Retinal Images
- •11.3 Preprocessing of Retinal Images
- •11.4 Lesion Based Detection
- •11.4.1 Matched Filtering for Blood Vessel Segmentation
- •11.4.2 Morphological Operators in Retinal Imaging
- •11.5 Global Analysis of Retinal Vessel Patterns
- •11.6 Conclusion
- •References
- •12.1 Introduction
- •12.1.1 The Progression of Diabetic Retinopathy
- •12.2 Automated Detection of Diabetic Retinopathy
- •12.2.1 Automated Detection of Microaneurysms
- •12.3 Image Databases
- •12.4 Tortuosity
- •12.4.1 Tortuosity Metrics
- •12.5 Tracing Retinal Vessels
- •12.5.1 NeuronJ
- •12.5.2 Other Software Packages
- •12.6 Experimental Results and Discussion
- •12.7 Summary and Future Work
- •References
- •13.1 Introduction
- •13.2 Volumetric Image Visualization Methods
- •13.2.1 Multiplanar Reformation (2D slicing)
- •13.2.2 Surface-Based Rendering
- •13.2.3 Volumetric Rendering
- •13.3 Volume Rendering Principles
- •13.3.1 Optical Models
- •13.3.2 Color and Opacity Mapping
- •13.3.2.2 Transfer Function
- •13.3.3 Composition
- •13.3.4 Volume Illumination and Illustration
- •13.4 Software-Based Raycasting
- •13.4.1 Applications and Improvements
- •13.5 Splatting Algorithms
- •13.5.1 Performance Analysis
- •13.5.2 Applications and Improvements
- •13.6 Shell Rendering
- •13.6.1 Application and Improvements
- •13.7 Texture Mapping
- •13.7.1 Performance Analysis
- •13.7.2 Applications
- •13.7.3 Improvements
- •13.7.3.1 Shading Inclusion
- •13.7.3.2 Empty Space Skipping
- •13.8 Discussion and Outlook
- •References
- •14.1 Introduction
- •14.1.1 Magnetic Resonance Imaging
- •14.1.2 Compressed Sensing
- •14.1.3 The Role of Prior Knowledge
- •14.2 Sparsity in MRI Images
- •14.2.1 Characteristics of MR Images (Prior Knowledge)
- •14.2.2 Choice of Transform
- •14.2.3 Use of Data Ordering
- •14.3 Theory of Compressed Sensing
- •14.3.1 Data Acquisition
- •14.3.2 Signal Recovery
- •14.4 Progress in Sparse Sampling for MRI
- •14.4.1 Review of Results from the Literature
- •14.4.2 Results from Our Work
- •14.4.2.1 PECS
- •14.4.2.2 SENSECS
- •14.4.2.3 PECS Applied to CE-MRA
- •14.5 Prospects for Future Developments
- •References
- •15.1 Introduction
- •15.2 Acquisition of DT Images
- •15.2.1 Fundamentals of DTI
- •15.2.2 The Pulsed Field Gradient Spin Echo (PFGSE) Method
- •15.2.3 Diffusion Imaging Sequences
- •15.2.4 Example: Anisotropic Diffusion of Water in the Eye Lens
- •15.2.5 Data Acquisition
- •15.3 Digital Processing of DT Images
- •15.3.2 Diagonalization of the DT
- •15.3.3 Gradient Calibration Factors
- •15.3.4 Sorting Bias
- •15.3.5 Fractional Anisotropy
- •15.3.6 Other Anisotropy Metrics
- •15.4 Applications of DTI to Articular Cartilage
- •15.4.1 Bovine AC
- •15.4.2 Human AC
- •References
- •Index
214 | M.A. Haidekker and G. Dougherty
where E is the Euclidean dimension of the embedding space (for a two-dimensional image, E = 2 and therefore D = 4 − β/2). In a two-dimensional image, the value of D is constrained to lie between 2 (smooth) and 3 (rough), and for a projected image generated by Brownian motion (a special case of FBM), D = 2.5.
In practice, images are degraded by noise and blurring within a particular imaging device. Image noise adds to the roughness and results in an overestimate of the fractal dimension, whereas blurring results in an underestimate of the fractal dimension. A very important advantage of the power spectrum method is that it allows for correction of these two effects. The noise power can be obtained by scanning a water phantom under the same conditions, and can then be subtracted from the power spectrum of the noisy image. Image blurring can be described by the modulation transfer function (MTF) of the system, which typically attenuates higher frequencies in an image. The effect of system blurring can be eliminated by dividing the measured power spectrum by the square of the MTF, obtained by scanning a very small object approximating a point. With these corrections, accurate estimates of the fractal dimension of CT images of trabecular bone have been obtained, enabling very small differences in texture to be distinguished [71].
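As an illustration of this spectral approach, the fractal dimension can be estimated from the slope β of the radially averaged power spectrum via D = 4 − β/2. The following is a minimal numpy sketch, not the authors' implementation; the `noise_psd` and `mtf` arguments are hypothetical placeholders for the phantom-derived corrections described above:

```python
import numpy as np

def fractal_dimension_psd(image, noise_psd=None, mtf=None):
    """Estimate the fractal dimension of a 2D image from the slope beta
    of its radially averaged power spectrum: D = 4 - beta/2."""
    psd = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    if noise_psd is not None:           # subtract phantom-measured noise power
        psd = np.clip(psd - noise_psd, 1e-12, None)
    if mtf is not None:                 # undo system blurring: divide by MTF^2
        psd = psd / np.maximum(mtf, 1e-6) ** 2
    # radially average the power spectrum about the zero-frequency center
    h, w = psd.shape
    y, x = np.indices(psd.shape)
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    radial = np.bincount(r.ravel(), weights=psd.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)   # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    beta = -slope
    return 4.0 - beta / 2.0
```

For uncorrelated (white) noise the spectrum is flat (β ≈ 0), so the estimate approaches the upper bound D ≈ 4 of the uncorrected formula; smoother textures yield steeper spectra and smaller D.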
9.3.4.2 Lacunarity
Lacunarity (from lacuna, meaning gap or cavity) is a less frequently used metric that describes the complex intermingling of the shape and distribution of gaps within an image; specifically, it quantifies the deviation of a geometric shape from translational invariance. Lacunarity was originally developed to describe a property of fractals [54, 72] to distinguish between textures of the same fractal dimension. However, lacunarity is not predicated on self-similarity and can be used to describe the spatial distribution of data sets with and without self-similarity [73]. Lacunarity is relatively insensitive to image boundaries, and is robust to the presence of noise and blurring within the image.
Lacunarity is most frequently computed as a function of a local neighborhood (i.e., moving window) of size r. To compute the lacunarity, we first define a “score” S(r, x, y) for each pixel, which is the sum of the pixel values inside the moving window centered on (x, y). The detailed derivation of the lacunarity L(r) can be found in [73]. Simplified, we obtain L(r) as
$$L(r) = \frac{\sigma_S^2(r)}{\bar{S}^2(r)} + 1 = \frac{\frac{1}{N}\sum_{i=1}^{N}\left(S_i(r) - \bar{S}(r)\right)^2}{\bar{S}^2(r)} + 1 \qquad (9.12)$$

where $\bar{S}(r)$ is the mean and $\sigma_S^2(r)$ the variance of the $N$ scores $S_i(r)$ obtained over all window positions.
Equation (9.12) reveals explicitly the relationship between lacunarity and the variance of the scores: Lacunarity relies on the variance of the scores, standardized by the square of the mean of the scores. The lacunarity L(r) of an image at a
particular window size r uses all the scores obtained by exhaustively sampling the image. Thus, in general, as the window size r increases, the lacunarity decreases, approaching unity as the window size approaches the image size (when there is only one measurement and the variance is consequently zero) – or for a spatially random (i.e., noisy) pattern, since the variance of the scores will be close to zero even for small window sizes. The lacunarity defined in (9.12) and its variants (including normalized lacunarity and grayscale lacunarity) are scale-invariant but are not invariant to contrast and brightness transformations, so that histogram equalization of images is a necessary pre-processing step.
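The gliding-box computation behind (9.12) can be sketched in a few lines of numpy. This is an illustrative fragment, not code from the chapter; the function name `lacunarity` is ours. An integral image yields all window scores S(r, x, y) at once:

```python
import numpy as np

def lacunarity(image, r):
    """Gliding-box lacunarity L(r) of (9.12): the scores S are the sums of
    pixel values in every r-by-r window; L = var(S)/mean(S)^2 + 1."""
    # integral image (zero-padded on top/left) gives all window sums in O(N)
    c = np.cumsum(np.cumsum(np.pad(image.astype(float), ((1, 0), (1, 0))),
                            axis=0), axis=1)
    s = c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]
    return s.var() / s.mean() ** 2 + 1.0
```

A perfectly homogeneous image gives L(r) = 1 for any window size (zero variance of the scores), while textured or clumped images give L(r) > 1, decaying toward unity as r grows.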
A plot of lacunarity against window size contains significant information about the spatial structure of an image at different scales. In particular, it can distinguish varying degrees of heterogeneity within an image, and in the case of a homogeneous image it can identify the size of a characteristic substructure. Hierarchically structured random images can be generated using curdling [54]. Higher lacunarity values are obtained when the window sizes are smaller than the scale of randomness, and for images with the same degree of randomness at all levels (viz. self-similar fractals) the lacunarity plots are close to linear, with a slope related to the fractal dimension. Specifically, the magnitude of the slope of the lacunarity plot for self-similar fractals is equal to D − E, where D and E are the fractal and Euclidean dimensions, respectively.
One problem with the lacunarity metric defined in (9.12) is that the vertical scaling is related to the image density, with sparse maps having higher lacunarities than dense maps for the same window size. This complicates the comparison of plots between images of different density. It is possible to formulate a normalized lacunarity whose decay is a function of clustering only and is independent of image density. Such a normalized lacunarity, NL(r), which can assume values between 0 and 1, is obtained by combining the lacunarity of an image, L(r), with the lacunarity of its complement, cL(r) [71, 74]:
$$NL(r) = 2 - \left[\frac{1}{L(r)} + \frac{1}{cL(r)}\right] \qquad (9.13)$$
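Equation (9.13) can be sketched as a self-contained numpy fragment (illustrative only; it assumes a binary map with values 0 and 1, and uses the gliding-box estimator of (9.12) as an internal helper):

```python
import numpy as np

def _lacunarity(img, r):
    # gliding-box lacunarity of (9.12) via an integral image;
    # assumes at least one nonzero pixel so that mean(S) != 0
    c = np.cumsum(np.cumsum(np.pad(img.astype(float), ((1, 0), (1, 0))),
                            axis=0), axis=1)
    s = c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]
    return s.var() / s.mean() ** 2 + 1.0

def normalized_lacunarity(binary_map, r):
    """NL(r) = 2 - [1/L(r) + 1/cL(r)] of (9.13): combines the lacunarity
    of a binary map with that of its complement, so that the decay depends
    on clustering only, not on map density."""
    return 2.0 - (1.0 / _lacunarity(binary_map, r)
                  + 1.0 / _lacunarity(1 - binary_map, r))
```

By construction the expression is symmetric under complementation, which is exactly what makes NL(r) comparable across maps of different density.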
9.3.4.3 Lacunarity Parameters
Lacunarity plots, i.e., plots of L(r) over r, show how the lacunarity varies with scale. The plots monotonically decay to a value of unity at large scales, unless there is considerable periodicity in the image, in which case they can pass through minima (corresponding to the repeat distance) and maxima as they fall to unity. Most real images, as opposed to synthetic images, will show only the monotonic decay. For image features with strict self-similarity, L(r) results in a straight-line plot from (0,1) to (1,0). If this line is seen as the neutral model, the deviation of the (normalized) lacunarity plots from the straight line, calculated as a percentage of the (normalized) lacunarity value, will emphasize subtle differences that are not conspicuous in the decay curves themselves and is useful in identifying size ranges for different tonal features [74]. Positive (negative) deviations indicate greater
Fig. 9.9 Lacunarity plots for three sample textures: highly correlated noise (Perlin noise, dashed line labeled P), uncorrelated Gaussian noise (black circles, labeled GN), and the texture from a CT cross-section of spongy bone in a healthy lumbar vertebra (black diamonds, labeled S). Fitted curves (9.14) are shown in gray. The lacunarity for Perlin noise cannot be described by (9.14). For Gaussian noise, α = 1.5 and β = 0.007 were found, and for the spongiosa texture, α = 0.45 and β = 0.021
(lesser) spatial homogeneity than the underlying scale-invariant neutral (fractal) model. The presence of a prominent maximum would indicate the typical size of a structuring element in the image. Moreover, lacunarity plots often resemble the plot of a power-law function, and Zaia et al. [75] have fitted non-normalized lacunarity plots from binary images to a function of the form
$$L(r) = \frac{\beta}{r^{\alpha}} + \gamma \qquad (9.14)$$
where the parameters α, β, and γ are regression parameters that represent the order of convergence of L(r), the magnitude (vertical scaling) of L(r), and the offset (vertical shift) of L(r), respectively. We have explored fitting to monotonic normalized lacunarity plots, where the parameter γ can be conveniently set to unity, which is the value that NL(r) approaches at large scales. This simplifies the power-law fit to
$$NL(r) - 1 = \frac{\beta}{r^{\alpha}} \qquad (9.15)$$
Examples of lacunarity plots L(r), together with the curve fits using (9.14), are shown in Fig. 9.9. Highly correlated noise (Perlin noise) cannot be described by (9.14). Conversely, both the uncorrelated noise and the texture of spongy bone in a cross-sectional CT slice show a good fit with (9.14), with R2 > 0.99 in both cases. For the uncorrelated noise, the score S(r) rapidly reaches statistical stability for increasing r, and the variance of S over the location of the sliding window becomes very low. The texture of trabecular bone shows a higher variance of S with window location, and its decay with increasing window size is slower. It becomes obvious that window sizes with r > 20 do not carry additional information in this example, where a window size was