
Kluwer - Handbook of Biomedical Image Analysis Vol



Our goal is to develop automated methods for the segmentation of three-dimensional biomedical images. Here, we describe the segmentation of confocal microscopy images of bee brains (20 individuals) by registration to one or several atlas images. Registration is performed by a highly parallel implementation of an entropy-based nonrigid registration algorithm using B-spline transformations. We present and evaluate different methods to solve the correspondence problem in atlas-based registration. An image can be segmented by registering it to an individual atlas, an average atlas, or multiple atlases. When registering to multiple atlases, the individual segmentations can be combined into a final segmentation by atlas selection or by multiclassifier decision fusion. We describe all of these methods and evaluate the segmentation accuracies they achieve, both in experiments with electronic phantoms and by comparing their outputs to a manual gold standard.
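The multiclassifier decision fusion mentioned above can be illustrated with a minimal sketch. Assuming each atlas has already been nonrigidly registered to the target image (the function name and array shapes here are our own, not the authors'), a per-voxel majority vote combines the individual segmentations:

```python
import numpy as np

def fuse_by_majority_vote(segmentations):
    """Combine atlas-based segmentations by per-voxel majority voting.

    segmentations: list of integer label arrays of identical shape, one
    per atlas already registered to the target image. Ties are broken
    in favor of the smallest label index (argmax convention).
    """
    stack = np.stack(segmentations)            # (n_atlases, ...) label stack
    n_labels = int(stack.max()) + 1
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for seg in stack:                          # count votes for each label
        for label in range(n_labels):
            votes[label] += (seg == label)
    return votes.argmax(axis=0)                # winning label per voxel
```

More elaborate fusion schemes weight each atlas by its estimated registration quality, but the vote rule above is the common baseline.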

The present work focuses on the mathematical and computational theory behind a technique for deformable image registration termed Hyperelastic Warping, and demonstrates the technique via applications in image registration and strain measurement. The approach combines well-established principles of nonlinear continuum mechanics with forces derived directly from three-dimensional image data to achieve registration. The general approach does not require the definition of landmarks, fiducials, or surfaces, although it can accommodate these if available. Representative problems demonstrate the robust and flexible nature of the approach.

Three-dimensional registration methods are introduced for registering MRI volumes of the pelvis and prostate. The chapter first reviews the applications, challenges, and previous methods of image registration in the prostate. It then describes a three-dimensional mutual-information rigid-body registration algorithm with special features, and also discusses a three-dimensional nonrigid registration algorithm in which many interactively placed control points are independently optimized using mutual information and a thin-plate spline transformation is established for the warping of image volumes. The nonrigid method works better than rigid-body registration whenever the subject's position or condition changes greatly between acquisitions.
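To make the thin-plate spline warping step concrete, here is a small 2D sketch (our illustration, not the chapter's implementation; the function names are hypothetical) that fits a TPS to matched control points and warps arbitrary coordinates:

```python
import numpy as np

def _tps_kernel(d2):
    """Radial basis U(r) = r^2 log r, evaluated from squared distances d2."""
    K = np.zeros_like(d2)
    mask = d2 > 0.0
    K[mask] = 0.5 * d2[mask] * np.log(d2[mask])   # r^2 log r = d2 * log(d2) / 2
    return K

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping landmark points src onto dst."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    P = np.hstack([np.ones((n, 1)), src])          # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = _tps_kernel(d2)
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)                   # warp weights, then affine coeffs

def tps_apply(params, src, pts):
    """Warp arbitrary 2D points with a fitted thin-plate spline."""
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return _tps_kernel(d2) @ params[:len(src)] + P @ params[len(src):]
```

A useful sanity property: the TPS reproduces any affine map of the landmarks exactly, with zero bending energy, so a pure translation of the control points translates every warped point by the same amount.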

This chapter will cover 1D, 2D, and 3D registration approaches, both rigid and elastic. The mathematical foundation for surface and volume registration approaches will be presented. Applications will include plastic surgery, lung cancer, and multiple sclerosis.

Flow-mediated dilation (FMD) offers a mechanism to characterize endothelial function and therefore may play a role in the diagnosis of cardiovascular diseases. Computerized analysis techniques are very desirable to give accuracy and objectivity to the measurements. Virtually all methods proposed so far to measure FMD rely on accurate edge detection of the arterial wall, and they are not always robust in the presence of poor image quality or image artifacts. A novel method for automatic dilation assessment based on a global image-analysis strategy is presented. We model interframe arterial dilation as the superposition of a rigid-motion model and a scaling factor perpendicular to the artery. The rigid motion can be interpreted as a global compensation for patient and probe movements, an aspect that has not been sufficiently studied before; the scaling factor explains the arterial dilation. The ultrasound (US) sequence is analyzed in two phases using image registration to recover both transformation models. Temporal continuity in the registration parameters along the sequence is enforced with a Kalman filter, since the dilation process is known to be a gradual physiological phenomenon. Comparing automated and gold-standard measurements, we found a negligible bias (0.04%) and a small standard deviation of the differences (1.14%). These values are better than those obtained from manual measurements (bias = 0.47%, SD = 1.28%). The proposed method also offers better reproducibility (CV = 0.46%) than the manual measurements (CV =
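The temporal-continuity idea can be sketched with a scalar Kalman filter under a simple random-walk state model. This is an illustration only: the noise variances below are placeholder values, not parameters from the study, and the real system filters the full set of registration parameters rather than a single scalar.

```python
def kalman_smooth(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter enforcing temporal continuity on a per-frame
    registration parameter (e.g., the arterial scaling factor).

    State model: random walk with process variance q; each frame's
    registration estimate is a measurement with variance r.
    """
    x, p = measurements[0], 1.0       # initial state and state variance
    out = [x]
    for z in measurements[1:]:
        p = p + q                     # predict: random-walk model
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the new measurement
        p = (1.0 - k) * p
        out.append(x)
    return out
```

Because q is much smaller than r, a single outlier frame is pulled strongly toward the running track, which matches the physiological prior that dilation changes gradually.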


This chapter will focus on nonrigid registration techniques. Nonrigid registration is needed to correct for deformations that occur in various contexts: respiration or organ motion, disease progression over time, tissue deformation due to a surgical procedure, intersubject comparison to build anatomical atlases, etc. Numerous registration techniques have been developed; they can be broadly divided into intensity-based (photometric) and landmark-based (geometrical) techniques. This chapter will present up-to-date methods.

This chapter will then present how segmentation and registration methods can cooperate: accurate and fast segmentation can be obtained using nonrigid registration; nonrigid registration methods can be constrained by segmentation methods. Results of these cooperation schemes will be given.

This chapter will finally be concerned with validation of nonrigid registration methods. More specifically, an objective evaluation framework will be presented in the particular context of intersubject registration.

This chapter concerns elastic image registration for biomedical applications. We start with an overview and classification of existing registration techniques. We revisit landmark interpolation and present some generalisations. We then develop a general elastic image registration algorithm that uses a grid of uniform B-splines to describe the deformation, and B-splines for image interpolation as well. Multiresolution in both the image and deformation-model spaces yields robustness and speed. We show various applications of the algorithm on MRI, CT, SPECT, and ultrasound data. A semiautomatic version of the registration algorithm can accept expert hints in the form of soft landmark constraints.
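As a sketch of how a uniform cubic B-spline grid encodes a deformation, consider the 1D case (a toy version with hypothetical names; the chapter's algorithm works in 2D/3D with multiresolution): the displacement at a point is a weighted sum of the four nearest control-point coefficients, with weights given by the cubic B-spline basis.

```python
import numpy as np

def cubic_bspline_weights(u):
    """Cubic B-spline basis weights for a fractional offset u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

def ffd_displacement_1d(x, coeffs, spacing):
    """Displacement at x from a uniform 1D cubic B-spline control grid.

    coeffs must be padded so that indices i-1 .. i+2 are valid for the
    x values of interest.
    """
    t = x / spacing
    i = int(np.floor(t))
    u = t - i
    w = cubic_bspline_weights(u)
    return float(w @ coeffs[i - 1:i + 3])   # four nearest control points
```

The basis weights sum to one for any u (partition of unity), so a constant control grid produces a constant shift; registration then consists of optimizing the coefficients against a similarity measure.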

The chapter will include algorithms based on landmark- and intensity-based image registration. It will compare traditional unidirectional registration algorithms to bidirectional ones that minimize the inverse consistency error. It will discuss how small-deformation models can be used for nonrigid medical image registration in the brain, skull, and inner ear, and how to extend the small-deformation model to the large-deformation model to accommodate locally large deformations. We will provide examples using phantom images and brain images to demonstrate the large-deformation case.


1. Medical Image Registration: Theory, Algorithm, and Case Studies in Surgical Simulation, Chest Cancer, and Multiple Sclerosis
   Aly A. Farag, Sameh M. Yamany, Jeremy Nett, Thomas Moriarty, Ayman El-Baz, Stephen Hushek, and Robert Falk

2. State of the Art of Level Set Methods in Segmentation and Registration of Medical Imaging Modalities
   Elsa Angelini, Yinpeng Jin, and Andrew Laine

3. Three-Dimensional Rigid and Non-Rigid Image Registration for the Pelvis and Prostate
   Baowei Fei, Jasjit Suri, and David L. Wilson

4. Stereo and Temporal Retinal Image Registration by Mutual Information Maximization
   Xiao-Hong Zhu, Cheng-Chang Lu, and Yang-Ming Zhu

5. Quantification of Brain Aneurysm Dimensions from CTA for Surgical Planning of Coiling Interventions
   Hernández, Alejandro F. Frangi, and Guillermo Sapiro

6. Inverse Consistent Image Registration
   G. E. Christensen

7. A Computer-Aided Design System for Segmentation of Volumetric Images
   Marcel Jackowski and Ardeshir Goshtasby

8. Inter-subject Non-rigid Registration: An Overview with Classification and the Romeo Algorithm
   Pierre Hellier

9. Elastic Registration for Biomedical Applications
   Jan Kybic and Michael Unser

10. Cross-Entropy, Reversed Cross-Entropy, and Symmetric Divergence Similarity Measures for 3D Image Registration: A Comparative Study
    Yang-Ming Zhu and Steven M. Cochoff

11. Quo Vadis, Atlas-Based Segmentation?
    Torsten Rohlfing, Robert Brandt, Randolf Menzel, Daniel B. Russakoff, and Calvin R. Maurer, Jr.

12. Deformable Image Registration with Hyperelastic Warping
    Alexander I. Veress, Nikhil Phatak, and Jeffrey A. Weiss

13. Future of Image Registration
    Jasjit Suri, Baowei Fei, David Wilson, Swamy Laxminarayan, Chi-Hsiang Lo, Yujun Guo, Cheng-Chang Lu, and Chi-Hua Tung

The Editors

Index


Chapter 1

Medical Image Registration: Theory, Algorithm, and Case Studies in Surgical Simulation, Chest Cancer, and Multiple Sclerosis

Aly A. Farag,1 Sameh M. Yamany,2 Jeremy Nett,1 Thomas Moriarty,3 Ayman El-Baz,1 Stephen Hushek,4 and Robert Falk5

1.1 Introduction

Registration found its application in medical imaging because physicians are frequently confronted with the practical problem of aligning medical images. Medical registration techniques have recently been extended to relate multimodal images, which makes it possible to superimpose features from different imaging studies. Registration techniques have also been used in stereotactic surgery and stereotactic radiosurgery, which require images to be registered with the physical space occupied by the patient during surgery. New interactive, image-guided surgery techniques use image-to-physical-space registration to track the changing surgical position on a display of the patient's preoperative image sets. In such applications, the speed of convergence of the registration technique is of major importance.

1 Computer Vision and Image Processing Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA
2 System and Biomedical Engineering Department, Cairo University, Giza, Egypt
3 Department of Neurological Surgery, University of Louisville, KY 40292, USA
4 MRI Department, Norton Hospital, Louisville, KY, USA
5 Medical Imaging Division, Jewish Hospital, Louisville, KY, USA





Table 1.1: Most important nomenclature used throughout the chapter

Vector function denoting a point on the model surface
Vector function denoting a point on the experimental surface
General surface
Transformation matrix
Rotation matrix
Translation vector
d(yi, S): the distance of point yi to shape S
Registration objective function
The closest point operator
Grid Closest Point (GCP) transform
3D space subset
Displacement vector in the GCP grid
{R, C, H}: coordinates of the GCP grid
Grid resolution
Grid cell
Centroid of the cell Cijk
α, β, and γ: 3D angles of rotation
Simplex mesh angle
3D point on a free-form surface
Mean curvature of the surface
Normal vector at point P
Set of landmarks
Curvature threshold
Matching value
Overlap ratio
Scale factor
A medical volume
Entropy function
R_f: a reference medical volume

Another example of the use of medical image registration is in neurosurgery where it is useful to identify tumors with magnetic resonance images (MRI), yet the established stereotaxy technology uses computed tomography (CT) images. Being able to register these two modalities allows one to transfer the coordinates of tumors from the MR images into the CT stereotaxy. It is similarly useful to transfer functional information from SPECT or positron-emission tomography (PET) into MR or CT for anatomical reference, and for stereotactic exploitation.

The currently used imaging modalities can be generally divided into two main categories: one related to the anatomy being imaged and the other to the functionality represented in the image. The first includes X-ray, CT (computed tomography), MRI (magnetic resonance imaging), US (ultrasound), portal images, and (video) sequences obtained by various catheter scopes, e.g., by laparoscopy or laryngoscopy. Other prominent derivative techniques include MRA (magnetic resonance angiography), DSA (digital subtraction angiography, derived from X-ray), CTA (computed tomography angiography), and Doppler (derived from US, referring to the Doppler effect measured). Functional modalities include (planar) scintigraphy, SPECT (single photon emission computed tomography), and PET (positron emission tomography), which together make up the nuclear medicine imaging modalities, and fMRI (functional MRI).

An eminent example of the use of registering different modalities can be found in the area of epilepsy surgery [1]. Patients may undergo various MR, CT, and DSA studies for anatomical reference; ictal and interictal SPECT studies; MEG; and extra- and/or intracranial (subdural or depth) EEG, as well as PET studies. Registration of the images from practically any combination will benefit the surgeon. A second example concerns radiotherapy treatment, where both CT and MR can be employed. The former is needed to accurately compute the radiation dose, while the latter is usually better suited for delineation of tumor tissue.

A more prominent example yet is the use of registration within the same modality, i.e., monomodal registration; for example, in the qualitative evaluation of multiple sclerosis (MS) studies, where multiple MRI scans of the same patient taken at different times must be compared with one another. Because the positioning of the anatomy in the scanner is largely arbitrary, in a slice-by-slice comparison between studies quite different anatomy can by chance be located on the same slice numbers. The goal of registration, therefore, is to align the anatomy from one scan to the anatomy from another.

Medical registration spans numerous applications, and a large number of different techniques have been reported in the literature. What follows is an attempt to classify these techniques and categorize them according to several criteria.

1.2 Medical Registration Classifications

The classification of registration methods used in this chapter is based on the criteria formulated by van den Elsen, Pol, and Viergever [2]. Maintz and Viergever [1] provided a good survey of the different classification criteria. In this section we summarize the seven basic classification criteria commonly used (for more details and further reading, see the Maintz and Viergever review).

The seven criteria are:

1. Dimensionality
2. Nature of registration basis
3. Nature of transformation
5. Optimization procedure
6. Modalities involved

1.2.1 Dimensionality

The main division here is whether the scope of the registration involves spatial dimensions only, or whether a time series is also involved. For spatial registration, there are (i) 3D/3D registration, where two or more volumes of interest are to be aligned, and (ii) 2D/2D registration, where two medical images are to be aligned; in general, 2D/2D registration is less complex than 3D/3D registration. More complex still is (iii) 2D/3D registration, which involves either the direct alignment of spatial data to projective data (e.g., a preoperative CT image to an intraoperative X-ray image) or the alignment of a single tomographic slice to spatial data. Time can be an additional dimension when a patient's images and volumes are to be tracked over time for analysis or monitoring.

1.2.2 Nature of Registration Basis

In this category, registration methods can be divided into extrinsic methods, i.e., based on foreign objects introduced into the imaged space, and intrinsic methods, i.e., based on the image information as generated by the patient. Extrinsic methods rely on artificial objects attached to the patient, objects designed to be well visible and accurately detectable in all of the pertinent modalities. As such, the registration of the acquired images is comparatively easy and fast, can usually be automated, and, since the registration parameters can often be computed explicitly, has no need for complex optimization algorithms. The main drawback of extrinsic registration is that, for good accuracy, invasive marker objects (e.g., a stereotactic frame or screw markers) must be used. Non-invasive markers (e.g., skin markers, individualized foam moulds, head-holder frames, or dental adapters) can be used, but as a rule are less accurate.

Intrinsic registration can rely on landmarks in the images or volumes to be aligned. These landmarks can be anatomical, based on morphological points of some visible anatomical organ(s), or purely geometrical. Intrinsic registration can also be based on segmentation results. Segmentation in this case can be rigid-model based, where anatomically identical structures (mostly surfaces) are extracted from both images to be registered and used as the sole input for the alignment procedure, or deformable-model based, where an extracted structure (again mostly surfaces and curves) from one image is elastically deformed to fit the second image. The rigid-model based approaches are probably the most popular methods, owing to their easy implementation and fast results. A drawback of rigid segmentation-based methods is that the registration accuracy is limited by the accuracy of the segmentation step. In theory, rigid segmentation-based registration is applicable to images of many areas of the body; in practice, however, the application areas have largely been limited to neuroimaging and orthopedic imaging.
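A rigid surface-based alignment of this kind is commonly computed with an iterative closest point (ICP) scheme. The toy sketch below (our illustration, not the chapter's algorithm; names are our own) alternates closest-point matching between two extracted point sets with a least-squares rigid update via the Kabsch/SVD solution:

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rigid transform (R, t) with q_i ≈ R p_i + t,
    given known point correspondences (Kabsch/SVD solution)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                     # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(moving, fixed, iterations=20):
    """Iterative closest point on two 3D point sets (brute-force matching)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = moving @ R.T + t
        d2 = ((moved[:, None, :] - fixed[None, :, :]) ** 2).sum(-1)
        matches = fixed[d2.argmin(axis=1)]        # closest fixed point for each
        R, t = best_rigid(moving, matches)
    return R, t
```

Production implementations replace the brute-force matching with a spatial data structure (e.g., a k-d tree, or the Grid Closest Point transform discussed later in this chapter) so each iteration runs in near-linear time.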

Other examples of intrinsic registration are the voxel-based registration methods, which operate directly on the image gray values, without prior data reduction by the user or by segmentation. There are two distinct approaches: the first is to immediately reduce the image gray-value content to a representative set of scalars and orientations; the second is to use the full image content throughout the registration process. Principal axes and moment-based methods are the prime examples of reductive registration methods. In these methods the image center of gravity is computed from the zeroth- and first-order moments, and the principal orientations (principal axes) from the second-order central moments. Registration is then performed by aligning the centers of gravity and the principal orientations. The result is usually not very accurate, and the method is not equipped to handle differences in scanned volume well. Despite these drawbacks, principal axes methods are widely used in registration problems that require no high accuracy, because of their automatic, very fast operation and easy implementation. On the other hand, voxel-based registration using the full image content is more
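The moments-based reduction just described can be sketched as follows (an illustration in index coordinates, not code from the chapter): compute each volume's centroid and principal axes from its moments, then derive the rigid transform that aligns them.

```python
import numpy as np

def principal_axes(volume):
    """Centroid and principal axes of an intensity volume.

    The centroid comes from the zeroth- and first-order moments; the
    principal axes are the eigenvectors of the matrix of second-order
    central moments, sorted by decreasing eigenvalue.
    """
    coords = np.indices(volume.shape).reshape(volume.ndim, -1).astype(float)
    w = volume.reshape(-1).astype(float)
    m0 = w.sum()                                  # zeroth-order moment
    centroid = coords @ w / m0                    # first-order moments / m0
    centered = coords - centroid[:, None]
    cov = (centered * w) @ centered.T / m0        # second-order central moments
    evals, evecs = np.linalg.eigh(cov)
    evecs = evecs[:, np.argsort(evals)[::-1]]
    for j in range(evecs.shape[1]):               # fix eigenvector sign ambiguity
        k = np.argmax(np.abs(evecs[:, j]))
        if evecs[k, j] < 0:
            evecs[:, j] *= -1
    return centroid, evecs

def principal_axes_transform(moving, fixed):
    """Rigid transform (R, t) aligning the moving volume's centroid and
    principal axes with those of the fixed volume."""
    c_m, A_m = principal_axes(moving)
    c_f, A_f = principal_axes(fixed)
    R = A_f @ A_m.T
    if np.linalg.det(R) < 0:                      # enforce a proper rotation
        A_m[:, -1] *= -1
        R = A_f @ A_m.T
    return R, c_f - R @ c_m
```

The sign fix reflects the method's inherent 180-degree axis ambiguity, one reason its result is usually treated only as a coarse initialization for a more accurate registration.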