
392

Citardi and Kokoska

able in certain cases of congenital and acquired craniofacial anomalies. Our paradigm for measurement of the bony nasal pyramid can provide objective data for that monitoring. The paradigm also may provide guidance for the preoperative planning of craniofacial surgical procedures. The derived measurements may ultimately guide the design of nasal augmentation and reduction.

21.7 CAS CEPHALOMETRICS CHALLENGES

The objective data provided by CAS cephalometrics require high-quality, thin-cut CT images. Poor image quality from any cause will limit the accuracy of the measurements. For this reason, specific protocols that ensure the quality of the scans are necessary, and casual adherence to these protocols must be discouraged.

CAS cephalometrics requires that the relative orientation of the subject in the image volume be maintained throughout image acquisition. For all of our protocols, the axial CT plane was parallel to the Frankfort plane, and the subject was not rotated within this plane. Failure to maintain an appropriate orientation needlessly complicates the review of the resultant images. Furthermore, this deficiency will also lead to inaccurate measurements, since the definitions of the various parameters assume that the subject orientation is consistently in the ideal position.

It should be added that the above comments about orientation apply only to currently available software. In theory, it may be possible to standardize the orientation via software manipulations. Future software may permit the imposition of a virtual frame on the CT data set. In this concept, the axial CT data would be used to reconstruct coronal and sagittal planar data as well as the three-dimensional model, and the selection of specific anatomical points would serve to orient the virtual frame, which the software would use as a reference for reformatting the images. The reformatted images would represent the true axial plane, which would be parallel to the Frankfort plane, as well as the orthogonal coronal and sagittal planes. Because of the extent of the image manipulations, the raw CT data would need to be of very high quality; 1 mm contiguous slices would probably be necessary. Conceivably, overlapping slices would enhance the image reformatting.
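Although no such virtual-frame software existed at the time of writing, the underlying geometry is straightforward. The following minimal Python sketch (purely illustrative landmark coordinates and function names, not any vendor's actual implementation) shows the two-dimensional core of the idea: computing the tilt of the porion–orbitale (Frankfort) line in the sagittal plane and rotating coordinates so that the line becomes parallel to the axial plane. Real software would extend this to a full three-dimensional resampling of the voxel data.

```python
import math

def frankfort_tilt(porion, orbitale):
    """Angle (radians) between the porion-orbitale line and the
    axial plane, measured in the sagittal (y, z) plane."""
    dy = orbitale[0] - porion[0]  # anterior-posterior offset
    dz = orbitale[1] - porion[1]  # superior-inferior offset
    return math.atan2(dz, dy)

def level_to_frankfort(point, tilt):
    """Rotate a sagittal (y, z) coordinate by -tilt about the origin,
    bringing a line tilted by `tilt` into the axial plane."""
    y, z = point
    c, s = math.cos(-tilt), math.sin(-tilt)
    return (c * y - s * z, s * y + c * z)
```

After this rotation, a point lying on the Frankfort line has zero superior-inferior offset, i.e., the line has been leveled into the reformatted axial plane.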

The application of a standardized external frame, whose purpose and function would be similar to those of a traditional cephalostat, also may provide important orientation data. In this approach, the patient/subject would wear the frame during CT scan acquisition. The frame provides an external reference for CT image analysis; in this way, its function mimics that of the frames used in framed stereotaxy procedures. After the CT images have been transferred to the computer workstation, the software could maintain orientation relative to the frame (Figure 21.10).

FIGURE 21.10 The SAVANT (CBYON, Palo Alto, CA) prototype head frame supports semiautomatic registration for sinus surgery. This screen capture shows the head frame in the reconstructed three-dimensional model. The SAVANT software can automatically recognize the positions of fiducial markers that are built into the headset. Because this head frame is designed so that it can be reproducibly placed on the patient's head, it has a unique position relative to the underlying craniomaxillofacial skeleton. As a result, it may serve as a reference for the proper orientation of CT images. In this theoretical approach, CAS software may construct a virtual cephalostat for computer-aided cephalometrics.

The ultimate clinical utility of the information offered by CAS cephalometrics cannot be readily determined. Intuitively, CAS cephalometrics can provide significant information; however, many issues need to be resolved. The exact parameters that depict useful anatomical relationships have not been established. In fact, specific items, such as SBR or MP, may eventually be replaced by other parameters that carry more clinical significance. The current proposal needs firm validation; as clinical experience with this technology grows, it is likely that the described approaches will undergo revision.

To date, CAS cephalometrics has been greatly influenced by the limitations of the current CAS software, which was designed for surgical navigation and preoperative planning. The software clearly has not been optimized for the quantitative image analysis that cephalometrics requires. Despite this obvious problem, the current CAS software tools offer remarkable capabilities for quantitative studies of the craniomaxillofacial skeleton. It may be anticipated that future versions of the software will be better adapted for cephalometric-type analysis.


22

Computer-Aided Craniofacial Surgery

Alex A. Kane, M.D.

Washington University School of Medicine, St. Louis, Missouri

Lun-Jou Lo, M.D.

Chang Gung Memorial Hospital and Chang Gung Medical College, Taipei, Taiwan

Jeffrey L. Marsh, M.D.

Washington University School of Medicine and St. Louis Children’s Hospital, St. Louis, Missouri

22.1 INTRODUCTION

Before discussing the subject of computer-aided craniofacial surgery, it seems appropriate to address the need for a discrete chapter on craniofacial surgery within a text whose focus is otorhinolaryngology–head and neck surgery. The term "craniofacial surgery" has come to connote reconstruction of major congenital or acquired deformities of the skull and face, including orthognathic surgery, as opposed to surgery for infectious or neoplastic head and neck disorders. In the mid-1960s, Tessier introduced craniofacial surgery as a discipline when he described the first intra/extracranial correction of orbital hypertelorism. Subsequently, the domain of craniofacial surgery has been regionalized into anatomical zones based upon skeletal subunits and their associated soft tissues: the calvaria, the upper face (orbits, zygomas, and brows), the midface (maxillae and zygomas), and the lower face (mandible). A set of technical intraoperative maneuvers has also come to characterize craniofacial surgery; these technical features include the use of concealed incisions (coronal, conjunctival, intraoral), the intra/extracranial approach to the upper and midface, and deconstruction/reconstruction of the craniofacial skeleton [1]. The anatomical complexity of these types of operations and the absence of easy anatomical locators for mobilized osseous segments (such as the teeth for orthognathic surgery) have made craniofacial surgeons dependent upon medical imaging since the inception of the discipline. The addition of the computer to such imaging has facilitated the understanding of aberrant anatomy, increased operative safety, and improved operative results.

Computer-aided craniofacial surgery is a broad term that has been applied to a variety of frequently overlapping activities. These related disciplines include visualization and mensuration of craniofacial anatomy, preoperative planning of surgical interventions, intraoperative image-based guidance of surgical therapy, and postoperative evaluation of therapy and growth. The computer has become an indispensable tool for the craniofacial surgeon. It allows the practitioner to more effectively understand the patient's clinical problem, to share this understanding with both patient and colleague, to plan and execute an operation, and to obtain feedback regarding the outcome. The purpose of this chapter is to survey the current use of computer-aided surgery (CAS) technology in all of these aspects of craniofacial surgery.

22.2 VISUALIZATION AND MENSURATION OF CRANIOFACIAL ANATOMY

22.2.1 Data Acquisition and Processing

Surgeons have traditionally interacted with anatomy through the direct experiences of physical examination, intraoperative observation, and postmortem dissection. Contemporary computer-aided medical imaging has greatly expanded in vivo anatomical study through its conversion of the physical body into a digital data set. The processing of this raw 3D image data is essential to all of the applications of computers in craniofacial surgery. It is therefore important to understand how such data are acquired and assembled before considering their use by the craniofacial surgeon.

The unique assistance lent to the craniofacial surgeon by the computer comes from its power to provide unlimited custom anatomical images, unconstrained by a restrictive point of view, by the need to preserve essential structures, by tissue density, or by the destruction of tissue (such as the actual cutting of living flesh or a cadaver). Furthermore, the ability of contemporary computers to process image data rapidly has facilitated their incorporation into clinical practice.

The main source of in vivo anatomical data for the craniofacial surgeon has been the computed tomography (CT) scanner, although magnetic resonance imaging (MRI) is increasingly in use. The basic concepts of data assembly and segmentation are similar regardless of the source of the three-dimensional (3D) data.


To create the images, the CT scanner passes a beam of x-rays through tissue and then assigns a gray-scale value, called a Hounsfield value or CT intensity, to each small quantity of tissue. Each of these small quantities is called a pixel, and each image is composed of a fine grid of pixels. Many modern CT scanners create images consisting of a 512 × 512 pixel matrix. Each square pixel contains the Hounsfield number of a small quantity of the patient's tissue. Hounsfield intensities range from approximately −1000 for air to about 3000 for dense bone; water is always defined as having 0 intensity. The exact Hounsfield range differs slightly for each scanner.

22.2.2Three-Dimensional Image Volume Assembly

Initially CT and, later, MRI data were presented as ‘‘slice’’ images, which resembled the cross-sectional anatomy seen in the pathology laboratory. The triple presentation of whole body slice, CT scan slice, and MRI slice used to study and display the ‘‘visible’’ man and woman documents the basic ‘‘slice’’ display modality. However, since surgeons, anthropologists, anatomists, and dysmorphologists usually interact with humans as surfaces and volumes, display of CT data as either surfaces or volumes has facilitated their incorporation into clinical and research roles. Such reformatted data has come to be known as 3D CT (or 3D MRI, if the data source is the MRI scanner).

It is important to clarify the term ‘‘3D’’ in computer-aided medical imaging. In this context, 3D refers to the data and not the presentation of that data. Data with x,y,z coordinates are 3D data, but that same data displayed on a computer screen are a two-dimensional (2D) representation of that data. This distinction is more than semantic and is easily communicated with an example. An artist, when depicting a space-filling subject, can do so by a drawing or by sculpture. A sketch is a 2D representation of a 3D surface, while a sculpture is a 3D representation of a 3D surface. A photograph of a drawing is a 2D representation of a 2D surface, while the photograph of the sculpture of the same model is a 2D representation of a 3D surface. Whereas a drawing needs to be redone every time the artist wants to render the subject from a different viewing position, the sculptor need only change the position of the sculpture to get the new effect. CT and MRI scanners capture 3D data, and software can be used to render these data from any viewpoint. These 2D renderings are analogous to having the ability to photograph a sculpture from an arbitrary viewpoint without the geometric distortion that photography introduces. CT and MRI image data can also be presented three-dimensionally using visualization technology such as holography or stereoimages or, more conveniently, as life-sized models.

The first task in CT and MRI image data postprocessing (image data manipulation after initial acquisition and storage of planar scan images) is to prepare a 3D volume of data from the set of standard 2D image data that a scanner routinely outputs. In our institutions, CT scans are acquired according to a standardized protocol, where contiguous, nonoverlapping, chin-to-vertex images are taken, each in a plane that is parallel to the orbitomeatal plane. The actual number of images per scan varies proportionately with patient size, the matrix size in which the data are stored, and the amount that the scanner table moves between each newly acquired image. For patients who are older than 6 months, images are taken every 2 mm, while younger infants have images taken with every 1 mm advance of the table. Using this protocol, a typical head CT scan contains 80–180 images. The data for the images in the series are stored in files on the hard drive of the computer that controls the scanner.

Previously, the exact format in which the images were stored varied tremendously, not only among scanners made by different manufacturers, but also across generations from the same manufacturer. This inconsistency caused considerable logistical difficulties for postprocessing. Now, with the nearly universal acceptance of the DICOM data format [2], most software packages can read the image data generated by equipment from different makers. Once acquired, the image data are transferred to the postprocessing computer workstation, which runs a software package to manipulate the data. Data transfer from the acquisition site in the radiology department to the postprocessing site is today usually achieved over a network. The transfer task has been greatly facilitated by the installation of picture archiving and communication systems (PACS) at many hospitals, which can route DICOM data automatically or semiautomatically [3]. A number of commercial 3D surface and volumetric CT/MR imaging software packages exist. Although each of these applications offers varying capabilities, all of them are similar in that they take as their input the raw image data that are the output of the scanner. Until recently, many of the software packages available for manipulating images could be run only on expensive, specialized, UNIX-based computer workstations. This situation has changed dramatically in the past few years as the power of personal computers has increased, as has the number of imaging packages for these less expensive platforms.

The key concept for understanding the creation of a 3D volume from a set of 2D images is the pixel-to-voxel transition. A voxel is a cube of tissue, rather than a square of tissue like the pixel. The computer uses a mathematical algorithm called trilinear interpolation to add a third dimension to pixels, thereby expanding them to create cubic voxels. Thus, the paradigm shifts from a Hounsfield value in a two-dimensional square to a Hounsfield value in a three-dimensional cube. Schematically, 2D images are transformed into 3D image slabs, which are then stacked in order to assemble the 3D image volume.
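As a concrete illustration, the slice-to-volume step can be sketched in a few lines of Python. The example below (hypothetical function names; real packages interpolate in all three dimensions at once) shows linear interpolation between two adjacent slices, which is the one-dimensional core of the trilinear scheme, and the stacking of slices into a volume:

```python
def interpolate_slices(slice_a, slice_b, t):
    """Linearly interpolate between two 2D pixel grids (0 <= t <= 1);
    this is the one-dimensional core of the trilinear scheme used to
    fill in voxel data between acquired CT slices."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(slice_a, slice_b)]

def build_volume(slices, substeps):
    """Stack acquired slices into a volume, inserting interpolated
    slabs between each adjacent pair so that the slice spacing
    approaches the in-plane pixel size (i.e., cubic voxels)."""
    volume = []
    for a, b in zip(slices, slices[1:]):
        for k in range(substeps):
            volume.append(interpolate_slices(a, b, k / substeps))
    volume.append(slices[-1])
    return volume
```

For example, with `substeps=2`, one interpolated slab is inserted halfway between each pair of acquired slices, halving the effective slice spacing.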

The assembly of the 3D image volume allows it to be displayed using any one of a number of rendering algorithms. These algorithms differ somewhat between software packages but can be divided into two general types: volume-rendering algorithms and surface-rendering algorithms. In volume rendering, a subset of voxels determined by the user is displayed by projecting these voxels into 2D pixel space using a ray-casting algorithm. The more external layers of selected voxels can be made to appear translucent by altering their opacity, visualizing the external and internal structure of an object of interest simultaneously. In surface rendering, the volumetric data must first be converted into geometric contours or wire-frame polygon meshes of the outer surface of the object of interest. This technique assumes that the voxels that compose the object of interest have already been identified. This process of identifying an object of interest is called segmentation, which will be discussed further below. These polygon meshes or contours are then rendered for display using conventional geometric rendering techniques and shaded using one of a variety of reflectance models (e.g., Phong, Gouraud).

Each of these techniques has advantages and disadvantages. In surface rendering, the conversion step to the polygonal mesh surface can be computationally intensive. Such conversion steps are prone to sampling artifacts, due to decisions made in the algorithm regarding polygon placement, which secondarily may lose detail. However, once it is segmented and converted, the surface is fast to render, because only surface points are considered. In addition, surface rendering is often performed with the assistance of specialized hardware cards rather than in a purely software algorithm. In contrast, since the entire set of image data is considered at all times in volume rendering, the computer needs to consistently interact with all of the data during each new rendering. In volume rendering, no segmentation needs to be done prior to rendering, and any part, including internal structures, may be rendered and is always available for viewing.
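For illustration, a maximum-intensity projection is one of the simplest forms of ray casting used in volume rendering: each ray keeps only the brightest voxel it passes through. The following simplified Python sketch (parallel rays cast straight along the slice axis; production renderers are far more elaborate, with opacity and shading) conveys the idea:

```python
def mip_render(volume):
    """Parallel-ray maximum-intensity projection: cast one ray per
    (row, col) pixel straight through the volume's slice axis and
    keep the brightest voxel encountered along each ray."""
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(volume[z][r][c] for z in range(depth))
             for c in range(cols)] for r in range(rows)]
```

Because every voxel along every ray is examined, the cost of each rendering grows with the full volume size, which is why volume rendering must repeatedly touch all of the data, as noted above.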

Once the 3D volume has been assembled and visualized, several display tools are available to work with it. The simplest tool is rotation: the volume can be turned about any combination of the three coordinate axes. Clipping is another simple tool, allowing the user to visualize a portion of the 3D volume constrained by a set of planes specified by the user. Volume reslicing can also be performed, allowing the user to visualize the data in slices parallel to any arbitrary user-specified plane.
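These display tools reduce to simple coordinate operations. The Python sketch below (illustrative function names, not a real package's API) shows a single-axis rotation of a coordinate and the simplest case of clipping, constrained by two axial planes:

```python
import math

def rotate_about_z(point, degrees):
    """Rotate an (x, y, z) coordinate about the z axis; rotations
    about the other two axes are analogous, and combinations of the
    three turn the volume to any orientation."""
    x, y, z = point
    a = math.radians(degrees)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

def clip_slab(volume, zmin, zmax):
    """Clipping constrained by two axial planes: keep only the
    slices in [zmin, zmax)."""
    return volume[zmin:zmax]
```

Reslicing along an arbitrary plane combines both ideas: rotate the sampling grid, then interpolate voxel values along the rotated planes.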

22.2.3 Segmentation

One of the most common manipulations of the data is isolation of anatomical subunits for closer study or independent display. These subunits usually are specific bones or soft tissues, such as the mandible or the orbital contents. This process, called segmentation, divides the volume into subset collections of voxels, which are called objects.

One of the simplest tools for segmentation is thresholding. This is a method of instructing the computer to display only those voxels that contain Hounsfield numbers that meet user-specified criteria. For example, in CT data, all soft tissues in the head have Hounsfield values less than a certain critical value, which is necessarily less than the Hounsfield value of bone. If we assume, for purposes of illustration, that this critical value is 90, then one can display only the bone by applying a threshold criterion requiring voxels to contain Hounsfield values greater than 90 in order to be retained and displayed. Thus, thresholding is a simple and efficient way of segmenting bone on the basis of the similar gray-scale intensities of the desired object.
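The threshold criterion just described can be expressed in a few lines. This Python sketch uses the chapter's illustrative cutoff of 90 HU; the exact value would depend on the scanner:

```python
def threshold_segment(volume, cutoff=90):
    """Binary mask of voxels whose Hounsfield value exceeds `cutoff`.
    With the illustrative cutoff of 90 HU, the mask retains bone
    (high HU) and discards air and soft tissue (low HU)."""
    return [[[1 if hu > cutoff else 0 for hu in row]
             for row in plane] for plane in volume]
```

The resulting mask can then be handed to a surface-rendering step or used as the starting point for the other segmentation techniques described below.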

While segmentation of anatomical objects of interest can be done in many ways [4], three of the most common techniques will be reviewed: slice-by-slice region-of-interest outlining using cursor and mouse; coring out full-thickness pieces of the volume; and segmentation by connecting adjacent voxels. The first technique, slice-by-slice outlining, uses a mouse to edit each slice of the volume in order to isolate the object of interest. This is the most time-consuming of the techniques, requiring repetitive user interaction with every slice that contains the object of interest. The second method, coring, takes full-thickness pieces of the volume, based upon a user-defined trace upon a volume-rendered image, and assigns the cored piece to a new object, much like coring an apple. The coring procedure is repeated, outlining the object more closely from multiple different viewpoints and trimming away unwanted voxels. The third technique, called seeding, exploits the computer's ability to connect all voxels adjacent to a user-chosen "seed voxel" that contain nonzero Hounsfield values, thereby exploiting the spatial contiguity of the voxels that make up the object of interest.
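The seeding technique can be sketched as a breadth-first traversal. The Python example below is illustrative only: it grows a region through the 6-connected neighbors of a seed within a binary mask (rather than raw Hounsfield data), which captures the essential logic of connecting contiguous voxels:

```python
from collections import deque

def seed_grow(mask, seed):
    """Grow a region from `seed` by repeatedly absorbing 6-connected
    neighbors that are set in the binary `mask` (the 'seeding'
    segmentation technique)."""
    depth, rows, cols = len(mask), len(mask[0]), len(mask[0][0])
    region, frontier = set(), deque([seed])
    while frontier:
        z, r, c = frontier.popleft()
        if (z, r, c) in region:
            continue  # already absorbed
        if not (0 <= z < depth and 0 <= r < rows and 0 <= c < cols):
            continue  # outside the volume
        if not mask[z][r][c]:
            continue  # voxel not part of the object
        region.add((z, r, c))
        for dz, dr, dc in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            frontier.append((z + dz, r + dr, c + dc))
    return region
```

Note that a second, disconnected cluster of voxels in the mask is never reached, which is exactly why seeding isolates one contiguous anatomical object.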

Once the objects of interest have been segmented using these techniques, several tools are available for measuring quantities of interest pertaining to them. The volume of any object can be calculated by the computer. Since the volume of a single cubic voxel is known and the number of voxels in the object is known, calculating the volume is simply the product of the unit voxel volume times the number of voxels in the object. Since every voxel has known x,y,z coordinates, the computer can also calculate the distance between any two voxels or the angle formed by lines connecting any three voxels. The computer can also calculate 2D or 3D distances on any object. Although these distances may seem similar, 2D and 3D distances are not the same measurements. A 3D distance is analogous to what one would get by using a caliper on the 3D volume or by using a caliper on a sculpture. The 3D distance is the same no matter what perspective the object is rendered from, much as a fixed distance between two points on a sculpture is insensitive to how the sculpture is rotated on its pedestal. The 2D distance is different in that it is a projection of a 3D distance onto a 2D surface, and as such it is sensitive to the viewpoint from which it is rendered. (If one photographed a sculpture from two different angles and measured the distance between two points on the pictures, the distances between the points would vary.)
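These volume and distance computations are elementary. The Python sketch below (illustrative function names; coordinates assumed to be in millimeters) also shows why a projected 2D distance can differ from the caliper-like 3D distance:

```python
import math

def object_volume(voxel_count, voxel_size_mm):
    """Object volume in cubic mm: the unit voxel volume times the
    number of voxels in the segmented object."""
    return voxel_count * voxel_size_mm ** 3

def distance_3d(p, q):
    """Caliper-like 3D distance between two voxel coordinates."""
    return math.dist(p, q)

def distance_2d_projected(p, q, drop_axis=2):
    """2D distance between the same two points after projecting onto
    a rendering plane (here, by dropping one axis); unlike the 3D
    distance, this value changes with the viewpoint."""
    pp = [v for i, v in enumerate(p) if i != drop_axis]
    qq = [v for i, v in enumerate(q) if i != drop_axis]
    return math.dist(pp, qq)
```

For two points separated mostly along the dropped (viewing) axis, the projected 2D distance is much shorter than the true 3D distance, mirroring the photographed-sculpture example above.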

Object segmentation is the basis for many of the applications of computers in craniofacial surgery. Objects of interest within the 3D image volume can be moved arbitrarily within the volume while the rest of the volume is kept stationary. This type of procedure is particularly useful in surgical planning, which will be described in detail later in this chapter. There is no limit to the number of segmentations that can be performed on a single 3D dataset, nor are there limits to the ways in which such segmentations can be usefully displayed with different coloring and opacities applied. It is often challenging to find methods of segmentation that can be performed rapidly and with some automated assistance from the computer, rather than requiring manual slice-by-slice work. Considerable effort is being devoted to image-processing methods for automated and semiautomated segmentation of the anatomy of interest to craniofacial surgeons, since few surgeons can spend the time necessary to master manual segmentation, although these tasks are becoming simpler as the technology improves.

22.3 PREOPERATIVE MODELING, PLANNING, AND SIMULATION

22.3.1 Rapid Prototyping Modeling Technologies and Applications

Computers have greatly facilitated the rapid creation of high-quality three-dimensional physical models. These models precisely reproduce the anatomy of interest to the craniofacial surgeon. The models may be used to plan osteotomies and hardware placement (e.g., prebending of reconstructive plates) during mock surgery or surgical simulation. Proponents of these models believe that the ability to simulate and practice the surgery may lead to decreased operative times and superior results, although it is difficult to support such claims quantitatively. The other use for such models is in the creation of prefabricated implants for reconstruction [5]. Implants can be created directly, in which case the model produced by the prototyping machine serves as the implant; in this instance, the implant may be made from glass-fiber nylon or acrylic. Alternatively, the prototyping machine can produce a reverse model (a form or mold) into which semisolid biocompatible hydroxyapatite can be placed in order to give the desired shape to the implant.

In general, rapid prototyping technologies can reproduce any object that can be analyzed through segmentation. In this way, the model is derived from base imaging data, which typically is a high-resolution CT scan. There is