9

The Neuroradiology Perspective on Computer-Aided Surgery

S. James Zinreich, M.D.

Johns Hopkins Medical Institutions, Baltimore, Maryland

9.1 INTRODUCTION

The evolution of various neuroimaging modalities has significantly improved both the diagnosis and treatment of central nervous system anomalies and diseases. The excellent visual display of brain and spine anatomy afforded by both computed tomography (CT) and magnetic resonance (MR) imaging has improved not only the identification and diagnosis of central nervous system pathology, but also the surgical interventions used to treat such pathologies. With software advances, digitized imaging information can easily be reconstructed to provide a three-dimensional display. The three-dimensional images can be viewed from virtually any orientation and segmented to display ''hidden'' areas. Furthermore, images obtained from various modalities can be superimposed upon each other to provide a functional understanding of both the morphology and underlying pathology of the evaluated area. All of this information can be made available to the referring clinician/surgeon for evaluation and treatment planning. Prior to surgery, such information may be studied to gain familiarity with the regional anatomy, establish a surgical approach, and define the extent of a surgical procedure. Nevertheless, once in the surgical suite, the surgeon has traditionally had to rely on the information assessed prior to surgery, guided only by memory and visual acuity. Unfortunately, in neurosurgical procedures, the surgeon's vision cannot precisely delineate the boundary between normal and abnormal tissue, a boundary more sensitively displayed by MR imaging. Thus, the ultimate goal is to make MR imaging (MRI) available within the surgical suite to guide the surgical procedure. This need established the field of frameless stereotactic surgery.

Perhaps the most significant neurosurgical advance of the past decade is the development and evolution of various frameless image-guided surgical systems. Computer-aided image guidance dramatically improves the surgeon's visualization of the operative field both before and during surgery, thereby reducing both the invasiveness and potential morbidity of a host of neurosurgical procedures.

9.2 EVOLUTION OF COMPUTER-AIDED SURGERY

The development of current computer-aided surgery (CAS) systems began with the pioneering work that produced frameless stereotactic image-guided neurosurgery. Although the concept of stereotaxy was introduced in the early part of the twentieth century, Spiegel et al. [1] were the first to describe a device used in humans that relied on a frame fixed relative to external landmarks. The advent of computed tomography made three-dimensional morphological maps of the brain possible, which in turn encouraged the increased use of stereotactic frames in surgical procedures. However, the frames are cumbersome (and in some cases painful for the patient), and they often restrict the surgeon's access to the surgical field. Positioning this instrumentation requires significant time and effort from the neurosurgeon, and more recently third-party payers have not consistently reimbursed for this additional time.

Thus began the development of methods that would provide stereotaxy without the use of a frame. During a neurosurgical procedure, the head is totally immobilized in a mechanical clamp (Mayfield frame). From the perspective of registration, the ''fixation'' of the anatomical area and the absence of pulsation and breathing motion made neurosurgical procedures ideally suited for the introduction of image guidance (also known as surgical navigation).

Basic navigational systems consisted of two components: (1) a sensor that relates the patient's position to the morphological area of interest and (2) the patient's imaging information contained within the computer hardware/software. The sensor is used to co-register the patient with the patient's imaging information in the computer and subsequently serves as a ''pointer'' during surgery. The sensor tip is represented on the computer screen by a set of crosshairs, so the tip of the ''pointer'' in the surgical field is easily followed on the computer screen. This crucial interaction includes the registration process as well as a tracking process for following surgical instruments. A separate sensor can also be attached to the patient to update the computer information with regard to patient motion.
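
In computational terms, this chain of registration and tracking amounts to composing coordinate transforms. The following minimal sketch, under assumed conventions (4x4 homogeneous matrices and a patient-attached reference sensor), shows how a probe tip reported in tracker space might be mapped to the image coordinates at which the crosshairs are drawn; the function and variable names are illustrative, not taken from any particular system.

```python
# Minimal sketch (not from the chapter) of how a tracked probe tip can be
# mapped onto the preoperative image set. The 4x4 homogeneous-matrix
# convention, the patient-attached reference frame, and all names here are
# illustrative assumptions rather than the workings of any specific system.
import numpy as np

def to_homogeneous(point):
    """Append 1 to a 3-vector so it can be multiplied by a 4x4 transform."""
    return np.append(np.asarray(point, dtype=float), 1.0)

def probe_tip_in_image(tip_tracker, T_tracker_to_reference, T_reference_to_image):
    """Map the probe tip from tracker space to image (crosshair) coordinates.

    tip_tracker            -- 3D tip position reported by the sensor
    T_tracker_to_reference -- pose of the patient-attached reference sensor,
                              updated whenever the patient moves
    T_reference_to_image   -- rigid registration found before surgery
    """
    tip_reference = T_tracker_to_reference @ to_homogeneous(tip_tracker)
    tip_image = T_reference_to_image @ tip_reference
    return tip_image[:3]  # x, y, z at which the crosshairs are drawn

# With identity transforms the tip is unchanged: [10.  5. -2.]
print(probe_tip_in_image([10.0, 5.0, -2.0], np.eye(4), np.eye(4)))
```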

Watanabe and colleagues [2] were the first to develop the ''Neuronavigator,'' a device consisting of a probe attached to a multijointed sensing arm capable of converting the motion of the probe tip into a digital signal. From this information the Neuronavigator computer interpreted the relative position of the probe tip and displayed it (as crosshairs) on the computer monitor. Before surgery, fiducial markers were strategically placed on the patient's head and face, and a CT scan was performed. The CT data were then entered into the computer system via a video digitizer. In the operating room, proprietary software was then used to combine the CT data with the actual position of the patient on the operating table. In an early report on 12 patients undergoing neurosurgical procedures, an average accuracy of 3.0 mm was reported [2].

Guthrie and colleagues [43,44] described a stereotactic neurosurgical operating arm similar to Watanabe's unit. In a report describing their experience with shunt catheter guidance, removal of a subdural hematoma, and resection of tumors, they reported an accuracy of 4 mm. In 1989, stereotaxy evolved even further when Zinreich et al. [3] introduced the Viewing Wand image-guided neurosurgery system (ISG, Mississauga, Ontario, Canada), a smaller and more versatile unit that was quickly accepted by neurosurgeons.

9.3 REGISTRATION

Neurosurgical procedures have paved the way for the development of various registration techniques, and the evolution of these techniques has paralleled the development of CAS. The two major types of registration techniques are the combined use of anatomical landmarks and surface points and the use of external fiducial markers.

The anatomical landmark method of registration uses the patient's distinguishable facial features as landmarks. With the Viewing Wand (ISG, Mississauga, Ontario, Canada), the probe is placed on the lateral canthal area of the right eye while the mouse is simultaneously used to place the crosshairs over the same landmark on the three-dimensional (3D) computer-generated image. The location of the crosshairs is then registered into the computer. The contralateral lateral canthal region and the left and right nasal alar regions are entered in the same way [4,5]. The probe is then placed at approximately 40 additional locations, and the computer matches the patient's facial contour to the identical contour pattern of the reconstructed 3D image. This provides a precise correlation of each point in the CT-scanned volume with its corresponding x, y, z coordinate within the reconstructed information on the computer screen.
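
As a rough illustration of this contour-matching step, the sketch below uses a generic iterative closest point (ICP) style loop: the probe-sampled skin points are repeatedly matched to their nearest neighbors on the reconstructed surface, and a rigid fit is re-estimated. This is a stand-in written for illustration only, not the proprietary algorithm of the Viewing Wand or any other commercial system; the function names and the choice of SciPy's k-d tree are assumptions.

```python
# Generic ICP-style surface matching: align ~40 probe-sampled skin points to the
# reconstructed 3D facial surface. Illustrative only; not a vendor algorithm.
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation/translation mapping src onto dst (SVD method)."""
    sm, dm = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sm).T @ (dst - dm))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, dm - R @ sm

def surface_match(probe_pts, surface_pts, iterations=20):
    """Refine the mapping of probe-sampled skin points onto the image surface."""
    probe_pts = np.asarray(probe_pts, dtype=float)
    surface = np.asarray(surface_pts, dtype=float)
    tree = cKDTree(surface)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = probe_pts @ R.T + t
        _, idx = tree.query(moved)          # closest surface point for each sample
        R, t = rigid_fit(probe_pts, surface[idx])
    return R, t
```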

The registration process that uses external fiducial markers sends the precise location and orientation of the object to the computer that contains the 3D representation of the anatomical region to be evaluated, so that the image of the probe tip on the computer screen displays the actual position of the probe tip on or in the object. When each registration marker on the patient is touched with the tip of the probe, the corresponding marker's representation on the computer image is identified by a mouse-driven cursor on the screen (Figure 9.1). This process produces a point-pair file that contains the real-space and image-space locations of each of the markers or anatomical landmarks. To visually check the registration, widely separated anatomical points on the surface of the patient's head are touched, and the indicated locations on the computer screen are confirmed.
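
From such a point-pair file, the rigid transform between real space and image space can be estimated by a standard least-squares fit. The sketch below shows one common closed-form, SVD-based solution; it is a generic illustration of the technique rather than the specific algorithm used by any commercial navigation system.

```python
# Hedged sketch of a standard least-squares (SVD-based) rigid fit between
# corresponding fiducial locations in real (patient) space and image space.
import numpy as np

def register_point_pairs(real_pts, image_pts):
    """Return R (3x3) and t (3,) such that R @ p + t best maps real -> image space."""
    P = np.asarray(real_pts, dtype=float)
    Q = np.asarray(image_pts, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```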

When compared with the anatomical landmark-surface fit registration method, fiducial marker methods are more accurate and reproducible for neurosurgical procedures. In one study, fiducial registration resulted in greater accuracy, as measured by computer comparison of the calculated position to the true position on the scan indicated by the system operator. For CT scans obtained in 21 cases, the fiducial registration error averaged 2.8 mm, compared to 5.6 mm for the anatomical landmark-surface fit algorithm [6].

FIGURE 9.1 Patient positioned in a Mayfield frame with external fiducial markers in place for the presurgical registration process.

After the registration process, the computer displays a root mean square (RMS) value, which summarizes the residual mismatch of the individual points of the registration set after the fit. Clinical observations have suggested that lower RMS values correspond to more accurate registration. Registration accuracy can be subsequently corroborated by placing the probe tip on the center of one of the markers on the patient's face and visually checking its corresponding position on the computer screen.
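
As an illustration, an RMS figure of this kind can be computed as the root mean square of the residual distances between the mapped fiducials and their image-space counterparts, as sketched below; how a particular commercial system defines its displayed value may differ.

```python
# Illustrative RMS registration error: root mean square of the residual distances
# between each mapped fiducial and its counterpart in image space. A commercial
# system's displayed value may be defined differently.
import numpy as np

def rms_registration_error(real_pts, image_pts, R, t):
    """RMS residual (same units as the input, e.g., mm) after applying R, t."""
    mapped = np.asarray(real_pts, dtype=float) @ R.T + t   # R @ p + t for each point
    residuals = np.linalg.norm(mapped - np.asarray(image_pts, dtype=float), axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```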

One of the more recent advances in registration technique is the development of an automatic registration method for frameless stereotaxy, image-guided surgery, and enhanced reality visualization. Grimson et al. [7] reported on this system in 1996. Their method automatically registers segmented clinical reconstructions from CT or MR to provide the surgeon with a ''live'' view of the patient, enabling the surgeon to visualize internal structures before the actual procedure as well as to view extra- or intracranial structures interactively and noninvasively. The functionality and ease of use of this system have yet to be evaluated.

9.4 SENSORS

A critical aspect of the registration process is the sensor used to gather patient positional information, both for the registration process and for tracking and updating the surgical field. The evolution of sensor technology has been relatively slow in comparison with the development of the graphical and computational workstations used in CAS. With the advent of high-performance, low-cost personal computers, software development is no longer a restrictive factor in the advancement of CAS. The type of sensor used for tracking the surgical field and instruments depends on the procedure being performed; therefore, the sensor type may be considered procedure-specific. Early sensor technology was based on mechanical systems; more recently, optical and electromagnetic sensor technology has been introduced.

9.4.1 Mechanical Sensors

Kosugi, Watanabe, and colleagues were among the first to describe a mechanical articulated arm for position sensing (Figure 9.2) [8–13]. In this approach, mechanical sensors are articulated passive robotic arms that use various types of angle encoders at each joint. Optical encoders or potentiometers at each of the rotational joints pass information about the relative angles of the arm segments to the computer via analog-to-digital conversion. Surgical probes are attached to the arm at the last joint (Figure 9.3). Using the angles from the encoders in conjunction with known geometric information about the arm, the computer solves the spatial position and orientation of the device trigonometrically. Registration to the image set is accomplished by locating markers present on the patient both during the CT scanning procedure and in the operating room. This device imposes minimal encumbrance for setup and use, with little imposition on the operative field. Other groups have also reported success with mechanical digitizing arms [14]. The accuracy and clinical experience of the articulated arm have been previously reported [15]. Because of its size, the mechanical arm has been clinically limited to cranial procedures. Attempts at using the mechanical arm for spinal procedures as well as endoscopic sinus surgery have been reported; however, the need to also track body movement in these procedures precluded the use of this sensor [16]. Since most neurosurgical procedures continue to be performed with the patient's head fixed in a Mayfield frame, the mechanical sensor continues to be used at several institutions. Its dependability and accuracy of approximately 2 mm meet the needs of the surgical procedure.

FIGURE 9.2 Laboratory model displaying the accuracy of a mechanical sensor prior to its introduction for use in neurosurgery.

FIGURE 9.3 Mechanical sensor configuration
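
As a toy illustration of the trigonometric solution described above, the sketch below computes the tip position of a simple planar serial arm from its encoder angles and known segment lengths; a real articulated arm performs the analogous computation in three dimensions, and the lengths and angles in the example are invented.

```python
# Toy planar forward kinematics: given joint angles from the encoders and the
# known segment lengths, the probe-tip position follows from accumulated
# trigonometry. Real articulated arms do this in 3D with full orientation.
import math

def planar_arm_tip(joint_angles_rad, link_lengths):
    """Return (x, y) of the tip of a serial planar arm anchored at the origin."""
    x = y = 0.0
    heading = 0.0
    for angle, length in zip(joint_angles_rad, link_lengths):
        heading += angle                 # encoder angles accumulate along the chain
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# Example: three 200 mm segments bent by 30, -45, and 10 degrees.
print(planar_arm_tip([math.radians(a) for a in (30, -45, 10)], [200, 200, 200]))
```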

9.4.2 Optical Sensors

When a mechanical sensor is used for registration in neurosurgical procedures in which the patient is not immobilized, an additional registration process must be performed during the procedure whenever patient repositioning is necessary. Inadvertent patient movement also requires repeat registration. This problem has been addressed by the development of optical sensors, which are able to ''track'' such motion and eliminate the need for additional registration steps during the procedure. These optical tracking sensors are quickly replacing mechanical sensors in neurosurgery. This type of system uses dynamic frames of reference that track the movement of instruments used during surgery as well as motion of the operative field, while the computer screen shows the registered anatomical area undergoing the surgical procedure. In these optical systems, strobing infrared light-emitting diodes (LEDs) mounted at known locations on the proximal end of the surgical tools are triangulated by two or more infrared cameras. The cameras in turn are mounted on a boom approximately 1 to 2 m above the surgical field, equidistant from one another (Figure 9.4). Compared with the mechanical sensor, this system has the advantage of being able to track the movement of several objects (several instruments as well as the patient's body); however, the location of the camera system can be intrusive, and the cameras must be able to detect the flashing light from the LEDs (i.e., maintain a direct ''line of sight''). Excessive lighting in the operating room may interfere with the cameras' detection of light bursts from the LEDs. Objects placed in the cameras' line of sight also disrupt the tracking capability of the sensor.
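
The triangulation itself can be pictured as intersecting the viewing rays from two cameras. The sketch below estimates an LED position as the midpoint of the shortest segment between two such rays; it is a simplified stand-in for a calibrated stereo reconstruction, and the camera positions and directions in the example are invented for illustration.

```python
# Simplified sketch of the triangulation idea behind optical tracking: each
# camera observes an LED along a ray, and the LED's 3D position is estimated
# as the midpoint of the shortest segment between the two rays.
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Closest-point estimate of where two (nearly intersecting) rays meet."""
    oa, ob = np.asarray(origin_a, dtype=float), np.asarray(origin_b, dtype=float)
    da = np.asarray(dir_a, dtype=float); da /= np.linalg.norm(da)
    db = np.asarray(dir_b, dtype=float); db /= np.linalg.norm(db)
    w0 = oa - ob
    b = da @ db
    d, e = da @ w0, db @ w0
    denom = 1.0 - b * b                       # zero only for parallel rays
    s = (b * e - d) / denom                   # parameter along ray A
    t = (e - b * d) / denom                   # parameter along ray B
    return (oa + s * da + ob + t * db) / 2.0  # midpoint of the common perpendicular

# Example: two cameras ~1 m apart observing an LED at (500, 200, 1500) mm.
led = np.array([500.0, 200.0, 1500.0])
cam_a, cam_b = np.array([0.0, 0.0, 0.0]), np.array([1000.0, 0.0, 0.0])
print(triangulate(cam_a, led - cam_a, cam_b, led - cam_b))  # ~[ 500.  200. 1500.]
```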

In spinal surgery, a dynamic reference frame is used to track motion in procedures that have the potential for patient movement. Surgical instruments equipped with LEDs are tracked in relation to the dynamic reference frame. In their experience with a similar system using LEDs mounted on a probe, Tebo et al. [17] reported accuracy rates (95% of errors within approximately 3 mm) comparable to those obtained with conventional frame-based systems and with the mechanical arm–based system.

FIGURE 9.4 Optical sensor configuration

9.4.3 Electromagnetic Sensors

Any optical sensor system requires the maintenance of a line of sight between the LEDs and the cameras above the operative field. This disadvantage led to the development of electromagnetic sensors, which were first applied to cardiac surgery (Figure 9.5). These sensors are able to track along a nonlinear path, but they are susceptible to environmental conditions (most notably the presence of metals). Their ability to ''pinpoint'' a particular site accessible with a catheter or a flexible endoscope is a considerable advantage, since other sensors are only effective when attached to a rigid, linear instrument or ''pointer.''

FIGURE 9.5 Electromagnetic sensor configuration

9.4.4 Characteristics of the Ideal Sensor

All sensor systems continue to be in use, as each approach to sensor technology addresses the needs and requirements of a specific surgery. A ''universal'' sensor is not available. The optimal sensor system should have the following characteristics:

Miniature size (<3 mm in diameter)
Easily incorporated with or attached to the surgical instruments
Not interfere with the surgical field
Not require ''special attention'' from the surgeon during the procedure
Not traumatize and/or cause discomfort to the patient
Avoid the need of elaborate and expensive devices to keep sensors attached to the patient's body
Reusable
Inexpensive

9.5 IMAGING MODALITIES AND APPLICATIONS

Perhaps the paramount advantage of CAS is its ability to provide the surgeon with a three-dimensional representation of neurosurgical anatomy during the surgical procedure. Conventional neuroimaging modalities (CT, MRI, ultrasound, fluoroscopy) alone can provide only a two-dimensional display of these areas. Thus, the only three-dimensional model the surgeon is able to create is a mental one based on his or her interpretation of two-dimensional images. Current computer hardware/software algorithms are able to provide the surgeon with frequent 3D reconstructions of CT and MRI information about patient anatomy before and during surgery (Figure 9.6). The combination of this advanced computer technology with novel radiographic innovations has enabled the integration of near real-time, high-contrast, and high-spatial-resolution volumetric images with frameless stereotactic, interactive localization methods [18].

Because each imaging modality provides a unique display of patient anatomy, a full spectrum of radiographic techniques is utilized in CAS procedures. No imaging modality displays every aspect of patient anatomy perfectly—CT, MRI, fluoroscopy, endoscopy, and ultrasound each has advantages and disadvantages in terms of contrast resolution of soft tissue vs. bone, spatial orientations, fields of view, image displacement, and section thicknesses. The technical, economic, and safety factors associated with the practical implementation of each