
464

Citardi

The specific technologies in this chapter represent a wide range of technological development. Although all of them remain in the engineering phase, some are beginning to enter the clinical arena. These technologies, still in their early stages, will doubtless undergo significant refinement.

Again, it must be emphasized that the point of this chapter is not to project a vision of the future of CAS within otorhinolaryngology–head and neck surgery. Instead, the emphasis is upon developmental technologies that offer significant promise but have not yet been widely accepted.

25.2 VOLUME RENDERING TECHNOLOGY AND PERSPECTIVE VOLUMETRIC NAVIGATION

Computer-based, three-dimensional modeling based on raw axial magnetic resonance imaging (MRI) or computed tomography (CT) data has become widely available. Current systems offer volume-rendering protocols that can build three-dimensional (3D) models with a high degree of fidelity. Models based on volume rendering offer maximal opportunities for creating multiple views of the same data set, since more of the initial data is incorporated into the model. The major limitation has been that these models are much more complex to create and manipulate, since the amount of incorporated data is so great. Fortunately, speedier and less costly microprocessors, computer memory, graphics cards, and display systems have made volume-rendered models a feasible option.
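The compositing idea at the heart of volume rendering can be illustrated with a minimal sketch. The code below is a generic front-to-back emission/absorption compositor in Python; the function name and the `step_opacity` constant are inventions for this example, and the sketch is not a description of any commercial system's algorithm:

```python
import numpy as np

def composite_volume(volume, step_opacity=0.1):
    """Front-to-back emission/absorption compositing of parallel rays
    along axis 0 of a CT-like voxel grid (intensities in [0, 1])."""
    color = np.zeros(volume.shape[1:], dtype=float)         # light reaching the eye
    transmittance = np.ones(volume.shape[1:], dtype=float)  # fraction not yet absorbed
    for slab in volume:                                     # one ray step per slice
        alpha = np.clip(slab * step_opacity, 0.0, 1.0)      # opacity of this step
        color += transmittance * alpha * slab               # emitted light, attenuated
        transmittance *= (1.0 - alpha)                      # absorb going deeper
    return color
```

In this toy model, "melting away" a tissue layer corresponds to masking out an intensity range (setting its opacity to zero) before compositing.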

The CBYON Suite (CBYON, Palo Alto, CA; www.cbyon.com) incorporates sophisticated volume-rendering protocols (known as Dynamic Data Filtering) into a CAS package that also includes standard surgical navigation features. This software offers enhanced options for viewing the models. Surgeons can melt away various tissue layers and fly through the area of interest (Figure 25.1). The CBYON Suite also permits the creation of short ‘‘movies’’ of fly-through paths of the models (Figure 25.2) (Movie 25-1, Movie 25-2, Movie 25-3). The models are sharp and clear, and the system performance (i.e., frame refresh rate) is remarkably fast. The CBYON CAS platform, introduced in 2000, brings capabilities that were previously attainable only in specialized computer laboratories into routine use in the operating room.

The CBYON software can also display the three-dimensional volume renderings from the perspective of a surgical instrument, so that the view of the model from the tip of the instrument can be visualized during actual surgery. This technique, called Perspective Volumetric Navigation (PVN), incorporates positional information obtained from standard surgical navigation with features of virtual endoscopy. Finally, the virtual endoscopic view provided by PVN can be temporally and spatially synchronized with intraoperative endoscopic images through Image-Enhanced Endoscopy (IEE). IEE combines so-called virtual endoscopy based upon perspective volume rendering with the real-world endoscopic view (Figure 25.3) (Movie 25-4). In order to accomplish this, the CBYON system tracks the position of the rigid endoscope tip and can reconstruct the virtual view (from the preoperative CT or MRI data) that corresponds to the view through the endoscope. The net result is that the real-world endoscopic view and the virtual endoscopic view are coregistered.

FIGURE 25.1 The CBYON Suite (CBYON, Palo Alto, CA) uses volume rendering for the creation of three-dimensional models. Surgeons may melt away layers through Dynamic Data Filtering, which uses image data filters. In this still image, the overlying soft tissue has been rendered transparent in the left half of the model, the corresponding bone has been rendered partially transparent, and the intracranial vasculature has been rendered in red. (Courtesy of CBYON, Palo Alto, CA.)

PVN and IEE can provide significant orientation information. Although the rigid endoscope offers a bright view, blood and other surgical debris can obscure that view. Furthermore, the endoscopic view is only a two-dimensional representation of complex three-dimensional anatomy; as a result, perceptual distortion is common. To the extent that PVN provides additional information about orientation and localization, it will simplify endoscopic visualization. In doing so, PVN may permit the development of endoscopic techniques for procedures that currently require open approaches for the maintenance of surgical orientation.
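The geometric core of perspective volumetric navigation — placing the virtual-endoscopy camera at the tracked tip of the real endoscope — amounts to a single rigid-transform application. The Python fragment below is a hypothetical illustration (the function name is invented); the 4×4 transform would come from patient registration, and the tip pose from the tracking system:

```python
import numpy as np

def virtual_camera_pose(T_image_from_tracker, tip_pos, tip_dir):
    """Map a tracked endoscope tip (position and viewing direction in
    tracker coordinates) into CT/MRI image coordinates via a 4x4 rigid
    registration transform. The result places the virtual-endoscopy
    camera at the real endoscope's viewpoint."""
    R = T_image_from_tracker[:3, :3]   # rotation part
    t = T_image_from_tracker[:3, 3]    # translation part
    cam_pos = R @ tip_pos + t          # points transform with rotation + translation
    cam_dir = R @ tip_dir              # directions transform with rotation only
    return cam_pos, cam_dir / np.linalg.norm(cam_dir)
```

Rendering the preoperative volume from `cam_pos` along `cam_dir` then yields the virtual view that is coregistered with the real endoscopic image.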

PVN adaptations also may have a significant impact on endoscopic sinonasal surgery for inflammatory sinonasal diseases. In these procedures, surgical navigation often provides information about the boundaries of the paranasal sinuses. In this way, the computer system guides the surgeon away from the intracranial space, the orbital space, the optic nerve, etc. In contrast, surgical navigation more commonly provides guidance to reach a target. For most sinus surgery, the objective is target avoidance, which helps the surgeon avoid potentially catastrophic complications. Although PVN clearly supports targeting capabilities, it can also support antitargeting; that is, the CBYON system can highlight specific structures that must be avoided (Figure 25.4). This information can be displayed on the perspective volumetric reconstruction (the virtual view), which is actively correlated with the real view provided by the telescope.

FIGURE 25.2 The CBYON Suite (CBYON, Palo Alto, CA) supports the creation of ‘‘fly-by’’ and ‘‘fly-through’’ movies of three-dimensional models created from CT and MR images. This still image capture depicts a snapshot of the trajectory of a movie of an intracranial aneurysm. (Courtesy of CBYON, Palo Alto, CA.)

FIGURE 25.3 In Image-Enhanced Endoscopy, the CBYON Suite (CBYON, Palo Alto, CA) matches the corresponding views of the real surgical anatomy (i.e., the view provided by the endoscope) and the virtual three-dimensional model (created by the computer software). This still image capture shows the synchronization between the virtual view (upper left panel) and the real view (upper right panel). (Courtesy of CBYON, Inc., Palo Alto, CA.)
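At its simplest, antitargeting reduces to a proximity check between the tracked instrument tip and a segmented critical structure. The sketch below is an illustration only; the function name and the 3 mm warning threshold are invented for the example, not taken from any navigation system:

```python
import numpy as np

def antitarget_alert(tip_pos, structure_points, warn_mm=3.0):
    """Distance from an instrument tip to a critical structure given as
    a point cloud (e.g., voxel centers of a segmented optic nerve).
    Returns (min_distance, alert); alert fires within `warn_mm`."""
    d = np.linalg.norm(structure_points - tip_pos, axis=1)  # distance to each voxel
    dmin = float(d.min())
    return dmin, dmin < warn_mm
```

A navigation display would recompute this on every tracker update and highlight the structure (or sound a warning) when the alert fires.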

FIGURE 25.4 The CBYON Suite's Image-Enhanced Endoscopy (IEE) may provide better appreciation of three-dimensional anatomy and can support both tracking to a target as well as tracking away from a critical structure. In this instance, volume-rendering protocols were used to create a three-dimensional model of the optic nerves of a patient whose sphenoid sinuses had pneumatized around the optic nerves. In this case, the optic nerves were not a surgical objective. The patient was undergoing revision image-guided functional endoscopic sinus surgery for recurrent acute rhinosinusitis of the frontal, ethmoid, and maxillary sinuses; as a result, entry into the sphenoid sinuses was not deemed necessary. This still image capture depicts the view provided by the telescope (upper right panel) and the corresponding view of the virtual model (upper left panel). The virtual model shows the optic nerve (in green in the image on the CD-ROM), but the optic nerve cannot be seen in the standard telescopic view. In this way, IEE provided anatomical information that was supplementary to the view afforded by the nasal telescope. The lower panels depict the relative position of the tip of the suction (seen in the real endoscopic image).


The clinical success of PVN will require robust system performance. The computer-generated views must update rapidly, and the view must be sufficiently detailed. In addition, the registration of the view of virtual space (created by the computer) with the view of real space (created by the telescope and camera system) must be very tight. Small discrepancies will introduce unacceptable errors that will degrade surgical safety and efficacy.
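The "tightness" of such a registration is conventionally summarized by the fiducial registration error (FRE) of a least-squares rigid fit between paired points. Below is a minimal sketch using the standard Kabsch/Horn SVD solution — a generic textbook method, not any particular system's implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid alignment (Kabsch/Horn) of paired fiducial
    points: find R, t with R @ src_i + t ~ dst_i, and report the RMS
    fiducial registration error (FRE) that summarizes fit tightness."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)                  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflections
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    residual = src @ R.T + t - dst                 # leftover misalignment per fiducial
    fre = float(np.sqrt(np.mean(np.sum(residual**2, axis=1))))
    return R, t, fre
```

A small FRE is necessary (though not by itself sufficient) for the tight virtual/real correspondence the text describes; large discrepancies show up directly in this number.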

25.3 SURGICAL ROBOTICS

Over the past several decades, practical robotics has moved from the realm of science fiction to the commercial sector, where robots are commonly employed in manufacturing tasks that require a high level of precision. More recently, robotics has been adapted for use in the operating suite. Although the technology for manufacturing robotics and surgical robotics is quite similar, important differences are also apparent. In factories, robots perform rote functions with relatively little human oversight and intervention. In contrast, surgical robotics requires a skilled human operator (the surgeon) at all times. A surgical robot does not perform its tasks as an automaton. Instead, the surgical robot operates at the interface between the surgeon and the patient.

In operating rooms equipped with surgical robotics, the surgeon controls each action of the robot, which can perform the maneuvers with greater precision than a human surgeon alone. Robots do not have tremors, and they do not fatigue. Furthermore, they can maintain uncomfortable postures indefinitely, unlike even the most ardent human surgeon and assistants. Robots can also be programmed to compensate for patient movements such as respiration and circulation. Without robotics, the surgeon must take these compensatory actions; with robotics, the surgeon focuses only on those actions that are necessary for achieving the surgical objective, while the robot adds or subtracts the movements that reflect the inevitable motion of the operative field.
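The compensation idea can be reduced to a purely kinematic sketch: the controller adds the measured motion of the operative field to the surgeon's commanded position so that the instrument tip follows the moving tissue. Both functions below, including the sinusoidal breathing model, are toy illustrations and not any vendor's control law:

```python
import numpy as np

def respiration(t, amplitude_mm=5.0, period_s=4.0):
    """Toy sinusoidal model of respiratory displacement along z (mm)."""
    return np.array([0.0, 0.0, amplitude_mm * np.sin(2.0 * np.pi * t / period_s)])

def compensated_command(surgeon_target, field_motion):
    """The surgeon commands a position relative to the anatomy; the
    controller adds the independently measured motion of the operative
    field so the instrument tip tracks the moving tissue."""
    return surgeon_target + field_motion
```

In a real controller the field motion would be measured (or predicted) in real time rather than modeled, but the separation of concerns is the same: the surgeon's input describes intent relative to the anatomy, and the robot handles the anatomy's motion.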

Computer Motion (Santa Barbara, CA; www.computermotion.com) has designed three surgical robotic systems. HERMES provides a voice-activated interface that can control medical devices that have been connected to the system. AESOP, which is also voice-activated, uses a robotic arm to position laparoscopic instruments, such as the rigid endoscope. This robot replaces the human assistant who ‘‘drives’’ the camera during laparoscopic procedures. AESOP reduces the personnel needed for these cases, and it can hold the instrument indefinitely in the position selected by the operative surgeon. Finally, the ZEUS robot can perform laparoscopic procedures under the surgeon’s immediate control (Figure 25.5). For this system, the surgeon sits at a console, from which he or she can direct three robotic arms, which execute the commands entered by the surgeon (Figure 25.6). The ZEUS arms can perform precise movements without fatigue, tremor, etc.


FIGURE 25.5 The ZEUS system from Computer Motion (Santa Barbara, CA) includes three robotic arms for the manipulation of surgical instruments. (© 2001 Computer Motion. Photograph by Bobbi Bennett.)

FIGURE 25.6 In surgical robotics, the surgeon directs the actions of the robot from a control panel that is separate from the immediate operating field. (© 2001 Computer Motion. Photograph by Bobbi Bennett.)


FIGURE 25.7 Surgical robotics requires placement of the robotic arms so that they have direct access to the surgical field, while the surgeon's control station is placed out of the sterile field. In this example, the typical set-up for an operating room equipped with the da Vinci Robotic Surgical System (Intuitive Surgical, Inc., Mountain View, CA) is shown. (© 1999 Intuitive Surgical, Inc.)

Other vendors have also introduced robots for the operating suite. Intuitive Surgical (Mountain View, CA; www.intusurg.com) produces the da Vinci Robotic Surgical System (Figure 25.7). Like ZEUS, da Vinci includes a surgeon's console from which the surgeon can control the movements of the robotic arms. Surgeons are using both ZEUS and da Vinci in operating rooms throughout the world.

25.4 AUGMENTED REALITY

Several years ago, the concept of virtual reality (VR) was introduced to the general public. In VR systems, users enter a completely synthetic experience generated by computers. In true VR, computers create visual, audio, and haptic stimuli and cues that together emulate a real-world experience, and users cannot perceive any elements from the real world. Although VR technology has enormous potential for entertainment applications, its medical usefulness may be limited, since VR creates a substitute reality. Surgeons are more interested in accurate and precise representations of real-world anatomy. As a result, the surgical importance of VR rests upon the integration of its core components with relevant physical features. Augmented reality, or tele-immersion, seeks to superimpose three-dimensional computer-rendered images upon the actual surgical field. The objective of this approach is to eliminate the need for standard view boxes and cathode ray tube (CRT) displays; instead, all relevant imaging data will be presented directly to the surgeon in the operative field.

How can this be achieved? One approach is to move away from CRTs that are remote from the operative field and instead incorporate the display system directly into the equipment in the operative field. Of course, standard CRTs are large and cumbersome, and even liquid crystal displays (LCDs), LCD projectors, and plasma screens are probably too large to be successfully integrated into the operative field.

Head-mounted display (HMD) units may represent a potential solution. Current HMDs incorporate traditional LCD displays into a headset that the surgeon wears during the procedure. Since HMDs provide binocular information, they can be used to create stereovision. Unfortunately, early HMD units have been less than optimal. Some surgeons have noted perceptual distortion and disorientation, and these sensations may even lead to eyestrain, headaches, and nausea. In addition, HMD resolution has been poor, and the image sizes are typically small. These factors create the impression of viewing a grainy image at a distance through a narrow tube—clearly not an optimal image for surgical applications.

Retinal scanning display (RSD) technology, which uses a small, low-power light source to ‘‘paint’’ an image directly on the user's retina, may overcome the limitations of current HMD systems. Monocular, monochromatic RSD technology has been used to convey relevant information to pilots who fly certain military helicopters. In this military application, pilots can perceive targeting and other data while viewing the real scene through the cockpit window. The incorporation of binocular, full-color RSDs into an HMD may represent a viable approach for clinically useful augmented reality and tele-immersion. Microvision, Inc. (Bothell, WA; www.mvis.com) is actively developing RSD-based HMD units.

25.5 FINAL COMMENTS

The final direction of the paths created by volume-rendering protocols, surgical robotics, augmented reality, and other technologies cannot be fully anticipated. New technology will inevitably grow and evolve in ways that are impossible to predict, and surgeons will creatively apply these new tools to solve relevant surgical challenges. The only safe assumption in this process is that CAS technologies will provide a means for greater surgical precision and decreased morbidity. In this way, surgeons will utilize CAS to achieve better patient outcomes.
