
432

Rosen and Simpson


FIGURE 23.4 Computer-aided modeling can present three-dimensional information about complex traumatic deformities. (A) The model highlights the missing mandibular segment. (B) The model has been manipulated so that a close-up view of the defect is shown. (Copyright 2000 Medical Media Systems, Inc. with permission.)

Computer-Aided Soft Tissue Surgery


decisions. Reconstruction options include the choice of donor site, tissue type (i.e., bone, muscle, skin, etc.), postoperative rehabilitation, and follow-up protocol. Of course, the surgeon wishes to select the ideal option at each treatment point so that the optimal outcome is assured for the patient. Thus, a surgery involving multiple decisions admits numerous possible outcomes, all of which have a great personal impact on the patient. A better approach is necessary. A potential solution stems from work done by Mann in the 1960s [16].

Mann’s vision was to create a patient-specific virtual model of the patient, procedure, prostheses, and rehabilitation. Operating within a virtual environment, the physician would pursue several alternative approaches. The virtual patient would then undergo simulated rehabilitation over a period of several years. The physician would subsequently analyze the model to determine which surgical procedure would lead to the best outcome. Once this was accomplished, the surgeon could choose which complex procedure should be performed in reality and carry it out [17]. The first steps toward this vision have been realized; further research is needed in a number of areas, described below.

To realize the above vision, several hurdles need to be crossed. Most importantly, a precise and specific patient model needs to be developed so that it accurately represents the anatomy and physiology of the particular patient. While current patient-specific models may be used for sizing and for preoperative and intraoperative planning, these models cannot presently demonstrate tissue change over time. The ideal virtual model needs to incorporate a wealth of data about the patient, including both normal anatomy and pathology. If the patient suffers from a degenerative disease, the model must also incorporate information about how surrounding tissues are changing irrespective of the chosen course of operations. Predicting wound healing and subsequent rehabilitation is even more difficult. Software presently exists that simulates some of these processes; additional mathematically based modeling paradigms need to be developed so that outcomes for all of these parameters can be predicted.
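The mathematically based modeling paradigms called for here did not yet exist when this chapter was written. As a hedged sketch of the idea, the toy model below uses a logistic curve to predict tissue recovery over time and ranks hypothetical surgical plans by predicted outcome at a follow-up horizon; all names and parameter values are illustrative, not clinically derived.

```python
import math

def healed_fraction(t_days, rate=0.15, t_half=21.0):
    """Logistic wound-healing curve: fraction of tissue integrity
    recovered t_days after surgery. Purely illustrative parameters."""
    return 1.0 / (1.0 + math.exp(-rate * (t_days - t_half)))

def predict_best_plan(plan_rates, horizon_days=90):
    """Rank candidate surgical plans (name -> assumed healing rate)
    by predicted healing at the follow-up horizon."""
    return max(plan_rates, key=lambda name: healed_fraction(horizon_days, rate=plan_rates[name]))

# Hypothetical reconstruction options with assumed healing rates:
plans = {"local_flap": 0.10, "free_flap": 0.15, "graft": 0.08}
best = predict_best_plan(plans)
```

A real paradigm would replace the logistic curve with physiologically derived models of each tissue type, but the structure — simulate each plan forward in time, then compare predicted outcomes — is the one Mann's vision describes.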

At a minimum, surgical simulators must fulfill three specifications [18]. Following from the above, the first need is for an accurate and comprehensive virtual human model. Ultrasonography (USG), CT, and MRI provide two-dimensional imaging that may be manipulated by volumetric mathematical computation to provide an anatomically accurate patient-specific 3D model; however, even this is not enough. A fourth dimension, showing function, predicted healing, and operative parameters, must be added. True simulation requires more than accurate, enhanced visualization: it requires solving the hard questions of tissue change, function, and healing. Second, surgeons require accurate and convincing virtual instrumentation and tools. True reproduction of the sense of touch is an incredibly difficult task, since touch encompasses proprioception, vibration, texture, temperature, kinesthesia, and pressure. To perform surgery in a virtual environment, the surgeon must have accurate feedback for incising and detecting abnormal tissue. Virtual instrumentation and tracking are required as well. The third requirement is that the virtual environment have demonstrable effectiveness in terms of outcomes, surgeon acceptance, and physician teaching, in a cost-effective way. A virtual environment system consists of the operator, the display interface, and the computer simulator. The user must feel completely immersed within the computer representation for acceptance in the medical fields. True surgical simulators must reflect all of these features.

In order to anticipate future developments, it is important to assess the current situation. Patient-specific, anatomically mapped models provide our departure point. To expand the utility of these models, they need to incorporate the parameters of tissue change. Currently, the surgeon can visualize patient-specific morphology in three dimensions. In addition, there is some ability to fuse imaging data intraoperatively with real tissues through data fusion protocols. Yet what is ultimately needed is the ability to simulate soft tissue change over time before, during, and after the procedure. Work at several institutions has begun to provide solutions for this challenge.

The National Library of Medicine (NLM) has sponsored The Visible Human Project, which has yielded complete, anatomically detailed, three-dimensional representations of the normal male and female human bodies. In this effort, transverse CT and MRI were performed on a representative male and a representative female cadaver, which were then cryosectioned at 1 mm and 1/3 mm intervals, respectively [19,20]. These data support the reconstruction of the most accurate models yet available.

The VR human model requires accurate patient-specific data from CT, MRI, and USG imaging to be mapped to the tissue, organ, system, and body region. This requires volumetric encoding with respect to an absolute reference frame independent of and exterior to the patient. Mathematical algorithms can define a finite element mesh, which allows a computer to model how distortion of one set of elements will affect another. The behavior of tissues and organ systems over time is an important future research question [21]. This information must then be incorporated into the mathematical algorithms that underlie finite element modeling (FEM).
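A minimal illustration of how distorting one set of mesh elements affects its neighbors: the 1D spring-chain relaxation below is the simplest finite-element-style analogue (node indices, boundary conditions, and displacement values are arbitrary, chosen only to show the propagation).

```python
def relax_chain(n_nodes, fixed, iters=2000):
    """1D spring chain: each free node relaxes to the average of its
    neighbors (the discrete Laplace equation). `fixed` maps a node
    index to a prescribed displacement, modeling elements the surgeon
    has moved or tissue anchored to bone."""
    u = [fixed.get(i, 0.0) for i in range(n_nodes)]
    for _ in range(iters):
        for i in range(1, n_nodes - 1):
            if i not in fixed:
                u[i] = 0.5 * (u[i - 1] + u[i + 1])
    return u

# Displace the middle node by 1.0 with the chain ends held fixed at 0;
# the free nodes in between take up intermediate displacements.
disp = relax_chain(5, fixed={0: 0.0, 2: 1.0, 4: 0.0})
```

Real soft-tissue FEM works in 3D with element stiffness matrices derived from tissue material properties, but the core behavior — a prescribed distortion propagating through connected elements until equilibrium — is the same.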

Surgeons require precise virtual instrumentation for VR to be widely applied in medicine. Many of the virtual tools for the user interface are available today. Current systems include the MIT Newman Laboratory joystick, which simulates forces on an instrument held in the user’s hand [22]. However, the conversion of texture from the 3D virtual model into actual touch perception remains a research challenge. Microelectromechanical systems (MEMS) employ computer chip fabrication technology with mechanical components to create miniature sensors for pressure, acceleration, and fluid flow. By combining computer chip technology with sensors and actuators, MEMS promise future progress as more mechanical functions are matched to advances in microcomputing.


This should be expected to influence future haptic research [23]. Force feedback devices transform information from the virtual patient model into a realistic sense of touch for the surgeon. Although these devices have improved recently, they still lack the resolution needed for a surgeon to fully experience and learn from operating in a virtual world.

Many instruments in the operating room need to be tracked by the virtual environment. These instruments can then provide accurate guidance during the actual surgical procedure. For instance, a surgical probe may show a green indicator light when the tip of the probe has reached its intended target, but shine red at all other times. Similarly, this system could also convey trajectory information: if the angle or position of the probe shifted from the planned trajectory, the probe would not display the green indicator light. This approach can also signal the proximity of critical structures to the surgeon. For example, during tumor ablation procedures, the instrument’s indicator would show a red light as the boundaries of the planned ablation are approached. In this way, adjacent normal tissues may be spared unnecessary trauma. In addition, the instrument may be attached to a robotic arm that is programmed to increase resistance to movement as certain predefined boundaries are approached.
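The indicator logic described above reduces to a simple rule combining a positional tolerance and a trajectory tolerance. A sketch, with hypothetical function names and tolerance values:

```python
import math

def indicator(tip, target, axis, planned_axis, pos_tol_mm=2.0, ang_tol_deg=5.0):
    """Hypothetical tracked-probe indicator: 'green' only when the tip is
    within pos_tol_mm of the planned target AND the probe axis deviates
    less than ang_tol_deg from the planned trajectory; 'red' otherwise.
    Both axis arguments are assumed to be unit vectors."""
    dist = math.dist(tip, target)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(axis, planned_axis))))
    angle = math.degrees(math.acos(dot))
    return "green" if dist <= pos_tol_mm and angle <= ang_tol_deg else "red"
```

The proximity warning for ablation boundaries is the same test inverted: red when the distance to a protected structure falls *below* a threshold.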

Tracking of instruments may be done with real-time imaging and processing (such as x-ray fluoroscopy or USG); alternatively, external sensors may be attached to instruments. Optical, electromagnetic, and ultrasonic sensors can provide continuous spatial localization. This technology is useful in compensating for human limitations of hand positioning (approximately 200 µm), intention tremor, and eye saccade motion [24]. The Hunter telepresence system for ophthalmological surgery tracks the motion of the eye such that 1 cm of hand motion equals 10 µm of laser movement. Video images magnify retinal vessels to the size of fingers, and digital signal processing and filtering remove hand tremor. By using these techniques, accuracy is improved from 200 µm to 10 µm, and the ophthalmologist can calibrate his movements to target single retinal cells. Additionally, because the patient-specific data are accurately represented in a virtual environment, any sort of data, including but not limited to visualization keys, surgical plans, and various operative data, can be fused into that environment.
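The two operations the Hunter system performs on hand motion — scaling and tremor filtering — can be illustrated with a 1000:1 scale factor (1 cm of hand travel to 10 µm of tool travel) and a moving-average low-pass filter. This is a sketch under stated assumptions; real systems use far more sophisticated digital filters.

```python
def scale_and_filter(hand_mm, scale=1e-3, window=5):
    """Scale a sequence of hand positions (mm) down by `scale`
    (1e-3 gives 10 mm of hand travel -> 0.01 mm = 10 um of tool travel)
    and suppress tremor with a trailing moving-average filter."""
    scaled = [x * scale for x in hand_mm]
    out = []
    for i in range(len(scaled)):
        lo = max(0, i - window + 1)
        out.append(sum(scaled[lo:i + 1]) / (i + 1 - lo))
    return out

# A steady 10 mm hand position maps to a steady 10 um tool position.
tool = scale_and_filter([10.0] * 10)
```

Scaling shrinks both the intended motion and the tremor; the low-pass filter then removes the tremor's high-frequency component, which is what lets net accuracy improve from roughly 200 µm to 10 µm.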

Finally, the need for true-to-life virtual operating environments must be addressed. The University of Illinois has developed the CAVE system, in which 3D images are projected in an 8-foot cubic room that permits physicians to walk among the images. Holographic imaging is another area of investigation [25]. In theory, this type of environment may serve as a virtual meeting place in which surgeons from multiple locations join in virtual space to perform a difficult operation. Although this approach is now possible, the resolution of both the sense of touch and vision is still limited. It must be emphasized that the detail necessary in touch is unique to surgery. On the other hand, the vision component is being actively addressed in other industries. Additionally, the resolution of head-mounted displays (HMDs) is currently nearing television-like quality [2]. Several institutions are researching alternative visual systems. Moreover, other research efforts are pursuing a virtual operating tray, from which a surgeon can choose virtual instruments to manipulate and repair soft tissue and bone trauma [26].

Initial efforts in the construction of models focused on accurate reproduction of the structures of interest, and even current virtual models emphasize this type of accuracy. However, all of these models are merely static. True virtual reality in medicine must be four-dimensional; that is, the models must also incorporate tissue changes as a function of time. Current haptic interface technologies primarily sense pressure; for virtual reality, they must also convey greater dimensionality. Furthermore, the user interfaces for these systems must be optimized so that surgeons may effectively employ them in the clinical setting. Finally, virtual environments should also support collaboration among multiple physicians, even when they are separated by great distances.

The technological advances that are driving CAS are also changing the means of communication among physicians. By using electronic information and communication technologies, surgeons can practice at a distance from the patient [21]. Local practitioners may be able to coordinate care with distant clinicians to improve decision making for patients. In addition, long-distance medicine and VR will change the way that surgeons maintain their credentials and obtain new skills.

A surgeon may use telemedicine capabilities in numerous ways. Telesurgical clinics in remote or poor areas may consult with colleagues anywhere in the world via a global communication network. Currently, this system includes a high-resolution digital camera and a desktop personal communicator. Local physicians can summarize relevant clinical information, including images, and send this clinical report to consultants worldwide via e-mail. The consultation is routed to the appropriate physician, who then initiates correspondence and provides expert advice on how to treat the patient. In this way, physicians may draw upon clinical resources that are not locally available. Although many institutions have constructed high-bandwidth networks for telemedicine, such network capacity is limited in more remote areas; as a result, real-time video transmission is not practical there. Similarly, operating remotely via robotic surgery systems is not yet possible, since available cable or satellite links introduce too much delay between the movement of the surgeon’s hand and the response of the robotic arm operating on the patient. Consequently, a low-bandwidth solution based on sequential asynchronous e-mails has emerged as a viable alternative. E-mail communications, coupled with digital photography, create a means for low-cost telementoring and teleconsultation and can improve decision making. In this way, the expertise of a tertiary referral center can influence the quality of care on a worldwide scale.
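The store-and-forward consultation described above amounts to packaging a clinical summary and a digital photograph into one asynchronous message. A sketch using Python's standard e-mail library; the addresses, filenames, and clinical text are placeholders:

```python
from email.message import EmailMessage

def build_consult(summary, image_bytes, to_addr, from_addr):
    """Package a clinical summary and a digital photograph as a single
    e-mail for asynchronous (store-and-forward) expert review."""
    msg = EmailMessage()
    msg["Subject"] = "Teleconsultation request"
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg.set_content(summary)  # plain-text clinical report
    msg.add_attachment(image_bytes, maintype="image",
                       subtype="jpeg", filename="lesion.jpg")
    return msg

consult = build_consult(
    "Placeholder clinical summary: lesion of the left cheek, images attached.",
    b"\xff\xd8placeholder-image-bytes",
    to_addr="consultant@example.org",
    from_addr="clinic@example.org",
)
```

Because the message is self-contained and asynchronous, it tolerates the low and intermittent bandwidth of remote sites that rules out real-time video.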

Eventually, telesurgery will allow surgeons from multiple locations to unite in one virtual operating space. Consider the case of a patient with a large lesion that covers almost all of her left cheek. The operative team removes the lesion quickly and then reconstructs the defect. A model based on high-resolution preoperative imaging will support preoperative planning. Such models ultimately will permit the visualization of blood vessels and tissue flaps in virtual space, although that objective has not yet been realized. The creation of multiple surgical plans will yield the optimal choice for the specific case. Postoperative results will also be predicted with complex mathematical algorithms that model complex tissue interactions over time. During the procedure, the operating surgeon may call upon several experts from remote sites for consultation via live video. Eventually, consultants at distant sites may join the operative team in a virtual operating environment. The presence of the consultant surgeon will be manifest through robotic arms that exactly carry out movements initiated by the consultant at a distant location. In this manner, technology will enable surgeons to operate in multiple spaces at one time.

A number of U.S. clinics make use of real-time video transmission for telementoring. An experienced plastic surgeon may remotely telementor a colleague through a complex trauma case requiring immediate attention. Surgeons have also evaluated and certified the skills of student surgeons for laparoscopic hernia repairs and in other teaching situations [27]. The future of VR for training and procedural testing starts with the categorization of surgical procedures into their component skills, knowledge items, tasks, and subtasks. VR can make this educational process lifelike, variable, and capable of running in real time.
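The proposed categorization of procedures into skills, tasks, and subtasks maps naturally onto a tree structure. A minimal sketch — the example breakdown of a hernia repair is illustrative, not a validated curriculum:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Node in a skills/tasks/subtasks decomposition of a procedure."""
    name: str
    skills: list = field(default_factory=list)    # skills exercised here
    subtasks: list = field(default_factory=list)  # child Task nodes

    def all_skills(self):
        """Collect every skill needed for this task and its subtasks,
        in order of appearance."""
        found = list(self.skills)
        for sub in self.subtasks:
            found.extend(sub.all_skills())
        return found

# Illustrative (not clinically validated) decomposition:
hernia_repair = Task("laparoscopic hernia repair", subtasks=[
    Task("port placement", skills=["trocar insertion"]),
    Task("mesh fixation", skills=["suturing", "knot tying"]),
])
```

A simulator curriculum built on such a tree can drill and score each leaf skill independently, then assemble them into whole-task rehearsals.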

The application of VR in the clinical setting faces some important limitations. First, it remains too difficult to simulate the complexity of an entire surgical procedure, and the specific situations where VR is most helpful have not been defined. Many questions regarding licensing requirements for practicing medicine with telesurgery technologies have not been answered. Technological failures and crashes are inevitable, and backup plans must consider possible communication breakdown. The performance of surgery from sites remote from the operating room is limited by current technological capacity: data transmission over 200 miles via cable or over 50 miles via wireless systems produces lag times with unacceptable effects on coordination. Satellite transmission produces similar lag-time effects and likewise cannot be used for this application.
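To put these distance limits in context, one can compute raw signal propagation delays. Note that propagation is only a lower bound — video coding, switching, and robotic control loops add substantially more delay, which is why the practical limits quoted above are far tighter than the physics alone suggests. The speeds used are standard approximations.

```python
def propagation_delay_ms(distance_km, medium_speed_km_s):
    """One-way signal propagation delay in milliseconds."""
    return distance_km / medium_speed_km_s * 1000.0

# Light in optical fibre travels at roughly 2.0e5 km/s:
fibre_200mi = propagation_delay_ms(322, 2.0e5)   # 200 miles ~ 322 km -> ~1.6 ms one way

# A geostationary satellite sits ~35,786 km up; a round trip needs
# four legs (up and down, there and back):
geo_round_trip = 4 * propagation_delay_ms(35786, 3.0e5)  # ~477 ms minimum
```

The satellite figure alone exceeds the lag a surgeon can tolerate in a closed hand-to-tool control loop, whereas the fibre figure shows that over cable it is the end-system processing, not the distance, that dominates.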

VR is predicted to play a critical role in credentialing surgeons in the near future [27,28]. With its power to allow training and testing in any procedure, it will serve as an objective tool to measure competence, just as it is used in the airline industry. It will offer the additional advantage of avoiding the use of animal laboratories or patients to improve surgical skills [29]. VR for teaching and credentialing is an important area for further research. Flight simulators are cost-effective and proven means to train pilots and maintain their skills; this technology is now aiding medicine and surgery in training and assessing professional skills [30]. There has been mounting concern that traditional continuing medical education (CME) courses that rely on didactic lectures do not improve physician performance. Interactive CME, alone or combined with traditional didactic instruction, provides an opportunity to practice skills and can change physician performance [31]. VR is the natural progression of these methods for teaching and CME. With the SRI System (SRI International, Menlo Park, CA), the surgeon views a 3D image from a minimally invasive procedure, which portrays organs and instruments as if the operative field were fully open. The surgeon sits at a console, inside or outside the operating suite, while an assistant stays with the patient and receives tactile feedback from the instrument tips. A number of animal procedures have been demonstrated, including gastrostomy and closure, gastric resection, bowel anastomosis, liver laceration suture, liver lobe resection, splenectomy, aortic graft replacement, and arteriotomy repair [21]. Thus, it is clear that VR will play a critical role in various medical applications.

Younger surgeons will be more adept at learning these technologies than their more senior colleagues. Learning laparoscopic or endoscopic surgery requires a decoupling of the oculo-vestibular axis from the tactile-proprioceptive axis so that the surgeon may manipulate the controls or instruments. Younger surgeons tend to be more capable of making this switch due to their experiences with video games and computers.

23.5. CONCLUSION

The critical steps to realizing the vision presented in this chapter involve significant development in the fields of human models, interface devices, and system verification. Human modeling poses the greatest challenge, since it will require several generations of improved mathematical algorithms to accurately represent normal tissues and pathological conditions, as well as the changes in both as a function of time. Interface tools, whether haptic or visual, will continue to evolve with the help of the many industries that also require improvements in this technology. For system verification, the ability of VR systems to reproduce the perception of true reality must be conclusively demonstrated, and the positive impact of CAS-based surgical planning and VR-derived educational experiences must be objectively confirmed. System verification is, of course, necessary for widespread adoption of these technologies by practicing surgeons.

Mathematical modeling of complex tissue interactions will dramatically alter the practice of medicine over the next 50 years. The software that results from better computational paradigms for predicting tissue interactions and outcomes will improve the diagnostic and therapeutic processes of health care. A generalized approach that creates a flexible model of the human body will let us superimpose information from various sources on a model of a specific patient. Ultimately, we will attain a higher standard of patient care through these advanced technologies.

The specific applications of CAS presented in this chapter are just the beginning. In the not-so-distant future, surgeons will be able to predict the outcome of therapies for a given patient, rather than predict the outcome based upon accumulated data from other patients in the population. This approach will ultimately lead to patient-specific therapies that provide a greater likelihood of success with a lower risk of morbidity.

Note: Although we have illustrated these cases with a few figures, the true beauty of these methods is the ability to manipulate the patient data set on a computer in an interactive way. We can provide interested readers with these data sets on a CD that supports interaction between the data and the user (contact Joseph.Rosen@hitchcock.org). In this way, readers can better understand the value of this approach.

REFERENCES

1. Limberg AA. The Planning of Local Plastic Operations on the Body Surface: Theory and Practice. DC Heath and Company, Lexington, MA, 1984.

2. Constantian MB, Entrenpries C, Sheen JH. The expert teaching system: a new method for learning rhinoplasty using interactive computer graphics. Plast Reconstr Surg 79:278, 1987.

3. Mattison RC. Facial video image processing: standard facial image capturing, software modification, development of a surgical plan, and comparison of pre-surgical and post-surgical results. Ann Plast Surg 29:385, 1992.

4. Chen DT, Zeltzer D. Pump it up: computer animation of a biomechanically based model of muscle using the finite element method. Comput Graphics 26:89–98, 1992.

5. Pieper S. More than skin deep. Unpublished master’s thesis. Massachusetts Institute of Technology, Cambridge, MA, 1989.

6. Pieper S. CAPS: computer-aided plastic surgery. Unpublished thesis. Massachusetts Institute of Technology, Cambridge, MA, 1992.

7. Pieper S, Rosen J, Zeltzer D. Interactive graphics for plastic surgery: a task-level analysis and implementation. In 1992 Symposium on Interactive 3D Graphics. ACM, New York, 1992.

8. Pieper S, Chen D, et al. Surgical simulation: from computer-aided design to computer-aided surgery. In Proceedings of Imaging. Monaco, OCM, 1992.

9. Pieper S, McKenna M, Chen D. Computer animation for minimally invasive surgery: computer system requirements and preferred implementations. In SPIE: Stereoscopic Displays and Virtual Reality Systems—The Engineering Reality of Virtual Reality. SPIE, Bellingham, WA, 1994.

10. Pieper SD, Laub DR Jr, Rosen JM. A finite-element facial model for simulating plastic surgery. Plast Reconstr Surg 96(5):1100–1105, 1995.

11. Pieper SD, Delp S, Rosen JM, Fisher S. A virtual environment system for simulation of leg surgery. In SPIE—The International Society for Optical Engineering, Stereoscopic Displays and Applications II. SPIE, Bellingham, WA, 1991.

12. Fuchs H, Livingston MA, Raskar R, et al. Augmented reality visualization for laparoscopic surgery. In First International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI ’98). Massachusetts Institute of Technology, Cambridge, MA, 1998.

13. DiGioia AJ, Jaramaz B, Colgan BD. Computer assisted orthopedic surgery. Clin Orthop 354(Sept):8–16, 1998.

14. DiGioia AJ, Jaramaz B, Colgan BD. Computer assisted orthopedic surgery. Clin Orthop 354(Sept):8–16, 1998.

15. Rosen JM. Advanced surgical techniques for plastic and reconstructive surgery. Comput Otolaryngol 31(2):357–367, 1998.

16. Mann R. The evaluation and simulation of mobility aids for the blind. In Rotterdam Mobility Research Conference. American Foundation for the Blind, New York, 1965.

17. Mann R. Computer-aided surgery. In Proceedings of RESNA 8th Annual Conference. Rehabilitation Engineering Society of North America, Bethesda, MD, 1985.

18. Delp S. Surgery simulation: using computer graphics models to study the biomechanical consequences of musculoskeletal reconstructions. In Proceedings of NSF Workshop on Computer-Assisted Surgery. National Science Foundation, Biomedical Engineering Section and the Robotics and Machine Intelligence Program, Washington, DC, 1993.

19. The Visible Human Project. National Library of Medicine, 1999. http://www.nlm.nih.gov/research/visible/visible_human.html.

20. Ackerman M. Accessing the Visible Human Project of the National Library of Medicine. D-Lib Magazine, 1995.

21. Satava R. Cybersurgery: Advanced Technologies for Surgical Practice. Wiley-Liss, New York, 1998.

22. Adelstein BR, Rosen JM. Design and implementation of a force reflecting manipulandum for manual control research. In ASME 1992: Advances in Robotics, DSC, 1992, pp. 1–12.

23. Madhani A, Niemeyer G, Salisbury JK. The Black Falcon: a teleoperated surgical instrument for minimally invasive surgery. In Int. Conf. on Intelligent Robots and Systems (IROS), Victoria, BC, Canada, 1998.

24. Satava RM. Virtual reality surgical simulator: the first steps. Surg Endosc 7(3):203–205, 1993.

25. Fakespace Systems announces industry first: a fully reconfigurable display system for immersive visualization. 1999. http://www.fakespace.com/press/101599.html.

26. Madhani A. Design of teleoperated surgical instruments for minimally invasive surgery. In Mechanical Engineering. Massachusetts Institute of Technology, Cambridge, MA, 1997.

27. Krummel TM. Surgical simulation and virtual reality: the coming revolution [editorial]. Ann Surg 228(5):635–637, 1998.

28. Raibert M. Surgical certification using virtual reality. 1996.

29. Bodily K. Surgeons and technology: presidential address. Am J Surg 177(5):351–353, 1999.

30. Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA 282(9):861–866, 1999.

31. Davis D, Thomson O’Brien MA, Freemantle N, et al. Impact of formal continuing medical education. JAMA 282(9):867–874, 1999.

32. Rosen JM. Virtual reality and plastic surgery. Adv Plast Reconstr Surg 13:33–47, 1996.