Source: Visual Prosthetics: Physiology, Bioengineering, Rehabilitation (Dagnelie, 2011)

M.P. Barry and G. Dagnelie

16.1  Introduction

Although not perfect models of visual prostheses, simulations of prosthetic vision provide insight into how these prostheses can function in principle. While tests of actual prostheses carry the burdens of device construction and implantation, with all the associated costs and approvals, simulations are relatively simple to implement and require less regulatory oversight. As such, before any sizable clinical trials were possible, simulations of visual prostheses were used to investigate the usability of prosthetic vision. Now, as clinical trials move forward, prosthetic vision simulations still provide insight into how different elements of the technology interact and affect performance. Simulations of theoretical device designs also help guide developers in building next-generation prostheses.

The first studies utilizing simulations of prosthetic vision were published in 1992 by a group at the University of Utah [3–5]. Each of these initial studies simulated simple square grids of dots by covering a small screen (1.7° of visual field across) with a film containing chemically etched holes. Using this simulation scheme, Cha et al. evaluated normally sighted subjects on tests of visual acuity [3], reading [5], and wayfinding [4]. As technology progressed, simulations of prosthetic vision evolved into software-based implementations of visual prostheses; however, the basic categories of tests have persisted, with a few additions: face and object recognition [14, 15, 21, 29, 30, 33, 34], hand–eye coordination [14, 15, 21, 29], visual tracking [20, 31], and purely computational tests [25]. Most of these tasks can be implemented and explored in virtual [1, 7, 13, 29, 32] as well as real [4, 13, 15, 29] environments. Taken together, these simulation studies provide a wide range of knowledge about what may be possible with actual prostheses, what resolution and other device properties may be required for specific tasks, and in which directions prosthetic development should proceed.

In this chapter we will summarize studies that have been performed in these different categories. We will open with some general remarks about the ways simulations are implemented and some of the basic parameters that can be varied. For the sake of consistency, the words “array” and “phosphene” will be used only to refer to actual prostheses and their associated percepts, while “grid” and “dot” or “simulated phosphene” will be used to refer to simulations of visual prostheses.

16.2  Simulation Techniques and Basic Parameters

In a typical prosthetic vision simulation, a sighted individual is presented with visual stimuli that approximate what a visual prosthesis wearer is expected to perceive. Typically, these are images whose original resolution has been reduced to match the stimulating array that is to be implanted in a blind subject, with individual dots or squares of light representing the phosphenes elicited at each point of stimulation.
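As a concrete illustration, the resolution reduction described above can be sketched as follows: each grid cell takes the mean brightness of the underlying image region and is rendered as a round dot of corresponding intensity. The grid size, Gaussian dot profile, and function names here are illustrative assumptions, not the chapter's actual implementation.

```python
import numpy as np

def pixelize(image, grid=(16, 16)):
    """Reduce a grayscale image (2-D float array) to a grid of
    simulated-phosphene brightness values.

    Each cell's value is the mean intensity of the image region it
    covers; the image is cropped so it divides evenly into the grid.
    """
    h, w = image.shape
    gy, gx = grid
    cropped = image[: h - h % gy, : w - w % gx]
    cells = cropped.reshape(
        gy, cropped.shape[0] // gy,
        gx, cropped.shape[1] // gx,
    ).mean(axis=(1, 3))
    return cells

def render_dots(cells, cell_px=20, sigma=0.25):
    """Render each cell as a circular Gaussian dot on a dark field,
    approximating the appearance of discrete phosphenes."""
    gy, gx = cells.shape
    frame = np.zeros((gy * cell_px, gx * cell_px))
    yy, xx = np.mgrid[0:cell_px, 0:cell_px]
    c = (cell_px - 1) / 2.0
    dot = np.exp(-((yy - c) ** 2 + (xx - c) ** 2)
                 / (2 * (sigma * cell_px) ** 2))
    for i in range(gy):
        for j in range(gx):
            frame[i * cell_px:(i + 1) * cell_px,
                  j * cell_px:(j + 1) * cell_px] = cells[i, j] * dot
    return frame
```

Varying the grid dimensions, dot spacing, or dot profile in such a filter is how simulation studies probe the effect of array resolution and phosphene appearance on task performance.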

Figure 16.1 shows the implementation of a prosthetic simulation commonly used in our laboratory, where pixelized images are presented in a video headset, with either a scene camera on the headset or rendering software under the control of a gaming engine (Half-Life; Valve Software, Bellevue, WA). Other configurations may involve a monitor display, a hand-held or glasses-mounted camera, or other image capture and display methods. Central to all simulations is a processor that transforms the incoming video stream into an outgoing stream that fulfills the properties of prosthetic vision as envisioned by the experimenter.

Fig. 16.1 Schematic arrangement in a typical prosthetic vision simulation. The filtering engine (top center unit) converts an incoming video stream from either a real or a virtual scene into pixelized imagery. The head-mounted display (HMD) in this arrangement is used to present the imagery to the subject and to monitor the subject's gaze through a built-in video camera observing the pupil. A scene camera mounted on the HMD can be used to provide live video for filtering. The pupil-tracking software (top left unit) provides the filtering engine with near-real-time gaze information, allowing the imagery to be stabilized on the subject's retina, simulating a fixed position of the stimulating array

16.2.1  Gaze Tracking and Image Stabilization

An important aspect of prosthetic vision with an external (head-worn, hand-held, or stand-mounted) camera is the loss of the effects of eye movements to which every sighted person is accustomed. As illustrated in Fig. 16.2 (left panel), an eye movement executed by a sighted person makes the image of the observed object shift across the retina, and hence across the projection areas in the visual cortex. The visual system deals with this by signaling to the visual cortex that an eye movement is being made, so that the shifting image is perceived as a stable rather than a moving world. This situation changes dramatically (Fig. 16.2, central panel) if the image from a stationary external camera is presented to the visual system in the form of electrical impulses from a set of electrodes attached to the retina or higher visual centers: an eye movement executed by the prosthesis wearer will still signal to the visual cortex that an image shift should occur, but since the camera and electrodes are stationary no such shift takes place, and the resulting percept is a disconcerting jump of the scene.
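In a simulation with gaze tracking, as in the Fig. 16.1 arrangement, this mismatch can be counteracted by shifting the displayed grid along with the measured gaze, so that the imagery stays at a fixed retinal location, as an implanted array would. A minimal sketch, assuming a simple linear degrees-to-pixels display mapping (the function and parameter names are hypothetical):

```python
def stabilized_offset(gaze_deg, px_per_deg):
    """Display offset (in pixels) that keeps the simulated phosphene
    grid at a fixed retinal position despite an eye movement.

    gaze_deg: (x, y) gaze direction in degrees relative to straight
    ahead, as reported by the pupil tracker.
    When the eye rotates by g degrees, a display-fixed point moves
    -g degrees across the retina; shifting the drawn grid by +g
    degrees (converted to pixels) cancels that motion, mimicking an
    implanted, eye-fixed array.
    """
    gx, gy = gaze_deg
    return (gx * px_per_deg, gy * px_per_deg)
```

The latency of the tracker matters here: the caption of Fig. 16.1 notes that the gaze information is only near-real-time, so any lag between the eye movement and the applied offset reintroduces a brief percept shift.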

Retinal implants that perform image capture directly inside the eye will not have this problem, as the image will shift according to eye movements. Of the current