
16  Simulations of Prosthetic Vision


In 2008, Boyle et al., of Queensland University of Technology in Australia, conducted experiments in which subjects were asked to judge the quality of pixelized images [1]. No viewing constraints were used, and no actual wayfinding was performed in this study. Instead, subjects viewed an original image and then judged which among a set of binary, 25 × 25 dot versions of that image they would find most useful for navigation. The authors applied various algorithms to highlight important or salient stimuli in an image, and investigated various approaches to zooming in on important image features. The authors found that subjects did not favor any kind of feature-based processing when no zoom was applied. In the second phase of their study, however, subjects appeared to prefer a zooming method that trimmed away unimportant pixels based on a saliency map generated by a program from the University of Southern California. The final image was still 25 × 25 dots, but contained information from a smaller portion of the original image. The authors suggest that such saliency detection and zoom could be used in actual prosthetic devices.
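For illustration, the sketch below shows one way such a saliency-driven zoom followed by reduction to a 25 × 25 binary dot pattern could be implemented. It is a minimal Python sketch, not the authors' code: a crude gradient-energy measure stands in for the University of Southern California saliency model, and the function name and parameters are hypothetical.

```python
# Hypothetical sketch of the pipeline described above: crop the original image
# to its most salient region, then reduce it to a 25 x 25 binary dot pattern.
# The saliency measure here (gradient energy) is only a stand-in for the USC
# saliency model used by Boyle et al.
import numpy as np

def salient_crop_to_dots(image, out_size=25, crop_fraction=0.6):
    """Crop a 2D grayscale array around its most salient region, then
    block-average and threshold it to a binary out_size x out_size grid.
    Assumes the crop is larger than out_size in each dimension."""
    h, w = image.shape
    # Crude saliency proxy: gradient magnitude (not the USC model).
    gy, gx = np.gradient(image.astype(float))
    saliency = np.hypot(gx, gy)

    # Centre of mass of saliency defines where the zoom window is placed.
    ys, xs = np.indices(saliency.shape)
    total = saliency.sum() + 1e-9
    cy = int((ys * saliency).sum() / total)
    cx = int((xs * saliency).sum() / total)
    ch, cw = int(h * crop_fraction), int(w * crop_fraction)
    y0 = int(np.clip(cy - ch // 2, 0, h - ch))
    x0 = int(np.clip(cx - cw // 2, 0, w - cw))
    crop = image[y0:y0 + ch, x0:x0 + cw]

    # Block-average down to out_size x out_size, then binarize at the median.
    yb = np.linspace(0, ch, out_size + 1, dtype=int)
    xb = np.linspace(0, cw, out_size + 1, dtype=int)
    dots = np.array([[crop[yb[i]:yb[i + 1], xb[j]:xb[j + 1]].mean()
                      for j in range(out_size)] for i in range(out_size)])
    return dots > np.median(dots)
```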

Wang et al. in our laboratory published experiments on wayfinding in virtual environments later in 2008 [32]. Subjects viewed a virtual environment through a stabilized image of 6 × 10 dots spanning 16.2° × 27°. The authors investigated factors affecting the time to traverse the virtual environment, including contrast, background noise, and dot dropout. Of these, only dot dropout had a significant effect on performance. When dropout was set to 30%, completion time increased by 40%. This underscores the importance of the number of dots available to subjects for wayfinding, as well as the importance of array integrity in visual prosthesis implantees.
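A minimal sketch, assuming a 6 × 10 phosphene grid as in Wang et al., of how such a dropout manipulation can be simulated: a fixed fraction of dots is disabled at random and never rendered, mimicking non-functional electrodes. The function names and interface are hypothetical.

```python
import numpy as np

def make_dropout_mask(rows=6, cols=10, dropout=0.3, seed=None):
    """Return a boolean (rows x cols) mask; True marks a functioning dot."""
    rng = np.random.default_rng(seed)
    n = rows * cols
    n_dead = int(round(dropout * n))
    mask = np.ones(n, dtype=bool)
    dead = rng.choice(n, size=n_dead, replace=False)
    mask[dead] = False
    return mask.reshape(rows, cols)

def render_phosphenes(sampled_brightness, mask):
    """Zero out the dots knocked out by the dropout mask before display."""
    return np.where(mask, sampled_brightness, 0.0)
```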

Srivastava et al., in the same publication as their target counting and checker placing tasks [29], had subjects navigate virtual environments similar to those of Wang et al. and Dagnelie et al., using a cortical rather than a retinal grid layout. Unlike the subjects of Wang et al. and Dagnelie et al., however, these subjects did suffer a significant deterioration in performance with dot dropout. This may be a result of the large visual angle subtended by the cortical grid: a random reduction from 650 to 325 dots (at 50% dropout) leaves a sparser set than what was available in the retinal simulations.

Jointly, these studies suggest that, so long as an optimal viewing angle is obtained [4], wayfinding performance is primarily dependent upon the number of dots presented in a grid simulation [4, 13]. Based on Dagnelie et al.’s [13] and Wang et al.’s [32] studies, current 6 × 10 electrode arrays placed in the retina should be sufficient for basic wayfinding, but performance will deteriorate in cases of significant electrode dropout.

16.6  Visual Tracking

Hallum et al., of the Universities of New South Wales and Newcastle, Australia, examined the ability of subjects to track a target using grids generated with three different aperture weighting schemes [20]. Each of these grids contained 23 dots in a hexagonal layout, spanning about 7.4° × 5.4°, on a screen spanning 16.7° × 16.7°. A target sized 36 arcmin² moved across the screen, generally in an S-shaped pattern, and subjects were asked to keep the grid centered over the target by moving a joystick. The first filtering scheme the authors employed was analogous to the setup of Cha et al. [3–5], in which the stimulus was sampled only at the 23 locations corresponding to dot positions. The second scheme regionally averaged the underlying stimulus to determine dot characteristics. The third scheme employed a Gaussian sensitivity profile with σ = 34.4′. Subjects could freely view the screen binocularly. The authors found that, after practice, the Gaussian profile permitted more accurate fixation on, saccades to, and pursuit of the targets. The authors advocate applying this filtering technique in actual prostheses to improve performance in visual tasks, but they do not discuss the possibility that the unrestricted gaze and the choice of stimuli may have skewed their findings.
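The three aperture weighting schemes can be summarized with the following illustrative sketch (not the authors' code). Point sampling takes a single pixel at the dot centre, regional averaging pools a square neighborhood, and the Gaussian scheme weights the stimulus by a profile whose standard deviation corresponds to σ = 34.4 arcmin converted to pixels for the given display geometry; all names and parameters are hypothetical.

```python
import numpy as np

def sample_point(image, cy, cx):
    """Scheme 1: take the single pixel under the dot centre."""
    return float(image[int(cy), int(cx)])

def sample_box(image, cy, cx, radius_px):
    """Scheme 2: average the square region around the dot centre."""
    y0, y1 = int(cy - radius_px), int(cy + radius_px) + 1
    x0, x1 = int(cx - radius_px), int(cx + radius_px) + 1
    return float(image[max(y0, 0):y1, max(x0, 0):x1].mean())

def sample_gaussian(image, cy, cx, sigma_px):
    """Scheme 3: weight the image by a 2D Gaussian centred on the dot."""
    ys, xs = np.indices(image.shape)
    w = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma_px ** 2))
    return float((image * w).sum() / w.sum())
```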

Wang et al. published a study with a similar experimental setup in 2008 [31]. Subjects viewed, monocularly, a 0.94° target moving across a screen, either with natural vision or through a 10 × 10 cell grid. Subjects were asked to follow the target with their eyes, which were monitored by an infrared eye tracker. When simulating prosthetic vision, the rendered grid moved with the subject's eye movements, corresponding to an array placed over the fovea, superior retina, or nasal retina. The authors found that, compared to using natural vision, subjects took about 65% longer to detect sudden target movements in simulated prosthetic vision. This is comparable to the 20% increase in reaction time over natural vision reported by Hallum et al. [20], when differences in viewing conditions between the two studies are taken into account. The authors also found that horizontal eye movements were more unstable in simulated prosthetic vision, particularly for stabilized stimulus placement on the nasal retina. Simulating superior retinal placement offered more stability than nasal placement, but not as much as foveal placement. The authors conclude that, although initiation is slower and movement is less stable, pursuit eye movements should be possible with optoelectronic retinal prostheses and that, among peripheral retinal locations for a prosthesis, superior placement may be more beneficial than inferior placement.
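The gaze-contingent (stabilized) rendering used in this kind of simulation can be sketched as follows: the grid is re-centred on the current gaze sample plus a fixed retinal offset, so the simulated array stays at the same retinal location as the eye moves. The offset values and coordinate conventions below are illustrative assumptions, not those of the study.

```python
# Hypothetical screen coordinates in degrees; offsets and their signs depend
# on the display geometry and on which eye is simulated.
RETINAL_OFFSETS_DEG = {
    "fovea": (0.0, 0.0),       # grid centred on the point of gaze
    "superior": (0.0, -8.0),   # example eccentricity only
    "nasal": (8.0, 0.0),       # example eccentricity only
}

def grid_center(gaze_xy_deg, placement="fovea"):
    """Return where to draw the grid centre for the current gaze sample."""
    dx, dy = RETINAL_OFFSETS_DEG[placement]
    gx, gy = gaze_xy_deg
    return gx + dx, gy + dy
```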

16.7  Computational Simulations

Pezaris et al., of Harvard Medical School, are pursuing a visual prosthesis within the lateral geniculate nucleus (LGN) of the thalamus [25]. In 2009, these authors published results of a study they conducted to evaluate four different basic prosthesis designs with various values of electrode spacing. Unlike all other simulation studies mentioned in this chapter, however, Pezaris et al. used a purely computational approach to determine the likely benefit of each design’s implementation.

The authors specifically investigated prosthesis designs in which electrode tips are placed in a 3D grid throughout the LGN or along a 2D slice through the LGN, and in which activation is delivered through two different forms of deep brain stimulation (DBS) electrodes. According to their analyses, the 3D grid would be the most useful for creating many phosphenes throughout the visual field. Among the possibilities for laying out a 2D grid in the LGN, with much less of the visual field being stimulated, placement along a sagittal slice midway through the LGN would seem to be best. DBS electrodes containing as many as 60 microwires could create stimulation similar to that of the 2D plane. Traditional DBS electrodes, however, would offer only three to four large conductive cuffs on their shafts, which would provide stimulation more akin to solid curves or arcs. While implementing an LGN prosthesis with a traditional DBS electrode may be technically and surgically simpler, it would provide little useful stimulation. On the other hand, a full 3D grid, which would provide selective stimulation across the entire visual field, would be much more challenging to implement. Future studies may shed more light on exactly how much benefit each design could provide.

16.8  Conclusion

Many of the designs found by these simulation studies to offer desirable performance in common visual tasks exceed current technological realities. For example, numerous reading studies concluded that grids containing about 600 dots are required for reading speeds of 50–70 words/min. Some studies, however, purposely used grid configurations that are currently in use in visual prostheses. These studies, limiting grid sizes to about 4 × 4 or 6 × 10 dots, found that even simple prosthesis designs can support modest performance on everyday visual tasks. While less applicable to setting performance expectations for current devices, simulations that represent more complex designs guide developers in expanding this technology and provide specific goals for prosthesis structure and function.

References

1. Boyle JR, Maeder AJ, Boles WW (2008), Region-of-interest processing for electronic visual prostheses. J Electron Imaging, 17(1): p. 013002.
2. Cai S, Fu L, Zhang H, et al. (2005), Prosthetic visual acuity in irregular phosphene arrays under two down-sampling schemes: a simulation study. Conf Proc IEEE Eng Med Biol Soc, 5: p. 5223–6.
3. Cha K, Horch K, Normann RA (1992), Simulation of a phosphene-based visual field: visual acuity in a pixelized system. Ann Biomed Eng, 20: p. 439–49.
4. Cha K, Horch KW, Normann RA (1992), Mobility performance with a pixelized vision system. Vision Res, 32(7): p. 1367–72.
5. Cha K, Horch KW, Normann RA, Boman DK (1992), Reading speed with a pixelized vision system. J Opt Soc Am A, 9(5): p. 673–7.
6. Chen SC, Hallum LE, Lovell NH, Suaning GJ (2005), Learning prosthetic vision: a virtual-reality study. IEEE Trans Neural Syst Rehabil Eng, 13(3): p. 249–55.
7. Chen SC, Hallum LE, Lovell NH, Suaning GJ (2005), Visual acuity measurement of prosthetic vision: a virtual-reality simulation study. J Neural Eng, 2(1): p. S135–45.
8. Chen SC, Hallum LE, Suaning GJ, Lovell NH (2006), Psychophysics of prosthetic vision: I. Visual scanning and visual acuity. Conf Proc IEEE Eng Med Biol Soc, 1: p. 4400–3.
9. Chen SC, Hallum LE, Suaning GJ, Lovell NH (2007), A quantitative analysis of head movement behaviour during visual acuity assessment under prosthetic vision simulation. J Neural Eng, 4(1): p. S108–23.
10. Chen SC, Lovell NH, Suaning GJ (2004), Effect on prosthetic vision visual acuity by filtering schemes, filter cut-off frequency and phosphene matrix: a virtual reality simulation. Conf Proc IEEE Eng Med Biol Soc, 6: p. 4201–4.
11. Dagnelie G (2008), Psychophysical evaluation for visual prosthesis. Annu Rev Biomed Eng, 10: p. 339–68.
12. Dagnelie G, Barnett D, Humayun MS, Thompson RW Jr (2006), Paragraph text reading using a pixelized prosthetic vision simulator: parameter dependence and task learning in free-viewing conditions. Invest Ophthalmol Vis Sci, 47(3): p. 1241–50.
13. Dagnelie G, Keane P, Narla V, et al. (2007), Real and virtual mobility performance in simulated prosthetic vision. J Neural Eng, 4(1): p. S92–101.
14. Dagnelie G, Thompson RW, Barnett D, Zhang W (2001), Simulated prosthetic vision: perceptual and performance measures. In: Vision Science and its Applications, OSA Technical Digest. Washington, DC: Optical Society of America: p. 43–6.
15. Dagnelie G, Walter M, Yang L (2006), Playing checkers: detection and eye–hand coordination in simulated prosthetic vision. J Mod Opt, 53: p. 1325–42.
16. Fornos AP, Sommerhalder J, Rappaz B, et al. (2006), Processes involved in oculomotor adaptation to eccentric reading. Invest Ophthalmol Vis Sci, 47(4): p. 1439–47.
17. Fornos AP, Sommerhalder J, Rappaz B, et al. (2005), Simulation of artificial vision, III: do the spatial or temporal characteristics of stimulus pixelization really matter? Invest Ophthalmol Vis Sci, 46(10): p. 3906–12.
18. Hallum LE, Cloherty SL, Lovell NH (2008), Image analysis for microelectronic retinal prosthesis. IEEE Trans Biomed Eng, 55(1): p. 344–6.
19. Hallum LE, Suaning GJ, Lovell NH (2004), Contribution to the theory of prosthetic vision. ASAIO J, 50(4): p. 392–6.
20. Hallum LE, Suaning GJ, Taubman DS, Lovell NH (2005), Simulated prosthetic visual fixation, saccade, and smooth pursuit. Vision Res, 45(6): p. 775–88.
21. Hayes JS, Yin VT, Piyathaisere D, et al. (2003), Visually guided performance of simple tasks using simulated prosthetic vision. Artif Organs, 27(11): p. 1016–28.
22. Humayun MS, Dorn JD, Ahuja AK, et al. (2009), Preliminary 6 month results from the Argus™ II epiretinal prosthesis feasibility study. Conf Proc IEEE Eng Med Biol Soc, 1: p. 4566–8.
23. Kelly AJ, Yang L, Dagnelie G (2004), The effects of stabilization, font scaling and practice on reading in simulated prosthetic vision. Invest Ophthalmol Vis Sci, 45: ARVO E-abstr. #5436.
24. Perez Fornos A, Sommerhalder J, Pittard A, et al. (2008), Simulation of artificial vision: IV. Visual information required to achieve simple pointing and manipulation tasks. Vision Res, 48(16): p. 1705–18.
25. Pezaris JS, Reid RC (2009), Simulations of electrode placement for a thalamic visual prosthesis. IEEE Trans Biomed Eng, 56(1): p. 172–8.
26. Sommerhalder J, Oueghlani E, Bagnoud M, et al. (2003), Simulation of artificial vision: I. Eccentric reading of isolated words, and perceptual learning. Vision Res, 43(3): p. 269–83.
27. Sommerhalder J, Rappaz B, de Haller R, et al. (2004), Simulation of artificial vision: II. Eccentric reading of full-page text and the learning of this task. Vision Res, 44(14): p. 1693–706.
28. Sommerhalder JR, Fornos AP, Chanderli K, et al. (2006), Minimum requirements for mobility in unpredictable environments. Invest Ophthalmol Vis Sci, 47: ARVO E-abstr. #3204.
29. Srivastava NR, Troyk PR, Dagnelie G (2009), Detection, eye–hand coordination and virtual mobility performance in simulated vision for a cortical visual prosthesis device. J Neural Eng, 6(3): p. 035008.
30. Thompson RW Jr, Barnett GD, Humayun MS, Dagnelie G (2003), Facial recognition using simulated prosthetic pixelized vision. Invest Ophthalmol Vis Sci, 44(11): p. 5035–42.
31. Wang L, Yang L, Dagnelie G (2008), Initiation and stability of pursuit eye movements in simulated retinal prosthesis at different implant locations. Invest Ophthalmol Vis Sci, 49(9): p. 3933–9.
32. Wang L, Yang L, Dagnelie G (2008), Virtual wayfinding using simulated prosthetic vision in gaze-locked viewing. Optom Vis Sci, 85(11): p. E1057–63.
33. Zhao Y, Lu Y, Tian Y, et al. (2010), Image processing based recognition of images with a limited number of pixels using simulated prosthetic vision. Inf Sci, 180: p. 2915–24.
34. Zhao Y, Tian Y, Liu H, et al. (2008), Pixelized images recognition in simulated prosthetic vision. IFMBE Proc, 19: p. 492–6.