J. B. Pelz and C. Rothkopf
3. Discussion
Distributions of fixation duration and saccade size are low-level behavioral measures that can provide a window onto oculomotor performance. In the present experiment, the distribution of fixation durations proved to be sensitive to some Environment and Task effects. Fixation durations differed between the two environments: mean, median, and modal fixation durations were shorter in the Man-made environment than in the Wooded environment. The differences were due largely to a drop in the frequency of very short fixations (less than 200 ms) in the Free-view task in the Wooded environment. It is interesting to note that there was little difference between the Man-made and Wooded environments in the Walking task, despite the more direct motor connection between the subject and the environment while walking. In the Free-view task, where vision was used to gather information but not to support motor actions, the low-level measure of fixation duration was more strongly affected by the type of environment.
Unlike the fixation-duration and saccade-size distributions, analysis of gaze position during the Walking task provided a more direct measure of the value of visual information gathered during foveations. Despite significant between-subject variation, there was a dramatic increase in the fraction of gaze directed to the path directly in front of the subject while walking in the Wooded environment. The difference between walking in the two environments can be expressed in terms of the ‘predictability’ of the travel path. When walking on a paved walkway there is an expectation that changes will be slow in coming, and significant changes will be signaled by differences that can be captured with peripheral vision. Walking on a dirt path, however, is not as predictable, and differences might require foveal acuity.
In contrast to the results reported by Patla and Vickers (2003), two of the three subjects in the present study spent less than 25% of their time focusing on the 3-meter area directly before them in the Man-made environment. These results are closer to those reported by Turano et al. (2001), in which normal subjects attempted to navigate a hallway with a limited field of view. Even in that case, gaze was directed away from the immediate path approximately 75% of the time. Patla and Vickers' (2003) result is surprising given the predictability of the regularly spaced footprints, especially in the condition with no specific instructions regarding foot placement while walking. It is possible that those subjects focused more on the specific travel path even without explicit instructions to do so because they were well aware that the focus of the experiment was gaze placement on the path. In the present experiment, as in Turano et al. (2001), walking was a necessary subtask, not the instructed task. One element that should be considered in 'real-world' tasks is the ample supply of alternate fixation targets. Unlike a laboratory task that may present only a blank wall as an alternative to fixating the direct Path region, both the Man-made and Wooded environments presented a huge number of potential fixation targets to compete with the Path region.
Ch. 31: Oculomotor Behavior in Natural and Man-Made Environments

As we move from lab-based experiments with 2D stimuli toward exploring natural oculomotor behavior in mobile observers, the interactions between environment, task, and top-down motivation will become ever more evident. As we try to understand truly natural oculomotor behavior, it is important that the eyetrackers used to monitor that behavior interfere with it as little as possible. Minimizing both the physical constraints and the obtrusiveness of the systems is critical.
References
Andrews, T., & Coppola, D. (1999). Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments. Vision Research, 39, 2947–2953.
Babcock, J. S., & Pelz, J. B. (2004). Building a lightweight eyetracking headgear. ETRA 2004: Eye Tracking Research and Applications Symposium, San Antonio, TX.
Delabarre, E. B. (1898). A method of recording eye-movements. American Journal of Psychology, 9, 572–574.
Findlay, J. M., & Walker, R. (1999). A model of saccade generation based on parallel processing and competitive inhibition. Behavioral and Brain Sciences, 22(4), 661–721.
Henderson, J. M., & Hollingworth, A. (1998). Eye movements during scene viewing: An overview. In G. Underwood (Ed.), Eye guidance in reading and scene perception (pp. 269–293). Amsterdam: Elsevier.
Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506.
Land, M. F. (this volume). Fixation strategies during active behaviour: A brief history.
Land, M. F., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28, 1311–1328.
Lucas, B. D., & Kanade, T. (1981). An iterative image registration technique with an application to stereo vision. Proceedings of the 7th International Joint Conference on Artificial Intelligence, Aug 24–28, Vancouver, British Columbia, 674–679.
Parkhurst, D., & Niebur, E. (2003). Scene content selected by active vision. Spatial Vision, 16(2), 125–154.
Patla, A., & Vickers, J. (2003). How far ahead do we look when required to step on specific locations in the travel path during locomotion? Experimental Brain Research, 148, 133–138.
Pelz, J. B., & Canosa, R. (2001). Oculomotor behavior and perceptual strategies in complex tasks. Vision Research, 41, 3587–3596.
Pelz, J. B., Canosa, R., Babcock, J., Kucharczyk, D., Silver, A., & Konno, D. (2000). Portable eyetracking: A study of natural eye movements. Proceedings of the SPIE, Human Vision and Electronic Imaging, San Jose, CA.
Rayner, K. (1984). Visual selection in reading, picture perception and visual search: A tutorial review. In H. Bouma & D. Bouwhuis (Eds.), Attention and Performance X (pp. 67–96). Hillsdale, NJ: Lawrence Erlbaum.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3), 372–422.
Rothkopf, C., & Pelz, J. B. (2004). Head movement estimation for wearable eye tracker. ETRA 2004: Eye Tracking Research and Applications Symposium, San Antonio, TX.
Salvucci, D. D., & Goldberg, J. H. (2000). Identifying fixations and saccades in eye-tracking protocols. Proceedings of the Eye Tracking Research and Applications Symposium. New York: ACM Press.
Sicuranza, G., & Mitra, S. (2000). Nonlinear image processing (Communications, Networking and Multimedia). Orlando, FL: Academic Press.
Tian, T. Y., Tomasi, C., & Heeger, D. J. (1996). Comparison of approaches to egomotion computation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, 315–320.
Tomasi, C., & Kanade, T. (1991). Shape and motion from image streams: A factorization method – 3. Detection and tracking of point features. Technical Report CMU-CS-91-132, Carnegie Mellon University.
Turano, K. A., Geruschat, D. R., & Baker, F. H. (2003). Oculomotor strategies for the direction of gaze tested with a real-world activity. Vision Research, 43, 333–346.
Turano, K. A., Geruschat, D. R., Baker, F. H., Stahl, J. W., & Shapiro, M. D. (2001). Direction of gaze while walking a simple route: Persons with normal vision and persons with retinitis pigmentosa. Optometry and Vision Science, 78, 667–675.
Wade, N. J. (this volume). Scanning the seen: Vision and the origins of eye movement research.
Westheimer, G. (this volume). Eye movement research in the 1950s.
Yarbus, A. L. (1961). Eye movements during examination of complex objects (in Russian). Biofizika, 6.
Yarbus, A. L. (1967). Eye movements and vision (B. Haigh, Trans.). New York: Plenum Press. (Original work published 1955–1961).
A. E. Patla et al.
Abstract
Spatio-temporal gaze fixation patterns, along with locomotion data, were analyzed as participants selected a safe route around obstacles to reach an exit point, in order to determine what information is critical for route selection and when it is needed. The results suggest that routes are not planned a priori but are based on visual information acquired during locomotion. During locomotion, gaze was intermittently fixated at various locations that provided useful information for steering control and collision avoidance. A new route-selection model was developed that predicted participants' travel paths more accurately than two previous models, the on-line control and avoid-a-crowd models. The new model was guided by the gaze fixation data; it identifies safe corridors while minimizing path deviations from the end-goal to select a route.
Ch. 32: Gaze Fixation Patterns During Goal-Directed Locomotion
Obtaining information about environmental features that are located at a distance is essential to move safely to the intended goal. While other sensory modalities such as the auditory system and the olfactory system can provide information about environmental features that are located at a distance, no modality comes close to providing as accurate and precise information about environmental features, static and dynamic, as vision. It is no wonder that most animal species rely on the visual system to guide them safely through their environment. However, the challenge has always been to identify what information is used and how it is used to regulate locomotor patterns.
One approach is to study gaze behavior: where people look, and how gaze patterns change when approaching a target, can provide unique insights into the nature of the visual information used for planning and controlling limb movements in cluttered terrain (Patla, 2004; Sherk & Fowler, 2000). Our eyes provide us with a detailed visual image through intermittent foveation of specific locations/areas despite non-homogeneous retinal acuity (Land, 1999). Saccades and fixations are the dominant eye-movement patterns observed during the performance of a variety of perceptual tasks such as viewing a picture (Yarbus, 1967), simple motor tasks like pointing (Neggers & Bekkering, 2001), approaching and stepping over an obstacle (Patla & Vickers, 1997), and complex procedural motor tasks such as tea making (Land, Mennie, & Rusted, 1999). Where the eyes fixate is not random: the location/object being fixated provides essential information for both perception and the control of action (Land, 1999). Therefore, gaze patterns can provide very useful information about what is relevant to the task. Furthermore, the temporal relationship between the onset of gaze fixations and changes in body movement can distinguish between on-line visual control (Hollands, Marple-Horvat, Henkes, & Rowan, 1995; Land & Lee, 1994; Neggers & Bekkering, 2001) and feed-forward control (Patla & Vickers, 2003).
Until recently, gaze behavior during locomotion was studied for tasks where the environment and/or instruction specified the action such as stepping over an obstacle (Patla & Vickers, 1997), changing direction of locomotion (Hollands, Patla, & Vickers, 2002), or stepping on specific targets (Hollands et al., 1995; Patla & Vickers, 2003). In these studies, no action choices had to be made. Therefore gaze fixations were guided by the task requirement, be it avoiding a particular obstacle or stepping on a particular target.
When the task only specifies the end goal in a cluttered environment, it is unclear how routes are selected. Specifically, is route selection based on on-line control, using visual information about obstacles as they are encountered on the path to the end goal? Or does some preplanning occur to determine a route through the cluttered environment in advance? No research in the literature has empirically examined gaze behavior during path selection in a cluttered environment. Hence, our primary objective in the current study was to document spatio-temporal patterns of gaze behavior while individuals walked through a cluttered environment to a goal that was visible from the start. This is the first study to document gaze behavior when the environment and/or instruction do not specify the travel path to be taken. While a description of the travel path that individuals took is useful, by itself it does not shed light on the type of information and/or the strategies used by individuals for path selection. We monitored participants' gaze patterns during this task to find out what they fixated on and therefore what was important
for route selection and locomotion (Land & Hayhoe, 2001), and how they used vision to guide their travel path selection. Since major changes in travel path involve steering around obstacle(s), one would expect gaze fixations on objects that force participants to turn to be linked tightly with steering onset (Land & Lee, 1994).
The second objective was to evaluate two models, the on-line control and avoid-a-crowd models (Patla, Tomescu, & Ishac, 2004), which make different predictions about the travel path people take, and to determine which is more consistent with the observed gaze patterns. Finally, we propose a new model, guided in part by the observed spatio-temporal gaze patterns, that more accurately predicts route selection (Fajen, Beem, & Warren, 2002; Fajen & Warren, 2003; Patla et al., 2004).
1. Methods and materials
1.1. Participants
Five healthy University of Waterloo students volunteered for the study (2 females, 3 males; Age range: 19–31 years, average 23.6 years). The protocol was approved by the Office of Research Ethics at the University of Waterloo.
1.2. Experimental protocol
Twelve standard traffic pylons were used as obstacles that had to be avoided during walking. Each pylon had a square base (l = 0.35 m) on which a cone was mounted (h = 0.72 m, d = 0.28 m). A 9 × 13 grid of square cells (l = 0.35 m) was marked on the laboratory floor; the entire grid measured 4.55 m × 3.15 m. The 12 pylon locations on the grid cells were randomly generated with the conditions that (1) no two pylons were placed in adjacent cells (a pylon was never in one of the eight cells around another pylon) and (2) pylons were not placed directly in front of an entrance or exit point. There were three entrance and two exit points to the grid. A schematic diagram of the experimental setup is shown in Figure 1a.
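The two placement conditions above (no pylon in any of the eight cells surrounding another, and none directly in front of an entrance or exit) can be sketched as a simple rejection/greedy sampling routine. This is an illustrative reconstruction, not the authors' generation code; the set of cells blocked in front of the entry and exit points is a hypothetical choice, since the chapter does not give their exact coordinates.

```python
import random

ROWS, COLS = 9, 13   # grid of 0.35 m square cells (3.15 m x 4.55 m)
N_PYLONS = 12

# Cells assumed blocked in front of the three entrances and two exits
# (hypothetical coordinates; the chapter does not specify them).
BLOCKED = {(0, 2), (0, 6), (0, 10), (8, 4), (8, 8)}

def chebyshev(a, b):
    """Chebyshev distance: the eight surrounding cells are at distance 1."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def place_pylons(rng):
    """Pick 12 cells at random so that no two pylons sit in adjacent cells."""
    cells = [(r, c) for r in range(ROWS) for c in range(COLS)
             if (r, c) not in BLOCKED]
    while True:                      # retry if a shuffle cannot fit 12 pylons
        rng.shuffle(cells)
        chosen = []
        for cell in cells:
            # keep a cell only if it is >= 2 cells away from every pylon so far
            if all(chebyshev(cell, p) >= 2 for p in chosen):
                chosen.append(cell)
                if len(chosen) == N_PYLONS:
                    return chosen

pylons = place_pylons(random.Random(1))
```

On a 9 × 13 grid, any maximal greedy selection under this spacing rule contains at least 13 cells, so a single shuffled pass always yields the required 12.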
Gaze location was monitored with an Applied Science Laboratories (ASL, Bedford, USA) 501 eye-tracker, a head-mounted device that monitors the eye and the scene (reflecting what the individual sees) to determine eye gaze relative to the head-mounted optics (Hollands et al., 2002; Land & Hayhoe, 2001; Land et al., 1999; Patla & Vickers, 1997, 2003). The eye-tracking system weighs 8–9 oz and has an accuracy of 0.5° and a resolution of 0.25°. Video of the eye and the scene, together with video from a room camera giving a frontal-plane perspective of the environment, was combined via two digital mixers (Videonics, model MX-1) and recorded to DVD at 30 Hz. A typical video frame is shown in Figure 1b, with the recording from the eye camera in the top left corner, the room camera in the left panel, and the scene camera in the right panel. Eye gaze is indicated by the black square on the scene camera image (the cursor on one of the pylons in Figure 1b).
Figure 1. (a) A schematic diagram of the experimental setup with the 12 randomly arranged pylons, shown as circles, the three start positions (S1–S3), and the two exits or goals (E1 and E2). The solid line indicates the path taken by four out of five participants for this pylon arrangement. (b) Left: video frame from the gaze analysis setup, with an image of the eye in the top left-hand corner. Right: image of the person walking around the pylons, with the gaze location indicated by the black square on the pylon.
Participants were informed that they had to walk from their starting point to one of the two end points, and they were guided to the starting point without looking at the pylon arrangement. They were instructed to hold a board obstructing their view of the environment ahead and to wait for the "go" signal; the board was removed when they heard the signal. Participants were told to proceed as fast as possible to the goal, which was defined by two vertical posts spaced 35 cm apart. Five obstacle arrangements combined with six combinations of start and end points resulted in 30 trials for each participant. For each obstacle arrangement, the start and end points were randomized. Each trial in effect presented a different obstacle arrangement from the participant's point of view and posed a unique challenge for route selection. The same pylon arrangements and start and end positions were used for all participants, so that we could evaluate whether the selected paths were similar across participants.
1.3. Data analyses and results
In 86% of the trials, four out of five participants chose the same route. There were no collisions with any pylons. For each trial, we traced the travel path taken by the participant and then analyzed the gaze fixation data. Frame-by-frame analysis of the combined video data was conducted to identify gaze fixations (Hollands et al., 2002; Land & Hayhoe, 2001; Land et al., 1999; Patla & Vickers, 1997, 2003) on four locations: the travel path, the goal, the pylon region, and elsewhere. Travel-path fixations are fixations on the floor that fall within the travel path area. Goal fixations are fixations on the goal posts, the area between the posts, or an area slightly ahead of the goal posts. The pylon region comprised the area defined by the gaze cursor around the pylons. Fixations on other pylons and spatial locations were categorized as elsewhere. A fixation was defined as gaze stabilized on a location for three consecutive frames (0.1 s) or longer (Patla & Vickers, 1997). Each fixation was recorded relative to the onset of gait initiation, which was identified as foot lift-off for the first step following the appearance of the gaze cursor in the video data (reflecting the participant's eye gaze) after the board was removed. The gaze fixation data were grouped into two categories: fixations occurring between the appearance of the gaze cursor and gait initiation, and fixations occurring between the onset of gait initiation and reaching the goal. Typical spatial-temporal gaze fixation patterns for two trials from two different participants are shown in Figure 2.
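The fixation criterion above (gaze stabilized on one location for at least three consecutive frames at 30 Hz, i.e. 0.1 s) amounts to a run-length pass over per-frame gaze labels. A minimal sketch of this criterion, assuming one location label per video frame (this is an illustration, not the authors' analysis code):

```python
def detect_fixations(labels, fps=30, min_frames=3):
    """Collapse per-frame gaze-location labels into fixations.

    A fixation is a run of >= min_frames identical labels; returns a list
    of (label, onset_time_s, duration_s) tuples.
    """
    fixations = []
    i = 0
    while i < len(labels):
        j = i
        while j < len(labels) and labels[j] == labels[i]:
            j += 1                      # extend the run of identical labels
        if j - i >= min_frames:         # 3 frames at 30 Hz = 0.1 s
            fixations.append((labels[i], i / fps, (j - i) / fps))
        i = j
    return fixations

# Example: 4 frames on the travel path, a 2-frame glance at a pylon
# (too short to count as a fixation), then 5 frames on the goal.
frames = ["path"] * 4 + ["pylon"] * 2 + ["goal"] * 5
fixes = detect_fixations(frames)
```

Here `fixes` keeps only the path and goal runs; the two-frame pylon glance falls below the three-frame threshold and is discarded.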
A one-way repeated-measures ANOVA was carried out on the relative gaze fixation frequency to determine whether fixation frequency was influenced by gaze location (pylon region, travel path, goal, and elsewhere). This was done separately for fixations before and after gait initiation. Post-hoc analyses for this and subsequent ANOVAs used LSMEANS tests. The significance level was set at 0.05 for all analyses.
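A one-way repeated-measures ANOVA of this kind partitions the total variability into condition (gaze location), subject, and residual terms, with subjects serving as their own controls. A minimal NumPy sketch with hypothetical fixation-frequency data (not the study's measurements):

```python
import numpy as np

def rm_anova_1way(X):
    """One-way repeated-measures ANOVA.

    X is (subjects x conditions), e.g. each row holds one participant's
    relative fixation frequency for the pylon region, travel path, goal,
    and elsewhere. Returns (F, df_conditions, df_error).
    """
    n, k = X.shape
    grand = X.mean()
    ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()   # between conditions
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_total = ((X - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj               # condition x subject residual
    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df_cond) / (ss_error / df_error)
    return F, df_cond, df_error

# Hypothetical relative fixation frequencies for 5 participants across
# the four gaze locations (each row sums to 1).
X = np.array([[0.40, 0.30, 0.20, 0.10],
              [0.45, 0.25, 0.20, 0.10],
              [0.35, 0.35, 0.15, 0.15],
              [0.50, 0.20, 0.20, 0.10],
              [0.42, 0.28, 0.18, 0.12]])
F, df1, df2 = rm_anova_1way(X)   # compare F against the F(3, 12) critical value
```

With 5 participants and 4 gaze locations the test has 3 and 12 degrees of freedom, matching the design described above.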
1.3.1. Gaze fixation characteristics prior to gait initiation
Gait was initiated 0.49 s (SD: 0.134) after the appearance of the gaze cursor (which appeared when the board obstructing the participant’s view was removed). On average
