

CHAPTER 5

Binocular Vision

Clifton Schor

I. PERCEIVED VISUAL DIRECTION

A. Oculocentric Direction

Under binocular viewing conditions we perceive a single view of the world as though seen by a single cyclopean eye. Singleness results from a mapping of the two visual fields onto a common binocular space. The topography of this map will be described subsequently as the horopter, which is an analytical tool that provides a reference for quantifying retinal image disparity. Stimulation of binocularly corresponding points by targets on the horopter results in percepts by each eye in identical visual directions (i.e., directions in reference to the point of binocular fixation). This eye-referenced description of direction (oculocentric) can be transformed to a head-referenced description (egocentric direction) by including information about eye position as well as a reference point in the head from which the two eyes can judge direction.

B. The Cyclopean Eye

If we only had one eye, direction could be judged from the nodal point of the eye, a site where viewing angle in space equals visual angle in the eye, assuming the nodal point is close to the radial center of the retina. However, two eyes present a problem for a system that operates as though it only has a single cyclopean eye. The two eyes have viewpoints separated by approximately 6.5 cm. When the two eyes converge accurately on a near target placed along the midsagittal plane, the target appears straight ahead of the nose, even when one eye is occluded. In order for perceived egocentric direction to be the same when either eye views the near target monocularly, there needs to be a common reference point. This reference point is called the cyclopean locus or egocenter, and is located midway on the interocular axis. The location of the egocenter is found empirically at the site where perceptually aligned points at different depths in space are perceived to intersect the face. Thus the egocenter is the percept of a reference point for judging visual direction with either eye alone or under binocular viewing conditions. The validity of the egocenter is supported by sighting behavior in young children (about 2 years of age). When asked to sight targets through a tube, they place it between the eyes (Barbeito, 1983).

Seeing. Copyright © 2000 by Academic Press. All rights of reproduction in any form reserved.

It is interesting to compare how visual directions are computed by animals with binocular visual systems that obey Hering's law of ocular movements (ambiocular systems) and animals, such as the chameleon, that have independent eye movements (utrocular systems). The utrocular system computes visual directions separately with each eye. Because the eye is displaced from the head center, information about eye position and retinal image location for one eye is insufficient to specify direction with respect to the head. In the utrocular system, additional information about target distance is also needed to compute head-centric direction. In the absence of binocular disparity or convergence cues, utrocular animals use accommodation to specify target distance (Ott & Schaeffel, 1995). Accommodation, combined with eye position and retinal image locus, could provide sufficient information to compute head-centric direction.
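The argument that an utrocular eye needs distance information can be made concrete with some top-down trigonometry. In the sketch below, the same eye-referenced direction maps to different head-centric directions depending on target distance, so eye position and retinal locus alone underdetermine head-centric direction. The function name, the flat 2-D layout, and the numeric values are illustrative assumptions, not anything given in the chapter.

```python
import math

def headcentric_azimuth(eye_offset_cm, eye_azimuth_deg, target_distance_cm):
    """Head-centric azimuth of a target seen by one laterally displaced eye.

    eye_offset_cm: lateral displacement of the eye from the head center
    eye_azimuth_deg: target direction relative to straight ahead, as
        measured at the eye (eye position + retinal locus combined)
    target_distance_cm: distance from the eye to the target -- without
        this term the head-centric direction is underdetermined
    """
    a = math.radians(eye_azimuth_deg)
    # Top-down 2-D coordinates: x lateral, y straight ahead of the head.
    x = eye_offset_cm + target_distance_cm * math.sin(a)
    y = target_distance_cm * math.cos(a)
    return math.degrees(math.atan2(x, y))

# The eye sights straight ahead (eye azimuth 0) from 3.25 cm off center:
# a near target lies well off the head's midline, a far one barely so,
# even though the eye-referenced direction is identical in both cases.
near_az = headcentric_azimuth(3.25, 0.0, 10.0)
far_az = headcentric_azimuth(3.25, 0.0, 1000.0)
```

With the eye at the head center (offset 0), eye-referenced and head-referenced azimuths coincide, which is the single-eye case described at the start of this section.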

C. Egocentric Direction

Direction and distance can be described in polar coordinates as the angle and magnitude of a vector originating at the egocenter. For targets imaged on corresponding points, this vector is determined by the location of the retinal image and by the direction of gaze that is determined by versional or conjugate eye position. The angle the two retinal images form with the visual axes is added to the conjugate rotational vector component of binocular eye position (the average of right and left eye position). This combination yields the perceived egocentric direction. Convergence of the eyes, which results from disconjugate eye movements, has no influence on perceived egocentric direction. Thus, when the two eyes fixate near objects to the left or right of the midline in asymmetric convergence, only the version or conjugate component of the two eyes' positions contributes to perceived direction. These facets of egocentric direction were summarized by Hering (1879) as five laws of visual direction, and they have been restated by Howard (1982). The laws are mainly concerned with targets imaged on corresponding retinal regions (i.e., targets on the horopter).
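The combination rule just described — oculocentric angle plus the conjugate (version) component of eye position — can be sketched in a few lines. The function name and the sign convention (positive angles rightward, so symmetric convergence gives equal and opposite eye positions) are assumptions for the example, not notation from the chapter.

```python
def egocentric_direction(left_eye_deg, right_eye_deg, oculocentric_deg):
    """Perceived egocentric direction for a target imaged on corresponding
    points: the version component of binocular eye position (the average
    of the two eyes' positions) plus the shared oculocentric direction.
    Vergence -- the difference between the eyes' positions -- drops out."""
    version = (left_eye_deg + right_eye_deg) / 2.0
    return version + oculocentric_deg

# Symmetric convergence on a midline target: the eyes rotate by equal
# and opposite amounts, version is zero, and the target is perceived
# straight ahead no matter how strongly the eyes converge.
strong_convergence = egocentric_direction(+8.0, -8.0, 0.0)
weak_convergence = egocentric_direction(+1.0, -1.0, 0.0)

# Asymmetric convergence: only the conjugate (average) component of the
# two eye positions contributes to perceived direction.
asymmetric = egocentric_direction(10.0, 2.0, 0.0)
```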


D. Visual Directions of Disparate Images

How are visual directions judged for disparate targets (i.e., targets located nearer or farther than the horopter)? When target disparity is small and within Panum’s fusional area, such that the target appears single, egocentric direction is based upon the average retinal image locus of the two eyes, and it deviates from either monocular perceived direction of the disparate target by half the angular disparity. The consequence of averaging monocular visual directions of disparate targets is that binocular visual directions are mislocalized by half their retinal image disparity. Binocular visual directions can only be judged accurately for targets lying on the horopter. When retinal image disparity becomes large, disparate targets appear diplopic (i.e., they are perceived in two separate directions). The directions of monocular components of the diplopic pair are perceived as though each one was stimulated by a target on the horopter (i.e., the diplopic images are seen as though both had paired images on corresponding points in their respective contralateral eye).
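The averaging rule for fused disparate targets can be written out directly; the mislocalization of half the disparity follows from the arithmetic. Names and the example numbers below are illustrative only.

```python
def fused_direction(left_dir_deg, right_dir_deg):
    """Oculocentric direction of a fused disparate target: the average
    of the two monocular directions (valid while the disparity stays
    within Panum's fusional area, so the target appears single)."""
    return (left_dir_deg + right_dir_deg) / 2.0

# A target whose monocular directions differ by 0.2 deg of disparity:
left, right = 1.0, 1.2
fused = fused_direction(left, right)

# The fused direction deviates from either monocular direction by half
# the angular disparity -- the mislocalization described in the text.
mislocalization = abs(fused - left)
```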

E. Visual Direction of Partially Occluded Objects

There are ambiguous circumstances where a target in the peripheral region of a binocular field is only seen by one eye because of occlusion by the nose. The monocular target could lie at a range of viewing distances; however, its direction is judged as though it was at the distance of the horopter, such that if it were seen binocularly, its images would be formed on corresponding retinal points (Barbeito & Simpson, 1991).

F. Violations of Hering’s Laws of Visual Direction

The rules suggested by Hering for computing visual direction apply in many circumstances. However, several violations of Hering's rules for visual direction have been observed in both abnormal and normal binocular vision. In violation of Hering's rules, unilateral-constant strabismics can have constant diplopia, and they use the position of their preferred fixation eye to judge visual direction regardless of whether they fixate a target with their preferred or deviating eye. Alternating strabismics use the position of whichever eye is fixating to judge the direction of objects (Mann, Hein, & Diamond, 1979). Both classes of strabismics use the position of only one eye to judge direction, whereas nonstrabismics use the average position of the two eyes to judge direction by either eye alone.

Two violations in normal binocular vision involve monocular images, and a third involves judgment of direction of binocular-disparate targets. Hering's rules predict that if a target is fixated monocularly, it will appear to move in the temporalward direction if the eyes accommodate, even if monocular fixation remains accurate. The temporalward movement results from the nasalward movement of the covered eye caused by the synkinesis between accommodation and convergence (Müller, 1843). Hering predicts that the average position of the eyes determines the egocentric direction of the foveated target. The first violation occurs when the apparent temporalward motion is greater during monocular fixation by one eye than by the other. This violation resembles the perception of egocentric direction in constant-unilateral strabismus, and it may be related to an extreme form of eye dominance.

The second violation of Hering's rules occurs when monocular targets are viewed in close proximity to disparate binocular targets, as might occur naturally in the periphery or in the vicinity of a proximal surface that occludes a portion of the background in the central visual field. The direction of the monocular target is judged as though it was positioned at the same depth as the disparate binocular target rather than at the horopter. The visual system might assume there is an occluded counterpart of the monocular line in the contralateral eye that has the same disparity as the nearby binocular target, even though the image is seen only by one eye. The behavioral observation that is consistent with this hypothesis is that alignment of a monocular and binocular line is based on information presented only to the eye seeing both targets (Erkelens & van de Grind, 1994; Erkelens & van Ee, 1997).

A third violation of Hering's rules is demonstrated by the biasing of the visual direction of a fused disparate target when its monocular image components have unequal contrast (Banks, van Ee, & Backus, 1997). Greater weight is given to the retinal locus of the image that has the higher contrast. The average location in the cyclopean eye of the two disparate retinal sites is biased toward the monocular direction of the higher-contrast image. These are minor violations that mainly occur for targets lying nearer or farther than the plane of fixation or distance of convergence. Since visual directions of off-horopter targets are mislocalized even when Hering's rules of visual direction are obeyed, the violations have only minor consequences.
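A minimal sketch of the contrast-weighting idea: the fused direction is pulled toward the higher-contrast eye's image. The linear contrast-proportional weighting below is an assumed illustrative rule, not the empirically measured weighting, for which Banks, van Ee, and Backus (1997) should be consulted.

```python
def weighted_fused_direction(left_dir, right_dir, left_contrast, right_contrast):
    """Fused direction biased toward the higher-contrast eye's image.
    The contrast-proportional linear weighting is an illustrative
    assumption standing in for the measured bias."""
    total = left_contrast + right_contrast
    return (left_contrast * left_dir + right_contrast * right_dir) / total

# Equal contrasts reduce to the plain average of the two directions;
# raising the right eye's contrast pulls the fused direction toward
# the right eye's monocular direction.
equal = weighted_fused_direction(1.0, 1.2, 0.5, 0.5)
biased = weighted_fused_direction(1.0, 1.2, 0.2, 0.8)
```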

II. BINOCULAR CORRESPONDENCE

We perceive space with two eyes as though they were merged into a single cyclopean eye. This merger is made possible by a sensory linkage between the two eyes that is facilitated by the anatomical superposition of homologous regions of the two retinae in the visual cortex. This is achieved by partial decussation, which is a characteristic of visual systems with overlapping visual fields. The Newton-Müller-Gudden law states that the degree of hemi-decussation is proportional to the amount of binocular overlap.

Why are the two retinal images matched at all? Primarily, the matching allows us to reconstruct a 3-D world percept from a flat 2-D image. Three-dimensional space can be derived geometrically by comparing the small differences between the two retinal images that result from the slightly different vantage points of the two eyes caused by their 6.5-cm separation. Each eye sees slightly more of the temporal than nasal visual field, and they also see more of the ipsilateral than the contralateral side of a binocularly viewed object. This yields stereopsis but comes at the price of a visual field reduced from 360° to 190°. The binocular overlapping region is 114°, and the remaining monocular portion is 37° for each eye.

A. Binocular Disparity

Binocular disparity results from the projection of 3-D objects onto two 2-D retinal surfaces that face the objects from slightly different angles and views or vantage points. The regions of the visual cortex that receive input from each eye are sensitive to various perspective differences or disparities of the two retinal images. These disparities take the form of horizontal, vertical, torsional, and distortion or shear differences between the two images. The disparities result from surface shape and depth as well as the direction and distance of gaze, and the torsion of the eyes (van Ee & Erkelens, 1996). These disparities are used to judge the layout of 3-D space and to sense the solidness or curvature of surfaces. Disparities are also used to break through camouflage in images such as seen in tree foliage.

Description and quantification of binocular disparity requires a coordinate system. The choice is primarily one of convenience, since many types of coordinate systems could accomplish this task, and we are uncertain which system the visual system uses to encode disparity. The coordinate system requires a reference point from which to describe distance and direction. Because we are describing disparities of the two retinal images, the coordinate system typically chosen is retinal based rather than head or world based. The reference point is the fovea, and distance from the fovea is traditionally described in Cartesian x and y components of azimuth and elevation, but a polar description could be, and has been, used (Liu, Stevenson, & Schor, 1994a).
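The two descriptions just mentioned, Cartesian (azimuth, elevation) and polar (magnitude, direction), are related by the usual coordinate conversion. The sketch below (function name assumed) makes the mapping explicit.

```python
import math

def disparity_polar(azimuth_disp_deg, elevation_disp_deg):
    """Re-express a Cartesian (azimuth, elevation) retinal disparity in
    polar (magnitude, direction) form, the alternative description used
    by Liu, Stevenson, and Schor (1994a)."""
    magnitude = math.hypot(azimuth_disp_deg, elevation_disp_deg)
    direction = math.degrees(math.atan2(elevation_disp_deg, azimuth_disp_deg))
    return magnitude, direction

# A purely horizontal disparity has polar direction 0 deg, a purely
# vertical one 90 deg; mixed disparities fall in between.
mag, ang = disparity_polar(0.3, 0.4)
```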

Since retinal locations are described by the position of targets in space that are imaged on them, a transformation is needed to link retinal and visual space. The optical transformation is described above by visual direction. In computations of visual direction, retinal images are projected or sighted out through the nodal point of the eye, so that directions from objects in space to an image on the retina do not deviate from straight lines. As long as the eyes remain stationary, differences in visual directions correspond to differences in retinal locations.

B. Corresponding Retinal Points

Hering (1879) defined binocular correspondence by retinal locations in the two eyes which, when stimulated, resulted in a percept in identical visual directions. For a fixed angle of convergence, some of these identical visual directions converged upon real points in space. In other cases, corresponding points have visual directions that do not intersect in real space. Accordingly, some corresponding regions of the two retinae might only be stimulated simultaneously by a real object under limited circumstances. We shall see that this unique stimulus only occurs at infinite viewing distances and that at finite viewing distances, only a small portion of corresponding points can be stimulated by real targets in space.

C. The Horizontal Horopter

The horopter is the locus in space of real objects or points whose images can be formed on corresponding retinal points. To appreciate the shape of the horopter, consider a theoretical case in which corresponding points are defined as homologous locations on the two retinae. Begin first by considering binocular matches between the horizontal meridians or equators of the two retinae. Under this circumstance, corresponding retinal loci lie equidistant from their respective foveas, and the intersection of their visual directions in space defines the longitudinal horopter.

A geometric theorem states “any two points on a circle subtend equal angles at any other two points on the same circle.” Consider a circle that passes through the fixation point and the two nodal points of the eyes. Let two points be the two nodal points and let two other points be the fixation point and any other point on the circle. The theorem predicts that angles formed by the two nodal points and the other two points in space are equal. Because the angles pass through the nodal points they are also equal in the two eyes. One of the points is imaged on the two foveas, and the other point will be imaged at retinal loci that are equidistant from their respective foveas. By definition, these nonfoveal points are geometrically corresponding points. From this you can generalize that any point on this circle will be imaged at equal eccentricities from the two foveas on corresponding points in the two eyes, except for the small arc of the circle that lies between the two eyes. This is the theoretical or geometric horopter. It was described by Alhazen (1989), Aguilonius (1613), and finally by Vieth and Müller, and it bears their name (the Vieth-Müller [V-M] circle) (Figure 1).
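The inscribed-angle argument above is easy to verify numerically. The sketch below constructs the circle through the fixation point and the two nodal points and checks that another point on it subtends equal angles at both eyes. The 6.5-cm interocular separation comes from the text; the 40-cm fixation distance, the coordinate frame, and all names are illustrative assumptions.

```python
import math

def subtended_angle(vertex, p, q):
    """Angle at `vertex` between the directions to points p and q (radians)."""
    a1 = math.atan2(p[1] - vertex[1], p[0] - vertex[0])
    a2 = math.atan2(q[1] - vertex[1], q[0] - vertex[0])
    return abs(a1 - a2)

# Nodal points 6.5 cm apart on the x-axis; fixation point F on the midline.
half_iod, fix_dist = 3.25, 40.0
nodal_left, nodal_right = (-half_iod, 0.0), (half_iod, 0.0)
F = (0.0, fix_dist)

# Circle through F and both nodal points (the Vieth-Mueller circle):
# by symmetry its center lies on the midline, at height k.
k = (fix_dist**2 - half_iod**2) / (2.0 * fix_dist)
radius = math.hypot(half_iod, k)

# Pick another point P on the upper arc; the theorem says the chord FP
# subtends equal angles at the two nodal points, so P is imaged at equal
# eccentricities from the two foveas.
t = math.radians(70.0)
P = (radius * math.cos(t), k + radius * math.sin(t))
angle_left = subtended_angle(nodal_left, F, P)
angle_right = subtended_angle(nodal_right, F, P)
```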

The empirical horopter differs from the theoretical horopter in two ways. It can be skewed or tilted about a vertical axis (as shown in Figure 2), and its curvature can be flatter or steeper than that of the V-M circle (as shown in Figure 3). These two effects are described by fitting a conic section such as an ellipse through the empirical data, the fixation point, and the nodal points of the eyes. At the fixation point, the curvature variation of the best-fit conic from a circle is referred to as the Hering-Hillebrand deviation. The skew or tilt of the fit is described by an overall magnification of an array of points along the retinal equator in one eye that correspond to an array of points along the equator of the other eye. These deviations cause parts of the empirical horopter to lie either distal or proximal to the theoretical horopter. When this occurs, the points on the horopter no longer subtend equal angles at the two eyes. The spatial plot of the horopter shown in Figure 2 illustrates that points on the empirical horopter that are closer than the theoretical horopter subtend a smaller angle in the ipsilateral than in the contralateral eye. When this occurs, empirically measured corresponding points are not equidistant from their respective foveas.

FIGURE 1 Spatial plot of the horizontal horopter. When the eyes are converged on point F, its images fall on the two foveas and have zero disparity. Let θ be the binocular subtense of point F and let a circle pass through F and the nodal points of the eyes (the Vieth-Müller circle). The fixation point F and any other point P on the circle subtend equal angles at the nodal points of the two eyes, and P is imaged at equal eccentricities on the two retinae. (From BINOCULAR VISION AND STEREOPSIS by Ian P. Howard and Brian J. Rogers, Copyright © 1995 by Oxford University Press, Inc. Used by permission of Oxford University Press, Inc., and the authors.)

FIGURE 2 Horopter slant produced by uniform magnification of corresponding points in one eye. A uniform magnification of corresponding points in the right eye relative to the left eye causes the horizontal horopter to be skewed about a vertical axis away from the right (magnified) eye. (From BINOCULAR VISION AND STEREOPSIS by Ian P. Howard and Brian J. Rogers, Copyright © 1995 by Oxford University Press, Inc. Used by permission of Oxford University Press, Inc., and the authors.)
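The effect of uniform magnification on the horopter can be illustrated numerically: if each right-eye eccentricity corresponds to a magnified left-eye eccentricity, intersecting the visual lines of each corresponding pair yields a locus that is no longer symmetric about the midline. The construction below is a simplified sketch; the viewing geometry, parameter values, linear correspondence rule, and function name are all assumptions for illustration, not the chapter's analysis.

```python
import math

def horopter_point(ecc_left_deg, mag, half_iod=3.25, fix_dist=40.0):
    """Intersection of the visual lines of a corresponding pair whose
    right-eye eccentricity is `mag` times its left-eye eccentricity.
    mag = 1.0 reproduces the theoretical (Vieth-Mueller) horopter;
    mag != 1.0 models a uniform magnification of corresponding points."""
    to_fix = math.atan(half_iod / fix_dist)   # azimuth of fixation from each eye
    az_left = to_fix + math.radians(ecc_left_deg)
    az_right = -to_fix + math.radians(mag * ecc_left_deg)
    # Rays leave the nodal points at (-half_iod, 0) and (+half_iod, 0);
    # their intersection is the point imaged on this corresponding pair.
    y = 2.0 * half_iod / (math.tan(az_left) - math.tan(az_right))
    x = -half_iod + y * math.tan(az_left)
    return x, y

# Without magnification the horopter is mirror-symmetric about the
# midline; magnifying one eye's spacing of corresponding points makes
# the two sides lie at different distances, i.e., a skewed horopter.
p_pos, p_neg = horopter_point(5.0, 1.0), horopter_point(-5.0, 1.0)
q_pos, q_neg = horopter_point(5.0, 1.05), horopter_point(-5.0, 1.05)
```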
