Visual Perception: Fundamentals of Awareness, Multi-Sensory Integration and High-Order Perception (Martinez-Conde et al., 2006)

indicate that the responses of 49 of 99 neurons (49%) were significantly influenced by sound location (p < 0.05, one-way ANOVA of the effect of sound location on neural responses).
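This screening step can be sketched as follows. The spike counts here are simulated (Poisson), and the firing rates, slope, and trial counts are illustrative values, not data from the study:

```python
# One-way ANOVA screen for sound-location sensitivity: sound location is
# treated as a categorical factor, one group of trials per tested azimuth.
# Spike counts are simulated; all parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
locations = [-90, -60, -30, 0, 30, 60, 90]   # azimuth (deg); 0 = midline

# A hypothetical location-sensitive neuron whose mean rate rises
# monotonically toward contralateral space.
trials = {loc: rng.poisson(lam=10 + 0.1 * (loc + 90), size=8)
          for loc in locations}

f_stat, p_value = stats.f_oneway(*trials.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}, significant: {p_value < 0.05}")
```

Because the factor is categorical, a significant F tells us only that responses differ across locations, not how they vary with location, which motivates the function fitting described next.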

ANOVA considers sound location as a categorical variable and, as a result, does not convey information about the coding format used to represent sound location by IC neurons. To further assess sound location sensitivity, and in particular, whether individual auditory neurons responded monotonically or nonmonotonically across changes in sound location, we compared the adequacy of Gaussian, sigmoid, and linear functions at capturing the relationship between discharge rate and sound location (Figs. 2C, F). A monotonic code can be dissociated from a nonmonotonic code by the relationship between how well (or poorly) the Gaussian and sigmoid functions fit the data. If both functions fit the data equally well, a monotonic (rate) code is suspected; however, if Gaussian functions fit the data better than sigmoids, a nonmonotonic (place) code is likely. This is because neurons that respond nonmonotonically across different sound locations will usually require a nonmonotonic function to fit the relationship between discharge rate and sound location; a monotonic function will be ineffective. In contrast, if neurons respond monotonically as a function of sound location, then the responses can usually be fit by either a sigmoid or a Gaussian function. The Gaussian and sigmoid fits can both describe monotonic data nearly equally well because a Gaussian consists of two monotonic halves that can be analogous to a sigmoid over a limited range of space.
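The model comparison above can be sketched numerically. The tuning curve below is synthetic and monotonic, and the initial parameter guesses are illustrative, not values from the study:

```python
# Fit Gaussian, sigmoid, and linear functions to a rate-vs-location curve
# and compare R^2, as in the model comparison described in the text.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, b):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + b

def sigmoid(x, a, x0, k, b):
    return a / (1 + np.exp(-k * (x - x0))) + b

def linear(x, m, b):
    return m * x + b

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

x = np.array([-90, -60, -30, 0, 30, 60, 90], dtype=float)
# A synthetic monotonic (rate-coded) neuron: rate rises toward contralateral space.
y = np.array([12.0, 14.0, 18.0, 30.0, 44.0, 50.0, 52.0])

p_gauss, _ = curve_fit(gaussian, x, y, p0=[40, 90, 60, 12], maxfev=10000)
p_sig, _ = curve_fit(sigmoid, x, y, p0=[40, 0, 0.05, 12], maxfev=10000)
p_lin, _ = curve_fit(linear, x, y)

r2 = {"gaussian": r_squared(y, gaussian(x, *p_gauss)),
      "sigmoid": r_squared(y, sigmoid(x, *p_sig)),
      "linear": r_squared(y, linear(x, *p_lin))}
print(r2)
```

For monotonic data like this, the Gaussian and sigmoid R² values come out nearly equal (the Gaussian fits with one of its monotonic halves), which is the signature of a rate code; a clear Gaussian advantage over the sigmoid would instead suggest a place code.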

Fifty-two neurons (53%) of our population of 99 neurons were successfully fit by all three functions (p < 0.05 for the linear, sigmoid, and Gaussian fits). The sigmoid and Gaussian fits for these individual neurons were by and large identical to each other in shape and R² values. In contrast, only three neurons (3%) had responses that were significantly described by just the Gaussian function and not the monotonic functions. Given a criterion p value of 0.05, three neurons out of 99 is effectively a chance proportion. Moreover, the quality of the fits for the three neurons fit by just the Gaussian was very poor and seemed to indicate chance fluctuations in neural responsiveness instead of actual nonmonotonic spatial sensitivity. Taken together, these results suggest that the sensitivity for sound location among IC neurons is monotonic, since Gaussians were rarely better than sigmoids at capturing the responses.

There was good correspondence between the ANOVA and the function fitting, which suggests that the chosen functions appropriately captured the pattern of the data. Of the 49 neurons whose responses were determined by the ANOVA to be significantly influenced by the location of sounds, 46 had responses that were successfully fit by all three functions and 48 had responses that were fit by at least one of the three functions.

Like the function fitting results, some other features of our data were more consistent with a rate code format. To begin with, the inflection points of the statistically significant sigmoid functions were clustered around the midline (Fig. 3A), although the slopes varied considerably. Such a clustering of inflection points is incompatible with a place code, which requires a distribution of receptive fields across the full extent of auditory space.

Stronger responses to contralateral sound locations were also obvious in individual neurons, e.g., the neurons shown in Fig. 2, as well as in the family of sigmoidal fits shown in Fig. 3A. This contralateral bias was also present in the entire population: on average, neurons fired about twice as strongly to the most contralateral sound location compared with the most ipsilateral location. Perhaps not surprisingly, the bias for contralateral locations was most marked in the population of neurons that showed statistically significant effects of sound location by ANOVA. A contralateral bias is not necessarily indicative of either coding format, since the receptive fields in a place code could also be clustered in the contralateral hemifield. However, in the absence of circumscribed receptive fields, a contralateral bias suggests that the majority of neurons are responding monotonically with maximal responses elicited by the most contralateral sound location.

Another feature of our data more consistent with a rate code than with the tuned receptive fields of a place code was the broad spatial sensitivity of our sample of IC neurons. The neurons in Fig. 2 responded to at least half of the sound locations tested. Fig. 3B illustrates the ‘‘point image’’ of activity in the entire population of IC neurons as a function of sound location. All 99 neurons were included in this analysis. The proportion of neurons that showed statistically significant responses to sounds increased from a low of about 30% for the most ipsilateral sound location to a high of about 80% for the most contralateral location.

[Fig. 3: panel (A) plots Response (% of maximum) and panel (B) Percent of neurons, each against Sound location (deg) from −90 (ipsi) to 90 (contra).]

Fig. 3. (A) The family of statistically significant sigmoidal fits, normalized to the maximum response rate of each neuron. Note that the inflection points are clustered around the midline, but that the slopes can range from shallow to steep. (B) Proportion of cells responding as a function of sound location. Cells were classified as responding to a sound location if the number of spikes during the response window after stimulus onset differed significantly from those during a comparable window of time prior to sound onset (paired t-test: p < 0.05). Only sound locations for which at least 10 cells were tested with at least five trials each are included. All neurons, not just those with demonstrable spatial sensitivity, are included in this analysis. (These panels were reproduced with permission from Groh et al., 2003.)

Discussion

Our findings agree with previous studies in other species showing that a substantial population of neurons in the primate IC is sensitive to the spatial origin of sounds. The relatively smaller population of neurons that seemed insensitive to sound location may be involved in processing nonspatial attributes of sounds, or their sound location sensitivity may be revealed under different conditions, such as at lower sound intensities or when stimuli vary in elevation instead of azimuth.

The sensitivity of IC neurons to sound location appears broad, without a pattern of circumscribed receptive fields. Neural responses tend to increase monotonically for sounds located more contralaterally, a pattern that has also been emphasized in recent work by McAlpine et al. (2001). Although the spatial sensitivity of individual neurons is broad, the population of active neurons is obviously larger in the IC contralateral to the location of the sound than it is in the ipsilateral IC. Therefore, comparing the relative levels of activity in the two colliculi could provide recipient neurons with the necessary information to infer the azimuth of a sound source.
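This two-collicular comparison can be illustrated with a toy decoder. The sigmoidal population-rate functions and every parameter below are invented for illustration, not taken from the recordings:

```python
# Toy illustration: if each IC's summed population rate rises sigmoidally
# toward its contralateral hemifield, the left-minus-right difference varies
# monotonically with azimuth and can be inverted to estimate sound location.
import numpy as np

def ic_population_rate(azimuth_deg, side, slope=0.05, max_rate=100.0):
    """Summed population rate of one IC; contralateral locations drive it harder."""
    sign = 1.0 if side == "left" else -1.0   # left IC prefers right (+) space
    return max_rate / (1 + np.exp(-slope * sign * azimuth_deg))

azimuths = np.linspace(-90, 90, 181)
diff = (ic_population_rate(azimuths, "left")
        - ic_population_rate(azimuths, "right"))   # monotonic in azimuth

def decode(observed_diff):
    """Invert the monotonic difference signal by interpolation."""
    return np.interp(observed_diff, diff, azimuths)

true_az = 30.0
obs = (ic_population_rate(true_az, "left")
       - ic_population_rate(true_az, "right"))
print(decode(obs))  # recovers ~30.0 deg
```

The point of the sketch is only that a monotonic, bilaterally opposed rate code carries enough information for a downstream comparison to recover azimuth, without any unit having a circumscribed receptive field.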

Perhaps under different experimental conditions some additional nonmonotonic sensitivity might be observed, but it seems clear that the kind of prominent nonmonotonic sensitivity needed for a place code for sound location in the primate IC is unlikely to emerge. To be useful, such a place code should be capable of representing the locations of all sounds, regardless of sound level or frequency. A sizeable population of neurons with receptive fields distributed across the sampled region of space would be needed for a place code and we found no evidence of such a population.

As mentioned in the Introduction, a rate code for sound location is different in format from the coding format used in vision or touch, where stimulus location can be inferred from the location of active receptors on the sensory epithelia. Before signals can be combined, it is computationally advantageous, and perhaps even mandatory, to encode them in a common format. Merging of visual and auditory information could either occur in a place-coded format, in which case auditory signals would have to be transformed from a rate code to a place code, or in a rate-coded format, requiring the translation of visual signals from a place code to a rate code. If the latter transformation occurs, then both visual and auditory signals would be represented in the same format generally used in motor pathways. If the former transformation occurs, then it may be necessary to convert both visual and auditory signals back to a rate code before motor pathways can be accessed. In general, not much is known about which of these transformations occur (for discussion, see Groh, 2001), but it is worth considering how such transformations might in principle be accomplished.

A place code for sound location could certainly be created from the rate code found in the primate IC. Three candidate circuits for such a transformation are illustrated in Figs. 4A–C. The first involves combining signals from the IC on opposite sides of the brain in an additive fashion (Fig. 4A), while the second option involves combining sigmoidal signals with different inflection points from IC neurons on the same side of the brain in a subtractive fashion (Fig. 4B). The third (Fig. 4C) uses a single rate-coded input and a set of inhibitory interneurons and output units with graded thresholds, so that each output unit will be driven by only a limited range of activity levels in the input unit. This mechanism is a portion of the vector subtraction model of Groh and Sparks (1992).
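The second of these circuits, same-side subtraction of sigmoids with different inflection points (Fig. 4B), can be sketched numerically. The slope, inflection points, and rates below are illustrative values only:

```python
# Two sigmoidal rate-coded inputs with different inflection points, combined
# subtractively and half-wave rectified, yield a circumscribed (place-like)
# receptive field. All parameter values are illustrative.
import numpy as np

def sigmoid_rate(az, inflection, slope=0.08, max_rate=50.0):
    return max_rate / (1 + np.exp(-slope * (az - inflection)))

az = np.linspace(-90, 90, 181)
early = sigmoid_rate(az, inflection=-20)   # rises earlier in azimuth
late = sigmoid_rate(az, inflection=20)     # rises later in azimuth

# The rectified difference is nonzero only where the first input is active
# and the second is not: a tuned bump between the two inflection points.
output = np.maximum(early - late, 0.0)
best_az = az[np.argmax(output)]
print(best_az)  # peak midway between the inflection points
```

With matched slopes, the output unit's best azimuth falls midway between the two input inflection points, so a set of such pairs with staggered inflections would tile azimuth as a place code.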

Theoretically, any of these methods could be used to create neurons with tuning for sound location. In the first two models, robustness to changes in responses due to factors other than sound location, such as sound level or frequency, can be achieved by balancing out these nonspatial factors across the two inputs. For example, if the inputs come from the left and right IC, and both of these inputs show an increase in activity to louder sounds (shown here as a shift of the inflection point towards the ipsilateral side), the effect will be to broaden the spatial sensitivity of the output but there will be no shift in the center of the receptive field. If the inputs come from the same IC, then the same comparative robustness to sound level can be achieved by pairing a unit whose response increases with sound level with a unit whose response decreases with sound level. Again, the outcome will be a broadening of the resulting receptive field for louder sounds, but not a shift in the location of the receptive field. The third model could also be robust to sound level if the inputs consisted of a set of rate-coded units with different patterns of sensitivity to sound level.

Two empirical observations constrain the plausibility of each of these models. First, we did not observe the inflection points of the sigmoidal spatial sensitivity functions to be distributed across azimuth; instead, they tended to cluster near the midline. This presents a problem for the first two models, but not for the third. A second relevant finding is that unilateral lesions of the IC in cats (Jenkins and Masterton, 1982) and humans (Litovsky et al., 2002) disrupt sound localization only in the contralateral hemifield. This is a problem for the opposite-sides model (Fig. 4A), because a comparable unilateral lesion in the model would cause deficits in sound localization across all of space, not only in the contralateral field.

It should also be possible to create a rate code for visual signals from the original place-code format. Three possible models for how the brain does this have been proposed previously (Groh, 2001) and include weighted summation (Fig. 4D), weighted averaging (Fig. 4E), and summation with saturation (Fig. 4F). The weighted summation model does not normalize the activity of the inputs, so that if the input signals vary with some nonspatial parameter such as contrast, the output will vary with contrast as well. The averaging and summation-with-saturation models both normalize for nonspatial signals. Of course, all six of these models merely illustrate that a conversion from a rate to a place code or vice versa is possible. Additional experimental details will be needed to identify which (if any) of these possibilities is likely.
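The weighted-summation scheme (Fig. 4D) can be sketched as follows, using an invented Gaussian place-coded population; the map spacing, tuning width, and weights are illustrative:

```python
# Weighted summation over a place-coded map: each unit contributes in
# proportion to its preferred location, so the summed output rate varies
# monotonically with stimulus position. All values are illustrative.
import numpy as np

preferred = np.linspace(-90, 90, 37)   # preferred azimuths of the place-coded map
weights = preferred                    # weight each unit by its preferred location

def place_activity(stimulus_az, sigma=15.0):
    """Gaussian population activity profile over the place-coded map."""
    return np.exp(-((preferred - stimulus_az) ** 2) / (2 * sigma ** 2))

def rate_output(stimulus_az):
    return np.dot(weights, place_activity(stimulus_az))

outputs = [rate_output(az) for az in (-60, -30, 0, 30, 60)]
print(outputs)  # increases monotonically with azimuth
```

Note that, as the text observes, this scheme does not normalize the inputs: scaling the whole activity profile (e.g., with stimulus contrast) scales the output rate, whereas the averaging and summation-with-saturation schemes divide out such nonspatial factors.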

[Fig. 4: six circuit diagrams, panels (A)–(F). Panels (A)–(C) plot discharge level of IC neurons against sound location (left and right IC; loud vs. soft), combined additively across the two sides, subtractively within one side, or passed through inhibitory interneurons and output units with graded thresholds to yield a place code. Panels (D)–(F) show a place-coded input converted to a rate-coded output by weighted summation, by numerator/denominator (averaging) channels, or by summation with saturation via an inhibitory interneuron with a threshold.]

Fig. 4. Six possible circuits for converting between monotonic and nonmonotonic spatial sensitivity. The circuit in panel (C) is a component of the vector subtraction model of Groh and Sparks (1992). Panels (D)–(F) were reproduced with permission from Groh (2001).

Conclusions

Information regarding the location of and relationship between sounds in space is not immediately available from the receptor surface in audition. Instead, sound location must be computed from binaural and spectral cues. At the level of the IC, this code resembles a rate code. This difference in the acquisition of positional information may be the reason that audition uses a representational format that differs from the one used by vision or touch. However, these differences in representational format between the sensory systems likely need to be reconciled so that information from the different modalities may be combined or used to direct common motor output pathways. Therefore, it seems that for eventual integration with one another, the sensory signals require two transformations: into a common format as well as into a common reference frame. Understanding exactly how these transformations occur, in particular which sensory signals undergo a change in format, awaits additional investigation.

Acknowledgments

We thank Uri Werner-Reiss for suggesting the model in Fig. 4A and we thank Abigail Underhill for expert technical assistance with this study. We also thank Kimberly Rose Clark and Amanda S. Trause, who assisted in the collection of the data for this study, and Ryan Metzger, O’Dhaniel Mullette-Gillman, Uri Werner-Reiss, Yale Cohen, Larry Mays, and Howard Hughes who provided helpful comments on all aspects of this work. We are grateful to the following funding sources: NIH NS 44666-03 (K.K.P.), Alfred P. Sloan Foundation (J.M.G.), McKnight Endowment Fund for Neuroscience (J.M.G.), Whitehall Foundation (J.M.G.), John Merck Scholars Program (J.M.G.), ONR Young Investigator Program (J.M.G.), EJLB Foundation (J.M.G.), The Nelson A. Rockefeller Center at Dartmouth (J.M.G.), NIH NS 17778-19 (J.M.G.), NIH NS50942-01 (J.M.G.), NSF 0415634 (J.M.G.).

References

Aitkin, L.M., Gates, G.R. and Phillips, S.C. (1984) Responses of neurons in inferior colliculus to variations in sound-source azimuth. J. Neurophysiol., 52: 1–17.


Aitkin, L.M. and Martin, R.L. (1987) The representation of stimulus azimuth by high best-frequency azimuth-selective neurons in the central nucleus of the inferior colliculus of the cat. J. Neurophysiol., 57: 1185–1200.

Aitkin, L.M., Pettigrew, J.D., Calford, M.B., Phillips, S.C. and Wise, L.Z. (1985) Representation of stimulus azimuth by low-frequency neurons in inferior colliculus of the cat. J. Neurophysiol., 53: 43–59.

Andersen, R.A. (1997) Multimodal integration for the representation of space in the posterior parietal cortex. Philos. Trans. R. Soc. Lond. B Biol. Sci., 352: 1421–1428.

Andersen, R.A., Bracewell, R.M., Barash, S., Gnadt, J.W. and Fogassi, L. (1990) Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. J. Neurosci., 10: 1176–1196.

Andersen, R.A., Essick, G.K. and Siegel, R.M. (1985) Encoding of spatial location by posterior parietal neurons. Science, 230: 456–458.

Andersen, R.A. and Mountcastle, V.B. (1983) The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. J. Neurosci., 3: 532–548.

Andersen, R.A., Snyder, L.H., Batista, A.P., Buneo, C.A. and Cohen, Y.E. (1998) Posterior parietal areas specialized for eye movements (LIP) and reach (PRR) using a common coordinate frame. Novartis Found. Symp., 218: 109–122.

Andersen, R.A. and Zipser, D. (1988) The role of the posterior parietal cortex in coordinate transformations for visual-motor integration. Can. J. Physiol. Pharmacol., 66: 488–501.

Batista, A.P., Buneo, C.A., Snyder, L.H. and Andersen, R.A. (1999) Reach plans in eye-centered coordinates. Science, 285: 257–260.

Binns, K.E., Grant, S., Withington, D.J. and Keating, M.J. (1992) A topographic representation of auditory space in the external nucleus of the inferior colliculus of the guinea-pig. Brain Res., 589: 231–242.

Bock, G.R. and Webster, W.R. (1974) Coding of spatial location by single units in the inferior colliculus of the alert cat. Exp. Brain Res., 21: 387–398.

Boussaoud, D. (1995) Primate premotor cortex: modulation of preparatory neuronal activity by gaze angle. J. Neurophysiol., 73: 886–890.

Brainard, M.S. and Knudsen, E.I. (1993a) Experience-dependent plasticity in the inferior colliculus: a site for visual calibration of the neural representation of auditory space in the barn owl. J. Neurosci., 13: 4589–4608.

Brainard, M.S. and Knudsen, E.I. (1993b) Visual calibration of the neural representation of auditory space in the barn owl. Biomed. Res., 14: 35–40.

Bremmer, F. (2000) Eye position effects in macaque area V4. Neuroreport, 11: 1277–1283.

Bremmer, F., Distler, C. and Hoffmann, K.P. (1997a) Eye position effects in monkey cortex. II. Pursuit- and fixation-related activity in posterior parietal areas LIP and 7A. J. Neurophysiol., 77: 962–977.

Bremmer, F., Graf, W., Ben Hamed, S. and Duhamel, J.R. (1999) Eye position encoding in the macaque ventral intraparietal area (VIP). Neuroreport, 10: 873–888.


Bremmer, F., Ilg, U.J., Thiele, A., Distler, C. and Hoffmann, K.P. (1997b) Eye position effects in monkey cortex. I. Visual and pursuit-related activity in extrastriate areas MT and MST. J. Neurophysiol., 77: 944–961.

Bremmer, F., Pouget, A. and Hoffmann, K.P. (1998) Eye position encoding in the macaque posterior parietal cortex. Eur. J. Neurosci., 10: 153–160.

Cohen, Y.E. and Andersen, R.A. (2000) Reaches to sounds encoded in an eye-centered reference frame. Neuron, 27: 647–652.

Colby, C.L. (1998) Action-oriented spatial reference frames in cortex. Neuron, 20: 15–24.

Colby, C.L., Duhamel, J.R. and Goldberg, M.E. (1995) Oculocentric spatial representation in parietal cortex. Cereb. Cortex, 5: 470–481.

DeBello, W.M., Feldman, D.E. and Knudsen, E.I. (2001) Adaptive axonal remodeling in the midbrain auditory space map. J. Neurosci., 21: 3161–3174.

Delgutte, B., Joris, P.X., Litovsky, R.Y. and Yin, T.C. (1999) Receptive fields and binaural interactions for virtual-space stimuli in the cat inferior colliculus. J. Neurophysiol., 81: 2833–2851.

Duhamel, J.R., Bremmer, F., BenHamed, S. and Graf, W. (1997) Spatial invariance of visual receptive fields in parietal cortex neurons. Nature, 389: 845–848.

Feldman, D.E., Brainard, M.S. and Knudsen, E.I. (1996) Newly learned auditory responses mediated by NMDA receptors in the owl inferior colliculus. Science, 271: 525–528.

Feldman, D.E. and Knudsen, E.I. (1997) An anatomical basis for visual calibration of the auditory space map in the barn owl’s midbrain. J. Neurosci., 17: 6820–6837.

Feldman, D.E. and Knudsen, E.I. (1998a) Experience-dependent plasticity and the maturation of glutamatergic synapses. Neuron, 20: 1067–1071.

Feldman, D.E. and Knudsen, E.I. (1998b) Pharmacological specialization of learned auditory responses in the inferior colliculus of the barn owl. J. Neurosci., 18: 3073–3087.

Galletti, C. and Battaglini, P.P. (1989) Gaze-dependent visual neurons in area V3A of monkey prestriate cortex. J. Neurosci., 9: 1112–1125.

Galletti, C., Battaglini, P.P. and Fattori, P. (1995) Eye position influence on the parieto-occipital area PO (V6) of the macaque monkey. Eur. J. Neurosci., 7: 2486–2501.

Goossens, H.H. and van Opstal, A.J. (1999) Influence of head position on the spatial representation of acoustic targets. J. Neurophysiol., 81: 2720–2736.

Groh, J.M. (2001) Converting neural signals from place codes to rate codes. Biol. Cybern., 85: 159–165.

Groh, J.M., Kelly, K.A. and Underhill, A.M. (2003) A monotonic code for sound azimuth in primate inferior colliculus. J. Cogn. Neurosci., 15: 1217–1231.

Groh, J.M. and Sparks, D.L. (1992) Two models for transforming auditory signals from head-centered to eye-centered coordinates. Biol. Cybern., 67: 291–302.

Groh, J.M. and Sparks, D.L. (1996a) Saccades to somatosensory targets. I. behavioral characteristics. J. Neurophysiol., 75: 412–427.

Groh, J.M. and Sparks, D.L. (1996b) Saccades to somatosensory targets. II. motor convergence in primate superior colliculus. J. Neurophysiol., 75: 428–438.

Groh, J.M. and Sparks, D.L. (1996c) Saccades to somatosensory targets. III. eye-position-dependent somatosensory activity in primate superior colliculus. J. Neurophysiol., 75: 439–453.

Groh, J.M., Trause, A.S., Underhill, A.M., Clark, K.R. and Inati, S. (2001) Eye position influences auditory responses in primate inferior colliculus. Neuron, 29: 509–518.

Guo, K. and Li, C.Y. (1997) Eye position-dependent activation of neurones in striate cortex of macaque. Neuroreport, 8: 1405–1409.

Gutfreund, Y., Zheng, W. and Knudsen, E.I. (2002) Gated visual input to the central auditory system. Science, 297: 1556–1559.

Hartline, P.H., Vimal, R.L., King, A.J., Kurylo, D.D. and Northmore, D.P. (1995) Effects of eye position on auditory localization and neural representation of space in superior colliculus of cats. Exp. Brain Res., 104: 402–408.

Ingham, N.J., Hart, H.C. and McAlpine, D. (2001) Spatial receptive fields of inferior colliculus neurons to auditory apparent motion in free field. J. Neurophysiol., 85: 23–33.

Itaya, S.K. and Van Hoesen, G.W. (1982) Retinal innervation of the inferior colliculus in rat and monkey. Brain Res., 233: 45–52.

Jay, M.F. and Sparks, D.L. (1984) Auditory receptive fields in primate superior colliculus shift with changes in eye position. Nature, 309: 345–347.

Jay, M.F. and Sparks, D.L. (1987a) Sensorimotor integration in the primate superior colliculus. I. Motor convergence. J. Neurophysiol., 57: 22–34.

Jay, M.F. and Sparks, D.L. (1987b) Sensorimotor integration in the primate superior colliculus. II. Coordinates of auditory signals. J. Neurophysiol., 57: 35–55.

Jenkins, W.M. and Masterton, R.B. (1982) Sound localization: effects of unilateral lesions in central auditory system. J. Neurophysiol., 47: 987–1016.

Knudsen, E.I. and Konishi, M. (1978) A neural map of auditory space in the owl. Science, 200: 795–797.

Lewald, J. (1997) Eye-position effects in directional hearing. Behav. Brain Res., 87: 35–48.

Litovsky, R.Y., Fligor, B.J. and Tramo, M.J. (2002) Functional role of the human inferior colliculus in binaural hearing. Hear. Res., 165: 177–188.

Mascetti, G.G. and Strozzi, L. (1988) Visual cells in the inferior colliculus of the cat. Brain Res., 442: 387–390.

Masterton, R.B. (1992) Role of the central auditory system in hearing: the new direction. Trends Neurosci., 15: 280–285.

Mays, L.E. and Sparks, D.L. (1980) Saccades are spatially, not retinocentrically, coded. Science, 208: 1163–1165.

McAlpine, D., Jiang, D. and Palmer, A.R. (2001) A neural code for low-frequency sound localization in mammals. Nat. Neurosci., 4: 396–401.

Metzger, R.R., Mullette-Gillman, O.A., Underhill, A.M., Cohen, Y.E. and Groh, J.M. (2004) Auditory saccades from different eye positions in the monkey: implications for coordinate transformations. J. Neurophysiol., 92: 2622–2627.

Moore, D.R., Hutchings, M.E., Addison, P.D., Semple, M.N. and Aitkin, L.M. (1984a) Properties of spatial receptive fields in the central nucleus of the cat inferior colliculus. II. Stimulus intensity effects. Hear. Res., 13: 175–188.

Moore, D.R., Semple, M.N., Addison, P.D. and Aitkin, L.M. (1984b) Properties of spatial receptive fields in the central nucleus of the cat inferior colliculus. I. Responses to tones of low intensity. Hear. Res., 13: 159–174.

Paloff, A.M., Usunoff, K.G., Hinova-Palova, D.V. and Ivanov, D.P. (1985) Retinal innervation of the inferior colliculus in adult cats: electron microscopic observations. Neurosci. Lett., 54: 339–344.

Peck, C.K., Baro, J.A. and Warder, S.M. (1995) Effects of eye position on saccadic eye movements and on the neuronal responses to auditory and visual stimuli in cat superior colliculus. Exp. Brain Res., 103: 227–242.

Pouget, A., Deneve, S. and Duhamel, J.R. (2002a) A computational perspective on the neural basis of multisensory spatial representations. Nat. Rev. Neurosci., 3: 741–747.

Pouget, A., Ducom, J.C., Torri, J. and Bavelier, D. (2002b) Multisensory spatial representations in eye-centered coordinates for reaching. Cognition, 83: B1–B11.

Pouget, A. and Snyder, L.H. (2000) Computational approaches to sensorimotor transformations. Nat. Neurosci., 3(Suppl.): 1192–1198.

Russo, G.S. and Bruce, C.J. (1994) Frontal eye field activity preceding aurally guided saccades. J. Neurophysiol., 71: 1250–1253.

Russo, G.S. and Bruce, C.J. (1996) Neurons in the supplementary eye field of rhesus monkeys code visual targets and saccadic eye movements in an oculocentric coordinate system. J. Neurophysiol., 76: 825–848.

Semple, M.N., Aitkin, L.M., Calford, M.B., Pettigrew, J.D. and Phillips, D.P. (1983) Spatial receptive fields in the cat inferior colliculus. Hear. Res., 10: 203–215.

Snyder, L.H., Grieve, K.L., Brotchie, P. and Andersen, R.A. (1998) Separate body- and world-referenced representations of visual space in parietal cortex. Nature, 394: 887–891.

Sparks, D.L. (1989) The neural encoding of the location of targets for saccadic eye movements. J. Exp. Biol., 146: 195–207.

Squatrito, S. and Maioli, M.G. (1996) Gaze field properties of eye position neurones in areas MST and 7a of the macaque monkey. Vis. Neurosci., 13: 385–398.

Stricanne, B., Andersen, R.A. and Mazzoni, P. (1996) Eye-centered, head-centered, and intermediate coding of remembered sound locations in area LIP. J. Neurophysiol., 76: 2071–2076.

Trotter, Y. and Celebrini, S. (1999) Gaze direction controls response gain in primary visual-cortex neurons. Nature, 398: 239–242.

Werner-Reiss, U., Kelly, K.A., Trause, A.S., Underhill, A.M. and Groh, J.M. (2003) Eye position affects activity in primary auditory cortex of primates. Curr. Biol., 13: 554–562.

Weyand, T.G. and Malpeli, J.G. (1993) Responses of neurons in primary visual cortex are modulated by eye position. J. Neurophysiol., 69: 2258–2260.

Yamauchi, K. and Yamadori, T. (1982) Retinal projection to the inferior colliculus in the rat. Acta Anat. (Basel), 114: 355–360.

Zwiers, M.P., Versnel, H. and Van Opstal, A.J. (2004) Involvement of monkey inferior colliculus in spatial hearing. J. Neurosci., 24: 4145–4156.

Subject Index

acoustic 250–252, 261, 274, 276, 291, 313–314
adaptation 58, 74, 88, 93, 96–99, 101–103, 106, 138, 181, 218, 236, 274
aftereffects 93, 97, 99, 106, 274
afterimages 93, 96, 98–99
anomalous experience 259
attention 15, 26, 33, 37, 40, 49, 51–52, 59, 82, 100–101, 125–126, 133–137, 140, 147, 149–152, 154, 157–158, 161, 164–167, 169–171, 173–174, 207, 209, 228, 236, 238, 243, 248–249, 251–256, 259, 264–267, 273, 279, 282–283, 313
audition 243–245, 247, 249–250, 252–254, 256, 273, 278, 314, 321
auditory 37, 43–44, 243–256, 261–263, 273–274, 276–284, 287–288, 291–295, 298, 300, 304–308, 313–315, 317, 319
awareness 39, 44, 49, 100, 125–126, 137, 139–140, 177–179, 190, 192–194, 197, 201–203, 208–209, 228–230, 267, 275
binocular rivalry 125–126, 137–140, 177, 192, 208, 235–238
bird vision 49
blindness 93, 97, 99–101, 106, 217, 219–221, 223–224, 228, 230, 263, 288, 304, 306
blindsight 43, 217, 220–230
body 37–41, 44, 51–61, 72, 93, 104, 222, 230, 255–256, 264, 266, 268, 302, 314
camouflage 49–51, 55–57, 62, 82
color induction 93, 99
consciousness 39, 177–178, 190, 209, 235, 260–261
context 3–4, 6–18, 33–34, 37–42, 44, 51, 67, 76, 109–110, 118–121, 177, 179, 217, 222, 225, 253, 314
contour perception 49
contrast response 125, 132
critical period 287, 289, 294, 308
crossmodal 43, 221, 243, 245–246, 252–253, 255–256, 259, 261–263, 268, 273–276, 278–279, 281–284, 287–288, 304, 306, 308
crossmodal perception 259, 261–262
crossmodal integration 243, 256, 275, 283
crypsis 49–50, 55–56, 58–60, 63
disruptive coloration 49, 51–63
electrophysiology 88, 177, 253
fading and filling-in 67, 82–83, 90
fear 37, 41–42
feedback 3, 34, 39, 41, 82, 119, 126, 133, 136–137, 139–140, 151–153, 191–192, 194, 202, 204–205, 207–209, 223, 225, 228, 261–262, 273, 275–279, 282–283
figure-ground segregation and grouping 67–68
filling-in 41, 67–68, 76, 82–86, 90, 93, 97–99, 106, 109–115
flash-lag effect 243, 248–250
flicker response 125
fMRI 5–10, 12–13, 18, 27, 40–42, 44, 125–126, 129–135, 137–139, 154, 177, 192, 208–209, 235, 237–238, 261, 264, 306
Gestalt 67, 76–77, 79–82, 86–88, 90, 109–110, 199–200
gist 4, 23, 25, 31–34, 41–42
global image feature 23–27, 33
human 23, 27, 31, 33, 40–43, 56–57, 61, 67, 69–70, 73–75, 81–83, 90, 94, 110, 112, 121, 125–129, 131–134, 138, 140, 165, 174, 177, 182–183, 186, 189, 191–192, 206, 208–209, 219, 221–222, 226, 228, 230–231, 253, 255, 261–262, 264, 268, 273, 284, 288, 308
illusion 67, 72, 76, 93, 106, 110, 177, 179, 181, 192, 199, 206–207, 275
inferior colliculus 306, 313, 315
invisibility 99, 177–178, 187–188, 199, 206, 209
leucotomy 235–236
lightness 49, 109–110, 112, 115, 117–119, 259, 263
long-range color interaction 67
low spatial frequencies 3, 5, 26, 42
magno- and parvocellular LGN 125
map 5, 68, 125, 127, 153, 157, 173–174, 194, 202, 247, 313, 315
monkey 40, 43, 67, 72, 74–76, 79–83, 88, 104, 133, 139, 148–150, 152, 157–171, 173–174, 177, 186–190, 198, 201, 207, 217, 223, 302, 307, 313, 315, 318–319
motion 39, 42, 50, 68, 72, 81–82, 88, 90, 93, 95, 97, 99–101, 104–106, 109, 174, 177, 199, 206, 209, 218, 220, 222–225, 227, 229, 231, 236, 247–250, 261, 273–284, 314
multisensory 43–44, 247, 256, 259, 273–276, 283–284, 287, 306, 313
multisensory integration 259, 273–275, 283, 313
N170 37, 39–40, 42, 44, 266
natural image 23, 26–27, 29, 149
natural vision 109, 119
neurophysiological correlates of perception 67
neuropsychology 88, 217, 253
number forms 259, 266–268
object perception 37, 49
object recognition 3–8, 10–18, 24, 33, 37, 42, 50, 147
optical imaging 177, 199
orbitofrontal cortex 3, 5–6, 18, 38, 44
P1 37, 39–41, 44, 224–226
parahippocampal cortex 3, 7–8, 18, 41, 44
parallel 17, 23, 33, 39, 81, 103, 120, 136, 147–151, 153–154, 177, 182, 187, 276, 283, 293, 295
perception 4, 7–8, 11, 14–15, 23, 25–26, 37–43, 49, 52, 57–58, 67–73, 76, 79, 81–82, 86–87, 89–90, 109–110, 112, 114, 125–126, 140, 152, 154, 174, 177–182, 188, 190, 192–194, 200, 206, 208–209, 220, 228, 235–236, 238, 243, 247, 259, 261–262, 264–265, 268, 273–275, 284, 288, 313
perceptive fields 67, 70, 72–75, 86, 94, 126
perceptual alternation 138, 235, 237
place code 313–315, 317–320
plasticity 93, 223, 228, 230, 287–288, 304, 306, 308
prefrontal cortex 3–4, 6, 17–18, 150–151, 235–237
primary visual cortex see V1
primate see monkey
priming 3, 12–15, 18, 26, 33, 209
psychophysics 28, 60, 63, 67, 69–70, 88–89, 93, 121–122, 133, 164, 170, 177, 182, 185, 199, 236
rate code 313–315, 317–321
retinotopy 125, 193
retrosplenial cortex 3, 7, 34, 41, 44
saccades 119–120, 147–148, 151, 157, 159, 161, 163, 169, 174
scene 3, 6–7, 11–17, 23–28, 30–34, 37, 39, 41–42, 44, 49–50, 101, 106, 109, 119–122, 147, 287
scene recognition 23, 25–27, 31, 33
selection 50–51, 54–55, 58, 147–148, 151–154, 157, 253, 283
selective attention 26, 125, 133–135, 137, 140, 207, 279, 283
serial 147–148, 151, 153
sound localization 313, 319
sound location 313–321
spatial envelope 23, 31–34
spatial frequency 4, 6, 23, 25–27, 29, 41, 84, 87, 113–114, 133
standing wave 177, 187–188, 209
striate cortex see V1
synchrony 5–6, 147, 150, 152, 154, 248, 251, 278–280, 282
synesthesia 259–268
top-down 3–6, 11, 14–18, 82, 126, 133, 135, 137, 148, 150, 209, 236, 273, 275–276, 278, 281, 283–284
training 10, 217, 223, 225–226, 228–230, 277, 279
V1 39, 73, 76, 80–81, 85, 109, 112, 116–121, 126, 128–129, 131–140, 149, 182, 186–193, 195, 198, 201–203, 207–209, 217–219, 221–223, 227–230, 235, 253, 255, 262–263, 287, 307
V4 28, 131–135, 137, 147–153, 192, 203, 218, 221, 230, 261, 265
ventriloquist effect 243, 245, 247
visibility 68, 177–180, 182–183, 185–186, 188–190, 192–194, 196, 201–203, 206–209
vision 49, 67–68, 70–71, 73–74, 87–90, 93, 104–105, 109–110, 112, 119, 147, 158, 177–178, 199–200, 203, 207, 217–223, 226–230, 243, 245–250, 252–254, 256, 259, 261, 263, 268, 273, 276, 279, 288, 306, 314, 318, 321
visual associations 3
visual field recovery 217
visual search 15, 103, 147–154, 265
visually deprived 287–290, 294–295, 299, 307–308
voice 37–39, 43–44, 243, 253