- Contents
- Preface
- Acknowledgments
- 1 Introduction
  - Vision and experience
  - Vision and natural science
  - Form vision
  - Visual illusions
- 2 Optics
  - Light
  - Geometrical optics
  - Imaging in the eye
- 3 Physiology of the eye
  - The evolution of eyes
  - The eye is not a camera
  - The optic media
  - The retina
  - Signal generation
- 4 Sensitivity and response
  - Psychophysical sensitivity
  - Vision in daylight and in the dark
  - Linear and nonlinear response
  - Spectral sensitivity
  - Response
  - Adaptation of cones
  - Photometry
  - Contrast vision
  - Vision loss
- 5 Color
  - Color order systems
  - The physics of color stimuli
  - Color differences
  - Color induction and adaptation
- 6 Color vision
  - Color between phenomenon and theory
  - Thomas Young or George Palmer?
  - Young–Helmholtz’s three-receptor theory
  - Hering’s opponent colors theory
  - The retinex theory
  - Color in current neuroscience and neurophilosophy
  - Defective and normal color vision
  - Limitations of the three-receptor theory of color vision
  - Opponency and an opponent ‘color code’
  - Correlates of related and unrelated colors
  - Antagonistic receptive fields of opponent cells
  - Spectral sensitivity and response
  - The opponent model and color perception
  - Summary
- 7 Neural correlates
  - Neural representations
  - Class A and class B observations
  - B- and D-types of cells
  - Psychophysics and the parallel pathways
- 8 Brain processes
  - Cortical organization and vision
  - Visual centers and areas
  - Higher visual areas
  - The binding problem
  - Mirror neurons
  - The ‘split brain’
  - Localization of brain activity: methods
  - Visual pathways and clinical investigation
  - Cortical visual impairment
- Appendix
- Glossary
- References
- Index
7 Neural correlates
Low- and high-level neural correlates
Throughout this book, there have been several references to the neural coding of visual attributes and to the co-variation, or correlation, of perceptual properties and neural responses. Before we go on to present some more data on such correlations, we need to consider in more detail what correlation and co-variation imply. We shall discuss some of the principles and hypotheses that link neural activity to perception. Several general issues should be addressed. For instance, what are the relevant coding principles and strategies? How does the brain represent the visual world? Another, more specific question is: how does the brain represent objects and object properties, such as movement and color?
In answer to such questions, Horace B. Barlow (1972) postulated that
. . . perceptions are caused by the activity of a rather small number of neurons selected from a very large population of predominantly silent cells. The activity of each single cell is thus an important perceptual event and it is thought to be related quite simply to our subjective experience . . . . A description of that activity of a single nerve cell which is transmitted to and influences other nerve cells, and of a nerve cell’s response to such influences from other cells, is a complete enough description for functional understanding of the nervous system.
This neuron doctrine has led to great advances in the study of the function of single cells. In vision, the idea that the function of the nervous system can best be described at the level of single cells has gained support from a series of experiments on the detection of object properties using weak stimulation, close to visual threshold. Comparisons of psychophysical and neural sensitivity to several dimensions of light and color indicate that, for a particular task, it is the most sensitive cells that determine psychophysical threshold.
Light Vision Color. Arne Valberg © 2005 John Wiley & Sons Ltd
The alternative, and more general, hypothesis is that objects and their properties are represented by the activity of an ensemble, or network, of cells that can be distributed over several areas of the brain. This hypothesis is attractive for more complex stimuli and for stimulus intensities above threshold, which engage more cells. Generally, the most striking support for Barlow’s hypothesis has been provided by comparisons of psychophysical sensitivity with the threshold sensitivity of peripheral neurons, for example in the retina. Cortical recordings have been more successful in finding correlates of higher-level functions, where some form of convergence and distributed processing is likely.
Neural representations
As we shall see, different parts of the brain form specialized areas that handle different kinds of visual information. Recent studies using functional magnetic resonance imaging (fMRI) have shown that many of the locations that respond during active vision are also active when the same events are memorized (e.g. in mental imagery with closed eyes).
Neurons at and beyond the ganglion cell level of the retina respond to varying stimulus magnitudes with a sequence of discrete and identical action potentials (spikes), and they are generally more selective and stimulus-specific than the earlier retinal cells. These neurons respond to optical stimulation related to certain stimulus properties, and their firing rate in impulses/s changes with stimulus specificity, intensity and contrast. Activation is balanced by inhibition, and adaptation and habituation appear to prevent the cells from being excessively stimulated. We have indicated the usefulness of detection and discrimination sensitivity as a means of comparing psychophysics and physiology. For a reliable physiological measure of sensitivity, we must consider the cell’s responsiveness to a particular stimulus and determine the change in firing rate in response to a stimulus increment or decrement. The noisiness of the cell and the level of its maintained discharge must also be considered when defining a threshold criterion.
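As a concrete illustration, such a threshold criterion can be sketched numerically. The data, the criterion (mean maintained discharge plus two standard deviations) and the function name below are hypothetical choices for illustration, not a procedure taken from this book:

```python
import statistics

def detection_threshold(maintained_rates, evoked_rates_by_level, k=2.0):
    """Return the lowest stimulus level whose mean evoked firing rate
    exceeds the maintained discharge by k standard deviations.
    Rates are in impulses/s; inputs are lists of trial measurements."""
    base_mean = statistics.mean(maintained_rates)
    base_sd = statistics.stdev(maintained_rates)
    criterion = base_mean + k * base_sd
    for level, rates in sorted(evoked_rates_by_level.items()):
        if statistics.mean(rates) >= criterion:
            return level
    return None  # cell never reaches criterion over the tested range

# Hypothetical trial data (impulses/s) at three stimulus contrasts:
maintained = [12, 14, 11, 13, 12, 15]
evoked = {0.01: [13, 12, 14], 0.02: [15, 16, 14], 0.05: [22, 24, 23]}
print(detection_threshold(maintained, evoked))
```

The criterion of two standard deviations is arbitrary; the point is only that both the responsiveness and the variability of the maintained discharge enter the definition of threshold.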
One may ask whether the number of nerve impulses per unit time is a complete and sufficient description of the neural code, or whether other forms of information transfer between nerve cells are possible. The traditional view is that a cortical neuron, for instance, is an integrate-and-fire device. However, the time interval between successive spikes from one cell and the temporal correlation of action potentials from different cells can also be considered possible sources of information. Synchrony between spikes or impulse trains from different cells (and cell assemblies) might be registered by neural elements serving as coincidence detectors. However, at the level of single cells in the retina and the geniculate, firing rate appears to be a close to optimal code. At the higher levels of more composite and organized neural networks, additional means of information transfer seem possible.
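The integrate-and-fire view can be sketched in a few lines. This is a standard leaky integrate-and-fire model, a textbook abstraction rather than anything measured; the time constant, threshold and reset rule are illustrative values:

```python
def lif_spike_times(input_current, dt=0.001, tau=0.02, r=1.0,
                    v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire: the membrane potential v integrates
    its input and leaks toward rest; when v crosses threshold the cell
    emits a spike and resets. Returns the spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r * i_in) * dt / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_rest  # reset after the spike
    return spikes

# A constant suprathreshold current yields a regular spike train;
# a stronger current yields a higher firing rate.
weak = lif_spike_times([1.2] * 1000)
strong = lif_spike_times([3.0] * 1000)
assert len(strong) > len(weak)
```

In such a model the only output variable is the spike train itself, which is why firing rate, interspike intervals and spike timing exhaust the candidate codes discussed above.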
A distinction is often made between local and parallel distributed processing, or between local representation and vector coding. In the field of artificial intelligence (AI), much attention has been paid to the properties of so-called ‘neural networks’ and their achievements in pattern recognition. These theories have also influenced our thinking about brain processes. Neural networks consist of an assembly of interconnected units, and they can be used to simulate processes in simple biological systems and to test theories of how parts of the nervous system interact. Groups of related neurons in the brain can be thought of as equivalent to local processing units.

The functional state and the output of a neural net depend on the weights allocated to the interconnections between the different elements. The weights determine the relative contribution of each individual element to the total response. These weights remind us of the Hebbian synapse (Hebb, 1949) between nerve cells, i.e. of connections that are strengthened by particular stimulus or activation patterns and by frequent use. Plasticity can be represented by changing weights within a network, resulting in a different input-vector to output-vector transformation.

Whether or not this theory applies to the brain, the concept of an abstract multidimensional vector representing an object, instead of single elements, is central to distributed representation. This idea has proven extremely successful in image analysis and pattern recognition. The use of several dimensions allows us to define different n-dimensional feature spaces (Figure 8.9), and the use of numerical values (i.e. a set of numbers signifying magnitudes) allows a simple definition of similarity within a certain feature domain and between feature domains.
In biological networks, where different areas of the brain are activated simultaneously by composite stimuli, this leads to the so-called ‘binding problem’: how information distributed over several feature spaces is integrated, for instance by synchrony (see Chapter 8).
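The numerical definition of similarity mentioned above can be made concrete. A common choice (one of several possibilities, not one prescribed here) is the cosine of the angle between two activity vectors; the vectors below are invented purely for illustration:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors: 1.0 for identical direction
    of activity, 0.0 for orthogonal (unrelated) activity patterns."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical activity vectors over the same population of four units:
carrot_orange = [0.9, 0.6, 0.1, 0.0]
apricot_orange = [0.8, 0.7, 0.2, 0.0]
sky_blue = [0.1, 0.2, 0.9, 0.7]

# Similar stimuli give nearby vectors, hence high similarity.
assert cosine_similarity(carrot_orange, apricot_orange) > \
       cosine_similarity(carrot_orange, sky_blue)
```

Nothing in this sketch depends on what the dimensions stand for, which is exactly the appeal of vector coding: the same similarity measure works within and between feature domains.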
The representation of the excitation of three types of cone in a three-dimensional color space is a particularly simple example of a sensory vector space. At the level of retinal ganglion cells, cone outputs are linearly transformed into another, opponent color vector space with cardinal axes, which is retained in the LGN and is further transformed in area V1. In this particular case, the early cone transformations lead to individual ganglion cells representing the resultant vectors.
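A minimal sketch of such a linear transform follows. The matrix is schematic, chosen only to illustrate the idea of cardinal opponent axes; it is not a measured physiological weighting:

```python
# Schematic opponent transform: rows give an achromatic (L+M) signal,
# an 'L-M' signal and an 'S-(L+M)' signal from cone excitations (L, M, S).
OPPONENT = [
    [1.0,  1.0,  0.0],   # luminance-like channel
    [1.0, -1.0,  0.0],   # 'L-M' opponent channel
    [-0.5, -0.5, 1.0],   # 'S-(L+M)' opponent channel
]

def cones_to_opponent(cone_vec):
    """Linearly transform a cone-excitation vector (L, M, S) into the
    opponent vector space with cardinal axes."""
    return [sum(w * c for w, c in zip(row, cone_vec)) for row in OPPONENT]

# A stimulus exciting L and M cones equally and S cones weakly:
print(cones_to_opponent([0.4, 0.4, 0.2]))  # ≈ [0.8, 0.0, -0.2]
```

Because the transform is linear, each opponent cell's response is simply a weighted sum of cone excitations, i.e. the resultant vector mentioned in the text.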
At a subcortical level, a wealth of data has confirmed the idea of local coding, i.e. that correlates of stimulus attributes can be found at the level of single cells, as Barlow (1972) suggested. Several examples will be given later. It is central to Barlow’s doctrine that ‘there is nothing else looking at what the cells are doing, they are the ultimate correlates of perception’ (Marr, 1982). Let us use color as an example: local coding would imply that there is a particular, specialized cell type with a narrow spectral bandwidth which responds in conjunction with the perception of a certain color, e.g. an ‘orange-coding cell’. Another type of cell would respond to a similar, but slightly different, orange color (e.g. carrot orange vs apricot orange). Such local hue coding would be analogous to orientation coding with highly orientation-selective cells such as one finds in area V1 (see Chapter 8), where different cells respond to
different orientations with a narrow angular resolution (of about 10°). However, is this feasible in the case of color?
Imagine an alternative form of orientation coding, similar to our postulated low-level hue coding using cardinal axes. It would utilize combinations of the relative responses of two populations of neurons sensitive to orthogonal orientations, e.g. one for vertical and one for horizontal (these orientations being defined with respect to the retina and not to the outside world). Orientation could then be coded by the relative responses of the orthogonal groups of cells, e.g. 45° would correspond to equal inputs from both groups.
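This two-population scheme can be sketched as a decoding rule. The cos² tuning assumed below is an illustrative choice, not a claim about real V1 cells; the only property that matters here is that equal responses from the two groups decode to 45°:

```python
import math

def decode_orientation(r_vertical, r_horizontal):
    """Decode an orientation (0 deg = vertical, 90 deg = horizontal)
    from the relative responses of two orthogonal populations, assuming
    each population's response falls off as cos^2 of the angle from its
    preferred orientation. Equal responses decode to 45 degrees."""
    theta = math.atan2(math.sqrt(r_horizontal), math.sqrt(r_vertical))
    return math.degrees(theta)

def population_responses(theta_deg):
    """Hypothetical cos^2 tuning of the two groups to a given stimulus."""
    t = math.radians(theta_deg)
    return math.cos(t) ** 2, math.sin(t) ** 2

# The decoder recovers the stimulus orientation from the two responses:
for true_theta in (0, 30, 45, 60, 90):
    rv, rh = population_responses(true_theta)
    assert abs(decode_orientation(rv, rh) - true_theta) < 1e-6
```

Note that two broadly tuned populations suffice to represent a continuum of orientations, the same economy that vector coding offers for hue.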
The first principle of early local coding combined with convergence would imply the existence of a hierarchical system of distinct cell types for every relevant feature. In the case of face recognition one would, for example, assume the existence of a central locus (the so-called ‘grandmother cell’) that integrates the particular features of your grandmother (her eyes, ears, nose, mouth, etc.). This hypothetical cell would be activated only by the presence of the right shapes and the right combination of these features. The concept of signals from local processing units converging on a central processing site, giving rise to extreme stimulus selectivity in higher-level single cells, arose after Hubel and Wiesel first used a hierarchical model to explain orientation selectivity. Such hierarchical organization seems, for instance, to underlie the strange cases of ‘face blindness’ (prosopagnosia; see Chapter 8), where the afflicted person cannot even recognize the faces of close family members. Although this hierarchical model has played a certain role in the history of neuroscience, most neuroscientists have found the concept of a specific cell for every combination of stimulus features unacceptable as a general principle. A strategy of a different cell responding to every possible combination of nuances of general attributes, such as color, shape, structure, position, distance, movement, etc., would require an unrealistically huge number of specific cells to ensure that no nuance was missed. For color alone, one would need between 5 and 10 million neurons, since this is the number of shades that can be discriminated under optimal conditions. An alternative explanation in terms of distributed processing would imply that prosopagnosia somehow results from a disturbance of the established connections and weights between neural elements within a neural network.
In the context of color representation, vector coding could mean that the attributes of a color such as orange (its hue, saturation and lightness) depend on the relative activity of different, cardinal cell types. It could, for instance, depend on the response ratio between ‘L–M’ and ‘M–S’ cells in the LGN, or on the ratio of the inputs of these cells in area V1, or later. In associating vector coding with parallel distributed processing and neural nets, we have not addressed the question of whether there are cells at a higher level that detect this relation. In the next chapter we shall see that, for color coding in area V1, recent evidence indicates a coding strategy with hue-selective cells for many more directions than in the LGN (although not with narrower wavelength tuning). This suggests that the vector coding at lower levels might have been transformed into local coding in V1, or into vector coding with a greater number of base vectors.
