
16 Discussion: Biological Assumptions and Predictions

16.1.1 Recurrent Lateral Interactions

Perhaps the most important difference between LISSOM and other self-organizing map models is the settling of activity through recurrent lateral interaction. This process affects self-organization by modifying the activation patterns in two ways:

1. It concentrates activity in the maximally active regions of the network, suppressing activity elsewhere.

2. It decorrelates activation across the network through inhibitory lateral connections.

The first effect generalizes the winner-take-all process of SOM and other abstract models. Instead of finding one winner and adapting the neurons in a single continuous neighborhood around it, recurrent interaction selects a set of maximally active regions. This soft winner-take-all process is necessary for afferent connections to self-organize efficiently: Had there been no such process, all neurons would adapt for all inputs and the afferent weights would become the same for all neurons with the same anatomical RF. The explicit lateral interactions also eliminate the search mechanism for finding the winner which is used in SOM, and make it possible for all neurons to compute in parallel.
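
The soft winner-take-all settling described above can be illustrated with a minimal numerical sketch. This is not LISSOM's actual update equation: the one-dimensional sheet, the Gaussian kernel widths, the gain, and the piecewise-linear squashing below are all illustrative choices. The point is only that short-range excitation plus longer-range inhibition, applied recurrently, sharpens broad input bumps into concentrated activity bubbles.

```python
import numpy as np

def settle(afferent, n_iters=10, exc_sigma=1.5, inh_sigma=6.0, gamma=0.9):
    """Recurrently settle a 1-D sheet of units: each iteration adds
    short-range lateral excitation, subtracts longer-range lateral
    inhibition, and squashes the result into [0, 1]."""
    n = len(afferent)
    d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    exc = np.exp(-d**2 / (2 * exc_sigma**2))
    inh = np.exp(-d**2 / (2 * inh_sigma**2))
    exc /= exc.sum(axis=1, keepdims=True)   # each kernel row integrates to 1
    inh /= inh.sum(axis=1, keepdims=True)
    act = np.clip(afferent, 0.0, 1.0)
    for _ in range(n_iters):
        lateral = gamma * (exc @ act - inh @ act)
        act = np.clip(afferent + lateral, 0.0, 1.0)  # crude sigmoid
    return act

# Two broad, overlapping input bumps...
x = np.arange(100)
inp = 0.6 * np.exp(-(x - 30)**2 / 200) + 0.5 * np.exp(-(x - 70)**2 / 200)
out = settle(inp)
# ...settle into narrower, stronger bubbles: the peaks are amplified
# while the flanks and background are suppressed toward zero.
```

Running `settle` on any graded input shows the same qualitative behavior: activity concentrates in the maximally active regions, and weakly driven units are silenced, which is what allows only the best-responding neurons to adapt.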

The second effect, decorrelation, is crucial for efficient coding of the input. The inhibitory lateral connections decorrelate neural activity to the same extent that the neurons are known to be correlated, which reduces the redundancy of cortical activity most efficiently, as was discussed in Section 14.2. Without such recurrent lateral interactions, the neurons would act much like linear filters; with lateral interactions, the network activity is concentrated in a set of best-responding neurons. In this way, the input image is represented as a sparse coding of the primary visual features.

The cortical plasticity results of Chapter 6 also depend crucially on the lateral interactions. For example, the dynamic changes in the RF sizes of unstimulated neurons occur because there is less lateral inhibition from the surround. In the same way, the perceptual shift observed after the dynamic RF experiment results from reduced lateral inhibition. The extent of lateral excitation also determines the range over which cortical neurons can adapt and compensate for lesions.

In addition to self-organization, the recurrent lateral interactions affect the visual function of the map. They result in the tilt aftereffect discussed in Chapter 7, and possibly other aftereffects and illusions as well. They modulate synchronization across spatially separate regions, thus contributing to perceptual grouping as described in Part IV. Such functional aspects of recurrent lateral interactions will be discussed in Sections 16.3, 16.4, and 17.2.

16.1.2 Adapting Lateral Connections

As has been shown using the SOM and other self-organizing models, afferent receptive field structures such as those for ocular dominance and orientation can self-organize even with fixed lateral interactions. Similarly, receptive fields can be dynamic and cortex can reorganize after retinal lesions even in models with recurrent, but non-adapting, lateral interactions. What role, then, do the adapting lateral connections serve?

As was discussed in Section 14.2, self-organized inhibitory long-range lateral interactions are most important for eliminating redundant activity and coding visual input efficiently. Self-organization produces a variety of receptive fields for each retinal location, and the RFs are organized in a smoothly varying fashion across the cortex. Therefore, each input causes initial activity in many neurons, and most of this activity is redundant. An efficient coding can be achieved by retaining activity in only those units that are best tuned to the features of that retinal region. Such an encoding can be achieved by decorrelating activity through lateral connections, making the feature representations of the visual input across the cortex more independent. Therefore, lateral inhibition is necessary between the receptive fields, and its strength should be organized according to the correlations between them. This type of organization is what LISSOM achieves by adapting the inhibitory long-range lateral connections. Interestingly, perceptual phenomena such as the tilt aftereffect emerge as a side effect of this process.

On the other hand, adapting long-range lateral excitation is crucial for perceptual grouping. The correlations these connections learn implement the Gestalt principles that allow the network to decide which elements in the input should be bound together into a coherent object. Because maps are locally smooth, it is not as important to adapt the short-range excitatory connections, although such adaptation also helps improve the efficiency of coding. These connections sum the activity of nearby units, amplifying their responses. This process should also depend on activity correlations between neurons: Similar neurons should contribute more excitation than dissimilar ones. If the lateral excitatory connections are also self-organized, the weighting of lateral activity will be matched to these correlations. Such correctly weighted excitation will produce appropriately sized activity bubbles, with minimal spurious activity.

Importantly, lateral connections adapt synergistically with the afferent connections. The afferent organization determines the initial pattern of activity, and the afferent and lateral organizations together determine the final pattern after settling. The settled patterns in turn determine the weight changes through the Hebbian rule. If one of these connection types were fixed while the other developed, the resulting connection patterns would be different. Therefore, the afferent and lateral connections adapt together in LISSOM, and form matching structures.

In biology, it is not yet clear whether afferent and lateral connections develop in a similar synergistic fashion. However, experimental evidence suggests that they develop at approximately the same time in mammals. In the cat visual cortex, for example, lateral connections proliferate exuberantly and elongate rapidly in the first postnatal week, but they do not grow very much afterward (Callaway and Katz 1990; Katz and Callaway 1992). After the first week, the connections slowly refine into clusters by synaptic elimination and reach an adult-like organization by the end of 6 weeks. Simultaneously, afferent connections organize into ocular dominance and orientation columns: Rough ocular dominance and orientation columns are visible from about 2 to 3 weeks after birth and are also adult-like at about 6 weeks. These observations suggest that in the cat neocortex, a rough lateral connection structure emerges first, bootstraps self-organization, and is gradually refined into connections that selectively associate neurons with similar properties. However, establishing the details of this process will require additional experiments, both in cats and in other mammals such as primates.

16.1.3 Normalization of Connections

An important part of Hebbian learning is a regulatory process that keeps the connection weights from increasing without bounds (Section 3.3). In LISSOM, the different kinds of connections are assumed to be regulated independently and multiplicatively.

The different connection types must be adapted independently because they self-organize from different types of activity correlations. The afferent connections learn correlations between the cortex and the receptors, the short-range lateral excitatory connections learn correlations between near neighbors within the cortex, the long-range inhibitory connections learn redundancies between distant neurons, and the long-range excitatory connections (in PGLISSOM) learn correlations within coherent objects. If the connection weights are all normalized together, these different types of correlations will influence all the weights, and interfere with self-organization.

As was discussed in Section 3.3, there are two common ways to normalize the synaptic weights in self-organizing models. In LISSOM, the total weight of each type of connection is kept constant multiplicatively: After the weights are adapted, each weight is divided by the total weight of that connection type. An alternative would be to normalize subtractively: After the weights are adapted, the increase in total weight divided by the number of weights is subtracted from each weight (e.g. Goodhill 1993; Miller et al. 1989). Subtractive normalization would not work well in LISSOM. The reason is that it always results in some of the weights increasing to a maximum value and others decreasing to zero: No intermediate weight values develop (Miller and MacKay 1994). Lateral connections would therefore not store the precise correlations between neurons, and afferent connections would not develop precise representations of the input features. Furthermore, for stability, synaptic weights become fixed once they reach their maximum values, so gradual reorganization such as that observed with retinal and cortical lesions could not take place. Such representations would not be as useful in visual coding and processing as the precise, continuous weights obtained through multiplicative normalization.
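
The contrast between the two normalization schemes can be demonstrated in a toy Hebbian simulation. The input pattern, learning rate, weight cap, and iteration count below are arbitrary illustrative values, not parameters from LISSOM; the simulation only shows the qualitative difference between dividing by the total weight and subtracting the mean increase.

```python
import numpy as np

rng = np.random.default_rng(0)
base = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])  # mean input pattern
alpha, w_max = 0.1, 0.5

w_mult = np.full(8, 1 / 8)   # multiplicatively normalized weights
w_sub = np.full(8, 1 / 8)    # subtractively normalized weights

for _ in range(200):
    pre = np.clip(base + 0.05 * rng.standard_normal(8), 0.0, None)
    dw = alpha * pre                      # Hebbian increase (postsyn. rate = 1)
    # Multiplicative: divide by the total, so the sum stays constant.
    w_new = w_mult + dw
    w_mult = w_new / w_new.sum()
    # Subtractive: subtract the average increase, clip at the bounds.
    w_sub = np.clip(w_sub + dw - dw.mean(), 0.0, w_max)

# Multiplicative normalization preserves graded, intermediate weights
# proportional to the average input; subtractive normalization drives
# every weight toward one of the extremes (0 or w_max).
```

After the run, `w_mult` is a graded profile that mirrors `base`, while `w_sub` has split into saturated and eliminated weights, which is the Miller and MacKay (1994) result in miniature.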

The form of the multiplicative normalization used is not crucial: When the inputs are relatively regular and laid out on a retina, either the constant sum of weights or the constant vector length normalization can be used (Section 14.4). Self-organization also works similarly whether the normalization is done postsynaptically, i.e. over incoming connections as in the firing-rate LISSOM models (Section 4.4.1), or presynaptically over outgoing connections as was done in PGLISSOM (Section 11.4). As long as the normalization is done separately for each weight type, suitable parameters can be found for either case, and organized receptive fields and lateral interactions will develop.


However, the site of normalization is important for grouping: Presynaptic normalization makes it easier for the model to segment different objects. In this case, the postsynaptic cell receives inputs through weights that are each scaled differently, according to the outgoing weights of each presynaptic cell. Even relatively low activity can result in a large weight, and the postsynaptic cell can be more sensitive to small changes in the input. In segmentation tasks, small differences in the activation levels must be magnified, and presynaptic normalization makes this process easier.

In postsynaptic normalization, all incoming weights are scaled by the same value. The inputs are treated more equally than in presynaptic normalization, and the behavior of the neuron becomes slightly more stable. This property makes postsynaptic normalization preferable for models that do not include grouping. Biological data to date do not rule out either form of normalization, and they could even coexist. Computationally, connection weights could be modeled as a product of two factors, the postsynaptic and the presynaptic weight, each normalized separately (Leow 1994; Leow and Miikkulainen 1997). The different normalization processes could interact, and depending on the input, one or the other might dominate; they could also be specific to only the excitatory or inhibitory synapses. Future research, both experimental and computational, is necessary to verify the precise form of normalization in biological systems.
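
The difference between the two sites of normalization amounts to scaling a weight matrix by rows versus by columns. The matrix below is a hypothetical set of raw weights, not data from any model; it is chosen so that one presynaptic cell has a small total outgoing weight, making the amplifying effect of presynaptic normalization visible.

```python
import numpy as np

# Hypothetical raw connection strengths, indexed W[post, pre].
W = np.array([[0.2, 0.8, 0.1],
              [0.4, 0.4, 0.1],
              [0.4, 0.8, 0.2]])

# Postsynaptic: each cell's INCOMING weights (a row) sum to 1,
# so all inputs to a given cell are scaled by the same factor.
W_post = W / W.sum(axis=1, keepdims=True)

# Presynaptic: each cell's OUTGOING weights (a column) sum to 1,
# so the connections of a weakly connected presynaptic cell are
# individually boosted, amplifying its relative influence.
W_pre = W / W.sum(axis=0, keepdims=True)

# Presynaptic cell 2 has a small total outgoing weight (0.4), so its
# connection to postsynaptic cell 2 becomes much stronger under
# presynaptic than under postsynaptic normalization.
```

This is why presynaptic normalization can magnify small differences in activation, which helps segmentation, while postsynaptic normalization treats inputs more evenly and yields slightly more stable responses.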

16.1.4 The Role of Excitatory and Inhibitory Lateral Connections

In order for a LISSOM network to self-organize, the net lateral interactions between strongly responding units must be inhibitory at long ranges and excitatory at short ranges. Such lateral interactions are essential for concentrated activity bubbles to form and for self-organization to take place (Section 4.2.3). They are also a key ingredient common to most self-organizing models (Section 3.4.1; Miller 1994; Miller et al. 1989; von der Malsburg 1973).

The original biological inspiration for such interactions comes from the neural architecture of the retina, where long-range inhibition is well established. In the retina, lateral inhibition enhances contrast, especially at edges and boundaries of objects. Such interactions have been shown to produce an efficient coding of the retinal image, decorrelating and reducing redundancies in the photoreceptor activities (Atick 1992; Atick and Redlich 1990). Numerous researchers have proposed that lateral inhibition is a general principle of perceptual systems, and may occur similarly in the cortex (e.g. Blakemore et al. 1970).

Measurements of the activity levels of strongly stimulated cortical neurons indeed support the idea of long-range lateral inhibition and local excitation in the cortex (Grinvald et al. 1994; Sceniak, Hawken, and Shapley 2001). For instance, Grinvald et al. (1994) performed optical imaging experiments visualizing large-scale cortical activity. The responses to two stimuli were compared: a surround stimulus consisting of a high-contrast grating with a square hole (or mask) at the center, and a center stimulus consisting of three small high-contrast bars that fit within the masked region. When the surround and center stimuli were presented together, the center region was substantially less active than when the center stimulus was presented alone, indicating that the surround was inhibiting the center area. Similarly, mapping excitatory and inhibitory regions using high-contrast sine gratings shows that surround influences tend to be excitatory locally but inhibitory at longer ranges (Sceniak et al. 2001).

However, the long-range interactions in the cortex are more complex than the above experiments might suggest. Anatomical surveys show that 80% of the synapses of long-range lateral connections connect pyramidal cells directly to other pyramidal cells, which are thought to make excitatory synapses only (Gilbert et al. 1990; Hirsch and Gilbert 1991; Kisvárday and Eysel 1992; McGuire et al. 1991). The other 20% of the connections target inhibitory interneurons, which in turn contact the pyramidal cells, and thus mediate inhibition. Even though the inhibitory connections are outnumbered, the net effect at the columnar level has been difficult to establish with anatomical studies. For instance, the interneurons often synapse at regions such as the soma, where their effects may be larger than those of excitatory neurons, which synapse farther out on the dendrites (Gilbert et al. 1990; McGuire et al. 1991). Thus, the known anatomy is compatible with both long-range excitation and long-range inhibition.

Electrophysiological evidence indicates that in fact the same connections can have either excitatory or inhibitory effects, depending on how strongly neurons are activated (Hirsch and Gilbert 1991; Weliky et al. 1995; see Angelucci, Levitt, and Lund 2002 for a review). The balance between these two types of connections depends on image contrast: The incoming lateral connections of a neuron have a mildly excitatory influence when the surrounding area is activated weakly (as it would be by a low-contrast stimulus) and a strongly inhibitory effect when the surround is activated strongly (as it would be by a high-contrast stimulus; Hirsch and Gilbert 1991; Weliky et al. 1995). Thus, for high-contrast stimuli, as in the Grinvald et al. (1994) study, the interactions are usually inhibitory, even though the anatomical connections are primarily between excitatory neurons.

The details of the cortical circuit implementing this contrast dependence remain unclear. One early proposal was that the inhibitory interneurons are inherently more effective than the direct excitatory connections, but have a higher threshold for activation (Sillito 1979). At very low stimulus levels, the excitatory effects would predominate, but at high levels the inhibitory interneurons would become progressively more active and eventually would suppress the response of the target cell. More recently, Douglas, Koch, Mahowald, Martin, and Suarez (1995) proposed a detailed circuit based on recurrent short-range excitatory lateral connections. They showed how the inhibitory connections can dominate the response even though they are fewer in number. Simplified versions of such circuits have been modeled by Stemmler et al. (1995) and Somers et al. (1996). They propose that these complex connections make it easier to detect weak, large-area stimuli while suppressing spatially redundant activation for strong stimuli. Figure 16.1 shows one such circuit that could give rise to contrast-dependent effects.
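
The qualitative behavior of such a circuit can be captured in a few lines. The weights and thresholds below are made-up illustrative values, not measured parameters: one long-range excitatory input drives the pyramidal cell directly and also drives a high-threshold interneuron that, once active, inhibits strongly, so the net effect flips sign with input strength.

```python
def threshold_linear(x, theta):
    """Firing rate of a threshold-linear unit (a crude simplification)."""
    return max(0.0, x - theta)

def net_lateral_effect(drive, w_exc=1.0, w_inh=3.0, theta_inh=0.5):
    """Net effect on a pyramidal cell of one long-range excitatory input:
    direct excitation, minus disynaptic inhibition via an interneuron
    with a high activation threshold but a strong inhibitory weight."""
    direct = w_exc * drive
    disynaptic = w_inh * threshold_linear(drive, theta_inh)
    return direct - disynaptic

weak = net_lateral_effect(0.3)    # low-contrast surround: interneuron silent
strong = net_lateral_effect(1.0)  # high-contrast surround: interneuron active
# weak is positive (net excitatory); strong is negative (net inhibitory)
```

The sign flip arises purely from the interneuron's threshold and strength: below threshold only the direct excitation is seen, above it the stronger disynaptic inhibition dominates, mirroring the contrast dependence described above.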

Fig. 16.1. Local microcircuit for lateral interactions. This circuit can potentially explain how lateral interactions can depend on the input contrast. A long-range lateral connection from an excitatory cell contacts two pyramidal excitatory cells (large black triangles) and one inhibitory cell (large circle). The inhibitory cell has a high threshold for activation, but strongly inhibits the pyramidal cells when activated. Weak excitation activates the pyramidal cells monosynaptically, and does not activate the inhibitory cell. However, strong excitation activates the inhibitory cell as well, causing a net inhibitory effect. In this manner, a single incoming excitatory long-range lateral connection could have inhibitory effects for strong stimuli (e.g. high-contrast patterns), and excitatory effects for weak stimuli. The SG model of cortical columns in PGLISSOM produces a similar effect, and can be seen as an abstraction of this circuitry at the columnar level. The excitatory synapses (shown as small triangles) adapt by Hebbian learning, but the inhibitory synapses (shown as small circles) are fixed in strength. Such learning can be approximated by direct Hebbian excitatory and inhibitory connections, as is done in PGLISSOM. Adapted from Weliky et al. (1995).

The two-layer (SG) model of cortical columns in PGLISSOM can be seen as a column-level abstraction of such circuits. In PGLISSOM, SMAP has long-range inhibition and short-range excitation and drives the self-organizing process; in GMAP, both connections have long range and implement grouping. When the combined effects of these interactions are measured on a cortical column in PGLISSOM, excitatory effects are found to dominate with low-contrast inputs, and inhibitory effects with high-contrast inputs, as they do in the cortex.

Importantly, self-organization is primarily driven by high-contrast inputs in PGLISSOM, and most likely in animals as well. Low-contrast patterns rarely cause a significant response because of the neurons’ nonlinear activation function. The resulting synaptic changes are small and do not significantly affect the learning process. Thus, the simplifying assumption, common to all LISSOM models, that the long-range lateral interactions are primarily inhibitory during self-organization, is well founded. The GMAP layer can be omitted from models that do not focus on perceptual grouping; the remaining network includes short-range excitation and long-range inhibition, which is the necessary connectivity for proper self-organization to occur.

For computational convenience, the long-range inhibitory interactions are represented in all LISSOM models as direct connections instead of connections through interneurons (such as those in Figure 16.1). Because the interneurons can be brought to firing threshold rapidly and repeatedly without fatigue (Thomson and Deuchars 1994), they introduce only a small delay in the inhibitory process and can be approximated functionally by direct connections. Also, while there is no clear evidence for Hebbian strengthening of direct inhibitory synapses in the cortex, the inhibitory effects can be modified through Hebbian strengthening of excitatory synapses onto the inhibitory interneurons. Therefore, direct Hebbian learning is a valid abstraction of adapting lateral inhibition in the cortex, resulting in more parsimonious models with equivalent behavior.

16.1.5 Connection Death

An important component of self-organization in LISSOM is the pruning of unused lateral connections (Section 4.4.2). This process is useful computationally, but it is also well motivated biologically.

More than half of the long-range lateral connections in the neocortex are estimated to disappear during development (Callaway and Katz 1990; Katz and Callaway 1992; McCasland et al. 1992; Purves and Lichtman 1985). In the visual cortex, structured lateral connectivity emerges from an initially unstructured organization after axons projecting to incorrect targets die off (Callaway and Katz 1990). Which connections survive depends on how often they are active. The reason could be that synapses are nourished in proportion to their strength. Once formed, a weak synapse may survive only for a limited time without sufficient trophic factors.

The onset of connection death in LISSOM, td, models this survival time. Synapses whose strength falls below the survival threshold are not eliminated immediately, but only if they stay below the threshold until td. The connections are also pruned at well-spaced intervals ∆td, instead of being eliminated as soon as they become weak. As a result, even in prolonged self-organization, short-term fluctuations in synaptic strength will not cause inappropriate connection death. As was seen in Section 14.2.3, the resulting patchy lateral connections are crucial in forming a sparse, redundancy-reduced visual code.
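
The survival-time rule can be sketched as a small simulation. This is a toy version, not LISSOM's implementation: the threshold, the survival time, and the per-step bookkeeping below are arbitrary, and LISSOM prunes at spaced intervals rather than checking every step. The sketch shows only the key property that transient dips are forgiven while chronic weakness leads to elimination.

```python
import numpy as np

def prune_step(weights, below_count, w_thresh=0.05, t_d=50):
    """One step of survival-time pruning: a connection dies only after
    its weight has stayed below w_thresh for t_d consecutive steps;
    a dip that recovers before t_d resets the counter."""
    below = weights < w_thresh
    below_count = np.where(below, below_count + 1, 0)  # reset on recovery
    dead = below_count >= t_d
    weights = np.where(dead, 0.0, weights)  # pruned connections fixed at zero
    return weights, below_count

# One healthy weight, one that dips transiently, one chronically weak.
w = np.array([0.20, 0.20, 0.02])
count = np.zeros(3, dtype=int)
for t in range(60):
    if 10 <= t < 20:
        w[1] = 0.02       # transient dip for 10 steps...
    elif t == 20:
        w[1] = 0.20       # ...then recovery: this connection survives
    w, count = prune_step(w, count)
# w[0] and w[1] survive; w[2] has been pruned to zero.
```

The dip on connection 1 lasts well short of the survival time, so its counter resets on recovery; connection 2 stays below threshold for the full span and is eliminated.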

Connection death is also important for perceptual grouping. The long-range excitatory connections in PGLISSOM are pruned after training so that only the strong ones remain (Section 11.4). The resulting patchy connectivity represents activity correlations in the input, implementing the Gestalt principles that drive the grouping process. They also make it possible to adapt the lateral interactions dynamically during performance. Since the connectivity is patchy and stable, the strengths can be modulated at a fast time scale without changing their overall effect. Although not strictly necessary for grouping, such fast dynamic adaptation results in more robust synchronization (Baldi and Meir 1990; von der Malsburg 1981, 2003; Wang 1996).

An important side effect of connection death is that it limits how extensively the network can adapt to changes in internal and external inputs. For example, before the connections are pruned, the network can recover function even after relatively large cortical damage, but such plasticity is limited in the pruned adult network (Sections 6.4.2 and 16.4.4). Also, after the connections have been pruned to represent activity correlations in the input, if those correlations change, it will be difficult for the network to adapt, as it will be for animals (Sections 8.1 and 9.4.2). Connection death can therefore be seen as a process that makes the computational system more efficient, at the expense of the ability to adapt to changes.

16.1.6 Parameter Adaptation

As was discussed in Sections 4.2.3 and 4.4.3, consistent lateral inhibition is necessary for the self-organizing process, and gradually reducing the excitatory radius and gradually making the neurons more difficult to activate allow more regular maps to form. These mechanisms were included primarily for computational reasons, allowing the maps to self-organize even from very disordered starting points. Biological maps have more order initially, and thus may not require these processes.

However, biological counterparts do exist for the parameter adaptation processes in LISSOM. They represent maturation based on time and trophic factors, and can be used to establish a maturation schedule for LISSOM models independently of input-driven self-organization. Such maturation allows studying deprivation and critical periods, as reviewed in Section 2.1.4 and implemented in Sections 9.4 and 13.4.

For instance, several lines of evidence suggest that there is more net excitation during early development than later. First, immature neurons are connected by a network of excitatory gap junctions that are not seen in the adult (Sutor and Luhmann 1995). Second, cross-correlation studies in the primary visual cortex of the kitten showed that net lateral excitation extends to distances of 1 mm in the first 2 to 3 weeks (after compensating for cortical growth), and decreases to less than 400 µm by the seventh to ninth week (Hata et al. 1993). Third, direct studies of synaptic connections in the ferret visual cortex found that local excitatory synaptic connections increase rapidly in number and extent at the time of eye opening, and subsequently prune down to much more local connectivity (Dalva and Katz 1994). Thus, animal cortex may also have wider excitatory activations in early stages.

To fine-tune the LISSOM map, the activation threshold for neurons is gradually raised so that neurons become more difficult to activate. Interestingly, cortical neurons also become harder to trigger electrically as they mature. Immature neurons have higher input resistances, longer time constants and more linear relationships between applied current and voltage than do mature cells (Prince and Huguenard 1988). Thus, older cells require more electrical stimulation to activate. These effects may be due to homeostatic plasticity processes, which tend to normalize the frequency of neuronal firing over time (Turrigiano 1999). That is, immature neurons have RFs that are not yet well developed and are not yet a good match to the statistics of visual scenes, and thus homeostatic mechanisms may lead them to fire more easily (for a given amount of electrical stimulation). Older neurons have well-tuned RFs, and can thus require a good match before responding. In LISSOM, these processes are approximated by gradually raising the sigmoid threshold. Extending LISSOM to include automatic mechanisms for regulating firing probability is discussed in Section 17.1.1.
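
The two parameter adaptations described above can be sketched as a simple schedule. The linear interpolation and all numerical values below are hypothetical: LISSOM itself changes these parameters in discrete steps on a fixed schedule, and the actual values depend on the network size and training regime.

```python
def maturation(step, n_steps, radius0=8.0, radius1=2.0,
               theta0=0.1, theta1=0.4):
    """Illustrative maturation schedule: the lateral excitatory radius
    shrinks and the sigmoid activation threshold rises linearly over
    the course of self-organization, mimicking gradual maturation."""
    frac = step / (n_steps - 1)
    radius = radius0 + frac * (radius1 - radius0)
    theta = theta0 + frac * (theta1 - theta0)
    return radius, theta

early = maturation(0, 100)    # wide excitation, easy to activate
late = maturation(99, 100)    # narrow excitation, harder to activate
```

Early in training, wide excitation and a low threshold let even rough, poorly tuned responses spread and order the map; late in training, narrow excitation and a high threshold demand a good afferent match, refining the map locally.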

For simplicity, LISSOM includes constant levels of inhibition throughout the simulation. However, the role of inhibition in animals is more complex. First, the neurotransmitter GABA has an excitatory effect on postsynaptic cells in early development, in contrast to its inhibitory effects in the adult. Second, direct electrical stimulation does not create inhibitory responses until about 10 days after birth in the rat (Sutor and Luhmann 1995). (Presumably, inhibition could be evoked before birth in animals such as monkeys with a longer gestation, but this possibility has not yet been studied.) Assuming that the homeostatic mechanisms mentioned above also apply to inhibition, it would be possible to extend LISSOM with an automatic mechanism for introducing inhibition. Once cells begin to activate regularly, a feedback mechanism could automatically increase inhibition to balance excitation. Such a mechanism would reflect the biological process, while allowing enough inhibition to initiate the self-organizing process.

16.2 Genetically Driven Development

In Part III, the hypothesis that input-driven self-organization is based on internally generated patterns as well as external visual inputs was tested in computational simulations. The first assumption was that simple patterns such as retinal waves could drive the early self-organization of V1. Second, higher levels could be similarly organized assuming more complex patterns could be generated in the brainstem as PGO waves and propagated during REM sleep. Third, pattern generation would have been discovered by evolution because it makes it easier to construct complex adaptive systems than hard wiring or general learning. These assumptions are discussed in more detail in this section.

16.2.1 Self-Organization of V1

The shape and distribution of internally generated activity patterns determine how the maps and connections develop in HLISSOM (Chapter 9). Although a variety of such patterns have been detected experimentally, retinal waves (Section 2.3.3) are currently the most likely cause for the early organization of V1. In general, such patterns have to satisfy four main requirements.

First, there needs to be a mechanism for generating internal patterns consistently while V1 develops. Such a mechanism has indeed recently been mapped out in ferrets (Butts, Feller, Shatz, and Rokhsar 1999; Feller 1999; Feller, Butts, Aaron, Rokhsar, and Shatz 1997): retinal waves emerge from the spontaneous behavior of neurons connected together by gap junctions. In essence, one neuron fires randomly, which excites its neighbors, and then regulatory mechanisms step in to keep the activity localized. The result is an activity spot that appears randomly, drifts, and disappears. It is likely that other pattern generation mechanisms will be found in other species once their developing sensory systems are studied in detail. A variety of such mechanisms are already known to exist in the motor systems of different vertebrate and invertebrate species (see Marder and Calabrese 1996 for a review).
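
The essence of this mechanism, a localized activity spot that appears at random, drifts, and disappears, can be captured in a purely phenomenological toy generator. This is not a model of the gap-junction dynamics; the spot width, drift rate, and lifetime below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

def retinal_wave(n=40, steps=30, spot_sigma=3.0, drift_sigma=1.0):
    """Phenomenological wave: a Gaussian activity spot appears at a
    random position on a 1-D 'retina', drifts randomly from frame to
    frame, and ends after a fixed lifetime."""
    pos = rng.uniform(0, n)              # spot appears at a random place
    x = np.arange(n)
    frames = []
    for _ in range(steps):
        pos += rng.normal(0.0, drift_sigma)   # random walk of the spot
        frames.append(np.exp(-(x - pos)**2 / (2 * spot_sigma**2)))
    return np.array(frames)

frames = retinal_wave()
# Each frame contains one smooth, localized blob of activity whose
# peak position changes only gradually between consecutive frames.
```

Feeding such spatially coherent, smoothly moving blobs into a self-organizing map is what allows internally generated activity to play the same role as structured visual input.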

Second, the internally generated activation needs to drive the activation of neurons in V1. Which sources of activity actually reach the developing V1 neurons is not yet known. However, the retinal wave patterns are known to occur before orientation maps and selectivity can be measured in V1 (see Issa et al. 1999; Wong 1999 for reviews), so they are correctly timed for this role. Further experiments will be needed to verify whether the retinal waves produce significant neural responses in V1 while orientation maps develop, or whether other sources of activity are more prominent at this time.

Third, the developing V1 needs to perceive the internally generated patterns as oriented. So far, no such patterns have been observed. For example, the retinal waves in the ferret are approximately as wide as the V1 receptive fields in the adult animal (Wong et al. 1993); if they were relayed directly to V1, they would activate all the inputs to many of the cortical cells, and would not appear oriented to most of them. However, as the simulations in Section 9.2 showed, the center–surround processing in the ON and OFF channels of the retinal ganglia and LGN could emphasize the edges of the retinal wave patterns enough to give them a distinct orientation. Alternatively, neurons in the LGN may respond only transiently, to the first appearance of activity in each part of the retinal wave, which again would make the patterns seen by V1 more like edges than like large activated areas. Although it is not yet known how the ganglia and the LGN respond to the retinal waves, in either of these cases the broad, internally generated patterns could drive the development of orientation maps.
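
The edge-emphasizing effect of ON-channel center–surround processing can be demonstrated with a one-dimensional difference-of-Gaussians filter, a standard abstraction of an ON-center cell. The kernel widths and blob size below are arbitrary; the point is only that a uniform blob much wider than the filter yields responses concentrated at its edges.

```python
import numpy as np

def gaussian(x, sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def on_response(signal, sigma_center=1.0, sigma_surround=3.0):
    """Rectified response of a difference-of-Gaussians (center minus
    surround) filter, a standard abstraction of an ON-center cell."""
    x = np.arange(-10, 11)
    dog = gaussian(x, sigma_center) - gaussian(x, sigma_surround)
    return np.clip(np.convolve(signal, dog, mode="same"), 0.0, None)

# A wide uniform blob, much larger than the filter (like a retinal
# wave relayed to cells with small receptive fields)...
blob = np.zeros(100)
blob[30:70] = 1.0
on = on_response(blob)
# ...yields ON activity concentrated just inside the blob's edges:
# in the uniform interior, center and surround cancel.
```

Because only the edges survive the filtering, a broad, shapeless wave is effectively converted into elongated contours, which is what would give the patterns a distinct orientation by the time they reach V1.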

Fourth, the internally generated patterns need to activate the ON and OFF channels differently. If the same patterns appear in both channels, V1 will not be able to learn the center–surround relationship between them, and will not be able to process natural image input. While the origins of retinal wave patterns are still not fully understood, they do result in different activations in the ON and OFF neurons in the retina (Myhr et al. 2001). It is possible that the activation is generated before it branches into ON and OFF channels, or else activity may be generated separately in each channel. In either case, such a difference should be enough to drive the development of V1 neurons, as was shown in Section 9.2.

Because retinal waves are consistent with these computational requirements and little is known about the properties of other spontaneous activity, they are currently the most likely candidate for driving the prenatal self-organization of V1 orientation maps. Other sources of patterns could contribute to this process in addition to or even instead of retinal waves, provided they satisfy the requirements above.

16.2.2 Self-Organization of Higher Levels

The face-selective area simulations in Chapter 10 rely on similar assumptions about the PGO waves (Section 2.3.4) as the V1 simulations do on retinal waves. Like retinal waves, PGO waves are not the only possible cause for prenatal self-organization, but they are the most likely cause for higher levels, given the computational requirements and our current understanding of internally generated patterns. These assumptions are evaluated in this section.

First, a neural mechanism must exist for generating spatial configurations of activity similar to the three-dot pattern (Section 10.2.6). Such activity might occur