
17.1 Extensions to the LISSOM Mechanisms


As described in Section 8.2.3, HLISSOM includes a divisive normalization term to achieve such contrast invariance. While this method is simple and works well in most cases, it is an abstraction and should eventually be replaced by more biologically accurate mechanisms.
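As a rough sketch (not the exact HLISSOM equations; the function name and parameters here are hypothetical), divisive normalization scales the raw weighted sum by the total activity in the receptive field, making the response approximately invariant to input contrast:

```python
import numpy as np

def afferent_response(x, w, gamma=1.0, eps=1e-8):
    """Divisively normalized afferent response: the raw weighted sum is
    divided by the total activity in the RF, so scaling the input
    contrast up or down leaves the response essentially unchanged."""
    raw = float(np.dot(w.ravel(), x.ravel()))
    norm = gamma * float(np.sum(np.abs(x))) + eps
    return raw / norm

rng = np.random.default_rng(0)
w = rng.random((5, 5))                   # afferent weights of one neuron
x = rng.random((5, 5))                   # input activity in the RF
r_low = afferent_response(x, w)
r_high = afferent_response(2.0 * x, w)   # same pattern at double contrast
```

Note that the denominator sums all activity in the RF, which is exactly why activity that does not match the weights is penalized.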

One problem with divisive normalization is that it penalizes any activity in the anatomically circular receptive fields that does not match the neuron’s weights. As a side effect, V1 responds less strongly to input where the stimuli are closely spaced. For example, the V1 network responds less to a high-frequency square-wave grating than to the same pattern with every other bar removed.

In future work, it may be possible to remove this limitation by using a push–pull arrangement of weights rather than full-RF normalization (Ferster 1994; Hirsch, Gallagher, Alonso, and Martinez 1998b; Troyer, Krukowski, Priebe, and Miller 1998). With a push–pull RF, cortical neurons receive both excitatory and inhibitory afferent inputs from different parts of the retina, rather than purely excitatory input as in LISSOM and in most other models. One difficulty with push–pull weights is that the inhibitory weights need to connect the neuron with regions in the retina that are anti-correlated with it, and therefore such weights cannot be learned through Hebbian learning. Thus, either a new learning rule or a more complicated local circuit in the cortex will need to be developed so that push–pull weights can self-organize.
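A toy sketch of the push–pull idea (the names and the 1D stimulus are illustrative only): RF subregions that match the stimulus excite the neuron, while anti-correlated subregions actively inhibit it, instead of all non-matching activity being penalized by full-RF normalization:

```python
import numpy as np

def pushpull_response(stimulus, rf):
    """Toy push-pull RF: retinal regions matching the RF excite the
    neuron, while regions anti-correlated with it actively inhibit."""
    push = np.clip(rf, 0, None)        # excitatory subregions
    pull = np.clip(-rf, 0, None)       # inhibitory subregions
    on = np.clip(stimulus, 0, None)    # bright parts of the input
    off = np.clip(-stimulus, 0, None)  # dark parts of the input
    exc = np.sum(push * on) + np.sum(pull * off)
    inh = np.sum(push * off) + np.sum(pull * on)
    return max(exc - inh, 0.0)

rf = np.array([-1.0, 1.0, -1.0])          # OFF-ON-OFF subfields
aligned = np.array([-1.0, 1.0, -1.0])     # stimulus matching the RF
misaligned = np.array([1.0, -1.0, 1.0])   # contrast-reversed stimulus
r_aligned = pushpull_response(aligned, rf)
r_mis = pushpull_response(misaligned, rf)
```

Here the aligned stimulus excites the neuron strongly while the contrast-reversed one is suppressed to zero, the behavior that full-RF normalization only approximates.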

Such an extension should allow LISSOM to self-organize as before, but would represent the afferent circuitry more accurately, and also lead to reliable responses to a wider variety of input patterns.

17.1.3 Modeling Substructure Within Columns

LISSOM is a column-level model, and each unit in the model stands for the response patterns of a set of cells in a vertical column in the cortex. An important extension is to take more of the structure within the column into account, more precisely representing the fine-grained structure and processing that occur at this level.

First, the responses recorded from LISSOM represent averages of multiple cells. These responses are a good match to data obtained with optical imaging techniques, which also measure averages over multiple nearby cells. For instance, LISSOM units in map regions near pinwheel centers and fractures in orientation maps tend to have lower orientation selectivity, just as in maps measured using optical imaging (Blasdel 1992b).

Interestingly, when the pinwheel neuron responses are measured using microelectrode recordings, they appear as selective as neurons in other parts of the map (Maldonado et al. 1997). However, the pinwheel centers do have a wider variety of orientation preferences in a small area. Thus, optical imaging techniques probably report lower selectivity because some neurons in that area respond to each of the different orientations.

In order to model the detailed behavior of individual neurons within pinwheel centers, LISSOM could be extended so that each unit in the current model is represented by a set of different units. Connectivity for each unit could be determined stochastically, so that each unit could function differently but the average response of all the units in the column would be similar to the current LISSOM model. Such a model is currently too expensive to simulate at the map level, but could become feasible in the near future, especially through techniques outlined in Chapter 15. In this way, LISSOM could be extended to model low-level neural phenomena more accurately within the same basic framework.
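This proposal can be sketched as follows (the sizes and connection probability are arbitrary): each current unit is expanded into subunits whose connections are Bernoulli-sampled from the column's weight vector, so individual subunits behave differently while the column average stays close to the original unit:

```python
import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_subunits = 16, 8       # hypothetical sizes
w_column = rng.random(n_inputs)    # weights of one current LISSOM unit

# Each subunit keeps a random subset of the column's connections
# (a Bernoulli mask), rescaled so the expected total weight is preserved.
p_connect = 0.5
masks = rng.random((n_subunits, n_inputs)) < p_connect
w_sub = masks * w_column / p_connect

x = rng.random(n_inputs)           # one input pattern
column_response = w_column @ x     # response of the original column unit
subunit_responses = w_sub @ x      # responses of the stochastic subunits
mean_response = subunit_responses.mean()
```

The subunit responses vary (as individual pinwheel neurons do), while their mean approximates the column-level response in expectation.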

Second, the circuitry and subfunctions in the column can be modeled in more detail. The SG model of the cortical column in PGLISSOM is already a step in this direction. Although it was motivated primarily on computational grounds (i.e. in order to implement both self-organization and grouping in the same map; Section 11.1), it has an intriguing biological implementation in terms of the layered structure of the cortex (Section 16.3.3). This interpretation can be expanded by implementing the circuitry in more detail. For example, the broad long-range inhibition in GMAP can be replaced by local inhibitory interneurons as outlined in Section 16.1.4, making it possible to determine precisely how layer 2/3 contributes to self-organization. Similarly, the connectivity within the column can be modeled in more detail, and the contribution of the deeper layers to synchronization analyzed.

Such detailed models of cortical columns would make it possible to understand computations in maps more precisely, leading to predictions that can be verified with existing cellular recording techniques.

17.1.4 Phase-Invariant Responses

The behavior of cortical columns in the current LISSOM model is based on simple cells only, i.e. cells that respond most strongly when their preferred input is aligned with the ON and OFF subfields of their receptive field (Section 2.1.1). Such cells are thought to be the first in V1 to show orientation selectivity, but V1 also includes cells with more general responses (Hubel and Wiesel 1968). Termed complex cells, they respond to any input within their RF regardless of the alignment; in other words, their response is phase invariant. Such responses have been observed in the visual cortex, although the circuitry that gives rise to them is not well understood.

Most current models with phase-invariant responses are hierarchical: Complex cell behavior is obtained by pooling outputs from several simple cells (e.g. Hyvärinen and Hoyer 2001; Weber 2001). An alternative approach is to establish local recurrent connections within a single set of V1 neurons (Chance, Nelson, and Abbott 1999): Phase-invariant responses can then occur among the simple cells through recurrent excitation. It is not yet clear which approach is a closer match to how phase-invariant responses arise in V1.
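The pooling alternative can be illustrated with a classic energy-model sketch (not the circuitry of either cited model; the Gabor parameters are arbitrary): two simple cells with quadrature-phase RFs are combined so the pooled response is nearly independent of stimulus phase:

```python
import numpy as np

def gabor(size, theta, phase, freq=0.25, sigma=2.0):
    """Simple-cell RF: a Gabor at a given orientation and phase."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    env = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr + phase)

def complex_response(img, theta):
    """Pool two quadrature-phase simple cells (energy model)."""
    s0 = np.sum(img * gabor(img.shape[0], theta, 0.0))
    s90 = np.sum(img * gabor(img.shape[0], theta, np.pi / 2))
    return np.hypot(s0, s90)

# Gratings at two different phases give nearly the same pooled response
size = 21
ax = np.arange(size) - size // 2
xx, _ = np.meshgrid(ax, ax)
r_a = complex_response(np.cos(2 * np.pi * 0.25 * xx), 0.0)
r_b = complex_response(np.cos(2 * np.pi * 0.25 * xx + np.pi / 2), 0.0)
```

Shifting the grating phase swaps activity between the two simple cells, but their pooled energy stays almost constant.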

The LISSOM model could be extended with additional sheets of neurons in V1 representing complex cells, or with a local circuit that pools the responses of simple cells into phase-invariant ones. Both of these extensions involve connecting neurons in a small local area, and assume that the area includes neurons that respond to different phases. Phase is indeed distributed randomly within a column and between nearby columns in animals (DeAngelis et al. 1999). The likely reason is that phase in the input may effectively be random over short time scales due to small eye movements known as microsaccades (Martinez-Conde, Macknik, and Hubel 2000).


Current LISSOM simulations tend to group RFs by phase similarity (in addition to similarity of orientation, ocular dominance, and direction selectivity), because neurons with similar phase preferences are activated together. For the phase-invariance extensions to work, LISSOM needs to be further augmented with a learning rule that associates stimuli over time, such as the trace learning rule (Földiák 1991a). In this variant of Hebbian learning, connections between neurons are strengthened if they respond soon after one another, instead of having to respond simultaneously. Based on microsaccade-like movements during training on visual images, the model should then develop phase-invariant responses and random phase distributions like those seen in animals. Such a model could be used to compare the two alternatives, and to draw predictions for future biological experiments.
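A minimal sketch of such a trace rule (the parameter values and the one-hot "phases" are illustrative): the postsynaptic trace decays slowly across presentations, so stimuli that follow each other closely in time, as different phases do under microsaccades, strengthen the same afferent weights:

```python
import numpy as np

def trace_hebb(inputs, w, alpha=0.05, delta=0.2):
    """Foldiak-style trace rule: the postsynaptic trace y_bar decays
    slowly, so inputs occurring close together in time strengthen the
    same weight vector, instead of requiring simultaneous activity."""
    y_bar = 0.0
    for x in inputs:
        y = float(w @ x)                          # postsynaptic response
        y_bar = (1 - delta) * y_bar + delta * y   # slowly decaying trace
        w = w + alpha * y_bar * x                 # Hebbian update with trace
        w = w / np.linalg.norm(w)                 # normalization keeps w bounded
    return w

rng = np.random.default_rng(1)
w = rng.random(8)
w = w / np.linalg.norm(w)
phase_a = np.eye(8)[0]            # two "phases" of a stimulus, as toy inputs
phase_b = np.eye(8)[1]
seq = [phase_a, phase_b] * 50     # alternating in time, as under microsaccades
w = trace_hebb(seq, w)
```

After training on the alternating sequence, both phase inputs end up strongly connected to the same unit, which is the association a purely simultaneous Hebbian rule could not form.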

17.1.5 Time-Lagged Activation

The direction map simulations in Sections 5.5 and 5.6 focused on how LGN cells with different lags can result in direction-selective responses in V1. However, any other source of different delays for signals reaching V1 neurons could also contribute to direction selectivity (Clifford and Ibbotson 2002). Since the biological mechanisms underlying such selectivity are not well understood, computational models could serve a pivotal role in evaluating the alternatives.

For example, different lags in the lateral connections in the cortical maps could be used to represent motion. If connections from nearby locations make synaptic connections on distal dendrites and connections from farther away on proximal dendrites, their effect would arrive at the soma at the same time. A coincidence detection mechanism could then detect these events and generate a spike, allowing the neuron to respond to moving inputs in a specific location, direction, and velocity. Alternatively, reverberating feedback loops (Amit 1994; Hebb 1949; Seung, Lee, Reis, and Tank 2000; Wang 2001) within V1 or between V1 and other areas could act as memory for previous inputs, providing information about past input patterns just as the lagged cells and connections do.
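The delay-line idea can be reduced to a toy coincidence detector (all names and numbers here are hypothetical): conduction delays are arranged so that a stimulus sweeping across the presynaptic neurons at the preferred velocity makes every input arrive at the soma within a narrow window:

```python
import numpy as np

def coincidence_response(spike_times, conduction_delays, window=1.0):
    """Fire (1.0) only if all delayed inputs arrive within `window`."""
    arrivals = np.asarray(spike_times) + np.asarray(conduction_delays)
    return float(arrivals.max() - arrivals.min() <= window)

positions = np.array([1.0, 2.0, 3.0])   # distances of presynaptic neurons
delays = 4.0 - positions                # nearby input -> distal dendrite (long delay),
                                        # distant input -> proximal dendrite (short delay)
towards = positions / 1.0               # activation times for a sweep at velocity 1
away = positions[::-1] / 1.0            # the same sweep in the opposite direction

r_preferred = coincidence_response(towards, delays)
r_opposite = coincidence_response(away, delays)
```

For the preferred direction all arrivals coincide at the soma and the unit fires; for the opposite direction they are spread out in time and the unit stays silent.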

Future simulations can focus on where such lags might occur in different species, and how those differences can result in direction selectivity, leading to predictions for future biological experiments.

17.2 Modeling New Phenomena with LISSOM

In addition to the topics covered by current LISSOM simulations, the model can be used to understand a wide range of other visual phenomena. This section proposes a number of such studies, focusing on development, visual function, grouping, and scaling up to larger networks and to higher levels of visual processing. Each project is possible future work using the Topographica software described in Section 17.4.


17.2.1 Spatial Frequency, Color, and Disparity in V1

The LISSOM simulations in Chapter 5 focused on how orientation, ocular dominance, and direction maps develop in V1. However, the approach is very general and can be easily extended to include other dimensions of visual input, such as spatial frequency, color, and disparity. Maps for each of these dimensions can be developed by generating input that varies in these dimensions, self-organizing the model based on these inputs, and measuring the response properties of V1 neurons that result.

For instance, the current simulations are based on single-size ON and OFF cells (i.e. a single DoG center and surround radius), and thus include only a limited range of spatial frequencies. Spatial frequency maps can be simulated by including multiple sets of LGN cells, each with a different DoG size. The V1 network will organize into different groups preferring different spatial frequencies, which can then be compared against experimental spatial frequency maps (such as those observed by Issa et al. 2001).
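Such a multi-scale LGN front end can be sketched as a difference-of-Gaussians filter bank (the kernel size and sigma values are illustrative): each sheet uses a different center radius and therefore prefers a different spatial frequency band:

```python
import numpy as np

def dog_kernel(size, sigma_c, ratio=1.6):
    """ON-center difference-of-Gaussians kernel; the surround sigma is
    a fixed multiple of the center sigma, so center and surround are
    balanced and the kernel responds to contrast, not mean luminance."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    sigma_s = ratio * sigma_c
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

# Multiple LGN sheets, each with a different DoG size, prefer different
# spatial frequencies (the sigma values here are arbitrary illustrations).
kernels = {s: dog_kernel(41, s) for s in (0.8, 1.6, 3.2)}
```

Training a model V1 with all three sheets at once should produce the groups of units preferring different spatial frequencies described above.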

Color maps can be developed in LISSOM by including separate groups of retinal and LGN neurons for the different colors. Each eye will be represented by three sheets of photoreceptors R, G, and B, corresponding to long, medium, and short wavelengths. One sheet of ON cells and another of OFF cells in the LGN will have center and surround RFs on all three photoreceptor sheets, and thus respond to differences in intensity. Eight other LGN sheets are connected to the photoreceptors in a manner that establishes four red/green opponent RF types (such as excitatory center on the red sheet and inhibitory surround on the green sheet), and four blue/yellow opponent RF types (such as excitatory center on the blue sheet and inhibitory surround on the red and green sheets). V1 receives input from all of these LGN cells, and should develop patches selective for colored areas of the input (e.g. regions with greater R activation than G activation). The model can be validated by comparing its color-selectivity structure to the color-selective areas found in biological V1 (Landisman and Ts’o 2002b). If the model is extended to include V2 (as discussed below), similar comparisons can be made with color maps in V2 (Conway 2003; Ts’o, Roe, and Gilbert 2001; Xiao, Wang, and Felleman 2003). It will also be interesting to determine whether the distribution of color representations in the model matches the statistical properties of color in natural images (Doi, Inui, Lee, Wachtler, and Sejnowski 2003; Lee, Wachtler, and Sejnowski 2002b), and whether lateral interactions contribute to constant perception of color under different lighting conditions (Barnard, Cardei, and Funt 2002; Brainard 2004).
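The opponent-channel scheme described above can be sketched pointwise (a simplification: the actual LGN sheets would have center-surround spatial RFs, and the function name is hypothetical):

```python
import numpy as np

def lgn_color_channels(R, G, B):
    """Sketch of the opponent channels in the text: a luminance
    (intensity) channel plus red/green and blue/yellow opponent
    responses, computed pointwise for simplicity."""
    luminance = (R + G + B) / 3.0
    red_green = R - G                  # e.g. red center, green surround
    blue_yellow = B - (R + G) / 2.0    # e.g. blue center, red+green surround
    return luminance, red_green, blue_yellow

# A uniformly gray input drives only the luminance channel...
gray = np.full((4, 4), 0.5)
lum, rg, by = lgn_color_channels(gray, gray, gray)

# ...while a pure red input drives the red/green opponent channel.
ones, zeros = np.ones((4, 4)), np.zeros((4, 4))
_, rg_red, by_red = lgn_color_channels(ones, zeros, zeros)
```

Units downstream of these channels would then self-organize color-selective patches as proposed in the text.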

Modeling disparity does not require additional LGN cells, but will require input patterns slightly offset in each eye, as they are in stereoscopic images. Through self-organization, such patterns will result in groups of cells in V1 that prefer different disparities, i.e. different distance between corresponding features. The model can again be validated by comparing with experimental results for disparity maps measured using optical imaging (such data are currently only available for V2; Ts’o et al. 2001).
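Generating such training input is straightforward; a 1D sketch (the function name and parameters are illustrative) offsets the same Gaussian stimulus horizontally in the two eyes by the desired disparity:

```python
import numpy as np

def stereo_pair(width, center, disparity, sigma=1.5):
    """Generate a 1D Gaussian stimulus for each eye, offset horizontally
    by `disparity` pixels, as in stereoscopic training input."""
    x = np.arange(width)
    left = np.exp(-(x - (center - disparity / 2.0))**2 / (2 * sigma**2))
    right = np.exp(-(x - (center + disparity / 2.0))**2 / (2 * sigma**2))
    return left, right

left, right = stereo_pair(32, center=16.0, disparity=4.0)
offset = int(np.argmax(right)) - int(np.argmax(left))   # recovered disparity
```

Self-organizing on many such pairs with varying disparity should produce V1 groups preferring different feature offsets between the eyes.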

Compared with orientation, ocularity, and direction, much less is known about how spatial frequency, color, and disparity are represented in the brain. Extending the model to these dimensions should lead to a number of specific, testable predictions, significantly advancing our understanding of how input features are represented in the visual cortex.

17.2.2 Differences between Species

The simulations in this book have drawn upon experimental data from multiple species, including human, monkey, cat, tree shrew, and ferret. This approach was necessary because most of the relevant experiments have so far been performed in only one species. For instance, only in the cat have lateral connections been measured in strabismic animals (Löwel 1994; Löwel and Singer 1992), and only in the ferret have direction maps been measured in V1 (Weliky et al. 1996). Because the primary visual cortex is remarkably similar across these species, pooling the experimental data in this way is generally valid. However, there are several differences between species as well; a computational model such as LISSOM can be instrumental in understanding which differences are significant and what their origins are.

Some of the main species-specific differences include: (1) Ocular dominance maps in the cat have a patchier, less stripe-like organization than in the monkey (Blasdel 1992a; Löwel 1994); (2) in the cat, V1 orientation maps have only a weak bias for horizontal and vertical orientations, unlike in the ferret (Müller et al. 2000); (3) orientation and ocular dominance patches are less likely to intersect at right angles in cat than in monkey (Müller et al. 2000; Obermayer and Blasdel 1993); and (4) in ferrets, some regions of the central visual field are entirely monocular, rather than binocular with alternating ocular dominance stripes as in other species (White, Bosking, Williams, and Fitzpatrick 1999).

There are a number of possible sources for such differences that could be modeled in LISSOM: (1) The shape of the head and the position of the eyes differ between species, which affects how correlated the patterns between the eyes are. Such differences in turn will change how the ocular dominance maps develop. (2) The anatomy and physiological properties of the retina differ between species. For instance, retinal ganglion cells in the rabbit are selective for motion direction, unlike in other species (see Clifford and Ibbotson 2002 for a review). (3) Whereas cats have time-lagged cells at the LGN level, similar cells have not yet been found in monkeys (Hübener et al. 1997; Löwel et al. 1988; Saul and Humphrey 1992). As a result, these species may represent time-varying input differently, which in turn may affect how the different features are organized in the cortex. (4) The various areas of the visual cortex, including V1, have significantly different sizes in species such as the ferret and the monkey, and the cortical area devoted to the corresponding visual area differs as a result (Kaas 2000). (5) Various developmental events (e.g. when spontaneous retinal activity stops and orientation maps emerge) take place at different times in different species (Blasdel, Obermayer, and Kiorpes 1995; Issa et al. 1999). (6) Internally generated activity patterns differ between species, potentially changing how the animal develops prenatally and how postnatal visual experience affects them (Jouvet 1998).


Species-specific differences can be modeled in LISSOM using different parameter values, demonstrating how self-organization depends on the specific input patterns seen by a developing visual area. Such hypotheses are difficult to test experimentally, but a computational model like LISSOM is ideal for the task: It is possible to set up hypothetical developmental scenarios and observe their outcome. In this way, it may be possible to determine which of the known anatomical and environmental differences could be responsible for the different maps and responses in different species.

17.2.3 Prenatal and Early Postnatal Development of V1

The simulations in Chapter 9 showed how orientation maps can be constructed in a self-organizing process that takes place both before and after birth. Once other feature dimensions have been simulated for a particular species (as proposed in Section 17.2.1), LISSOM can be used to construct a realistic and detailed model of how all the dimensions develop at once, based on internally generated activity and postnatal visual experience. A similar model can be built to understand how the ability to integrate contours could be constructed.

To allow for a detailed comparison, such studies need to focus on a single species. Currently, the most detailed data on the early development of maps are available for the ferret (although the cat is also a good candidate). The LISSOM model can be set up with parameters closely tied to measurements in ferrets, and the initial development of maps can then be simulated in detail. As mentioned in Section 2.1.4, experiments have shown that dark rearing, eyelid suturing, and modifying the visual environment can significantly change how maps develop in ferrets (Crair et al. 1998; Crowley and Katz 1999; Gödecke and Bonhoeffer 1996; Stellwagen and Shatz 2002; Weliky and Katz 1997), and it should be possible to replicate each of these experiments in the model.

Ocular dominance will be a particularly interesting test case, because OD maps have been found in animals before they have had any visual experience. Whether neural activity is required to develop them initially is currently controversial (Crowley and Katz 1999; Stellwagen and Shatz 2002); LISSOM simulations could help determine what types of activity are sufficient for this process. For instance, LISSOM simulations in Section 5.4.4 suggest that realistic adult maps require correlation between the two eyes, yet patterns like retinal waves are not correlated between the eyes. One possibility is that the OD map is constructed in two phases: prenatally with uncorrelated inputs (which leads to strabismic-like maps), and postnatally with correlated images that differ primarily in brightness. Simulations could demonstrate how an initial strabismic-like OD map changes into an adult-like OD map with visual experience, a hypothesis that could then be tested in future animal experiments. Alternatively, the developing cortex may receive simultaneous or alternating input from two sources, one uncorrelated (e.g. retinal waves) and one identical for both eyes (e.g. brainstem input during sleep). Simulations of this process should show adult-like maps at all stages of development, which again could be compared with animal measurements. The results of such comparisons would allow distinguishing between the two possible mechanisms of constructing the OD map.

The origin of direction selectivity is another interesting research issue because this property appears to develop differently from orientation and ocular dominance. Specifically, direction maps have not been detected in young ferrets raised in darkness, even though orientation and ocular dominance maps have been found robustly (White and Fitzpatrick 2003). Assuming that retinal waves result in orientation selectivity, perhaps the waves do not move fast enough or often enough to cause direction selectivity to emerge at the same time. Alternatively, perhaps the signals reaching V1 during early development do not have sufficiently different lag times, which again would prevent the direction selectivity from emerging. Through simulation studies, it should be possible to determine whether the amount of motion in retinal waves can lead to direction maps, or whether only orientation maps will develop. In the latter case, further simulations could verify whether direction selectivity can develop within an existing orientation and ocular dominance map based on postnatal training with moving natural images. Such simulations would result in predictions for future biological experiments, making it possible to determine how direction selectivity develops in the visual cortex.

Prenatal and postnatal simulations can also be set up to understand how contour integration circuitry is constructed. As was discussed in Section 16.4.8, Gaussian inputs such as LGN-filtered retinal waves should result in cocircular lateral connectivity patterns prenatally, which would allow the network to perform rudimentary contour integration. The lateral connections would be further refined through learning from visual inputs, eventually resulting in adult performance. Although it might already be possible to verify this prediction experimentally, further computational simulations would allow making the predictions much more detailed. PGLISSOM could be trained prenatally with Gaussians resembling input that the developing V1 receives, and postnatally with natural inputs (Section 17.2.8). Its ability to form synchronized representations for contours could then be tested at different stages of development, resulting in specific predictions for biological experiments.

Such computational studies could potentially account for all of the known data on how V1 develops in early life, and identify specific gaps in our knowledge that can be addressed in further biological experiments.

17.2.4 Postnatal Internally Generated Patterns

When the V1 maps are constructed in two separate learning phases, prenatal and postnatal, the influence of the internally generated and environmentally driven stimuli can be clearly identified. Such a separation is a good model of spontaneous activity in the developing sensory areas, such as retinal waves, because the waves disappear at eye opening (Wong et al. 1993). But other activity, such as that during REM sleep, continues throughout development and adulthood (Callaway et al. 1987). These postnatal patterns suggest that pattern generation may also have a significant role beyond prenatal development.


Specifically, postnatal internally generated activity patterns may be interleaved with waking experience to ensure that postnatal development does not entirely overwrite the prenatal organization. Such postnatal patterns may explain why altered environments can only be learned partially (as found by Sengpiel et al. 1999), and why the animal spends so much time in REM sleep during the time when its neural structures are most plastic (Roffwarg et al. 1966). The postnatal patterns may help ensure that the visual system does not become too closely adapted to a particular environment (a phenomenon called “overtraining” in machine learning), which would limit its generality.

Such patterns would be needed only in systems that remain plastic in the adult, and they may provide a simple way to trade off between adaptability and genetically specified function in such systems. In future simulations, it should be possible to study how such interleaving interacts with experience. The results could be first validated with biological observations, such as those of Sengpiel et al. (1999), and then expanded to propose further experiments on how genetic bias is expressed in self-organizing systems.

17.2.5 Tilt Illusions

In Chapter 7, tilt aftereffects were shown to arise as interactions between subsequent visual patterns in the LISSOM model. Simultaneous inputs can also interact (as was demonstrated on a limited scale in Section 14.2.3), and cause distortions in perceived orientations. Such an effect, called the tilt illusion, is well documented psychophysically (Calvert and Harris 1988; Carpenter and Blakemore 1973; Gilbert and Wiesel 1990; O’Toole 1979; Smith and Over 1977; Wenderoth and Johnstone 1988; Westheimer 1990), but how it can arise from the two-dimensional spatial interactions in the cortex has not yet been demonstrated computationally.

In LISSOM, two stimuli should interact with each other as the lateral interactions settle, inhibiting neurons tuned to orientations between them. This effect should drive the two perceived orientations away from each other. Such an explanation was originally proposed by Carpenter and Blakemore (1973), and the principles have been demonstrated recently in an abstract model of orientation (Mundel, Dimitrov, and Cowan 1997).

With LISSOM, it should be possible to show how the tilt illusion depends on specific lateral connections, provided two extensions are made to the current simulations. First, because overlapping patterns could cause confounding effects, the inputs need to be separated spatially (as they are in psychophysical experiments). As a result, the radius of lateral inhibitory connections must be larger than that used in the tilt aftereffect simulations. Second, to self-organize such long connections, the training inputs would have to be correlated over a long range (as they are when the model is trained with natural images; Section 9.3.1). Spatially separated neurons will then develop lateral inhibitory connections, which causes the angle expansion. If it turns out that such connections would have to be longer than what can be simulated computationally, it may be possible to use shorter connections and more closely spaced test patterns by decoding the perceived orientations of overlapping lines using probabilistic methods (such as those of Zemel, Dayan, and Pouget 1998). In the extended model, the magnitude of the tilt illusion can be measured by computing the perceived orientations from each line alone, and comparing with the perceived orientation when both lines are presented at once.
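The decoding step could be approximated with a simple population-vector readout (a sketch, not the Zemel et al. method): each unit votes with its preferred orientation weighted by its activity, with angles doubled to respect the 180° periodicity of orientation:

```python
import numpy as np

def decode_orientation(preferred_deg, activities):
    """Population-vector decoding of perceived orientation. Orientation
    is a 180-degree-periodic variable, so angles are doubled before the
    activity-weighted circular average and halved afterward."""
    theta = np.deg2rad(2.0 * np.asarray(preferred_deg))
    a = np.asarray(activities, dtype=float)
    mean = np.arctan2((a * np.sin(theta)).sum(), (a * np.cos(theta)).sum())
    return np.rad2deg(mean) / 2.0 % 180.0

prefs = np.arange(0, 180, 15)   # unit orientation preferences in degrees
acts = np.exp(-((prefs - 45.0) ** 2) / (2 * 20.0 ** 2))   # activity bump at 45 deg
perceived = decode_orientation(prefs, acts)
```

The tilt illusion would then appear as a shift of the decoded angle away from the stimulus orientation when a second, nearby-oriented line is presented simultaneously.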

Alternatively, it may be possible to test tilt illusions more economically using a combined orientation and ocular dominance simulation. In humans, when a different pattern is presented to the same location in each eye, they interact just as do two patterns presented to separate locations in one (or both) eyes (Carpenter and Blakemore 1973). Thus, it should be possible to test tilt illusions already in the network of Section 5.6.2, without first having to self-organize a model with a longer inhibitory radius.

Indirect tilt illusions similar to the indirect tilt aftereffect have been found in humans, and it might be possible to model them in LISSOM as well. Such an effect would arise if weakly activated units were facilitated by units at distant orientations; such facilitation could be mediated by lateral connections whose effective sign depends on local contrast, as it would in the extension to LISSOM proposed in Section 16.1.4. Implementing such extensions and observing their effects constitutes a most interesting direction for future work.

17.2.6 Other Visual Aftereffects

Many visual aftereffects similar to the tilt aftereffect are known to exist in biological vision. LISSOM could be extended to gain insight into these effects as well.

In addition to orientation, aftereffects of motion, spatial frequency, size, position, curvature, and color have been documented in humans (Barlow 1990; Howard and Templeton 1966; Schrater, Knill, and Simoncelli 2001; Wolfe 1984). For instance, a movement aftereffect known as the waterfall illusion can be induced by prolonged viewing of a moving stimulus: Stationary stimuli appear to be moving in the opposite direction (Kohn and Movshon 2003). Recent work also suggests that high-level tasks such as face perception have similar aftereffects (Leopold, O’Toole, Vetter, and Blanz 2001; Webster and MacLin 1999; Zhao and Chubb 2001). In all of these cases, the cortex adapts to a long-lasting stimulus, changing the perception of subsequent stimuli.

Using a LISSOM model that includes maps for the relevant features, it should be possible to demonstrate aftereffects for each of these dimensions. In each case, the effects would occur through short-term adaptation in specific lateral connections between feature-selective cells. For instance, presenting a continuously moving image to the direction map of Section 5.5 should result in a realistic movement aftereffect. Presenting single faces to the face-selective network of Section 10.3 should result in face-specific aftereffects.

Analogous aftereffects have also been found for other modalities, such as hearing, touch, muscle positioning, and posture (Howard and Templeton 1966). For instance, hearing a sound in one location can influence the perceived location of later sounds. That is, after adaptation, sounds presented in nearby locations appear to be farther away than they actually are, and the effect peaks at a certain distance, much like the direct tilt aftereffect. If development in these areas can be modeled with LISSOM (as is expected), aftereffects should also occur in such models. In this way, LISSOM could be used to provide a simple, unified explanation for a variety of perceptual aftereffect phenomena across modalities.

17.2.7 Hyperacuity

Like models of illusions and aftereffects, a LISSOM model of hyperacuity can provide useful information about how primary visual cortex adapts.

Performance in hyperacuity tasks, such as deciding whether two lines of the same orientation are separated by a small perpendicular offset, improves with practice (Fahle, Edelman, and Poggio 1995; Weiss, Edelman, and Fahle 1993). The improvement occurs even without any feedback indicating whether each judgment is correct. The effect is specific to position and orientation, but transfers to some degree between eyes. This transfer is thought to indicate that at least some part of the effect arises in V1, because V1 is the first stage in the visual pathway where binocular inputs are combined.

Shiu and Pashler (1992) reported similar results for orientation discrimination tasks, although they found that the effect also depends on cognitive factors. Performance improved with practice only if the subjects were directed to pay attention to the orientation. However, the effect only occurred at the specific retinal location where the training examples had been presented, ruling out any deliberate cognitive strategy that the subject might have learned during the experiment. This result suggests that attentional mechanisms may activate circuitry in V1 (or other early visual areas) that regulates adaptation.

The LISSOM activation and learning mechanisms should be able to account for such basic psychophysical learning phenomena. The active units and lateral connections between them would adapt during repeated presentations. Over time, the area of the cortical map responding to those features would expand, allowing smaller differences to be represented and discriminated. However, the attentional effects might require an extension to high-level feedback, as discussed in Section 17.2.13. Such extended experiments might help clarify how and when adaptation occurs in early vision.

17.2.8 Grouping with Natural Input

Human contour integration performance depends on several stimulus dimensions in addition to orientation, including how random the background is, how jagged the path is, how far apart the contour elements are, and what spatial frequency, relative phase, color, and contrast the elements have (Field et al. 1993; Geisler et al. 2001; McIlhagga and Mullen 1996; Pettet et al. 1998). While human performance has been characterized in detail along most of these dimensions, their effect has not yet been analyzed computationally.