Time/Room: Friday, May 10, 1:00 – 3:00 pm, Royal 4-5
Organizer: Uri Polat, Tel-Aviv University
Presenters: Charles Gilbert, Uri Polat, Rudiger von der Heydt, Pieter Roelfsema, Dennis Levi, Dov Sagi
Symposium Description
According to classical models of spatial vision, the output of neurons in the early visual cortex is determined by the local features of the stimuli and integrated at later stages of processing (feedforward). However, experimental results obtained during the last two decades show contextual modulation: local perceptual effects are modulated by global image properties. The receptive field properties of cortical neurons are subject to learning and to top-down influences of attention, expectation and perceptual task. Even at early cortical stages of visual processing, neurons are subject to contextual influences that play a role in intermediate-level vision, contour integration and surface segmentation, which enable them to integrate information over large parts of the visual field. These influences are not fixed but are subject to experience, enabling neurons to encode learned information. The dynamic properties of contextual modulation are mediated by an interaction between reentrant signals to the cortex and intrinsic cortical connections, changing effective connectivity within the cortical network. The evolving view of the nature of the receptive field includes contextual influences that change in the long term as a result of perceptual learning and in the short term as a result of a changing behavioral context. In the symposium we will present anatomical, physiological and psychophysical data showing contextual effects in lateral interactions, grouping, border ownership, crowding and perceptual learning.
Presentations
Contextual modulation in the visual cortex
Speaker: Charles Gilbert, The Rockefeller University, New York
Vision is an active process. The receptive field properties of cortical neurons are subject to learning and to top-down influences of attention, expectation and perceptual task. Even at early cortical stages of visual processing, neurons are subject to contextual influences that play a role in intermediate-level vision, contour integration and surface segmentation, which enable them to integrate information over large parts of the visual field. These influences are not fixed but are subject to experience, enabling neurons to encode learned information. Even in the adult visual cortex there is considerable plasticity, with cortical circuits undergoing exuberant changes in axonal arbors following manipulation of sensory experience. The integrative properties of cortical neurons, the contextual influences that confer selectivity to complex stimuli, are mediated in part by a plexus of long-range horizontal connections that enable neurons to integrate information over an area of visual cortex representing large parts of the visual field. These connections are the substrate for an association field, a set of interactions playing a role in contour integration and saliency. The association field is not fixed; rather, neurons can select components of this field to express different functional properties. As a consequence, neurons can be thought of as adaptive processors, changing their function according to behavioral context, and their responses reflect the demands of the perceptual task being performed. The top-down signal facilitates our ability to segment the visual scene despite its complex arrangement of objects and backgrounds, and it plays a role in the encoding and recall of learned information. The resulting feedforward signals carried by neurons convey different meanings according to the behavioral context. We propose that these dynamic properties are mediated by an interaction between reentrant signals to the cortex and intrinsic cortical connections, changing effective connectivity within the cortical network. The evolving view of the nature of the receptive field includes contextual influences that change in the long term as a result of perceptual learning and in the short term as a result of a changing behavioral context.
Spatial and temporal rules for contextual modulations
Speaker: Uri Polat, Tel-Aviv University, Tel-Aviv, Israel
Most contextual modulations, such as center-surround and crowding, exhibit a suppressive effect. In contrast, the collinear configuration is a unique case of contextual modulation in which the effect can be either facilitative or suppressive, depending on the context. Physiological and psychophysical studies have revealed several spatial and temporal rules that determine the modulation effect: 1) spatial configuration: collinear configurations can be either facilitative or suppressive, whereas non-collinear configurations may be suppressive; 2) separation between the elements: suppression for close separations that coincide with the size of the receptive field and facilitation outside the receptive field; 3) activity dependence: facilitation for low contrast (near the threshold) and suppression for high contrast; 4) temporal properties: suppression is fast and transient, whereas facilitation is delayed and sustained; 5) attention may enhance the facilitation; 6) slow modulation: perceptual learning can increase the facilitatory effect over a time scale of several days; 7) fovea and periphery: similar rules apply when stimuli are spatially scaled to the size of the receptive field. It is believed that the role of collinear facilitation is to enhance contour integration and object segmentation, whereas center-surround modulation is important for pop-out. Our recent studies suggest that these rules can serve as a unified model for spatial and temporal masking as well as for crowding.
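To make the logic of these rules concrete, a minimal sketch follows. It is an editorial illustration rather than a model from the talk: the function name, the contrast-threshold value, and the use of receptive-field widths as the separation unit are all assumptions, and the code reproduces only the qualitative sign of modulation implied by rules 1-3.

```python
def contextual_modulation(collinear: bool,
                          separation_rf_units: float,
                          target_contrast: float,
                          contrast_threshold: float = 0.05) -> str:
    """Toy summary of rules 1-3: return the qualitative sign of
    flanker-induced modulation of target detectability."""
    if not collinear:
        # Rule 1: non-collinear configurations may be suppressive.
        return "suppression"
    if separation_rf_units <= 1.0:
        # Rule 2: flankers falling within the receptive field suppress.
        return "suppression"
    if target_contrast <= contrast_threshold:
        # Rule 3: near-threshold targets are facilitated by collinear flankers.
        return "facilitation"
    # High-contrast targets: collinear flankers tend to suppress.
    return "suppression"

# Example: collinear flankers at 3 receptive-field widths, near-threshold target.
print(contextual_modulation(collinear=True, separation_rf_units=3.0,
                            target_contrast=0.03))  # -> facilitation
```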
Border ownership and context
Speaker: Rudiger von der Heydt, The Johns Hopkins University, Baltimore, Maryland, USA
A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world, and producing a structure that enables object-based attention and tracking of object identity. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields, but in addition, their responses are modulated (enhanced or suppressed) depending on the location of a ‘figure’ relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the classical receptive field. This talk will review evidence indicating that border ownership selectivity reflects mechanisms of object definition. The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Some of these findings can be explained by assuming that grouping circuits detect ‘objectness’ according to simple rules, and, via recurrent projections, enhance the low-level feature signals representing the object. This might be the mechanism of object-based attention. Additional circuits may provide persistence and remapping.
Visual cortical mechanisms for perceptual grouping
Speaker: Pieter Roelfsema, Netherlands Institute for Neuroscience, Amsterdam, the Netherlands
A fundamental task of vision is to group the image elements that belong to one object and to segregate them from other objects and the background. I will discuss a new conceptual framework that explains how the binding problem is solved by the visual cortex. According to this framework, two mechanisms are responsible for binding: base-grouping and incremental grouping. Base-groupings are coded by single neurons tuned to multiple features, like the combination of a color and an orientation. They are computed rapidly because they reflect the selectivity of feedforward connections that propagate information from lower to higher areas of the visual cortex. However, not all conceivable feature combinations are coded by dedicated neurons. Therefore, a second, flexible incremental grouping mechanism is required. Incremental grouping relies on horizontal connections between neurons in the same area and feedback connections that propagate information from higher to lower areas. These connections spread an enhanced response (not synchrony) to all the neurons that code image elements that belong to the same perceptual object. This response enhancement acts as a label that tags those neurons that respond to image elements to be bound in perception. The enhancement of neuronal activity during incremental grouping has a correlate in psychology because object-based attention is directed to the features labeled with the enhanced neuronal response. Our recent results demonstrate that feedforward and feedback processing rely on different receptors for glutamate and on processing in different cortical layers.
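The incremental grouping mechanism can be caricatured as label spreading over a graph of image elements. The sketch below is a hypothetical illustration, not the authors' model: the `links` dictionary stands in for the horizontal and feedback connections carrying grouping cues, and the spreading set stands in for the enhanced-response label that tags all elements of the attended object.

```python
from collections import deque

def incremental_grouping(links: dict[int, set[int]], seed: int) -> set[int]:
    """Toy label-spreading sketch: starting from an attended element,
    an 'enhanced response' label spreads through grouping links until
    every element of the same object carries the label."""
    labeled = {seed}
    frontier = deque([seed])
    while frontier:
        element = frontier.popleft()
        for neighbour in links.get(element, set()):
            if neighbour not in labeled:
                labeled.add(neighbour)   # response enhancement acts as the binding label
                frontier.append(neighbour)
    return labeled

# Elements 0-3 are linked into one contour/object; element 4 belongs to the background.
links = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}, 4: set()}
print(incremental_grouping(links, seed=0))  # -> {0, 1, 2, 3}
```

The spreading is deliberately incremental: the label reaches distant elements only through chains of local links, which is why this kind of grouping is slower and more flexible than the feedforward base-groupings.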
Crowding in context
Speaker: Dennis Levi, UC Berkeley, Berkeley, CA, USA
In peripheral vision, objects that can be readily recognized in isolation become unrecognizable in clutter. This intriguing phenomenon is known as visual crowding. Crowding represents an essential bottleneck, setting limits on object perception, eye movements, visual search, reading and perhaps other functions in peripheral, amblyopic and developing vision (Whitney & Levi, 2011). It is generally defined as the deleterious influence of nearby contours on visual discrimination, but the effects of crowding go well beyond impaired discrimination. Crowding impairs the ability to recognize and respond appropriately to objects in clutter. Thus, studying crowding may lead to a better understanding of the processes involved in object recognition. Crowding also has important clinical implications for patients with macular degeneration, amblyopia and dyslexia. Crowding is strongly dependent on context. The focus of this talk will be on trying to put crowding into context with other visual phenomena.
Perceptual learning in context
Speaker: Dov Sagi, The Weizmann Institute of Science, Rehovot, Israel
Studies of perceptual learning show a large diversity of effects, with learning rate and specificity varying across stimuli and experimental conditions. Most notably, there is an initial fast phase of within-session (online) learning followed by a slower phase, taking place over days, which is highly specific to basic image features. Our results show that the latter phase is highly sensitive to contextual modulation. While thresholds for contrast discrimination of a single Gabor patch are relatively stable and unaffected by training, the addition of close flankers induces dramatic improvements in thresholds, indicating increased gain of the contrast response function (“context-enabled learning”). Cross-orientation masking effects can be practically eliminated by practice. In texture discrimination, learning was found to interact with slowly evolving adaptation effects that reduce the benefits of learning. These deteriorative effects can be eliminated by cross-orientation interactions, which were found to counteract sensory adaptation. The experimental results are explained by plasticity within local networks of early vision assuming excitatory-inhibitory interactions, where context modulates the balance between excitation and inhibition. We suggest that reduced inhibition increases learning efficiency, making learning faster and more generalizable. The specificity of learning seems to be the result of experience-dependent local contextual interactions.
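One toy way to picture how reduced inhibition could raise the gain of the contrast response function is a divisive-gain sketch like the one below. This is an editorial assumption for illustration only; the functional form, parameter names and values are not taken from the abstract. Lowering the inhibitory weight steepens the response to contrast, in the spirit of the increased gain described for context-enabled learning.

```python
import numpy as np

def contrast_response(c, excitation_gain=1.0, inhibition=1.0, n=2.0, sigma=0.1):
    """Toy divisive-gain contrast response:
    r(c) = g * c^n / (sigma^n + w_i * c^n).
    Reducing `inhibition` (here a stand-in for context-dependent
    inhibitory weight) increases the gain of the response."""
    c = np.asarray(c, dtype=float)
    return excitation_gain * c**n / (sigma**n + inhibition * c**n)

contrasts = np.array([0.02, 0.05, 0.1, 0.2, 0.4])
print(contrast_response(contrasts, inhibition=1.0))  # baseline balance
print(contrast_response(contrasts, inhibition=0.5))  # reduced inhibition: higher gain
```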