Symposium: Friday, May 17, 2024, 2:30 – 4:30 pm, Talk Room 1
Organizers: Lina Teichmann1, Chris Baker1; 1Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, USA
Presenters: Lina Teichmann, Iris I. A. Groen, Diana Dima, Tijl Grootswagers, Rachel Denison
The human visual system dynamically processes input over the course of a few hundred milliseconds to generate our perceptual experience. Capturing the dynamic aspects of the neural response is therefore imperative for understanding visual perception. By bringing together five speakers who use a diverse set of methods and approaches, the symposium aims to elucidate the temporal evolution of visual perception from different angles. All five speakers (four female) are early-career researchers based in Europe, Australia, the US, and Canada. Speakers will be allotted 18 minutes of presentation time plus 5 minutes of questions after each talk. In contrast to much current neuroimaging work, the symposium talks will focus on temporal dynamics rather than localization. Collectively, the work presented will demonstrate that the complex and dynamic nature of visual perception requires data that matches its temporal granularity. In the first talk, Lina Teichmann will present data from a large-scale study focusing on how individual colour-space geometries unfold in the human brain. Linking densely sampled MEG data with psychophysics, her work on colour provides a test case for studying the subjective nature of visual perception. Iris Groen will discuss findings from intracranial EEG studies characterizing neural responses across the visual hierarchy. Applying computational models, her work provides fundamental insights into how the visual response unfolds over time across visual cortex. Diana Dima will speak about how responses evoked by observed social interactions are processed in the brain. Using temporally resolved EEG data, her research shows how visual information is modulated from perception to cognition. Tijl Grootswagers will present studies investigating visual object processing. Using rapid series of object stimuli and linking EEG and behavioural data, his work shows the speed and efficiency with which the visual system makes sense of the things we see. To conclude, Rachel Denison will provide insights into how we employ attentional mechanisms to prioritize relevant visual input at the right time. Using MEG data, she will highlight how temporal attention affects the dynamics of evoked visual responses. Overall, the symposium aims to shed light on the dynamic nature of visual processing at all levels of the visual hierarchy. It will also be a chance to discuss the benefits and challenges of the different methodologies that allow us to gain comprehensive insight into the temporal aspects of visual perception.
Talk 1
The temporal dynamics of individual colour-space geometries in the human brain
Lina Teichmann1, Ka Chun Lam2, Danny Garside3, Amaia Benitez-Andonegui4, Sebastian Montesinos1, Francisco Pereira2, Bevil Conway3,5, Chris Baker1,5; 1Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, USA, 2Machine Learning Team, National Institute of Mental Health, Bethesda, USA, 3Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, USA, 4MEG Core Facility, National Institute of Mental Health, Bethesda, USA, 5equal contribution
We often assume that other people see the world as we do, since we can effectively communicate how things look. However, colour perception is one aspect of vision that varies widely among individuals, as shown by differences in colour discrimination, colour constancy, colour appearance, and colour naming. Further, the neural response to colour is dynamic and varies over time. Many attempts have been made to construct formal, uniform colour spaces that aim to capture universally valid similarity relationships, but there are discrepancies between these models and individual perception. Combining magnetoencephalography (MEG) and psychophysical data, we examined the extent to which these discrepancies can be accounted for by the geometry of the neural representation of colour and its evolution over time. In particular, we used a dense-sampling approach, collecting neural responses to hundreds of colours to reconstruct individual fine-grained colour-space geometries from neural signals with millisecond precision. In addition, we collected large-scale behavioural data to assess perceived similarity relationships between different colours for every participant. Using a computational modelling approach, we extracted similarity embeddings from the behavioural data to model the neural signal directly. We find that colour information is present in the neural signal from approximately 70 ms onwards, but that neural colour-space geometries unfold non-uniformly over time. These findings highlight the gap between theoretical colour spaces and colour perception and represent a novel avenue for gaining insight into the subjective nature of perception.
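To make the general analysis strategy concrete, here is a minimal sketch of one way behavioural similarity embeddings can be used to model time-resolved MEG signals. This is not the authors' code; the function name, data shapes, and the choice of cross-validated ridge regression with a pattern-correlation score are all illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): time-resolved encoding analysis
# predicting MEG sensor patterns from behavioural similarity embeddings.
# Assumed shapes: meg (n_trials, n_channels, n_times);
# embeddings (n_trials, n_dims), one embedding per colour shown on each trial.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def timecourse_encoding(meg, embeddings, n_splits=5):
    n_trials, n_channels, n_times = meg.shape
    scores = np.zeros(n_times)
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for t in range(n_times):
        y = meg[:, :, t]  # sensor pattern at time t, across trials
        fold_scores = []
        for train, test in cv.split(embeddings):
            model = RidgeCV(alphas=np.logspace(-2, 4, 10))
            model.fit(embeddings[train], y[train])
            pred = model.predict(embeddings[test])
            # correlate predicted and observed patterns, averaged over test trials
            r = [np.corrcoef(pred[i], y[test][i])[0, 1] for i in range(len(test))]
            fold_scores.append(np.mean(r))
        scores[t] = np.mean(fold_scores)
    return scores  # how well the embedding predicts the MEG signal over time
```

Timepoints where the score rises above chance indicate when the behaviourally derived colour-space geometry is reflected in the neural signal.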
Talk 2
Delayed divisive normalisation accounts for a wide range of temporal dynamics of neural responses in human visual cortex
Iris I. A. Groen1, Amber Brands1, Giovanni Piantoni2, Stephanie Montenegro3, Adeen Flinker3, Sasha Devore3, Orrin Devinsky3, Werner Doyle3, Patricia Dugan3, Daniel Friedman3, Nick Ramsey2, Natalia Petridou2, Jonathan Winawer4; 1Informatics Institute, University of Amsterdam, Amsterdam, Netherlands, 2University Medical Center Utrecht, Utrecht, Netherlands, 3New York University Grossman School of Medicine, New York, NY, USA, 4Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
Neural responses in visual cortex exhibit various complex, non-linear temporal dynamics. Even for simple static stimuli, responses decrease when a stimulus is prolonged in time (adaptation), are reduced for repeated stimuli (repetition suppression), and rise more slowly for low-contrast stimuli (slow dynamics). These dynamics also vary with location in the visual hierarchy (e.g., lower vs. higher visual areas) and with the type of stimulus (e.g., contrast patterns vs. real-world object, scene, and face categories). In this talk, I will present two intracranial EEG (iEEG) datasets in which we quantified and modelled the temporal dynamics of neural responses across visual cortex at millisecond resolution. Our work shows that many aspects of these dynamics are accurately captured by a delayed divisive normalisation model in which neural responses are normalised by their recent activation history. I will highlight how fitting this model to the iEEG data unifies multiple disparate temporal phenomena in a single computational framework, revealing systematic differences in the temporal dynamics of neural population responses across the human visual hierarchy. Overall, these findings suggest a pervasive role for history-dependent delayed divisive normalisation in shaping neural response dynamics across the cortical visual hierarchy.
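As an illustration of the model class, the following is a minimal sketch of a delayed divisive normalisation model, not the authors' implementation: a linear response is divided by a delayed, low-pass-filtered copy of itself, which yields transient onsets, adaptation to prolonged stimuli, and slower rises at low contrast. All time constants and exponents below are illustrative assumptions.

```python
# Minimal sketch of delayed divisive normalisation (DN):
#   r(t) = L(t)^n / (sigma^n + (L * h2)(t)^n)
# where L(t) is the stimulus passed through a linear impulse response h1,
# and h2 is a low-pass filter supplying the delayed normalisation signal.
import numpy as np

def dn_response(stimulus, dt=0.001, tau1=0.05, tau2=0.1, n=2.0, sigma=0.1):
    t = np.arange(0, 1.0, dt)
    h1 = (t / tau1) * np.exp(-t / tau1)   # gamma-like linear impulse response
    h1 /= h1.sum()
    h2 = np.exp(-t / tau2)                # exponential low-pass for the delay
    h2 /= h2.sum()
    linear = np.convolve(stimulus, h1)[: len(stimulus)]
    delayed = np.convolve(linear, h2)[: len(stimulus)]
    return linear**n / (sigma**n + delayed**n)

# Example: a 500 ms step stimulus yields a transient peak followed by decay
stim = np.zeros(1000)
stim[100:600] = 1.0
resp = dn_response(stim)
```

Because the normalisation pool reflects recent activation history, the same mechanism that produces the onset transient also produces repetition suppression and contrast-dependent rise times when the input is varied.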
Talk 3
How natural action perception unfolds in the brain
Diana Dima1, Yalda Mohsenzadeh1; 1Western University, London, ON, Canada
In a fraction of a second, humans can recognize a wide range of actions performed by others. Yet actions pose a uniquely complex challenge, bridging visual domains and varying along multiple perceptual and semantic features. What features does the brain extract when we view others' actions, and how are they processed over time? I will present electroencephalography (EEG) work using natural videos of human actions and rich feature sets to determine the temporal sequence of action perception in the brain. Our work shows that action features, from visual to semantic, are extracted along a temporal gradient, and that different processing stages can be dissociated with artificial neural network models. Furthermore, using a multimodal approach with video and text stimuli, we show how conceptual action representations emerge in the brain. Overall, these data reveal the rapid computations underlying action perception in natural settings. The talk will highlight how a temporally resolved approach to natural vision can uncover the neural computations linking perception and cognition.
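A standard way to trace such a temporal gradient is time-resolved representational similarity analysis (RSA). The sketch below is a generic illustration under assumed data shapes, not the authors' pipeline: at each timepoint, the EEG dissimilarity structure across action videos is correlated with model dissimilarity matrices built from visual or semantic feature sets.

```python
# Minimal sketch (hypothetical shapes) of time-resolved RSA: correlate the
# neural representational dissimilarity matrix (RDM) at each timepoint with
# model RDMs derived from different feature sets.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_timecourse(eeg, feature_rdms):
    # eeg: (n_conditions, n_channels, n_times), condition-averaged responses
    # feature_rdms: dict mapping feature-set name -> condensed model RDM
    n_cond, _, n_times = eeg.shape
    results = {name: np.zeros(n_times) for name in feature_rdms}
    for t in range(n_times):
        neural_rdm = pdist(eeg[:, :, t], metric="correlation")  # condensed RDM
        for name, model_rdm in feature_rdms.items():
            rho, _ = spearmanr(neural_rdm, model_rdm)
            results[name][t] = rho
    return results  # per-feature correlation timecourses
```

If visual-feature RDMs peak earlier than semantic-feature RDMs, the peak latencies directly expose the visual-to-semantic temporal gradient described above.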
Talk 4
Decoding rapid object representations
Tijl Grootswagers1, Amanda K. Robinson2; 1The MARCS Institute for Brain, Behaviour and Development, School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, NSW, Australia, 2Queensland Brain Institute, The University of Queensland, Brisbane, QLD, Australia
Humans recognise objects extremely quickly and reliably. Information about objects and object categories emerges in the human visual system within 200 milliseconds, even under difficult conditions such as occlusion or low visibility. These neural representations can be highly complex and multidimensional, despite relying on limited visual information. Understanding emerging object representations necessitates time-resolved neuroimaging methods with millisecond precision, such as EEG and MEG. Recent time-resolved neuroimaging work has used decoding methods in rapid serial visual presentation (RSVP) designs to show that information about multiple sequentially presented objects is robustly encoded by the brain. This talk will highlight recent research on the time course of object representations in rapid image sequences, focusing on three key findings: (1) object representations are highly automatic, emerging robustly even with fast-changing visual input; (2) emerging object representations are highly robust to changes in context and task, suggesting a strong reliance on feedforward processes; and (3) object representational structures are highly consistent across individuals, to the extent that neural representations are predictive of independent behavioural judgments on a variety of tasks. Together, these findings suggest that the first sweep of information through the visual system carries robust information that is readily available for read-out in behavioural decisions.
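For readers unfamiliar with the decoding methods referenced here, the following is a minimal generic sketch, with assumed data shapes rather than the authors' code: a linear classifier is trained and tested at each timepoint to trace when object-category information becomes available after each image in an RSVP stream.

```python
# Minimal sketch of time-resolved decoding: cross-validated classification
# accuracy at each timepoint indexes when category information emerges.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def decoding_timecourse(epochs, labels, n_splits=5):
    # epochs: (n_trials, n_channels, n_times), one epoch per image in the stream
    # labels: (n_trials,), e.g. the object category of each image
    n_times = epochs.shape[2]
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        X = epochs[:, :, t]  # sensor pattern at time t
        accuracy[t] = cross_val_score(
            LinearDiscriminantAnalysis(), X, labels, cv=n_splits
        ).mean()
    return accuracy  # above-chance values indicate decodable object information
```

In RSVP designs, the same analysis is run time-locked to each image in the sequence, revealing overlapping but robust representations of successively presented objects.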
Talk 5
Isolating neural mechanisms of voluntary temporal attention
Rachel Denison1,2, Karen Tian1,2, Jiating Zhu1, David Heeger2, Marisa Carrasco2; 1Boston University, Department of Psychological and Brain Sciences, USA, 2New York University, Department of Psychology and Center for Neural Science, USA
To handle the continuous influx of visual information, temporal attention prioritizes information at task-relevant moments in time. We first introduce a probabilistic framework that clarifies the conceptual distinction and formal relation between temporal attention, which is linked to timing relevance, and temporal expectation, which is linked to timing predictability. Next, we present two MEG studies in which we manipulated temporal attention while keeping expectation constant, allowing us to isolate neural mechanisms specific to voluntary temporal attention. Participants were cued to attend to one of two sequential grating targets with predictable timing, separated by a 300 ms SOA. The first study used time-resolved steady-state visual evoked responses (SSVER) to investigate how temporal attention modulates anticipatory visual activity. In the pre-target period, visual activity (measured with a background SSVER probe) ramped up steadily as the targets approached, reflecting temporal expectation. In addition, we found a low-frequency modulation of visual activity whose phase shifted by approximately 180 degrees depending on which target was attended. The second study used time-resolved decoding and source reconstruction to examine how temporal attention affects dynamic target representations. Temporal attention to the first target enhanced its orientation representation within a left fronto-cingulate region ~250 ms after stimulus onset, perhaps protecting it from interference from the second target within visual cortex. Together, these studies reveal how voluntary temporal attention flexibly shapes both pre-target periodic dynamics and the post-target routing of stimulus information to select a task-relevant stimulus within a sequence.
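To illustrate how a time-resolved SSVER measure can be obtained, here is a minimal sketch under assumed parameters; it is not the authors' analysis, and the probe frequency, filter settings, and data shapes are all illustrative. The amplitude envelope of the MEG signal at the background probe's flicker frequency is tracked over time to index anticipatory visual activity before each target.

```python
# Minimal sketch of a time-resolved SSVER amplitude measure: band-pass the
# signal around the probe frequency, then take the Hilbert envelope.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def ssver_amplitude(meg, sfreq=1000.0, probe_freq=20.0, bandwidth=2.0):
    # meg: (n_trials, n_times) signal from visually responsive sensors
    low = (probe_freq - bandwidth / 2) / (sfreq / 2)
    high = (probe_freq + bandwidth / 2) / (sfreq / 2)
    b, a = butter(4, [low, high], btype="band")
    filtered = filtfilt(b, a, meg, axis=-1)        # isolate the probe frequency
    envelope = np.abs(hilbert(filtered, axis=-1))  # instantaneous amplitude
    return envelope.mean(axis=0)                   # trial-averaged SSVER timecourse
```

Comparing such envelopes across attention conditions in the pre-target window is one way to visualize both the expectation-related ramp and the attention-related phase shift described above.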