Organizers: Susan Wardle1, Chris Baker1; 1National Institutes of Health
Presenters: Aina Puce, Frank Tong, Janneke Jehee, Justin Gardner, Marieke Mur
Over the past 20 years, neuroimaging methods have become increasingly popular for studying the neural mechanisms of vision in the human brain. To celebrate 20 years of VSS, this symposium will focus on the contribution that brain imaging techniques have made to our field of vision science. In the year 2000, we knew about retinotopy and category selectivity, but neuroimaging was still a young technique. Now, in 2020, the field is taking an increasingly computational approach, using neuroimaging data to answer questions about vision. The aim of this symposium is to provide both historical context and a forward focus for the role of neuroimaging in vision science. Our speakers are a diverse mix of researchers who pioneered the application of neuroimaging in the early days of the technique and those who have more recently pushed the field forward through creative applications of imaging. We have also selected speakers who use a range of methodological approaches to investigate both low-level and high-level vision, including computational and modeling techniques, multivariate pattern analysis and representational similarity analysis, and methods that aim to link brain to behavior.
The session will begin with a short 5-10 minute introductory talk by Susan Wardle to provide context for the symposium. Talks by the five selected speakers will be 20 minutes each, with 1-2 minutes available for clarification questions after each talk. The session will end with a longer 10-15 minute general discussion period.
In the first talk, Aina Puce will consider the contribution made by multiple neuroimaging techniques, such as fMRI and M/EEG, to understanding the social neuroscience of face perception, and how technological advances are continuing to shape the field. In the second talk, Frank Tong will discuss progress made in understanding top-down feedback in the visual system using neuroimaging, predictive coding models, and deep learning networks. In the third talk, Janneke Jehee will argue that a crucial next step in visual neuroimaging is to connect cortical activity to behavior, using perceptual decision-making as an illustrative example. In the fourth talk, Justin Gardner will discuss progress made in using neuroimaging to link cortical activity to human visual perception, with a focus on quantitative linking models. In the final talk, Marieke Mur will reflect on what fMRI has taught us about high-level visual processes and outline how understanding the temporal dynamics of object recognition will play an important role in the development of the next generation of computational models of human vision. Overall, the combination of a historical perspective and an overview of current trends in neuroimaging will lead to informed discussion about which future directions will prove most fruitful for answering fundamental questions in vision science.
Presentations
Technological advances are the scaffold for propelling science forward in social neuroscience
Aina Puce1; 1Indiana University
Over the last 20 years, neuroimaging techniques (e.g. EEG/MEG, fMRI) have been used to map neural activity within a core and extended brain network to study how we use social information from faces. By the 20th century's end, neuroimaging methods had identified the building blocks of this network, but how these parts came together to make a whole was unknown. In the 20 years since, technological advances in data acquisition and analysis have occurred in a number of spheres. First, network neuroscience has advanced our understanding of which brain regions functionally connect with one another on a regular basis. Second, improvements in white matter tract tracing have allowed putative underlying white matter pathways to be identified for some functional networks. Third, (non-)invasive brain stimulation has allowed the identification of some causal relationships between brain activity and behavior. Fourth, technological developments in portable EEG and MEG systems have propelled social neuroscience out of the laboratory and into the ecologically valid wide world, changing activation task design as well as data analysis. Potential advantages of these 'wild type' approaches include the increased signal-to-noise ratio provided by a live, interactive 3D visual stimulus (e.g. another human being) instead of an isolated static face on a computer monitor. Fifth, work with machine learning algorithms has begun to differentiate brain from non-brain activity in these datasets. Finally, we are 'putting the brain back into the body': recordings of brain activity are made in conjunction with physiological signals including EKG, EMG, pupil dilation, and eye position.
Understanding the functional roles of top-down feedback in the visual system
Frank Tong1; 1Vanderbilt University
Over the last 20 years, neuroimaging techniques have shed light on the modulatory nature of top-down feedback signals in the visual system. What is the functional role of top-down feedback, and might there be multiple types of feedback, implemented through both automatic and controlled processes? Studies of voluntary covert attention have demonstrated the flexible nature of attentional templates, which can be tuned to particular spatial locations, visual features, or the structure of more complex objects. Although top-down feedback is typically attributed to visual attention, there is growing evidence that multiple forms of feedback exist. Studies of visual imagery and working memory indicate the flexible nature of top-down feedback from frontoparietal areas to early visual areas for maintaining and manipulating visual information about stimuli that are no longer in view. Theories of predictive coding propose that higher visual areas encode feedforward signals according to learned higher-order patterns, and that any unexplained components are fed back as residual error signals to lower visual areas for further processing. These feedback error signals may serve to define an image region as more salient, figural, or stronger in apparent contrast. Here, I will discuss both the theory and the supporting evidence for multiple forms of top-down feedback, and consider how deep learning networks can be used to evaluate the utility of predictive coding models for understanding vision. I will go on to discuss what important questions remain to be addressed regarding the nature of feedback in the visual system.
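To make the residual-error idea above concrete, here is a minimal sketch, not the model discussed in the talk, in which a higher area iteratively refines a latent estimate until its top-down prediction explains as much of the lower-area activity as it can; all dimensions, weights, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictive coding loop: a higher area holds a latent estimate z,
# predicts lower-area activity x through learned generative weights W,
# and iteratively updates z to reduce the residual (prediction error).
n_lower, n_higher = 64, 8
W = rng.normal(size=(n_lower, n_higher)) / np.sqrt(n_lower)  # illustrative weights
x = rng.normal(size=n_lower)   # lower-area activity to be explained
z = np.zeros(n_higher)         # higher-area latent estimate

lr = 0.1
for _ in range(200):
    prediction = W @ z         # top-down prediction of lower-area activity
    error = x - prediction     # residual error signal
    z += lr * (W.T @ error)    # gradient step that reduces the error

# Whatever z cannot explain remains as the unexplained residual.
print("residual error norm:", np.linalg.norm(x - W @ z))
```

Because the higher area has fewer units than the lower area, some residual always remains after convergence; in predictive coding terms, this is the unexplained component passed on for further processing.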
Using neuroimaging to better understand behavior
Janneke Jehee1,2; 1Donders Institute for Brain, Cognition and Behavior, 2Radboud University Nijmegen, Nijmegen, Netherlands
Over the past 20 years, functional MRI has become an important tool in the methodological arsenal of the vision scientist. The technique has led to many remarkable discoveries, ranging from the identification of human brain areas involved in face perception to the decoding of stimulus orientation from early visual activity. While providing invaluable insights, most of the work to date has sought to link visual stimuli to a cortical response, with far less attention paid to how such cortical stimulus representations might give rise to behavior. I will argue that a crucial next step in visual neuroimaging is to connect cortical activity to behavior, and will illustrate this using our recent work on perceptual decision-making.
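The stimulus-to-response mapping mentioned above is often probed by decoding stimulus orientation from multivoxel patterns. The following is a minimal, self-contained sketch of such an analysis on simulated data, not the speaker's pipeline; the voxel count and the per-voxel bias model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy orientation decoding: classify two grating orientations from
# simulated multivoxel patterns, in the spirit of early MVPA studies.
rng = np.random.default_rng(3)
n_trials, n_voxels = 120, 100
orientation = np.repeat([0, 1], n_trials // 2)   # e.g. 45 vs 135 degrees

# Each voxel carries a weak, idiosyncratic orientation bias plus noise.
bias = rng.normal(scale=0.3, size=n_voxels)
patterns = rng.normal(size=(n_trials, n_voxels)) + np.outer(orientation, bias)

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, patterns, orientation, cv=5)
print("cross-validated decoding accuracy:", accuracy.mean())
```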
Using neuroimaging to link cortical activity to human visual perception
Justin Gardner1; 1Stanford University
Over the last 20 years, human neuroimaging, in particular BOLD imaging, has become the dominant technique for determining visual field representations and measuring selectivity to various visual stimuli in the human cortex. Indeed, BOLD imaging has proven decisive in settling long-standing disputes for which other techniques, such as electrophysiological recordings of single neurons, provided only equivocal evidence; for example, by showing that cognitive influences due to attention or perceptual state can be readily measured in so-called early sensory areas. Part of this success is due to the ability to make precise behavioral measurements in humans through psychophysics, which can quantitatively measure such cognitive effects. By leveraging these quantitative behavioral measurements together with concurrent BOLD measurements of cortical activity, we can provide answers to a central question of visual neuroscience: what is the link between cortical activity and perceptual behavior? To make continued progress toward answering this question in the next 20 years, we must turn to quantitative linking models that formalize hypothesized relationships between cortical activity and perceptual behavior. Such quantitative linking models are falsifiable hypotheses whose success or failure can be determined by their ability to quantitatively account for behavioral and neuroimaging measurements. These linking models will allow us to determine the cortical mechanisms that underlie visual perception and account for cognitive influences such as attention on perceptual behavior.
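As a concrete illustration of what such a falsifiable linking model might look like, here is a minimal sketch on simulated data, not the speaker's model: a Naka-Rushton contrast-response function, a common description of cortical contrast responses, is linked to discrimination accuracy through a signal-detection rule and fit by maximum likelihood. All parameter names and values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy linking model: cortical response follows a Naka-Rushton contrast-
# response function; the probability of a correct discrimination is the
# normal CDF of the response difference (a signal-detection link).
def cortical_response(contrast, rmax, c50, n=2.0):
    return rmax * contrast**n / (contrast**n + c50**n)

def p_correct(c_test, c_ped, c50, sigma, rmax=1.0):
    sigma = abs(sigma)  # guard against negative optimizer steps
    dr = cortical_response(c_test, rmax, c50) - cortical_response(c_ped, rmax, c50)
    return norm.cdf(dr / sigma)

# Simulated discrimination data: 10% contrast pedestal, varying increments.
rng = np.random.default_rng(1)
c_ped, n_trials = 0.10, 100
c_test = np.array([0.11, 0.13, 0.16, 0.20, 0.30])
n_right = rng.binomial(n_trials, p_correct(c_test, c_ped, c50=0.15, sigma=0.05))

# Maximum-likelihood fit (rmax fixed at 1 to keep the toy fit identifiable).
def neg_log_lik(params):
    p = np.clip(p_correct(c_test, c_ped, *params), 1e-6, 1 - 1e-6)
    return -np.sum(n_right * np.log(p) + (n_trials - n_right) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.2, 0.1], method="Nelder-Mead")
print("fitted (c50, sigma):", fit.x)
```

In a full analysis, the contrast-response parameters would also be constrained by the BOLD measurements themselves, so a single shared parameter set must account for both data types; failure to do so is what would falsify the linking hypothesis.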
High-level vision: from category selectivity to representational geometry
Marieke Mur1; 1Western University, London ON, Canada
Over the last two decades, functional magnetic resonance imaging (fMRI) has provided important insights into the organization and function of the human visual system. In this talk, I will reflect on what fMRI has taught us about high-level visual processes, with an emphasis on object recognition. The discovery of object-selective and category-selective regions in high-level visual cortex suggested that the visual system contains functional modules specialized for processing behaviourally relevant object categories. Subsequent studies, however, showed that distributed patterns of activity across high-level visual cortex also contain category information. These findings challenged the idea of category-selective modules, suggesting that these regions may instead be clusters in a continuous feature map. Consistent with this organizational framework, object representations in high-level visual cortex are at once categorical and continuous: the representational code emphasizes category divisions of longstanding evolutionary relevance while still distinguishing individual images. This body of work provides important insights into the nature of high-level visual representations, but it leaves open how these representations are dynamically computed from images. In recent years, deep neural networks have begun to provide a computationally explicit account of how the ventral visual stream may transform images into meaningful representations. I will close with a discussion of how neuroimaging data can benefit the development of the next generation of computational models of human vision, and how understanding the temporal dynamics of object recognition will play an important role in this endeavor.
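The representational geometry described above is commonly quantified with representational similarity analysis (RSA). Below is a minimal sketch on simulated data, not the speaker's analysis, comparing a brain RDM with a hypothetical animate/inanimate category model; all dimensions and the strength of the category signal are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy representational similarity analysis (RSA): build representational
# dissimilarity matrices (RDMs) from response patterns and compare the
# brain RDM against a candidate categorical model RDM.
rng = np.random.default_rng(2)
n_images, n_voxels = 40, 200
labels = np.repeat([0.0, 1.0], n_images // 2)    # e.g. animate vs inanimate

# Simulated patterns: shared category signal plus image-specific noise,
# so the code is categorical yet still distinguishes individual images.
patterns = rng.normal(size=(n_images, n_voxels))
patterns[labels == 1] += rng.normal(scale=0.5, size=n_voxels)

brain_rdm = pdist(patterns, metric="correlation")     # 1 - r per image pair
model_rdm = pdist(labels[:, None], metric="hamming")  # 1 iff categories differ

rho, p = spearmanr(brain_rdm, model_rdm)
print(f"brain-model RDM correlation: rho = {rho:.2f} (p = {p:.3g})")
```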