Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 2
Organizer(s): Caitlin Mullin, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Presenters: Wilma Bainbridge, Timothy Brady, Gabriel Kreiman, Nicole Rust, Morgan Barense, Nicholas Turk-Browne
Symposium Description
Classic accounts of how the brain sees and remembers largely describe vision and memory as distinct systems: information about the content of a scene is processed in the ventral visual stream (VVS), while our memories of scenes past are processed by independent structures in the medial temporal lobe (MTL). However, more recent work has begun to challenge this view by demonstrating interactions and dependencies between visual perception and memory at nearly every stage of the visual processing hierarchy. In this symposium, we will present a series of cutting-edge behavioral and neuroscience studies that showcase an array of cross-methodological approaches (psychophysics, fMRI, MEG, single-unit recording in monkeys, human ECoG) to establish that perception and memory are part of a shared, bidirectional, interactive network.
Our symposium will begin with Caitlin Mullin providing an overview of the contemporary problems associated with the traditional memory/perception framework. Next, Wilma Bainbridge will describe the factors that give rise to image memorability. Tim Brady will follow with a description of how the limits of encoding affect visual memory storage and retrieval. Gabriel Kreiman will focus on how our brains interpret visual images that we have never encountered before by drawing on memory systems. Nicole Rust will present evidence that a VVS brain area implicated in visual object recognition, monkey IT cortex, also reflects visual memory signals that are well aligned with behavioral reports of remembering and forgetting. Morgan Barense will describe the transformation from the neural coding of low-level perceptual features to high-level conceptual features in one brain area within the MTL, perirhinal cortex. Finally, Nick Turk-Browne will describe the role of the hippocampus in generating expectations that work in a top-down manner to influence our perceptions.
Our symposium will culminate with a discussion of how we can develop an integrative framework that provides a full account of the interactions between vision and memory, including extending state-of-the-art computational models of visual processing to also incorporate visual memory, as well as understanding how dysfunction in the interactions between vision and memory systems leads to memory disorders. The findings and resulting discussions presented in this symposium will be targeted broadly and will highlight important considerations for anyone, at any stage of their career (student, postdoc, or faculty), interested in the interactions between visual perception and memory.
Presentations
Memorability – predicting memory from visual information, and measuring visual information from memory
Speaker: Wilma Bainbridge, National Institute of Mental Health
While much of memory research focuses on the memory behavior of individual participants, little work has examined the visual attributes of the stimulus itself that influence future memory. In recent work, however, we have found surprising consistencies in the images people remember and forget, and that the stimulus ultimately plays a large part in predicting later memory behavior. This consistency in performance can be measured as a perceptual property of any stimulus, which we call memorability. Memorability can be easily measured for the stimuli of any experiment, and thus can be used to determine the degree to which previously found effects could be explained by the stimulus. I will present an example in which we find separate neural patterns sensitive to stimulus memorability and individual memory performance, through re-analysis of the data and stimuli from a previously published fMRI memory retrieval experiment (Rissman et al., 2010). I will also show how memorability can easily be taken into account when designing experiments that ask fundamental questions about memory, such as whether there are differences between the types of images people can recognize versus the types of images people can recall. I will present ways for experimenters to easily measure or control for memorability in their own experiments, as well as some new ways to quantify the visual information existing within a memory.
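To make the notion of a memorability score concrete, the following is a minimal sketch (not the speaker's actual pipeline) of how per-image memorability and its across-observer consistency might be computed from recognition data; the data layout and function names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def memorability_scores(hits):
    """Per-image memorability: mean hit rate across observers.

    hits: (n_observers, n_images) array; 1 = image correctly recognized
    on its repeat, 0 = missed, NaN = image not shown to that observer.
    """
    return np.nanmean(hits, axis=0)

def split_half_consistency(hits, n_splits=25, seed=0):
    """Consistency of memorability: Spearman correlation of image rankings
    between random halves of the observer pool, averaged over splits."""
    rng = np.random.default_rng(seed)
    n_obs = hits.shape[0]
    rhos = []
    for _ in range(n_splits):
        order = rng.permutation(n_obs)
        half1, half2 = order[: n_obs // 2], order[n_obs // 2 :]
        rho, _ = spearmanr(np.nanmean(hits[half1], axis=0),
                           np.nanmean(hits[half2], axis=0))
        rhos.append(rho)
    return float(np.mean(rhos))
```

A high split-half correlation indicates that which images are remembered is driven largely by the stimulus rather than by idiosyncrasies of individual observers.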
The impact of perceptual encoding on subsequent visual memory
Speaker: Timothy Brady, University of California San Diego
Memory systems are traditionally associated with the end stages of the visual processing sequence: attending to a perceived object allows for object recognition; information about the recognized object is stored in working memory; and eventually this information is encoded into an abstract long-term memory representation. In this talk, I will argue that memories are not truly abstracted away from perception: perceptual distinctions persist in memory, and our memories are shaped by the perceptual processing used to create them. In particular, I will present evidence suggesting that both visual working memory and visual long-term memory are limited by the quality and nature of their perceptual encoding, both in the precision of the memories that are formed and in their structure.
Rapid learning of meaningful image interpretation
Speaker: Gabriel Kreiman, Harvard University
A single event of visual exposure to new information may be sufficient for interpreting and remembering an image. This rapid form of visual learning stands in stark contrast with modern state-of-the-art deep convolutional networks for vision, which thrive at object classification only after supervised learning with a large number of training examples. The neural mechanisms subserving rapid visual learning remain largely unknown. I will discuss efforts toward unraveling the neural circuits involved in rapid learning of meaningful image interpretation in the human brain. We studied single-neuron responses in human epilepsy patients during instances of single-shot learning using Mooney images. Mooney images render objects in binary black and white in such a way that they can be difficult to recognize. After exposure to the corresponding grayscale image (and without any type of supervision), it becomes easier to recognize the objects in the original Mooney image. We will demonstrate a single-unit signature of rapid learning in the human medial temporal lobe and provide initial steps toward understanding the mechanisms by which top-down inputs can rapidly orchestrate plastic changes in neuronal circuitry.
Beyond identification: how your brain signals whether you’ve seen it before
Speaker: Nicole Rust, University of Pennsylvania
Our visual memory percepts of whether we have encountered specific objects or scenes before are hypothesized to manifest as decrements in neural responses in inferotemporal cortex (IT) with stimulus repetition. To evaluate this proposal, we recorded IT neural responses as two monkeys performed variants of a single-exposure visual memory task designed to measure the rates of forgetting with time and the robustness of visual memory to a stimulus parameter known to also impact IT firing rates, image contrast. We found that a strict interpretation of the repetition suppression hypothesis could not account for the monkeys' behavior; however, a weighted linear read-out of the IT population response accurately predicted forgetting rates, reaction time patterns, individual differences in task performance, and contrast invariance. Additionally, the linear weights were largely of the same sign and thus consistent with repetition suppression. These results suggest that behaviorally relevant memory information is in fact conveyed via repetition suppression in IT, but only within an IT subpopulation.
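As a rough illustration of what a weighted linear read-out of a neural population involves, here is a sketch under simplifying assumptions (simulated spike counts, a standard logistic-regression decoder, and illustrative variable names; this is not the analysis pipeline used in the study).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: a trial-by-neuron spike-count matrix and labels
# marking whether the image on each trial was novel (0) or repeated (1).
rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 100
rates = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
is_repeat = rng.integers(0, 2, size=n_trials)

# Weighted linear read-out: one learned weight per neuron plus a bias,
# evaluated with cross-validation.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, rates, is_repeat, cv=5).mean()

# If repetition suppression dominates, most weights should share a sign
# (lower firing rates predicting "repeated").
decoder.fit(rates, is_repeat)
signs = np.sign(decoder.coef_.ravel())
same_sign_frac = max(np.mean(signs > 0), np.mean(signs < 0))
print(f"decoding accuracy: {accuracy:.2f}, same-sign weights: {same_sign_frac:.2f}")
```

The fraction of same-sign weights is one simple way to ask whether the decoder's solution is consistent with a uniform repetition-suppression signal or instead relies on a mixed-sign subpopulation code.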
Understanding what we see: Integration of memory and perception in the ventral visual stream
Speaker: Morgan Barense, University of Toronto
A central assumption in most modern theories of memory is that memory and perception are functionally and anatomically segregated. For example, amnesia resulting from medial temporal lobe (MTL) lesions is traditionally considered to be a selective deficit in long-term declarative memory with no effect on perceptual processes. The work I will present supports a different view: memory and perception are inextricably intertwined, relying on shared neural representations and computational mechanisms. Specifically, we addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex and of conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in the perirhinal cortex of the MTL. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features.
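For readers less familiar with this kind of analysis, a minimal representational-similarity sketch of the comparison described above follows; the data are simulated and the ROI and model variables are placeholders, not the study's actual measurements.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix (condensed form):
    correlation distance between every pair of object patterns."""
    return pdist(features, metric="correlation")

# Hypothetical inputs: voxel patterns from one region (objects x voxels) and
# behavior-based feature models (objects x visual / conceptual dimensions).
rng = np.random.default_rng(0)
n_objects = 40
roi_patterns = rng.random((n_objects, 200))
visual_model = rng.random((n_objects, 10))
conceptual_model = rng.random((n_objects, 10))

neural_rdm = rdm(roi_patterns)
# How well does each behavior-based model explain the region's similarity structure?
visual_fit, _ = spearmanr(neural_rdm, rdm(visual_model))
conceptual_fit, _ = spearmanr(neural_rdm, rdm(conceptual_model))
print(f"visual fit: {visual_fit:.2f}, conceptual fit: {conceptual_fit:.2f}")
```

A region whose similarity structure correlates with both model RDMs, beyond what either predicts alone, would be a candidate for the kind of integrative visual-plus-conceptual coding attributed here to perirhinal cortex.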
Hippocampal contributions to visual learning
Speaker: Nicholas Turk-Browne, Yale University
Although the hippocampus is usually viewed as a dedicated memory system, its placement at the top of, and strong interactions with, the ventral visual pathway (and other sensory systems) suggest that it may play a role in perception. My lab has recently suggested one potential perceptual function of the hippocampus — to learn about regularities in the environment and then to generate expectations based on these regularities that get reinstated in visual cortex to influence processing. I will talk about several of our studies using high-resolution fMRI and multivariate methods to characterize such learning and prediction.