Visual memory: General
Talk Session: Saturday, May 17, 2025, 2:30 – 4:15 pm, Talk Room 1
Talk 1, 2:30 pm
Information integration in working memory
Qihang Zhou1, Jinglan Wu1, Tengfei Wang1, Yuzheng Hu1, Fuying Zhu1, Hui Zhou1, Mowei Shen1, Zaifeng Gao1; 1Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
Understanding how the cognitive system integrates discrete sensory inputs into coherent representations is a central question in psychology. Working memory (WM) plays a key role in this process, yet current WM models (e.g., Baddeley, 2012; Cowan, 2001) primarily focus on storage buffers and executive functions, often overlooking information integration. We propose a novel component, the integration buffer, which is specifically responsible for integrating elemental information into unified representations through a compression mechanism. We conducted five studies. Studies 1-3 provided evidence for the existence of the integration buffer. In Study 1, 176 participants completed two integration tasks alongside ten other WM tasks representing the established components. Confirmatory factor analysis indicated that WM integration could not be attributed to any existing component (e.g., visuospatial sketchpad, central executive, or episodic buffer), supporting the distinctiveness of the integration buffer. Study 2 demonstrated that cognitive load manipulations on the visuospatial sketchpad, central executive, and episodic buffer had no impact on WM integration, further supporting the buffer’s independence. In Study 3, using multi-modal magnetic resonance imaging (MRI), we provided complementary evidence from brain activity patterns. Studies 4 and 5 further clarified the functional role of the integration buffer through eye tracking and event-related potentials. We found that participants directed more attention to the center of the integrated representation (a more centralized fixation pattern; Study 4) and had a lower memory load (reduced contralateral delay activity; Study 5) during WM maintenance compared to memorizing discrete items, suggesting a compressive integration process. Together, these findings support the existence of a distinct integration buffer, which integrates discrete elements into a coherent representation via a compression mechanism.
Talk 2, 2:45 pm
Beyond Swaps: How Working Memory Combines Gist and Item Information
Chattarin Poungtubtim1, Chaipat Chunharas1,2, Timothy Brady3; 1Cognitive Clinical & Computational Neuroscience Lab, Faculty of Medicine, Chulalongkorn University, 2Chula Neuroscience Center, King Chulalongkorn Memorial Hospital, 3Department of Psychology, UCSD
Working memory representations of multiple items are not encoded independently, but interact with each other across different hierarchical levels of representation. For example, individual item representations can be biased toward gist-level representations (Chunharas & Brady, 2023). Yet the potential benefits of these interactions for memory performance remain unclear. While some researchers hypothesize that combining gist and item-based representations leads to more precise memory with increased gist-based bias, quantitative evidence supporting this hypothesis is lacking. To address this gap, we developed a computational model that implements a weighted summation of the familiarity signals that contribute to gist and single-item representations. Our model predicts the full shape of error distributions in tasks where people use gist and item memories. This modeling revealed that the previously observed precision-bias trade-off can be explained by increased weighting of gist representations when stimuli are highly similar. The model further predicted that within-trial stimuli would show variable precision depending on their similarity to the gist-level representation. We validated these predictions by fitting our model to existing experimental data (Utochkin & Brady, 2020). As predicted, we found an inverse relationship between gist representation weighting and stimulus similarity. Critically, reanalysis of the data confirmed that stimuli more similar to the gist showed better precision than dissimilar items within the same trial, aligning with our model's predictions. To determine whether chunking occurs through representation combination or swap-like replacement of individual items, we compared our model against two alternatives: a gist-only model and an individual item-only model. Our model outperformed both alternatives, suggesting that chunking in working memory operates through adaptive combination of single-item and gist-level representations rather than through simple replacement.
Overall, the model provided a unified account of how working memory balances precision and compression through dynamic weighting when combining representations. Furthermore, the model demonstrated a connection between chunking and ensemble perception.
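The weighted-summation idea described in this abstract can be sketched in a few lines. This is a minimal illustration only, not the authors' fitted model: the von Mises shapes, the concentration values, and the gist weight `w_gist` are all assumptions chosen for demonstration.

```python
import numpy as np

def combined_familiarity(theta, item_mu, gist_mu, w_gist,
                         kappa_item=8.0, kappa_gist=2.0):
    # von Mises-shaped familiarity signal for the individual item
    f_item = np.exp(kappa_item * (np.cos(theta - item_mu) - 1))
    # broader von Mises-shaped familiarity signal for the gist
    f_gist = np.exp(kappa_gist * (np.cos(theta - gist_mu) - 1))
    # weighted summation: a higher w_gist pulls the response toward the gist
    return (1 - w_gist) * f_item + w_gist * f_gist

theta = np.linspace(-np.pi, np.pi, 721)  # response axis (radians)
signal = combined_familiarity(theta, item_mu=0.0, gist_mu=1.0, w_gist=0.4)
peak = theta[np.argmax(signal)]          # shifted away from 0, toward the gist
```

With these illustrative settings the peak response lands between the item value (0) and the gist value (1), reproducing the gist-ward bias the model is meant to capture.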
Talk 3, 3:00 pm
The spatial, categorical, and verbal representations underlying visual working memory in the human brain
Thomas Christophel1,2, Andreea-Maria Gui1,2, Carsten Allefeld3, Vivien Chopurian1,2, Joana Seabra1,2; 1Humboldt Universität zu Berlin, 2Berlin Center of Advanced Neuroimaging, 3City, University of London
The division between visual and non-visual storage is foundational to the conception of visual working memory. Recent work, however, suggests that working memory relies on multiple concurrently held representations across multiple cortical regions. These regions are believed to enact a division of labor where some regions harbor near-veridical representations of memorized stimuli while other representations capture the stimulus in abstract form. Here, we demarcate this division of labor using fMRI and multivariate encoding modelling. In a large sample (N = 40), we measure patterned BOLD activity for cortical representations of memorized orientations, spatial locations, and words that can be used to describe orientations (e.g., “vertical”). We identify markers in patterned cortical activity that distinguish the spatial, categorical, and verbal representations underlying visual working memory. For spatial representations, we find cross-classification between location and orientation working memory explaining a substantial part of stimulus-specific activity even in visual cortical regions. These spatial representations are subject to retinotopic distortions that are most pronounced in anterior regions and later in the delay. Categorical and veridical representations appear to coexist independently in anterior cortical regions, while in visual regions, veridical, sensory-like encoding models outperform categorical encoding models. Finally, we find cross-classification between orientation representations and spatial language, showing that neural activity during visual working memory in part mimics activity during language processing. In this way, we demonstrate the unique contribution of a diverse set of cortical regions to visual working memory storage.
This work was supported by DFG Emmy Noether Research Group Grant CH 1674/2-
Talk 4, 3:15 pm
Visual Working Memory Can Represent Items Before They Appear
Reut Peled1, Roy Luria1,2; 1School of Psychological Science, Tel Aviv University, 2Sagol School of Neuroscience, Tel Aviv University
One of our main characteristics as humans is to generate predictions. Still, the mechanisms supporting this ability remain poorly understood. This study demonstrates that Visual Working Memory (VWM) plays a crucial role in generating predictions by creating future-representations: actively representing stimuli even before they appear. Across four EEG experiments (N=90), participants performed a change-detection task involving moving objects. In the predictable condition, objects moved toward each other, forming converged objects, while in the control conditions, objects moved independently. We monitored the contralateral delay activity (CDA), a neural marker of VWM’s sensitivity to the amount of visual information. Our key evidence for future-representations in VWM was an increased CDA in the predictable condition during the objects’ movement, before the converged objects were perceived. Control experiments ruled out alternative explanations for the increased CDA, such as trajectory anticipation or representing the converging event itself. We used machine-learning analyses to provide evidence that the converged objects can be decoded from the EEG signal during the movement, before the objects converged. First, a supervised SVM decoder classified EEG activity from two perceptually identical conditions: in both, the objects moved toward each other, but in one condition the objects converged, while in the other they crossed and continued moving independently (“crossing”). Although the two conditions were perceptually identical until the meeting event, and both meeting outcomes were predictable, classification accuracy exceeded 70%, suggesting participants represented additional information before the objects converged.
Second, we used an unsupervised clustering algorithm and demonstrated that the converging condition was classified as more closely resembling an already-converged-items condition than the crossing condition (with ~70% accuracy), even during the movement before the meeting, when the crossing condition was perceptually identical to the merging condition. These findings provide strong evidence that VWM generates future-representations as part of predictive cognition.
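The supervised decoding step described above can be illustrated schematically. The sketch below uses simulated trial-wise "EEG features" and a simple nearest-centroid rule as a stand-in for the authors' SVM; the data, feature dimensionality, and effect size are all invented for demonstration and are not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_features = 50, 50, 64  # e.g., 64 channels at one time bin
# simulate trial-wise features for two conditions that differ only in the
# (future) outcome: objects that will converge vs. objects that will cross
X_conv = rng.normal(0.3, 1.0, (n_train + n_test, n_features))    # "converging"
X_cross = rng.normal(-0.3, 1.0, (n_train + n_test, n_features))  # "crossing"

# class centroids estimated from the training half of the trials
mu_conv = X_conv[:n_train].mean(axis=0)
mu_cross = X_cross[:n_train].mean(axis=0)

def classify(x):
    # label a held-out trial by the nearer class centroid (1 = converging)
    return int(np.linalg.norm(x - mu_conv) < np.linalg.norm(x - mu_cross))

hits = sum(classify(x) for x in X_conv[n_train:])
correct_rejections = sum(1 - classify(x) for x in X_cross[n_train:])
acc = (hits + correct_rejections) / (2 * n_test)  # above-chance accuracy
```

Above-chance accuracy on held-out trials is the logic behind the reported >70% classification: the two conditions can be separated from neural activity recorded before the perceptual difference exists.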
Talk 5, 3:30 pm
Bridging Bayesian and Representational Theories of Memory to Predict Memory Biases
Anxin Miao1, Timothy Brady2, Maria Robinson3; 1University of Illinois Urbana-Champaign, 2University of California San Diego, 3University of Warwick
Understanding how people integrate gist and item-specific information in memory is crucial for understanding how memory changes with time and how people make memory-based decisions under conditions of uncertainty. In the current work, we investigated how people balance these two types of information in color memory tasks. We combined a Bayesian approach with prior signal detection-based models to predict gist memory based on performance in an ensemble and individual item memory task. Our model integrates gist and item-specific information by weighting them based on their relative discriminability, making predictions about how participants will rely on each source when asked to recall an item where both gist and item-specific information could be relevant. Participants completed three tasks: an Ensemble Task to estimate discriminability of gist (d'), an Individual Item Task to assess item-specific memory (d'), and a Gist Bias Task to evaluate memory recall for a specific item under different levels of offset between gist and item-specific colors. Our model makes parameter-free predictions of participants’ performance in the Gist Bias Task using d' estimates derived from the other two tasks, combined via Bayesian weighting. Despite not being fit to the data from this task at all, this model captured the complete distributions of people’s memory errors in the Gist Bias task as well as key qualitative trends in the data, such as the proportional shift of item-specific memory towards memory gist. Model comparisons and a permutation test showed that the model captured meaningful individual differences in memory integration (p<.001). These findings highlight the predictive power of bridging Bayesian models with representational theories of memory, offering a comprehensive framework for understanding how gist and item-specific information combine to create memory biases.
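The Bayesian weighting scheme described above can be sketched as a standard reliability-weighted cue combination. This is a simplified illustration, not the authors' exact model: treating each d' as proportional to the precision (inverse variance) of its memory signal is an assumption made for the sketch, and the function name and values are hypothetical.

```python
def predicted_recall(item_value, gist_value, d_item, d_gist):
    """Parameter-free prediction of recall as a reliability-weighted
    average of item-specific and gist information.

    Assumes each d' is proportional to the precision of the corresponding
    signal, so the weights follow Bayesian cue combination.
    """
    w_item = d_item ** 2 / (d_item ** 2 + d_gist ** 2)
    return w_item * item_value + (1 - w_item) * gist_value

# Item color at 0 deg, gist 30 deg away: equal discriminability predicts
# recall shifted halfway toward the gist; a stronger item memory shrinks
# the gist-ward bias.
equal = predicted_recall(0.0, 30.0, d_item=1.5, d_gist=1.5)        # → 15.0
strong_item = predicted_recall(0.0, 30.0, d_item=2.0, d_gist=1.0)  # → 6.0
```

Because the weights come entirely from the d' estimates measured in the Ensemble and Individual Item tasks, the prediction for the Gist Bias Task requires no free parameters, which is the key feature of the approach described in the abstract.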
Talk 6, 3:45 pm
Early visual cortex is recruited to act as a comparison circuit between mental representations and visual inputs
Maria V. Servetnik1,2, Rosanne L. Rademaker1; 1Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany, 2Vrije Universiteit Amsterdam, Amsterdam, Netherlands
Imagine briefly losing your friend in a crowd: As you scan one face after another (perception), you hold your friend's face in mind (short-term memory) and compare it against each new input (visual search). This process spans multiple cognitive domains and requires mental representations of visual information at every step in order to find your friend again. Early visual cortex (EVC) has been implicated in maintaining such mental visual representations, both in the presence and absence of concurrent sensory input. It remains an outstanding question why EVC - the primary processing site of visual input - is also used for representing mental content. We hypothesize that EVC is recruited to serve as a comparison circuit, matching mental content to incoming sensory information in a visual format native to EVC. To test this, we collected fMRI data while participants (N=5) remembered the direction of a coherent random dot motion (RDM) stimulus for 8 minutes. During this period, participants saw 48 new RDM stimuli with independent motion directions. Before each of these probes, participants were cued to either compare its motion direction to the one they were holding in mind, or to withhold a comparison. Results show that BOLD responses for comparisons were higher than for non-comparisons throughout EVC. Importantly, a decoder trained on independently collected visual localizer data showed that making a comparison also rendered the motion direction held in mind significantly more decodable. This suggests that EVC is recruited to represent mental content especially when such content needs to be compared to visual input. The tentative role of EVC as a comparison circuit highlights that visual information temporarily held in mind is typically stored for use, and that a broader cognitive context must be considered in order to uncover how and why the brain represents mental content.
Talk 7, 4:00 pm
Traveling Waves of human neocortical activity coordinate visually-guided behaviors
Edward Ester1, Canhuang Luo2, Thane Houghton1; 1University of Nevada, Reno, 2Shenzhen University
Cortical traveling waves, or global patterns of activity that extend over several centimeters of the cortical surface, are a key mechanism for guiding the spatial propagation of neural activity and computational processes across the brain. Recent studies have implicated cortical traveling waves in successful short- and long-term memory encoding, storage, and retrieval. However, human memory systems are fundamentally action-oriented: eventually, the contents of memory must be utilized to produce appropriate behaviors. Cortical traveling waves could contribute to the production and control of memory-guided behaviors by flexibly routing information between brain areas responsible for storing memory content and brain areas responsible for planning and executing actions. Here, using short-term memory as a test case, we report evidence supporting this possibility. By applying image-based analyses to published human EEG studies, we show that the initiation of a memory-guided behavior is accompanied by a low-frequency (2-6 Hz) feedforward (occipital-to-frontal) traveling wave that predicts intra- and inter-individual differences in response onset, while the termination of a memory-guided behavior is followed by a higher-frequency (14-32 Hz) feedback (frontal-to-occipital) traveling wave. Neither waveform could be explained by nuisance factors, including passive volume conduction, feedforward propagation of visually evoked responses, or eye movements. Moreover, both waveforms required an overt behavior: when participants selected task-relevant memory content and prepared but did not yet execute an action based on this content, neither waveform was observed. Our findings suggest a role for traveling waves in the generation and control of memory-guided behaviors by flexibly organizing the timing and direction of interactions between brain regions involved in memory storage and action.
NSF 2050833