Visual Search
Talk Session: Saturday, May 17, 2025, 5:15 – 7:00 pm, Talk Room 2
Talk 1, 5:15 pm
Predictive and reactive distractor suppression relies on integrated attentional mechanisms
Oscar Ferrante1, Ole Jensen2,3, Clayton Hickey1; 1Centre for Human Brain Health, University of Birmingham (UK), 2Oxford Centre for Human Brain Activity, University of Oxford (UK), 3Department of Experimental Psychology, University of Oxford (UK)
Visual attention is significantly influenced by statistical regularities in the environment, with spatially predictable distractors being proactively suppressed. The neural mechanisms underlying this proactive suppression remain poorly understood. In this study, we employed magnetoencephalography (MEG) and multivariate decoding analysis to investigate how predicted distractor locations are proactively represented in the human brain. Participants engaged in an additional-singleton visual search task, identifying a target stimulus while ignoring a colour-singleton distractor when present. Crucially, the distractor appeared more frequently on one side of the visual field, creating a spatial prediction based on statistical learning. Our results revealed that distractor locations were encoded in temporo-parietal brain regions prior to stimulus presentation, supporting the hypothesis that proactive suppression guides visual attention away from predictable distractors. The neural activity patterns corresponding to this pre-stimulus distractor suppression extended to post-stimulus activity during late attentional stages (~200 ms), indicating an integrated suppressive mechanism. This generalization from pre-stimulus to post-stimulus activity was absent in the early sensory stages (~100 ms), suggesting that post-stimulus suppression is not merely a continuation of sustained proactive suppression. Instead, the same suppressive mechanism is activated at two distinct stages. These findings establish a mechanistic link between proactive and reactive suppression of predictable distractors, elucidating their shared and unique contributions to attentional processes.
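For readers unfamiliar with this kind of cross-temporal generalization analysis, the sketch below illustrates the logic: a decoder trained on pre-stimulus sensor patterns is tested on post-stimulus patterns from held-out trials. This is a minimal illustration with simulated data; the array shapes, time windows, and classifier are assumptions, not the authors' actual MEG pipeline.

```python
# Minimal sketch of pre-to-post-stimulus generalization with simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 50, 120      # hypothetical MEG epochs
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)                 # predicted distractor side

pre = slice(0, 40)       # pre-stimulus samples (assumed indexing)
late = slice(80, 120)    # late post-stimulus (~200 ms) samples

train, test = np.arange(0, 150), np.arange(150, 200)

# Train a decoder on pre-stimulus activity, averaged within the window...
clf = LogisticRegression(max_iter=1000)
clf.fit(X[train][:, :, pre].mean(axis=2), y[train])

# ...and test it on post-stimulus activity from held-out trials.
# Above-chance accuracy would indicate a shared suppressive code.
acc = clf.score(X[test][:, :, late].mean(axis=2), y[test])
print(f"pre-to-post generalization accuracy: {acc:.2f}")
```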
Talk 2, 5:30 pm
Balancing Exploration and Exploitation in Visual Search: Insights from Behavioral and Computational Models
Haokui Xu1,2, Xutao Zheng2, Jianzhe Xu2, Jingjing Hu3, Jifan Zhou2, Mowei Zhen2; 1Zhejiang University of Technology, 2Zhejiang University, 3Zhejiang International Studies University
Goal-directed attention in visual search tasks has been extensively studied, but the role of exploration—the complementary aspect of attention—remains less understood. In complex and unfamiliar scenarios encountered in daily life, individuals often gather information to simplify the situation and reduce the search space before locating a target. Our study investigated this process through three experiments involving a digit search task, where participants searched for a target digit within sequences arranged either regularly or randomly. Behavioral results indicated that search times were shorter for regular sequences compared to random sequences and increased logarithmically with set size. Eye-movement analysis revealed a similar pattern, as the number of fixations aligned with search times, suggesting that participants leveraged regularities to narrow the search space efficiently, thereby locating the target with fewer fixations. To further investigate attention selection strategies, we developed computational models. The optimal model demonstrated that attention flexibly balances two strategies based on scenario complexity. When the search space is large, attention prioritizes locations that were most helpful in narrowing the search space (with the highest expected information gain), even if these locations are less likely to contain the target. Conversely, when the search space is reduced, attention focuses on the most probable target locations. This dynamic selection mechanism highlights how the visual system balances exploration and exploitation to achieve optimal outcomes, enabling humans to navigate complex environments despite limited cognitive resources.
Zhejiang Provincial Philosophy and Social Sciences Planning Project (23NDJC037Z)
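As an illustration of the selection principle described in the abstract, the following toy sketch picks the fixation with the highest expected information gain while the candidate set is large, and the most probable location once the set is small. The inspection model (observing a position in a regular sequence reveals whether the target lies to its left or right), the threshold, and the priors are all assumptions, not the authors' fitted model.

```python
# Toy explore/exploit fixation selection for search over an ordered sequence.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def expected_info_gain(prior, i):
    """EIG of inspecting position i when the observation splits the
    candidates into {target at i}, {left of i}, {right of i}."""
    p_left, p_right = prior[:i].sum(), prior[i + 1:].sum()
    h_post = 0.0
    if p_left > 0:
        h_post += p_left * entropy(prior[:i] / p_left)
    if p_right > 0:
        h_post += p_right * entropy(prior[i + 1:] / p_right)
    return entropy(prior) - h_post    # finding it at i leaves zero entropy

def next_fixation(prior, n_candidates, threshold=4):
    if n_candidates > threshold:      # explore: shrink the search space
        return int(np.argmax([expected_info_gain(prior, i)
                              for i in range(len(prior))]))
    return int(np.argmax(prior))      # exploit: likeliest target location

prior = np.array([0.30, 0.20, 0.15, 0.15, 0.10, 0.10])
print(next_fixation(prior, n_candidates=6))  # a good split point (1)...
print(next_fixation(prior, n_candidates=3))  # ...not the likeliest item (0)
```

Note how the explore branch selects position 1 rather than position 0, even though position 0 is the most likely target location: the informative split narrows the search space more, mirroring the trade-off reported above.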
Talk 3, 5:45 pm
Efficient Heuristic Decision Processes in Overt Visual Search
Anqi Zhang1,2, Wilson S. Geisler1; 1The University of Texas at Austin, 2University of California, Santa Barbara
Simple heuristic decision rules that perform close to the Bayes-optimal rule are likely candidates for decision processes in biological systems because they are the ones most likely to be found by natural selection and learning over a life span. In covert search, the Bayes-optimal decision rule takes into account the prior probability at each potential target location, weighs the response at that location by the local detectability (d'), and then picks the location with the maximum posterior value. Recently, we showed that in covert search a wide range of simple decision heuristics closely approach optimal accuracy, even though these heuristics largely ignore the actual variation in prior probability and detectability across the visual field. For example, even for targets where d' falls rapidly with retinal eccentricity, assuming a constant d' over the search area has a negligible effect on overall search accuracy. We extended this analysis to overt search, where the Bayes-optimal searcher uses each target's specific d' map for both updating the posterior probability map and selecting fixations. We found that (1) changes in heuristic parameters and stopping criteria cause substantial tradeoffs between overall search accuracy and the mean number of fixations, and (2) many heuristics with a fixed, foveated d' map are highly efficient, but few heuristics with a constant-valued d' map are. Specifically, we defined the efficiency of a heuristic rule as the ratio of the overall search accuracy of that heuristic searcher to that of the ideal searcher, when the heuristic searcher is required to make the same number of fixations for each stimulus as the ideal searcher. Overall, our findings uncover several biologically plausible and testable near-optimal heuristics for overt visual search.
Supported by NIH grants EY11747 and EY024662.
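The decision rule summarized in the abstract has a compact form under the standard Gaussian signal-detection model: the log posterior at location i is log(prior_i) + d'_i·W_i − d'_i²/2, where W_i is the normalized template response. The sketch below contrasts the optimal rule with the constant-d' heuristic discussed above; the simulated responses and parameter values are illustrative, not the authors' stimuli.

```python
# Optimal vs. constant-d' heuristic decision rule for covert search.
import numpy as np

rng = np.random.default_rng(1)
n_locs = 8
prior = np.full(n_locs, 1.0 / n_locs)     # prior over target locations
dprime = np.linspace(2.0, 0.5, n_locs)    # d' falling with eccentricity

target = 3
W = rng.standard_normal(n_locs)           # template responses (noise)...
W[target] += dprime[target]               # ...plus signal at the target

# Bayes-optimal rule: use the true d' at every location.
log_post = np.log(prior) + dprime * W - dprime**2 / 2
print("optimal choice:", int(np.argmax(log_post)))

# Heuristic rule: assume one constant d' over the whole search area.
d_const = dprime.mean()
log_post_h = np.log(prior) + d_const * W - d_const**2 / 2
print("heuristic choice:", int(np.argmax(log_post_h)))
```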
Talk 4, 6:00 pm
Predicting human foraging behaviour in 3D: A computational approach
Manjiri Bhat1, Russell A Cohen Hoffing2, Anna Hughes1, Alasdair Clarke1; 1University of Essex, 2DEVCOM Army Research Laboratory
Spatial exploration is a key cognitive ability for humans and other animals, allowing them to find food and other resources. Visual foraging paradigms, where participants search for multiple items in a bounded two-dimensional environment, allow us to explore the strategies used in spatial exploration. In many previous studies, the key outcome measurement has been differences in item selection patterns based on run behaviour. In our paradigm, we adopt a novel approach, where participants (controlling an avatar in first-person view) forage for items by moving through a semi-naturalistic three-dimensional environment that accounts for depth, body orientation, time (pauses), and rotation. We also test participants under different conditions to investigate how factors such as stamina (an “energy bar” that depletes with movement at a rate set by the low-, medium-, or high-stamina condition), item scarcity (high or low item availability), and knowledge of item locations and availability (bird’s-eye-view map or no map) influence exploration behaviour and foraging strategy. To better characterise the cognitive processes underlying exploration behaviour and foraging strategy in this unconstrained foraging task, we developed and fit a generative model that formally predicts sequences of item selections using latent parameters such as proximity (selecting items closest to the previous selection) and momentum (selecting items in a forward motion rather than doubling back). Our results suggest that proximity is highly predictive of behaviour, even when stamina and item scarcity change. However, proximity is less predictive when items are scarce. In addition, momentum (i.e., forward motion) is a stronger predictor when participants do not have a map, or have a map combined with high stamina. We conclude that our model is able to accurately predict behaviour on a target-by-target basis in an unconstrained foraging paradigm, and we discuss future directions for improving the model by incorporating pauses and rotation.
Economic and Social Research Council (ESRC), US DEVCOM Army Research Laboratory
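A minimal version of such a generative model can be written as a softmax over weighted proximity and momentum features; the sketch below is an illustrative stand-in, where the feature definitions and weight values are assumptions rather than the fitted model.

```python
# Softmax choice model over proximity and momentum features.
import numpy as np

def selection_probs(items, last, prev, w_prox=1.0, w_mom=0.5):
    """items: (n, 2) coordinates of remaining items; last/prev: (2,)
    positions of the two most recent selections."""
    dists = np.linalg.norm(items - last, axis=1)
    proximity = -dists                           # nearer items score higher
    heading = (last - prev) / (np.linalg.norm(last - prev) + 1e-9)
    steps = (items - last) / (dists[:, None] + 1e-9)
    momentum = steps @ heading                   # cosine with current heading
    utility = w_prox * proximity + w_mom * momentum
    exp_u = np.exp(utility - utility.max())      # numerically stable softmax
    return exp_u / exp_u.sum()

items = np.array([[1.0, 0.0], [0.0, 2.0], [-1.5, 0.5]])
p = selection_probs(items, last=np.zeros(2), prev=np.array([-1.0, 0.0]))
print(p)  # the nearby item straight ahead gets the highest probability
```

Fitting w_prox and w_mom to observed selection sequences (e.g., by maximum likelihood) would then quantify how strongly each latent parameter drives behaviour across the stamina, scarcity, and map conditions.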
Talk 5, 6:15 pm
Computer cursor trajectories are predictive of upcoming success in visual search
Audrey Siqi-Liu1, Sarah Malykke1, Kelvin Oie2, Dwight Kravitz1, Stephen Mitroff1; 1The George Washington University, 2DEVCOM Army Research Laboratory
Performance on visual search tasks—finding targets amongst distractors—is typically assessed with aggregate behavioral measures (e.g., response times, accuracy). Such measures are effective in their simplicity, but can obscure subtle dynamics in the visual decision-making processes that underlie search through a complex visual array. The current project explored the utility of cursor tracking data to reveal nuanced patterns of search behavior. Using a web-based implementation of a standard ‘Ts among Ls’ task and crowdsourced data collection, we recorded a large number of participants’ cursor behaviors during visual search (through the movements of either a computer mouse or a trackpad). From the recorded cursor traces, we calculated time-resolved features such as trajectory length, speed, and dwell times on stimulus and non-stimulus areas, characterizing how these behaviors change over the course of search within each trial. Machine-learning models trained on these cursor features accurately predicted errors before they occurred (i.e., up to ~500-1000 ms prior to response). Additional analyses delineated which cursor features were particularly important to error prediction, with the rank order of importance changing as the trial progressed. These and other analyses (e.g., a detailed description of how each time-resolved feature varied in hit versus miss trials) will be discussed. By combining cursor tracking data with advanced computational methods, supported by crowdsourced data collection, we provide a richer account of behavior during visual search. This novel approach opens new research possibilities by demonstrating the resolution of behavioral data that can be collected via online platforms and how such fine-grained measurement can provide more insight into the basic mechanisms of search and real-time error prediction. Practical applications range from understanding search strategy differences between individuals to characterizing optimal windows for computer-assisted detection in artificial intelligence applications.
W911NF-23-2-0210, W911NF-23-2-0097, W911NF-24-2-0188
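The following sketch illustrates the shape of such a pipeline: time-resolved features computed from a cursor trace feed an error-prediction classifier. The simulated traces, the specific feature set, and the model choice are all assumptions for illustration, not the authors' implementation.

```python
# Cursor-feature extraction and error-prediction classifier (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cursor_features(xy, dt=0.02):
    """xy: (n_samples, 2) cursor positions within one trial window."""
    seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    speed = seg / dt
    return np.array([
        seg.sum(),              # trajectory length so far
        speed.mean(),           # mean speed
        speed.std(),            # speed variability
        (speed < 1e-3).mean(),  # fraction of samples spent dwelling
    ])

rng = np.random.default_rng(2)
# Hypothetical dataset: one feature vector per trial; labels mark misses.
X = np.stack([cursor_features(rng.standard_normal((50, 2)).cumsum(axis=0))
              for _ in range(300)])
y = rng.integers(0, 2, 300)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# Feature importances give the kind of ranking discussed in the abstract.
print(dict(zip(["length", "mean_speed", "sd_speed", "dwell"],
               clf.feature_importances_.round(2))))
```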
Talk 6, 6:30 pm
Looking into working memory to verify potential targets during visual search
Sisi Wang1, Freek van Ede1; 1Vrije Universiteit Amsterdam
Finding what you are looking for is a ubiquitous task in everyday life that relies on a two-way comparison between what is currently viewed and internal search goals held in memory. Yet, despite a wealth of studies tracking visual verification behavior among the external contents of perception, complementary processes associated with visual verification among internal contents of memory remain elusive. Building on a recently established gaze marker of internal visual focusing in working memory, we tracked the internal-inspection process associated with confirming or dismissing potential targets during search. We show how we look back into memory when faced with external stimuli that are perceived as potential targets and link such internal inspection to the time required for visual verification. A direct comparison between visual verification among the contents of working memory or perception further revealed how verification in both domains engages frontal theta activity in scalp EEG, but also how mnemonic verification is slower to deploy than perceptual verification. This establishes internal verification behavior as an integral component of visual search, and provides new ways to look into this underexplored component of human search behavior.
This work was supported by an NWO Vidi Grant from the Dutch Research Council (14721) and an ERC Starting Grant from the European Research Council (MEMTICIPATION, 850636) to F.v.E. We thank Baiwei Liu and Anna van Harmelen for their useful discussions.
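For readers who want a concrete handle on the EEG signature mentioned above, the sketch below shows one conventional way to extract frontal theta (4-8 Hz) power: bandpass filtering followed by the squared Hilbert envelope. The sampling rate, channel count, and data are assumptions for illustration only, not the authors' recording parameters.

```python
# Frontal theta (4-8 Hz) power via bandpass filter + Hilbert envelope.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
eeg = rng.standard_normal((4, int(fs * 2)))   # 4 frontal channels, 2-s epoch

b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
theta = filtfilt(b, a, eeg, axis=1)           # theta-band filtered signal
power = np.abs(hilbert(theta, axis=1)) ** 2   # instantaneous power envelope
frontal_theta = power.mean(axis=0)            # average over frontal channels
print(frontal_theta.shape)                    # one power value per sample
```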
Talk 7, 6:45 pm
Individual working memory capacity predicts search performance in multiple colour search
Anna Grubert1, Ziyi Wang1, Courtney Turner1; 1Durham University
Visual search is guided by attentional templates, i.e., target representations that are assumed to be held in visual working memory (vWM). vWM capacity varies between individuals but is typically limited to 3-4 items. Similar capacity limitations have also been observed in visual search, but it remains unclear whether they are directly linked to the individually limited resources of vWM. We tested this by correlating behavioural and electrophysiological markers of individual vWM capacity (Cowan’s k, CDA amplitudes) with behavioural and electrophysiological markers of search efficiency (accuracy rates, N2pc amplitudes) measured in change detection and visual search tasks, respectively. In each trial of the change detection task, participants were presented with a memory display containing one, two, or three coloured squares. After a retention period, they were shown a test display and had to decide whether it was identical to the memory display or contained a colour change. In the visual search task, participants searched for a target-colour bar amongst five differently coloured nontargets and had to indicate whether it had a horizontal or vertical orientation. One, two, or three possible target colours were cued at the beginning of each block. Results revealed significant load effects in both the change detection and search tasks: k values and CDA amplitudes were larger, and accuracy rates and N2pc amplitudes lower, in higher- as compared to lower-load trials. More importantly, vWM indices predicted search performance at the individual level: individuals with higher k values and larger CDA amplitudes produced greater search accuracy, larger N2pc amplitudes, and smaller load costs, both in terms of search accuracy and N2pc amplitudes. These results suggest that individual search performance in multiple-colour search directly depends on individual vWM limitations.
This work was funded by a research grant of the Leverhulme Trust (RPG-2020-319) awarded to AG.
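For reference, Cowan’s k—the capacity estimate used above—is computed from change-detection performance as k = set size × (hit rate − false-alarm rate). A one-line sketch with made-up values:

```python
# Cowan's k: k = N * (H - FA), with N the memory set size, H the hit rate,
# and FA the false-alarm rate. The example values below are made up.
def cowans_k(set_size, hit_rate, fa_rate):
    return set_size * (hit_rate - fa_rate)

print(cowans_k(3, hit_rate=0.85, fa_rate=0.15))  # k = 2.1 items
```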