Attention: Neural, objects, models

Talk Session: Saturday, May 17, 2025, 8:15 – 9:45 am, Talk Room 1

Talk 1, 8:15 am

Revisiting Visual Awareness: No Evidence for Levels of Processing

Aytac Karabay1, Daryl Fougnie1; 1Department of Psychology, New York University Abu Dhabi

There is debate about whether awareness during visual perception occurs abruptly (all-or-none) or gradually. One influential view is the levels of processing (LOP) theory, which states that the nature of visual awareness depends on the stimulus processing level. According to LOP, low-level stimuli (e.g., color) evoke gradual awareness, while high-level stimuli, such as object identity (e.g., letters), elicit abrupt, all-or-none perception. A critical source of evidence supporting LOP is that self-reported perceptual clarity measures reveal more intermediate values of perceptual clarity for low- than for high-level stimuli. Here, we provide several pieces of evidence inconsistent with this theory. First, previous studies confound stimulus level with ‘category-flatness.’ Does the increased perceptual clarity of the letter X, relative to the color blue, reflect the fact that a noisy percept of X is often still perceived as X because of the large category priors associated with letter stimuli? Experiment 1 showed that when the perceptual clarity of a high-level stimulus set without meaningful category boundaries (morphed faces) was tested, perceptual clarity was more gradual than that of low-level stimuli. Second, by varying foil-target similarity, we show that the assumption underlying perceptual clarity measures—that they measure perceptual clarity rather than the difficulty of perceptual judgments—is incorrect. Finally, we address a significant issue in the existing literature: performance across different stimuli is often not equated. This lack of equivalence can skew perceptual clarity ratings, as high-level stimuli typically yield better performance, potentially leading to fewer intermediate ratings for categorical stimuli due to high confidence. To rectify this, we equated task performance across stimuli using a staircase method (Experiment 2). We found that the gradualness of perceptual clarity was consistent across all stimulus types, rejecting the notion of distinct awareness pathways for high- versus low-level stimuli. Ultimately, our results suggest that differences between gradual and all-or-none perception arise largely from methodological properties rather than from levels of processing.
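
As an illustration of how task performance can be equated across stimulus types, the sketch below implements a generic 2-down/1-up adaptive staircase in Python. The rule, step size, and toy observer are assumptions for demonstration only and are not the authors' procedure.

```python
# Illustrative sketch (not the authors' code): a generic 2-down/1-up staircase
# that adjusts a stimulus level to hold accuracy near ~71%, one way performance
# could be equated across stimulus categories before collecting clarity ratings.
import random

def run_staircase(trial_fn, start_level=0.5, step=0.05, n_trials=80,
                  min_level=0.01, max_level=1.0):
    """trial_fn(level) -> True if the response was correct at that level."""
    level = start_level
    consecutive_correct = 0
    history = []
    for _ in range(n_trials):
        correct = trial_fn(level)
        history.append((level, correct))
        if correct:
            consecutive_correct += 1
            if consecutive_correct == 2:          # 2-down: make the task harder
                level = max(min_level, level - step)
                consecutive_correct = 0
        else:                                     # 1-up: make the task easier
            level = min(max_level, level + step)
            consecutive_correct = 0
    # Threshold estimate: mean level over the second half of trials
    second_half = history[len(history) // 2:]
    return sum(lvl for lvl, _ in second_half) / len(second_half)

# Toy observer whose accuracy rises with stimulus level (demonstration only)
threshold = run_staircase(lambda lvl: random.random() < 0.25 + 0.7 * lvl)
print(f"estimated threshold level: {threshold:.3f}")
```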

Talk 2, 8:30 am

Purely voluntary shifts of object-based attention can be functionally identified and characterized via fMRI and MVPA

David H Hughes1, Adam S Greenberg1; 1Medical College of Wisconsin and Marquette University

We have previously used multivariate pattern analysis (MVPA) and fMRI to compare cued and non-cued (purely voluntary) shifts of spatial attention (Gmeindl et al., 2016). Here, we hypothesized that these methods could be applied to object-based attention (OBA), and that differential activation for cued and non-cued shifts would be observed in attentional control regions. To test this, 17 healthy adults viewed a series of overlapping faces and houses while detecting an infrequently-appearing target face and target house. During fMRI, participants completed six runs, each comprising both a cued and a non-cued block. A thin, colored frame provided shift/hold instructions during cued blocks but was uninformative during non-cued blocks. We used leave-one-run-out cross-validation to train (cued blocks only) and test (cued and non-cued blocks) a support vector machine to determine participants’ attentional locus. The output was scaled to a probability (i.e., the probability of attending to the house), which allowed us to index shifts at the onset of rapid probability changes. We then compared activations time-locked to these shift indices within published ROI coordinates (Gmeindl et al., 2016). “False” shifts were defined as shifts that occurred during cued blocks in the absence of a cue. Comparing cued, false, and purely voluntary (i.e., during non-cued blocks) shifts, we observed differences in right supramarginal gyrus (rSMG) and left precuneus (lPreC). In rSMG, activation was significantly reduced for cued shifts compared to false and non-cued shifts from -3 s to 0 s (p’s < .027). In lPreC, activation for false shifts was significantly elevated compared to cued and non-cued shifts from -4.5 s to -1.5 s (p’s < .047). Thus, rSMG reflects successful interpretation of an external shift cue while lPreC reflects top-down reorienting of attention. Our results demonstrate that purely voluntary shifts of OBA can be identified and tracked within parietal cortex in the absence of external shift cues.
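
For readers unfamiliar with this style of analysis, the sketch below shows a generic leave-one-run-out decoding loop and a simple way to index shifts from rapid probability changes. The data layout, classifier settings, and jump threshold are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch of leave-one-run-out SVM decoding of the attended object, with
# shift candidates indexed by rapid changes in the decoded probability.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def decode_attention(X, y, runs, cued):
    """X: (n_samples, n_voxels) activity patterns; y: 1 = attend-house, 0 = attend-face;
    runs: run label per sample; cued: True for samples taken from cued blocks.
    Returns P(attending house) per sample, estimated out-of-run."""
    X, y = np.asarray(X), np.asarray(y)
    runs, cued = np.asarray(runs), np.asarray(cued)
    p_house = np.full(len(y), np.nan)
    for run in np.unique(runs):
        train = (runs != run) & cued      # train on cued blocks of the other runs
        test = runs == run                # test on all blocks of the held-out run
        clf = make_pipeline(StandardScaler(),
                            SVC(kernel="linear", probability=True))
        clf.fit(X[train], y[train])
        # with labels coded 0/1, column 1 of predict_proba is P(y = 1), i.e. P(house)
        p_house[test] = clf.predict_proba(X[test])[:, 1]
    return p_house

def shift_onsets(p_house, jump=0.4):
    """Index candidate attention shifts at the onset of rapid probability changes,
    here defined as a sample-to-sample change exceeding an assumed 'jump' threshold."""
    return np.where(np.abs(np.diff(p_house)) > jump)[0] + 1
```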

This work was supported by grants BCS-2122866 from NSF and T32EY014536 from NIH NEI (the contents of this project are solely the responsibility of the authors and do not necessarily represent the official views of the NEI or NIH).

Talk 3, 8:45 am

Distractor intrusions: a brief and highly reliable measure of individual differences in the speed of attention

Alon Zivony1, Claudia von Bastian1, Rachel Pye2; 1University of Sheffield, 2University of Reading

How quickly we attend to objects plays an important role in navigating the world, especially in dynamic and rapidly changing environments (e.g., a busy street). Reaction times (RTs) in visual search tasks have often been used as an intuitive proxy for this ability. However, such measures are limited by inconsistent levels of reliability and the multitude of non-attentional factors that affect an individual’s RT. Here, we present an alternative method of studying individual differences in the speed of attention. Specifically, we employ rapid serial visual presentation (RSVP) tasks, in which a target is presented for a brief duration and embedded among multiple distractors. Previous research showed that distractor intrusions, that is, reports of an adjacent distractor instead of the target, are associated with the speed of attention. Here, we explored the validity and reliability of individual differences in people’s rate of distractor intrusions. In three experiments, we found that intrusion rates predict overall RTs in simple visual search tasks, but emerge independently from measures of attentional control, reading speed, and another well-known limitation in temporal attention (the attentional blink). Moreover, our findings (N=100) show that an individual’s intrusion rate can be measured with very high reliability (>.90) in a very short (5-minute) session, both within a single session and between two sessions a week apart. These findings show that the distractor intrusion paradigm is a useful tool for research into individual differences in the temporal dynamics of attention. Links to a downloadable and easily executable distractor intrusion experiment are provided to facilitate such future research.
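
A minimal sketch of how an intrusion rate and its split-half reliability (with the Spearman-Brown correction) could be computed is given below; the data format is an assumption for illustration, not the authors' analysis code.

```python
# Illustrative sketch: per-participant distractor-intrusion rate and its
# odd-even split-half reliability, Spearman-Brown corrected.
import numpy as np

def intrusion_rate(reports, targets, adjacent_distractors):
    """Proportion of trials on which the report matched an adjacent distractor
    rather than the target. adjacent_distractors is a per-trial list of the
    items shown immediately before/after the target."""
    reports, targets = np.asarray(reports), np.asarray(targets)
    intrusions = np.array([(rep != tgt) and (rep in adj)
                           for rep, tgt, adj in zip(reports, targets,
                                                    adjacent_distractors)])
    return intrusions.mean()

def split_half_reliability(trial_matrix):
    """trial_matrix: (n_participants, n_trials) array of 0/1 intrusion indicators.
    Correlates odd- and even-trial intrusion rates across participants and applies
    the Spearman-Brown correction for the halved test length."""
    odd = trial_matrix[:, 1::2].mean(axis=1)
    even = trial_matrix[:, 0::2].mean(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)
```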

Talk 4, 9:00 am

Eye tracking reveals the efficacy of object-based attention at filtering out disproportionately salient foveal distractors

Lasyapriya Pidaparthi1, Frank Tong1; 1Vanderbilt University

Visual attention helps people prioritize task-relevant information. Covert spatial attention subtly increases the perceived contrast of an attended stimulus by ~4%, an effect also observed before goal-directed eye movements via pre-saccadic attention. Attention can thus modestly boost signal strength and improve perceptual performance (Li, Hanning, & Carrasco, 2021). However, it remains unclear how attention interacts with contrast when distracting information dynamically overlaps with an attended target. We have previously shown that object-based attention (OBA), as assessed by eye movements, effectively filters out the presence of an overlapping distractor object (Pidaparthi & Tong, VSS 2024). How then might OBA be impacted as a function of relative target-distractor contrast? We examined this question across two experiments. In Experiment 1, participants attended to one of two naturalistic objects (a face and a flower) that followed pseudorandom, minimally correlated trajectories while remaining largely overlapping, and monitored the attended object for brief spatial distortions. We tested five target-distractor contrast ratios: 50:50, 33:67, 25:75, 17:83, and 10:90. To measure the efficacy of OBA, we used a sliding-window correlation analysis and evaluated gaze-following of the attended object. We observed nearly complete filtering of the distractor up to extreme target-distractor contrast ratios of 17:83, beyond which filtering efficacy dropped (mean r: 0.571, 0.564, 0.545, 0.490, 0.288). Detection task performance, in comparison, deteriorated more rapidly, indicating that attention-based gaze-following was the more robust measure. In Experiment 2, we replaced the irrelevant object with a Gabor stimulus that underwent brief bursts of drifting motion (4 Hz for 500 ms). Attentional filtering was unperturbed by the extraneous motion of the Gabor up to contrast ratios of 10:90. Overall, we show via eye movements that OBA effectively filters out complex motion signals from a distractor across a range of contrast levels, weakening only when distractor salience exceeded target salience by roughly a factor of five.
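
The sketch below illustrates a generic sliding-window correlation between gaze and object trajectories of the kind described above; the window length, step size, and use of a single position component are assumed values, not the authors' parameters.

```python
# Minimal sketch of a sliding-window correlation analysis for gaze-following,
# assuming gaze and object position traces sampled at the same rate.
import numpy as np

def sliding_window_corr(gaze, obj, win=60, step=10):
    """gaze, obj: 1-D position traces (e.g., horizontal coordinate in dva).
    Returns the Pearson r between gaze and the object trace in each window."""
    gaze, obj = np.asarray(gaze, float), np.asarray(obj, float)
    rs = []
    for start in range(0, len(gaze) - win + 1, step):
        g = gaze[start:start + win]
        o = obj[start:start + win]
        if np.std(g) > 0 and np.std(o) > 0:      # skip flat windows
            rs.append(np.corrcoef(g, o)[0, 1])
    return np.array(rs)
```

Gaze-following of the attended object can then be summarized as the mean windowed correlation and contrasted with the same metric computed against the distractor's trajectory.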

This research was supported by NEI grants R01EY035157 to FT and P30EY008126 to the Vanderbilt Vision Research Center.

Talk 5, 9:15 am

Taking our attention model out for a walk: how a model built from desktop experiments performs in the real world

Chloe Callahan-Flintoft1, Brad Wyble2, Joyce Tam2; 1US Army DEVCOM Army Research Laboratory, 2The Pennsylvania State University

Visual attention is a collection of complicated mechanisms by which a subset of information is selected for prioritization or enhanced processing. To isolate individual mechanisms, the field has primarily relied on tightly controlled experimental paradigms with 2-dimensional, abstracted stimuli and with participants’ head and gaze positions restricted. However, in the real world, the attentional system operates holistically with body, head, and eye movements within a visually dense and immersive space. The current work proposes computational modeling as an efficient way to encapsulate lab findings and test them collectively in more ecologically valid environments. To do this, RAGNAROC (Wyble et al., 2020), a model of reflexive attention parameterized on a wide variety of behavioral and electrophysiological findings, was used to simulate attentional deployment in participants performing a foraging task in virtual reality. Results showed that the model predicted gaze location to within 15 degrees of visual angle and predicted the next target selected at 37% above chance. Moreover, attentional activation in the model significantly correlated with a participant’s ability to respond to abrupt onset targets (r = -.326, p < .001). Finally, the model was used to predict behavior in an outdoor visual search task where participants wore mobile eye trackers. Together, these results demonstrate that lab-based models can have predictive power in more ecologically valid contexts and can serve as a systematic way of exploring how cognitive mechanisms facilitate daily life.
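
As a rough illustration of the two headline metrics, the sketch below computes gaze-prediction error in degrees of visual angle and next-target prediction accuracy relative to chance; the data structures are assumptions for illustration and do not reflect the RAGNAROC interface.

```python
# Hedged sketch: evaluating model predictions against observed behavior.
import numpy as np

def gaze_error_dva(pred_xy, true_xy):
    """Euclidean prediction error per sample, given predicted and observed gaze
    positions already expressed in degrees of visual angle (n_samples, 2)."""
    return np.linalg.norm(np.asarray(pred_xy) - np.asarray(true_xy), axis=1)

def accuracy_over_chance(predicted_targets, chosen_targets, n_candidates):
    """Percentage points above chance for predicting which target is selected
    next, with chance defined as 1 / number of candidate targets in view."""
    acc = np.mean(np.asarray(predicted_targets) == np.asarray(chosen_targets))
    return 100 * (acc - 1.0 / n_candidates)
```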

Talk 6, 9:30 am

Visual Awareness Positivity: A Novel Neural Correlate of Consciousness

Ugo Bruzadin Nunes1, Angelica Nicolacoudis2, Adi Sarig3, Nicholas Fish1, Liad Mudrik2, Michael Pitts2, Aaron Schurger1; 1Chapman University, 2Reed College, 3Tel Aviv University

Inattentional blindness (IB), the failure to perceive salient stimuli when attention is directed elsewhere, challenges assumptions about visual awareness. Despite extensive research on neural correlates of consciousness (NCCs), the mechanisms underlying IB for complex stimuli remain poorly understood. Here, we leveraged a three-phase no-report paradigm to investigate neural signatures of conscious awareness in IB (using EEG), minimizing confounds such as motor and decision-making processes. Participants performed a challenging peripheral attention task while simultaneously being presented centrally with faces, houses, or visual noise. At the end of each phase, they were probed about various presented objects, including faces and houses. Approximately 45% of participants exhibited IB in phase 1. In phase 2, participants were informed about the presence of faces and houses but continued the same task. In phase 3, they were instructed to ignore the peripheral task and instead identify the central stimuli (faces, houses, noise) in a three-alternative forced-choice (3AFC) task. Non-parametric cluster analysis of event-related potentials (ERPs) contrasting phase 2 with phase 1, controlling for noise trials, identified two distinct neural components: the Visual Awareness Negativity (VAN, 180–220 ms) and a novel Visual Awareness Positivity (VAP, 250–400 ms) characterized by bilateral-posterior positive and frontal-central negative differences. The P3b/P300, a traditional NCC marker, was absent during phases 1 and 2 but present during phase 3. Multivariate pattern analysis (MVPA) assessing the temporal generalization of decoders showed stable above-chance decoding of seen vs. unseen trials during the 250–400 ms post-stimulus window. Time-frequency cluster analysis revealed significant differences in the theta, alpha, and beta ranges, implicating these rhythms in visual consciousness. These findings replicate previous results regarding the VAN and identify novel markers of conscious perception, including the VAP, meta-stable decoding, and differential theta power, in no-report conditions. This study deepens our understanding of visual awareness and highlights novel neural markers as potential NCCs in no-report paradigms.
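
The temporal generalization analysis can be sketched as a plain train-at-one-time, test-at-all-times loop, as below; the classifier, cross-validation scheme, and variable names are illustrative assumptions rather than the authors' EEG pipeline.

```python
# Illustrative sketch of temporal generalization decoding: a classifier trained
# at each time point is tested at every other time point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def temporal_generalization(X, y, n_splits=5):
    """X: (n_trials, n_channels, n_times) EEG epochs; y: seen (1) vs. unseen (0).
    Returns an (n_times, n_times) matrix of cross-validated decoding accuracies,
    with rows indexing training time and columns indexing testing time."""
    X, y = np.asarray(X), np.asarray(y)
    n_times = X.shape[2]
    scores = np.zeros((n_times, n_times))
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X[:, :, 0], y):
        for t_train in range(n_times):
            clf = make_pipeline(StandardScaler(),
                                LogisticRegression(max_iter=1000))
            clf.fit(X[train_idx, :, t_train], y[train_idx])
            for t_test in range(n_times):
                scores[t_train, t_test] += clf.score(X[test_idx, :, t_test],
                                                     y[test_idx])
    return scores / n_splits
```

Sustained above-chance accuracy off the diagonal within the 250–400 ms window would correspond to the stable decoding described above.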

Grant 30266 from the Templeton World Charity Foundation