Temporal Processing
Talk Session: Saturday, May 17, 2025, 10:45 am – 12:30 pm, Talk Room 2
Talk 1, 10:45 am
The Speed Limit of Visual Perception: Bidirectional influence of image memorability and processing speed on perceived duration and memory recall
Martin Wiener1; 1George Mason University
Visual stimuli are known to vary in their perceived duration, with some stimuli engendering so-called “time dilation” and others “time compression” effects. Previous theories have suggested these effects rely on the level of attention devoted to stimuli, the magnitude of the stimulus dimension, or the intensity of the population neural response, yet they cannot account for the full range of experimental effects. Recently, we demonstrated that perceived time is affected by the image properties of scene clutter, size, and memorability (Ma et al., 2024), with the first compressing and the latter two dilating duration. Further, perceived duration also predicted recall of images 24 h later, over and above memorability. To explain the memorability effect, we found that a recurrent convolutional neural network (rCNN) could recapitulate the time dilation effect by indexing the rate of entropy collapse, or “speed”, across successive timesteps, with more memorable stimuli associated with faster speeds. Here, we replicate and extend these findings in three experiments (n = 20 each) in which subjects performed a sub-second temporal bisection task using images of increasing memorability but constant speed (Experiment 1), increasing speed but constant memorability (Experiment 2), or increasing in both (Experiment 3), each followed by a surprise memory test 24 h later. We found that increasing either memorability or speed alone led to time dilation effects, with faster/slower speeds shifting memory recall by 10% in either direction. However, when both metrics increased, memorability dilated time while speed compressed it, while still improving recall overall. These findings can be explained by a model wherein the visual system is tuned to a preferred speed for processing stimuli that scales with the magnitude of the visual response, such that stimuli closer to this speed are dilated in time. Overall, these findings provide a new lens for interpreting time dilation/compression effects and how visual stimuli are prioritized at temporal scales.
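The “speed” index can be sketched in a few lines as the rate at which the entropy of a network’s output distribution collapses across recurrent timesteps. This is an illustrative reconstruction, not the authors’ rCNN: the synthetic logits and the simple softmax readout below are assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    # Shannon entropy of a probability vector (in nats).
    return -np.sum(p * np.log(p + 1e-12))

def entropy_collapse_speed(logits_per_step):
    # "Speed": mean per-timestep drop in output entropy; larger values
    # mean the network commits to an interpretation more quickly.
    H = np.array([entropy(softmax(z)) for z in logits_per_step])
    return -np.mean(np.diff(H))

# Synthetic readout that sharpens over four recurrent timesteps,
# standing in for a network processing a memorable image.
rng = np.random.default_rng(0)
base = rng.normal(size=10)
speed = entropy_collapse_speed([base * g for g in (0.5, 1.0, 2.0, 4.0)])
```

Under this index, a stimulus whose readout sharpens faster across timesteps yields a larger (positive) speed.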
Talk 2, 11:00 am
An illusion of time caused by repeated visual experience
Brynn E. Sherman1, Sami R. Yousif2; 1University of Pennsylvania, 2University of North Carolina, Chapel Hill
The feeling that something happened “only yesterday” makes us feel attached to it — as if it is as much a part of us as the present moment. The feeling that an event occurred long ago enhances our sense of nostalgia (and heightens our awareness that time is always passing, whether we like it or not). But how do we remember when we saw something? One obvious possibility is that, in the absence of explicit cues, we infer elapsed time based on memory strength: if a memory is fuzzy, it likely occurred longer ago than a memory that is vivid. Here, we demonstrate a robust illusion of time that stands in stark contrast with this prediction. In six experiments, we show that experiences that are visually repeated (and, consequently, better remembered) are counterintuitively remembered as having initially occurred earlier in time. This illusion is robust (amounting to as much as a 25% distortion in perceived time), consistent (exhibited by the vast majority of participants tested), stable across a variety of paradigms (e.g., when participants are asked to place seen items on a timeline and also when participants explicitly judge which of two items was seen earlier), immune to various experimental interventions (e.g., encouraging participants to pay attention to each specific presentation of an item), and applicable at the scale of ordinary day-to-day experience (occurring even when participants are tested over one full week). Thus, this “temporal repetition effect” may be one of the key mechanisms underlying why people’s sense of time often diverges from reality.
Talk 3, 11:15 am
Synchronization of visual perception within the human fovea
Annalisa Bucci1,2,3, Marc Büttner1,2, Niklas Domdei4, Federica B. Rosselli1,2, Matej Znidaric1,3, Julian Bartram3, Tobias Gänswein3, Roland Diggelmann3, Martina De Gennaro1, Cameron S. Cowan1, Wolf Harmening4, Andreas Hierlemann3, Botond Roska1,2, Felix Franke1,2,3; 1Institute of Molecular and Clinical Ophthalmology Basel (IOB); 4031 Basel, Switzerland, 2University of Basel, Faculty of Science, 4031 Basel, Switzerland, 3Eidgenössische Technische Hochschule Zürich (ETH), Department of Biosystems Science and Engineering (D-BSSE); 4056 Basel, Switzerland, 4Rheinische Friedrich-Wilhelms-Universität Bonn, Department of Ophthalmology; 53127 Bonn, Germany
Precise timing of action potentials underpins the processing of visual information. Retinal ganglion cells (RGCs), the output neurons of the retina, encode visual information into action potentials that propagate to higher visual areas in the brain. Within the eye, the intraretinal segments of RGC axons remain unmyelinated, resulting in slow action potential propagation. The lengths of these unmyelinated axon segments are determined by their intraretinal trajectories, which are shaped by the anatomical organization of the human eye, including the fovea — a specialized retinal region responsible for high-acuity vision. To achieve high-acuity vision in the foveal center (‘umbo’), all retinal circuitry, except for the photoreceptors, is displaced into a ring-like structure around the umbo. Consequently, axons originating from RGCs on the temporal side of the fovea must bend around the umbo, whereas nasal RGCs can connect straight to the optic disc. This organization causes neighboring photoreceptors in the center of our vision to connect to RGCs with dramatically different intraretinal axonal lengths. This raises the question: do differences in lengths lead to a temporal dispersion of the arrival times of visual information in the brain? To address this question, we measured human reaction times to single-cone photostimulation in the umbo. Reaction times were uniform across the central visual field. Using high-density microelectrode arrays (HD-MEAs) on human retinal explants, we recorded foveal RGC action potentials and found that propagation speeds varied with the location of RGC somas around the umbo. Axons originating temporal to the umbo exhibited more than 40% higher propagation speeds than those on the nasal side. Transmission electron microscopy revealed these higher speeds were associated with larger axon diameters. A model accurately predicted axonal paths and lengths, which strongly correlated with observed propagation speeds. These findings reveal a compensatory mechanism in the human retina that synchronizes visual perception.
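The compensation can be illustrated with hypothetical numbers (the path lengths and speeds below are invented for illustration; only the ~40% speed difference comes from the abstract): a longer temporal-side axon conducting faster can deliver its spikes at the same time as a shorter nasal axon.

```python
# Illustrative values only; just the ~40% speed difference is from the study.
nasal_len_mm = 4.0                       # assumed nasal intraretinal path length
temporal_len_mm = 5.6                    # assumed longer temporal path (bends around the umbo)
nasal_speed = 1.0                        # mm/ms, assumed
temporal_speed = 1.4 * nasal_speed       # ~40% faster, as reported

nasal_delay = nasal_len_mm / nasal_speed           # 4.0 ms
temporal_delay = temporal_len_mm / temporal_speed  # 4.0 ms: arrival synchronized
```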
Funded by SNSF (CRSII5_173728, CRSII5_216632; PCEFP3_187001; CRSK-3_220987, CRSK-3_221257; 310030_220209); ERC (neuroXscales, 694829); DFG (Ha5323/5-1; SPP2127 Ha5323/6-1); Carl Zeiss Foundation (HC-AOSLO); Sedinum Foundation. We thank Universitätsspital Basel, donors and their families for support.
Talk 4, 11:30 am
When a blunt event is perceived depends on its temporal profile
Ljubica Jovanovic1, Pascal Mamassian1; 1Laboratoire des Systèmes Perceptifs, ENS, PSL University, CNRS
When visual objects strike our retinas, they trigger a cascade of activity of different durations and delays along the visual processing hierarchy, making it difficult to predict exactly when they are perceived (Nishida & Johnston, 2002; Curr. Biol.). Here we investigate which temporal features of a visual stimulus are used to infer when an event is perceived. We presented two events in succession, each event being a pair of Gaussian blobs (1 dva) located at a fixed eccentricity on either side of fixation. We varied the delay between the events (800 or 1200 ms), and participants were instructed to reproduce this delay by pressing a key at the appropriate time after the second event. The orientation of the blobs was random for the first pair and rotated by 120 degrees for the second. We manipulated the temporal profiles of the events to investigate when they are perceived. The contrast of the first event followed a Gaussian modulation. The contrast of the second event had either a Gaussian or an asymmetrical temporal profile built from a weighted sum of two Gaussians shifted in time by 100 ms. These stimuli allowed us to independently vary the times of the maximum and mean contrasts. We found a strong effect of the temporal contrast modulation on perceived time. Depending on the direction of the contrast distribution’s skewness, the second pair was perceived earlier (~80 ms) or later (~30 ms) relative to a pair with a symmetrical Gaussian contrast distribution and an identical maximum-contrast time. This pattern of results shows that the information used for estimating when an event occurred cannot be a threshold intensity or the time of maximum contrast. Instead, the time at which integrated contrast reaches a threshold appears to determine when an event is perceived to have occurred (Amano et al., 2006; J. Neurosci.).
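A minimal sketch of the integrated-contrast account, assuming a fixed criterion on cumulative contrast (all profile parameters below are illustrative, not the study’s stimuli): a temporally skewed profile crosses the criterion earlier or later than a symmetric Gaussian even when the peak occurs at about the same time.

```python
import numpy as np

def gaussian(t, mu, sigma):
    # Unnormalized Gaussian contrast profile (peak contrast = 1).
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def crossing_time(t, profile, criterion):
    # Perceived event time under this account: the first moment at
    # which integrated (cumulative) contrast exceeds a fixed criterion.
    cum = np.cumsum(profile) * (t[1] - t[0])
    return t[np.searchsorted(cum, criterion)]

t = np.linspace(0.0, 600.0, 6001)  # time axis in ms
sym = gaussian(t, 300, 50)                                       # symmetric profile
late = 0.7 * gaussian(t, 300, 50) + 0.3 * gaussian(t, 400, 50)   # right-skewed
early = 0.3 * gaussian(t, 200, 50) + 0.7 * gaussian(t, 300, 50)  # left-skewed

criterion = 15.0  # arbitrary contrast*ms units, assumed
t_sym, t_late, t_early = (crossing_time(t, p, criterion)
                          for p in (sym, late, early))
```

With these illustrative parameters the left-skewed profile crosses the criterion before the symmetric one and the right-skewed profile after it, matching the direction of the reported shifts.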
ANR grant no. ANR-22-CE28-0025
Talk 5, 11:45 am
The role of neural oscillations in visual hierarchy for duration perception
Yuko Yotsumoto1, Amirmahmoud Houshmand Chatroudi1; 1The University of Tokyo
‘Time’ is actualized within a sensory context, making it vulnerable to distortions arising from sensory organization. One such distortion is the tendency to overestimate the duration of visual flickers, a phenomenon known as flicker-induced time dilation (FITD). A decade of research has led to two predominant hypotheses for explaining this temporal illusion: subjective salience (Herbst et al., 2013) and neural entrainment (Hashimoto & Yotsumoto, 2018). However, evidence supporting the neural entrainment hypothesis, particularly through steady-state visual evoked potentials (SSVEPs)—oscillatory neural responses to regular flickers—has been inconsistent (Li et al., 2020). In this study, we employed the semantic wavelet-induced frequency tagging (SWIFT; Koenig-Robert & VanRullen, 2013) technique to investigate whether the cortical localization of SSVEPs within the visual hierarchy could account for the inconsistency between FITD and the entrainment hypothesis. Using SWIFT, we generated a set of flickers characterized by luminance-based, semantic-based, and combined luminance and semantic properties (hierarchical frequency tagging; Gordon et al., 2017). EEG results revealed that each flicker type elicited distinct SSVEP activation patterns in the occipitotemporal regions, indicating selective engagement of different levels of the visual hierarchy. However, the magnitude of FITD did not differ across flicker conditions. Furthermore, SSVEP amplitude did not correlate with FITD in any condition. This clear dissociation between neural activation patterns and the extent of the flicker-induced illusion challenges the role of entrainment in explaining FITD. Notably, the FITD magnitude observed in the experimental flicker conditions (luminance, semantic, and combined flickers) was comparable to that of the scrambled control condition. This finding fundamentally challenges existing theories of time perception that seek to explain temporal illusions, suggesting the need to revisit and reevaluate the core mechanisms underlying FITD.
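Frequency-tagged responses like the SSVEPs above are typically quantified as spectral amplitude at the tagging frequency. A minimal sketch with synthetic data (the 7.5 Hz tag, sampling rate, and signal amplitude are invented for illustration, not the study’s parameters):

```python
import numpy as np

def ssvep_amplitude(x, fs, f_tag):
    # Amplitude at the tagging frequency: magnitude of the FFT bin
    # nearest f_tag, scaled to the amplitude of a sinusoid.
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    amps = np.abs(np.fft.rfft(x)) * 2.0 / len(x)
    return amps[np.argmin(np.abs(freqs - f_tag))]

fs, dur, f_tag = 250.0, 10.0, 7.5  # Hz, s, Hz (all assumed)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
# Synthetic "EEG": a 7.5 Hz tagged response of amplitude 2 in noise.
x = 2.0 * np.sin(2 * np.pi * f_tag * t) + rng.normal(size=t.size)
amp = ssvep_amplitude(x, fs, f_tag)  # recovers roughly the true amplitude
```

The 10 s window makes 7.5 Hz fall on an exact FFT bin (0.1 Hz resolution), so the tagged component is recovered without spectral leakage.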
JSPS KAKENHI 23K22372, 23KK0046
Talk 6, 12:00 pm
Phase-Dependent EEG Decoding of Sustained Visual Information
Michele Deodato1, David Melcher1; 1New York University Abu Dhabi
Vision research often emphasizes brief stimulus presentations. For example, when it comes to detecting or integrating flashed stimuli, the pre-stimulus power and phase of EEG alpha oscillations (8–12 Hz) have been shown to influence neural and behavioral responses. However, in natural viewing, stimuli are often present for extended periods of time, raising the question of how visual representations are maintained and what role neural oscillations play in this maintenance. In this study, we recorded EEG from participants viewing sustained (2-second) Gabor stimuli with varying orientations (left vs. right) and spatial frequencies (low vs. high). Initial analyses, including decoding and event-related potentials, demonstrated significant neural representation of spatial frequency (but not orientation) only for the first 500–1000 ms, despite the stimuli persisting beyond this period and participants not reporting any visual fading. This raises the question of how visual information is stored and maintained in consciousness beyond this initial period. To examine oscillatory contributions to visual maintenance, we implemented a novel decoding approach targeting the 1000–2000 ms time window. Specifically, EEG decoding of stimulus spatial frequency was conducted separately for data points corresponding to different phases of alpha oscillations at each channel location. Strikingly, we found that decoding accuracy varied with the phase of alpha oscillations at frontal and occipito-parietal channels, suggesting that visual information is periodically reactivated during sustained perception. Our phase-specific decoding method underscores the potential of leveraging oscillatory dynamics to study information processing over time in the brain. These findings provide compelling evidence for the role of alpha oscillations in the maintenance of visual information, highlighting their importance in sustained visual processing.
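The phase-specific decoding idea can be sketched with synthetic data in place of EEG (the filter settings, bin count, and nearest-class-mean decoder below are all assumptions, not the study’s pipeline): estimate instantaneous alpha phase with a Hilbert transform, split time points into phase bins, and compute decoding accuracy within each bin.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_phase(x, fs, band=(8.0, 12.0)):
    # Instantaneous phase of the alpha-band component of a signal.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

def binwise_accuracy(features, labels, phases, n_bins=4):
    # Nearest-class-mean decoding accuracy, computed separately
    # within each alpha phase bin.
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    acc = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (phases >= lo) & (phases < hi)
        f, y = features[m], labels[m]
        mu0, mu1 = f[y == 0].mean(), f[y == 1].mean()
        pred = (np.abs(f - mu1) < np.abs(f - mu0)).astype(int)
        acc.append((pred == y).mean())
    return np.array(acc)

fs = 250
rng = np.random.default_rng(1)
t = np.arange(40 * fs) / fs
raw = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)  # fake alpha
phases = alpha_phase(raw, fs)
labels = rng.integers(0, 2, t.size)
# Simulated "reactivation": the feature carries stimulus information
# only near one alpha phase (phase 0).
features = np.exp(-phases ** 2) * (2 * labels - 1) + rng.normal(size=t.size)
acc = binwise_accuracy(features, labels, phases)
```

In this simulation, accuracy peaks in the two bins surrounding phase 0 and sits near chance elsewhere, mimicking phase-dependent decoding.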
This work was supported by the NYUAD Center for Brain and Health, funded by Tamkeen under NYU Abu Dhabi Research Institute grant CG012. Part of the work was conducted at the Brain Imaging lab within the Core Technology Platforms at NYU Abu Dhabi.
Talk 7, 12:15 pm
Testing Techniques to Discriminate the Magnocellular Division of the Visual and Auditory Thalamus
Josiane Mukahirwa1, Qianli Meng, Jaeseon Song, Andrew Lisech, Keith Schneider; 1University of Delaware
Introduction: The magnocellular (M) pathway plays a vital role in the visual and auditory systems, specializing in the processing of transient stimuli. Studies such as those by DeSimone and Schneider (2019) and Meng and Schneider (2022) have investigated transient responses in the lateral geniculate nucleus (LGN) and the magnocellular division of the medial geniculate nucleus (MGN). Yet this pathway remains challenging to isolate, a gap that may be attributed to the small size of the LGN and MGN, which range from approximately 90 to 180 mm³, and to the difficulty of generating stimuli that isolate the two pathways. In this study, we focused on responses to transient stimuli. Methods: Nineteen participants, including individuals with dyslexia (9) and without dyslexia (11), underwent fMRI scanning using a 3T scanner. Activation maps were generated for transient stimuli, with abrupt onsets and offsets, and for sustained stimuli, with smooth transitions. Regions of interest (ROIs) for the LGN and MGN were manually traced on T1 and proton density images. We tested a variety of algorithms, including principal component analysis (PCA), clustering algorithms, and multivariate methods, to identify voxels showing a substantial preference for transient stimuli. Results: In the LGN of typical readers, we identified a cluster of voxels on the ventral edge showing a preferential response to transient stimuli, likely corresponding to the magnocellular layers. In the MGN, PCA revealed a subset of voxels with a high transient index. These specialized responses to transients were generally absent in subjects with dyslexia. Conclusion: These findings support the existence of functionally distinct M pathways in both the visual and auditory systems. This study demonstrates the potential of transient stimuli and multivariate analysis for exploring magnocellular function and its role in sensory disorders such as dyslexia. Future work should explore the clinical implications for dyslexia diagnosis and interventions.
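The voxel-selection step can be sketched with a simple “transient index”, i.e., each voxel’s normalized preference for transient over sustained stimulation (the index formula, threshold, and synthetic responses below are assumptions for illustration, not the study’s exact metric):

```python
import numpy as np

def transient_index(beta_transient, beta_sustained):
    # Normalized preference for transient over sustained responses,
    # bounded in [-1, 1]; high values flag candidate magnocellular voxels.
    num = beta_transient - beta_sustained
    den = beta_transient + beta_sustained + 1e-12
    return num / den

# Synthetic ROI: 100 voxels, the first 20 of which prefer transients.
rng = np.random.default_rng(2)
sustained = rng.uniform(0.8, 1.2, 100)   # fake sustained-condition betas
transient = sustained.copy()
transient[:20] *= 3.0                    # transient-preferring subset
ti = transient_index(transient, sustained)
m_candidates = np.flatnonzero(ti > 0.3)  # voxels flagged as M-like
```

With these synthetic betas the index is 0.5 for the transient-preferring voxels and 0 elsewhere, so thresholding recovers exactly the planted subset.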