Eye Movements: Natural tasks, neural mechanisms
Talk Session: Friday, May 16, 2025, 3:30 – 4:45 pm, Talk Room 1
Talk 1, 3:30 pm
Consequences of temporal modulations on foveal vision
Ruitao Lin1, Alessandro Benedetto1,2, Michele Rucci1; 1University of Rochester, 2University of Florence
Previous research has shown that temporally modulating a stimulus improves acuity in the visual periphery. Here we investigate whether temporal modulation can also be used to enhance visual acuity in the foveola. In a forced-choice task, human observers (N=7) judged the orientation of Snellen E optotypes embedded in a 1/f noise background at 0-degree or 7.5-degree eccentricity. The stimulus luminance either remained constant or was temporally modulated in a square-wave manner at 3, 6, or 9 Hz. As expected, visual acuity improved in the periphery when the stimulus was modulated at a low temporal frequency (3 Hz), yielding significantly higher performance than during exposure to a non-modulated stimulus. In contrast, no improvement was observed in the foveola irrespective of the modulation frequency. A possible explanation of this result is that the foveola is particularly sensitive to the luminance modulations introduced by ocular drifts, the persistent fixational eye movements that continually modulate visual input signals. To test this hypothesis, we repeated the experiment with stimuli presented in the foveola, using a custom apparatus to counteract the consequences of eye movements and keep the stimulus immobile on the retina. We compared performance between optotypes of fixed luminance and optotypes whose luminance was temporally modulated at 5 Hz. Notably, temporal modulation of stimulus luminance improved acuity under retinal stabilization, even though no improvement was visible during normal viewing. Our results indicate that normal fixational eye movements generate spatiotemporal signals that, in the foveola, are sufficient for discriminating high-acuity stimuli. These results suggest ways to enhance vision in observers with abnormal fixational motion.
This work was supported by National Institutes of Health grants EY018363 and P30 EY001319, and by the University of Florence (Progetti competitivi 2025-2026).
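As an illustration of the stimulus manipulation described above, the minimal Python sketch below generates a per-frame square-wave luminance profile at a chosen modulation frequency. The function name and all parameter values (refresh rate, base luminance, contrast) are placeholders, not the settings used in the study.

```python
import numpy as np

def square_wave_luminance(duration_s, refresh_hz, mod_hz, base_lum, contrast):
    """Per-frame luminance for a square-wave-modulated optotype.

    Alternates between base_lum*(1+contrast) and base_lum*(1-contrast)
    at mod_hz.  All parameter names and values are illustrative.
    """
    t = np.arange(0.0, duration_s, 1.0 / refresh_hz)   # frame times (s)
    phase = (mod_hz * t) % 1.0                          # position within each modulation cycle
    modulation = np.where(phase < 0.5, 1.0, -1.0)       # square wave (+1 / -1)
    return base_lum * (1.0 + contrast * modulation)

# Example: 1-s presentation on a hypothetical 120-Hz display
for f in (3, 6, 9):
    lum = square_wave_luminance(1.0, 120, f, base_lum=50.0, contrast=0.5)
    print(f"{f} Hz: {lum.size} frames, {lum.min():.0f}-{lum.max():.0f} cd/m^2")
```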
Talk 2, 3:45 pm
Determining wavelength-in-focus for polychromatic visual stimuli
Benjamin M Chin1, Martin S Banks1, Derek Nankivil2, Austin Roorda1, Emily A Cooper1; 1University of California, Berkeley, 2Johnson & Johnson Vision Care
Visual stimuli encountered in the natural environment are typically polychromatic, comprising a combination of visible wavelengths. But the human eye can only bring one wavelength into focus at a time, due to chromatic aberration in its optics. Models of human vision often assume that the eye focuses on light near the peak of the luminosity function, ~555 nm. But in reality, the wavelength-in-focus likely varies depending on the stimulus (Finch et al., 2024). The goal of the present study was to identify the wavelength that the human eye brings into focus for stimuli that vary in color. First, we measured the magnitude of each participant's (n=9) longitudinal chromatic aberration (LCA). This was accomplished via a psychophysical task in which participants indicated the orientation of a Gabor flashed briefly at one of nine virtual distances on an OLED display. The distance at which peak performance occurred for different colors was used to constrain a model of LCA by Thibos et al. (1992). Next, we measured the participants' point spread functions (PSFs) with a Shack-Hartmann wavefront sensor recording at 30 Hz as they focused on three-letter words for three seconds. Stimuli varied in their relative proportions of long, middle, and short wavelengths. Combined with the individual LCA curves, these measurements allowed us to determine the wavelength that was in best focus for each stimulus. We defined best focus as the absence of defocus aberration in the wavefront for a point source on the stimulus. Across stimuli of different colors, the wavelength-in-focus varied notably: it shifted towards longer wavelengths when the stimulus had more long-wavelength content, and towards shorter wavelengths when it had more short-wavelength content. Preliminary modeling suggests this behavior could be driven by a cone-opponent mechanism with negative weights on S- or M-cones and positive weights on L-cones.
This work was supported by the NSF (#2041726) and the NIH (T35EY007139, K99EY036497).
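The Thibos et al. (1992) chromatic-eye model cited above relates chromatic defocus to wavelength, which makes the step from a measured defocus to a wavelength-in-focus a simple inversion. The sketch below assumes the commonly cited fit parameters of that reduced-eye model and is illustrative only; it is not the authors' fitting or analysis pipeline.

```python
import numpy as np

# Commonly cited fit parameters of the Thibos et al. (1992) "chromatic eye"
# reduced-eye model (wavelength in micrometers); defocus is ~0 D near 589 nm.
# Treat these values and the inversion below as an illustrative sketch.
P, Q, C = 1.68524, 0.63346, 0.21410

def chromatic_defocus(wavelength_nm):
    """Chromatic difference of refraction (diopters) at a given wavelength."""
    lam_um = wavelength_nm / 1000.0
    return P - Q / (lam_um - C)

def wavelength_in_focus(measured_defocus_d):
    """Invert the model: which wavelength carries the given defocus (D)?"""
    lam_um = C + Q / (P - measured_defocus_d)
    return 1000.0 * lam_um

print(chromatic_defocus(555))      # ~ -0.17 D relative to the model's 589-nm zero
print(wavelength_in_focus(0.0))    # ~ 589 nm
print(wavelength_in_focus(-0.5))   # a shorter wavelength is in focus
```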
Talk 3, 4:00 pm
The optokinetic response in fruit flies is tuned to typical visual consequences of self-motion
Lisa M. Kroell1, Etienne Serbe-Kamp1, Lisa M. Fenk1, Martin Rolfs2; 1Max Planck Institute for Biological Intelligence, 2Humboldt-Universität zu Berlin
Recent evidence shows that Drosophila move the retinas below their rigid compound eyes to smoothly track and thereby stabilize visual image shifts (Fenk et al., 2022). Here, we suggest that this optokinetic response is tuned to the idiosyncratic visual consequences of self-motion. Male flies were reared in darkness and, at 8–10 days of age, exposed to 40 minutes of visual stimulation in a closed- or open-loop environment. In both conditions, head-fixed flies walked on a floating ball while viewing a vertical square-wave grating. In the closed-loop setting, a rotation of the ball along the yaw-axis produced a horizontal translation of the grating on screen (at gains of 0.8 or 6), simulating retinal image shifts during natural locomotion. In the open-loop condition, we presented a replay of stimulus translations produced by flies in the closed-loop setting. We thus obtained pairs of flies that had been exposed to identical temporal frequency (TF) information throughout their lifespan, yet only half of them had generated the underlying image shifts through active self-motion. To measure the temporal tuning of the optokinetic response, we moved a vertical grating horizontally across the screen at 0.4–10 Hz. We simultaneously recorded the position of the deep pseudopupil, a virtual image on the fly retina, with video-based infrared tracking. Across both gain conditions, flies that had actively produced a certain TF range during the exposure phase now followed these frequencies more readily with their retinas than their open-loop counterparts. Moreover, while dark-reared flies initially executed slower retinal movements than light-reared conspecifics, the closed-loop, high-gain exposure condition raised retinal movement velocities significantly above the light-reared baseline. Our findings suggest a surprising plasticity of the fruit fly optokinetic response: To stabilize image shifts during active locomotion with maximum efficacy, retinal movements preferentially follow visual transients that match the perceptual consequences of self-motion.
This research was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. [865715 – VIS-A-VIS] to MR).
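One generic way to quantify the temporal-frequency tuning of a tracking response like the one described above is to project both the stimulus and the response traces onto a sinusoid at the stimulus frequency and take the amplitude ratio as the gain. The sketch below illustrates that idea on synthetic data; the demodulation approach, function names, and parameter values are assumptions, not the authors' analysis.

```python
import numpy as np

def response_gain(stim_pos, retina_pos, tf_hz, sample_hz):
    """Tracking gain at the stimulus temporal frequency.

    Projects both traces onto a complex exponential at tf_hz and takes the
    ratio of response to stimulus amplitude.  Illustrative sketch only.
    """
    t = np.arange(stim_pos.size) / sample_hz
    carrier = np.exp(-2j * np.pi * tf_hz * t)
    stim_amp = np.abs(np.mean(stim_pos * carrier))
    resp_amp = np.abs(np.mean(retina_pos * carrier))
    return resp_amp / stim_amp

# Synthetic example: the response follows a 2-Hz stimulus at half amplitude
fs, tf = 200.0, 2.0
t = np.arange(0, 10, 1 / fs)
stim = np.sin(2 * np.pi * tf * t)
retina = 0.5 * np.sin(2 * np.pi * tf * t - 0.3) + 0.05 * np.random.randn(t.size)
print(response_gain(stim, retina, tf, fs))   # ~0.5
```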
Talk 4, 4:15 pm
Temporal Dynamics of Oculomotor and Perceptual Adaptation in Response to Visual-Vestibular Conflict
Phillip Guan1, Zhetuo Zhao1,3, Xiuyun Wu2, T Scott Murdison2; 1Reality Labs Research, Meta, 2Reality Labs, Meta, 3University of Rochester
The consistency between vestibular signals and retinal image motion during head movement is crucial for both the vestibulo-ocular reflex (VOR) and the perception of visual stability. Near-eye optics in prescription eyewear and head-mounted displays (HMDs) can introduce optical distortions to visual input while vestibular signals remain unaffected, leading to visual-vestibular conflict (VVC). This conflict may reduce the efficacy of the default VOR response, potentially leading to perceptual errors in visual stability and triggering adaptations in both the VOR response and visual perception. These two adaptation processes have mostly been studied separately, and the relationship between them (whether one determines the other or they operate as distinct mechanisms) remains unclear. In this work, we characterize perceptual and VOR gain adaptations to five patterns of VVC and disentangle their respective contributions to motor and perceptual changes. Our study uses a custom-built system that provides repeatable VOR head motions, accurate head and gaze tracking, and distortion-free, wide field-of-view stimulus presentation. Our results suggest that perceptual changes in visual stability result from a combination of motor and perceptual adaptation. We observe both VOR adaptation that minimizes demands on smooth pursuit and visual-vestibular recalibration driven by the discrepancies between empirical and expected retinal image motion. Furthermore, we map the evolution of these changes over time (at one-minute intervals over nine minutes of adaptation) and find that VOR adaptation is most pronounced when motor adaptation demands align with retinal motion errors, which also leads to greater shifts in visual stability judgments.
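A standard way to estimate VOR gain from head and eye velocity traces is the least-squares slope of eye velocity regressed on head velocity, computed in successive time windows to track adaptation. The sketch below is a generic illustration on synthetic data; it is not the apparatus-specific pipeline used in the study, and all parameter values are placeholders.

```python
import numpy as np

def vor_gain(head_vel, eye_vel):
    """Least-squares VOR gain: slope of eye velocity regressed on head velocity.

    An ideal compensatory VOR yields a gain near -1 (the eye counter-rotates
    the head).  Generic sketch, not the study's analysis.
    """
    head_vel = head_vel - head_vel.mean()
    eye_vel = eye_vel - eye_vel.mean()
    return np.dot(head_vel, eye_vel) / np.dot(head_vel, head_vel)

# Synthetic example: the eye compensates 90% of a 1-Hz sinusoidal head rotation
fs = 500.0
t = np.arange(0, 9 * 60, 1 / fs)                 # nine minutes of "adaptation"
head = 40 * np.sin(2 * np.pi * 1.0 * t)          # head velocity, deg/s
eye = -0.9 * head + np.random.randn(t.size)      # noisy compensatory eye velocity
for minute in range(9):                          # gain at one-minute intervals
    sl = slice(int(minute * 60 * fs), int((minute + 1) * 60 * fs))
    print(minute + 1, round(vor_gain(head[sl], eye[sl]), 3))
```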
Talk 5, 4:30 pm
Differential Effects of Peripheral and Central Vision Loss on Scene Perception and Eye Movement Patterns
Byron A. Johnson1, Michael Beyeler1, Miguel P. Eckstein1; 1University of California, Santa Barbara
Peripheral (PVL) and central vision loss (CVL) are irreversible visual impairments that significantly affect visual tasks such as search and reading. Studies have shown that patients with vision loss exhibit reduced visual search accuracy and reading speed. While these performance deficits are well documented, less is known about how PVL and CVL affect the perception of natural scenes and social cues. To investigate, we tested 32 sighted observers using a gaze-contingent simulation of PVL, CVL, or no impairment. Participants viewed 120 natural scenes (half depicting social interactions and half from the MS COCO dataset) and generated descriptions after one or three saccades. PVL was simulated with a 10-degree clear window surrounded by Gaussian blur, while CVL applied a 10-degree Gaussian blur centered on fixation. Eye movement data were analyzed by computing correlations between fixation heat maps across viewing conditions for each scene. Description quality for each scene was rated for semantic similarity to gold-standard descriptions. Results revealed a significant three-way interaction between viewing condition, scene type, and saccade count (F=3.978, p=.018). Interestingly, when viewing social interaction scenes with one saccade, descriptions generated with PVL were rated lower than those generated with CVL (p=.0001, Cohen's d = .3045) and with no impairment (p < .0001, Cohen's d = .5206). Fixation heat map correlations between CVL and no impairment were the lowest across scene types and saccade counts (F=52.907, p < .0001), indicating that CVL altered fixation patterns more than either no impairment (p < .0001, Cohen's d = 1.29) or PVL (p < .0001, Cohen's d = 1.02). These findings suggest distinct effects underlying scene perception in PVL and CVL: PVL reduces semantic understanding of scenes, while CVL alters gaze behavior. This work underscores the need for tailored interventions based on impairment type to improve daily functioning for individuals with vision loss.
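The gaze-contingent simulations described above amount to blending a sharp and a blurred version of the scene around the current gaze position. The sketch below illustrates one way this could be rendered for a grayscale image; the window radius, blur width, and rendering details are placeholders rather than the study's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_vision_loss(image, gaze_xy, radius_px, blur_sigma, mode):
    """Blend sharp and blurred versions of a grayscale image around gaze.

    mode="PVL": clear inside the window, blurred outside (peripheral loss).
    mode="CVL": blurred inside the window, clear outside (central loss).
    radius_px would correspond to a 10-degree field given the display
    geometry; all values here are illustrative placeholders.
    """
    blurred = gaussian_filter(image.astype(float), sigma=blur_sigma)
    ys, xs = np.indices(image.shape)
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    inside = dist <= radius_px
    if mode == "PVL":
        return np.where(inside, image, blurred)
    if mode == "CVL":
        return np.where(inside, blurred, image)
    return image.astype(float)

# Example on a synthetic grayscale scene, gaze at the image center
scene = np.random.rand(480, 640)
out = simulate_vision_loss(scene, gaze_xy=(320, 240), radius_px=150,
                           blur_sigma=8, mode="CVL")
```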