Binocular Vision

Talk Session: Sunday, May 18, 2025, 2:30 – 4:30 pm, Talk Room 2

Talk 1, 2:30 pm

Real-world Statistical Regularity in Binocular Rivalry: the advantage of good exemplars

Yiwen Wang1, Ling Lee Chong1, Diane M. Beck1; 1University of Illinois at Urbana-Champaign

Real-world statistical regularities, unlike regularities introduced in experimental settings, are learned through extensive exposure over a lifetime in the natural visual environment. These natural patterns reflect the consistent features and structure of real-world scenes and objects. By exploiting these predictable patterns, the visual system processes statistically regular stimuli more quickly and effortlessly than less regular ones, enhancing perceptual efficiency (Beck, Center, & Shao, 2024). One example of real-world statistical regularity is the distinction between good and bad exemplar images. Good exemplars, which are highly representative of a category, are more easily detected than bad exemplars. Building on this, the current study investigated whether statistical regularity influences perceptual selection in binocular rivalry, where conflicting images presented to the two eyes compete for perceptual dominance. Participants were shown two images from the same scene category (i.e., beach, mountain, city, or highway). One image was a good exemplar and the other a bad exemplar of that category, with each image presented to a different eye. Results revealed that statistical regularity biased perceptual selection toward the good exemplar: good exemplars were more likely to be selected as the initial percept and had faster perceptual onset times than bad exemplars. Our results align with the predictive coding framework of binocular rivalry, in which good exemplars, having higher priors, are more likely to dominate perception over bad exemplars. These findings extend work on statistical learning (e.g., Denison, Piazza, & Silver, 2011) to real-world statistical regularities that are acquired over a lifetime rather than within an experiment.

Talk 2, 2:45 pm

Is average dominance phase duration a reliable measure of multistable stimuli dynamics?

Alexander Pastukhov1,2, Paula Finkenauer1, Leonie Littek1, Lea Voss1, Claus-Christian Carbon1,2; 1University of Bamberg, 2EPÆG Research Group, Bamberg, Germany

When participants view stimuli compatible with multiple perceptual interpretations, their perception continuously switches. Multistable perception is often characterized by an average dominance phase duration, particularly in studies that compare perception between groups (patients versus healthy controls, older versus younger adults, etc.). Here, we asked how reliable this measure is and how its variability across sessions compares to its variability within a session. We tested this by having 31 participants report, over 3-5 days, on five bistable stimuli (two versions of the kinetic-depth effect that were identical in appearance but differed in rotation axis, the Necker cube, a moving plaid, and auditory streaming) in three-minute blocks, presented twice per session in random order. With this design, for a given stimulus we can pick a pair of sessions, each two blocks long, and directly compare a statistic of our choice within sessions (consistency of the first and second block within each of sessions A and B) versus between sessions (consistency of the first blocks of sessions A and B, and likewise for the second blocks). Using this approach, we compared block pairs within and between sessions on average phase duration and a state-dominance index, using correlation, average phase differences, variance, and consistency of participant and stimulus order as measures. First, we found no systematic changes over sessions. Second, for most statistics the variance between sessions was comparable to the variance within sessions, so reliability is not compromised by measuring over multiple days. Third, and critically, the variability of average dominance phase duration was so great even within a session that it did not allow us to reliably differentiate between participants or between stimuli (within participants). Taken together, our results argue for caution when using average dominance phase duration as a measure of differences between participants.
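
As a concrete illustration of the block-pairing logic described above, the following minimal sketch computes within-session and between-session differences in average dominance phase duration. This is not the authors' analysis code; the data layout, function names, and values are hypothetical.

```python
import numpy as np
from itertools import combinations

def mean_phase_duration(phases):
    """Average dominance phase duration (seconds) for one three-minute block."""
    return float(np.mean(phases))

def within_between_differences(blocks):
    """Compare average phase duration within vs. between sessions.

    `blocks[session]` holds two blocks; each block is a list of dominance
    phase durations for one stimulus and one participant (hypothetical layout).
    Returns absolute differences of block means within each session and
    between matching blocks of each session pair.
    """
    means = {s: [mean_phase_duration(b) for b in blocks[s]] for s in blocks}
    within = [abs(m[0] - m[1]) for m in means.values()]          # block 1 vs. block 2, same session
    between = [abs(means[a][i] - means[b][i])                    # same block index, different sessions
               for a, b in combinations(means, 2) for i in (0, 1)]
    return within, between

# Made-up phase durations (seconds) for two sessions of one stimulus:
blocks = {"A": [[2.1, 3.4, 1.8], [2.6, 2.9]],
          "B": [[4.0, 1.5, 2.2], [3.1, 2.4]]}
print(within_between_differences(blocks))
```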

Talk 3, 3:00 pm

Functional processing asymmetries between nasal and temporal hemifields during interocular conflict

Chris L.E. Paffen1, Surya Gayet1; 1Utrecht University

We recently reported that targets presented to the nasal (i.e., inner) visual hemifield of a single eye have a processing advantage over targets presented to the temporal (i.e., outer) hemifield in breaking continuous flash suppression (bCFS; Sahakian et al., 2022). We speculated that this nasal advantage benefits natural vision by prioritizing fixated objects in the nasal hemifield of one eye over (nearby) occluders in the temporal hemifield of the other. We investigated this using an interocular grouping paradigm in which image A was presented to the nasal hemifields (i.e., the right side of the left eye and the left side of the right eye), while image B was presented to the temporal hemifields (i.e., the left side of the left eye and the right side of the right eye). The images were either blurred or sharp, to mimic the difference in focus between near and far objects in real-world vision. Observers continuously reported perceiving image A or B (indicating interocular grouping of either the nasal or temporal hemifields) or a mixture of both (indicating perceptual dominance of one eye’s image). We found more grouping (1) for sharp than for blurred images and, surprisingly, (2) for temporal than for nasal hemifields. Applying bCFS in the same participants, however, replicated the nasal advantage from our earlier work. Thus, temporal parts of an image were grouped more than nasal ones, while nasal targets broke suppression faster than temporal ones. We suggest that these discrepant results reflect distinct monocular occlusion conditions in natural viewing: a nasal hemifield advantage is adaptive when an occluder is in the temporal hemifield of a single eye (e.g., when a fixated object passes behind an occluder), whereas a temporal hemifield advantage is preferred when an occluder is in the nasal hemifields of both eyes (e.g., a person blocking your central view at the cinema).

Talk 4, 3:15 pm

Integrating Visual Input from Both Eyes: Binocular Retinotopic Organization based on Monocular Input

Abdalla Z Mohamed1, Omnia Hassanin1,2, Rania Ezzo1, Alessio Fracasso3, Jonathan A Winawer4, Bas Rokers1,4,5,6; 1New York University Abu Dhabi, Abu Dhabi, UAE, 2Vilcek Institute of Graduate Biomedical Sciences, New York University Grossman School of Medicine, NY, USA, 3School of Psychology and Neuroscience, University of Glasgow, Hillhead Street 62, Glasgow, G12 8QE, Scotland, UK, 4Department of Psychology and Center for Neural Science, New York University, NY, USA, 5NYUAD Research Institute, New York University Abu Dhabi, Abu Dhabi, UAE, 6ASPIRE Precision Medicine Research Institute, Abu Dhabi, UAE

Introduction: Our brain combines visual information from both eyes to create a coherent representation of the visual world. Prior work suggests that BOLD response amplitude is increased under binocular input, but that population receptive field (pRF) size is unchanged. Here we further investigated pRF properties under monocular and binocular stimulation across the visual hierarchy. Methods: Fifteen healthy participants (9 males, age = 28.7 ± 10.7 years) completed nine 5-minute functional MRI (fMRI) retinotopic mapping scans. Mapping stimuli consisted of dynamic, colorful textures windowed by bar-, wedge-, or ring-shaped apertures within a circular viewing field (7.5° radius), with participants maintaining fixation. The stimuli were presented either monocularly or binocularly using a VPixx projector and polarizing lenses. pRF models were solved using Vistasoft (https://github.com/vistalab/vistasoft). We compared pRF properties, including amplitude and size, between binocular and monocular conditions. Results: Binocular stimuli elicited greater response amplitudes than monocular stimuli, but the amplitudes were much smaller than predicted by linear summation of the two monocular responses. Amplitudes were 15.7 ± 8.9% greater for binocular than monocular responses in V1 (mean ± SD across subjects; PFDR < 0.001), 7.2 ± 6.8% in V2 (PFDR = 0.004), 6.7 ± 6.1% in V3 (PFDR = 0.003), 7.6 ± 7.5% in hV4 (PFDR = 0.004), and 9.2 ± 10.3% in TO1 (PFDR = 0.004). pRF size was also greater in V1 for binocular than for monocular stimuli (29.9 ± 35.3%, PFDR = 0.026). No significant differences in pRF size were observed between binocular and monocular stimuli in the extrastriate areas. Conclusion: The binocular responses exhibited subadditive summation, consistent with normalization by a shared pool of neurons. We speculate that the larger pRF size for binocular input reflects slight mismatches in the centers of the monocular receptive fields represented within a voxel. These mismatches would have the largest impact on pRF size where pRFs are small, namely in V1, consistent with our observations.
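
To make the subadditivity argument explicit, the comparison implied above can be written as follows. The notation is ours, not the authors': R_mono denotes a monocular response amplitude and R_bin the binocular amplitude in a given visual area, with the two monocular responses assumed roughly equal.

```latex
% R_mono: monocular response amplitude; R_bin: binocular response amplitude.
\[
  R_{\mathrm{bin}}^{\mathrm{linear}} \approx 2\,R_{\mathrm{mono}},
  \qquad
  \Delta_{\%} = 100 \times \frac{R_{\mathrm{bin}} - R_{\mathrm{mono}}}{R_{\mathrm{mono}}} .
\]
% Linear summation of two roughly equal monocular responses predicts an
% increase of about 100%; the reported increases (e.g., 15.7% in V1, about
% 7-9% in extrastriate areas) are far smaller, hence the summation is
% subadditive.
```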

This research is funded by the ASPIRE Precision Medicine Research Institute Abu Dhabi award, grant number VRI-20-10, and by the NYUAD Center for Brain and Health, funded by Tamkeen under NYU Abu Dhabi Research Institute grant CG012. NIH grants R01 EY027401 and R01 EY033628 supported JW.

Talk 5, 3:30 pm

The best stereoacuity may not be at the fovea

Preeti Verghese1, Ângela Gomes Tomaz2, Adrien Chopin3, Dennis Levi4; 1Smith Kettlewell Eye Research Institute, 2UC Berkeley, 3Smith Kettlewell Eye Research Institute, 4UC Berkeley

Classic studies have shown that sensitivity to stereoscopic disparity declines with eccentricity from the fovea (Blakemore, 1970; Cumming & DeAngelis, 2001). We set out to examine whether stereoacuity is indeed highest at the fovea. We measured local stereo sensitivity in 19 controls and 8 amblyopic participants at the fovea and along the horizontal and vertical meridians at 2.5°, 5°, and 10° eccentricity. Participants performed a front/back judgment on a square patch whose disparity varied adaptively. Fixation was monitored with eye tracking of the dominant eye. Results showed that the best locus for stereopsis was often not at the fovea. This was true for 13 of the 19 controls, 70% of whom had sensitivities an order of magnitude worse at the fovea than at the peripheral locus with the best sensitivity. The eccentricity of the “best” peripheral locus was 2.5° (n=6), 5° (n=4), or 10° (n=3), with a tendency to lie in the lower visual field and at near disparities. The 6 controls whose stereo sensitivity was best at the fovea tended to have very good stereoacuity (<12 arcsec); those with best loci outside the fovea tended to have worse stereoacuity (172 ± 60 arcsec). Among the amblyopic participants, 2 of 3 anisometropic observers had their best stereoacuity at the fovea; the others (1 anisometropic, 3 mixed, and 2 strabismic amblyopes) showed no detectable stereopsis at the fovea but measurable stereopsis in the periphery. Furthermore, there was a strong correlation between measures of clinical stereoacuity (Randot and Asteroid) and psychophysical stereoacuity at the best locus, whether foveal or peripheral. Taken together, our results indicate that the locus of best stereopsis is often not at the fovea and that clinical measures of stereopsis are correlated with the best stereoacuity, whether it occurs at the fovea or in the periphery.

NIH R01 EY034370

Talk 6, 3:45 pm

Exocentric information influences egocentric distance estimation in perception and action

Chaeeun Lim1, Dhanraj Vishwanath2, Fulvio Domini1; 1Brown University, 2University of St Andrews

Previous research suggests that estimates of the absolute distance of an object from the observer (egocentric information) and of the object's depth (exocentric information) rely on different cues, each associated with distinct biases and roles in guiding action. Ocular vergence is a primary cue to absolute distance in near space, while depth is specified by exocentric cues such as relative disparity and texture. Although exocentric cues alone do not specify absolute distance, we found evidence that they are integrated with vergence signals, influencing absolute distance estimates. In a series of perception and action tasks, participants viewed a stereoscopic paraboloid protruding toward them and either reached for its front or back (reaching), grasped it front to back (grasping), or compared the front and back locations to a reference 2D object (perceptual adjustment). We varied object distance (near vs. far) such that the back of the near object (Near-Back) was physically aligned with the front of the far object (Far-Front). If distance perception depended solely on vergence, Near-Back and Far-Front would correctly appear equidistant. However, if depth-from-disparity, known to be overestimated in near space, is integrated with vergence-specified distance, then Near-Back would be perceived as farther than Far-Front. In the reaching, grasping, and perceptual adjustment tasks alike, Near-Back was consistently perceived as farther than Far-Front, despite being equidistant from the observer. This discrepancy intensified when monocular depth cues were added to specify object depth. Remarkably, the discrepancy persisted even when the front and back locations were marked by a pair of isolated stereoscopic dots, suggesting that the relative disparity between the two dots was sufficient to affect their perceived locations even without a continuous surface. These findings demonstrate that exocentric cues affect distance perception even when they are not directly relevant, raising intriguing questions about how the visual system selects and integrates information in a scene.
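
The contrast between the two predictions above can be written as a simple sum. The notation is ours, not the authors' model: D_near is the vergence-specified distance to the front of the near object, d the physical front-to-back depth, and d-hat the disparity-specified depth estimate, which is overestimated in near space.

```latex
% If only vergence mattered, the two locations would coincide:
%   D(Near-Back) = D_near + d = D(Far-Front).
% If disparity-specified depth (overestimated in near space, \hat{d} > d)
% is integrated with vergence-specified distance:
\[
  \hat{D}(\text{Near-Back}) = D_{\mathrm{near}} + \hat{d}
  \;>\; D_{\mathrm{near}} + d
  \;=\; D(\text{Far-Front}),
\]
% so the back of the near object should appear farther than the front of
% the far object, which is the pattern observed in all three tasks.
```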

This material is based upon work supported by the National Science Foundation under Grant No. 2120610.

Talk 7, 4:00 pm

Monovision-induced misperception of motion in general and presbyopic populations

Callista Dyer1, Victor Rodriguez-Lopez2, Johannes Burge1,3,4; 1Department of Psychology, University of Pennsylvania, PA, 2Institute of Optics, Spanish National Research Council, IO-CSIC, 3Neuroscience Graduate Group, University of Pennsylvania, PA, 4Bioengineering Graduate Group, University of Pennsylvania, PA

Monovision is a common prescription lens correction for presbyopia that focuses one eye at far distances and the other at near distances. A recent study showed that monovision can induce a variant of the classic Pulfrich effect: a visual illusion that causes dramatic misperceptions of the depth of moving objects. This variant, the reverse Pulfrich effect, arises because the discrepant optical lens powers, and the consequent interocular differences in blur, cause an interocular processing delay: blurry images are processed milliseconds faster than sharp images. Shockingly, asynchronies of only 2 ms can cause depth misperceptions of multiple meters, especially when the viewed object is fast-moving (e.g., while driving). However, the illusion has been demonstrated in only a small number of individuals to date. To determine the scientific generality and potential clinical significance of these findings, it is important to establish the pervasiveness of monovision-induced depth misperceptions. Here, we measured the prevalence of monovision-induced Pulfrich illusions in much larger samples of the general (n=45) and presbyopic (n=17) populations. The stimulus consisted of two sets of horizontally moving bars; one set was positioned directly above the other, and the two sets moved in opposite directions. Interocular differences in blur, mimicking monovision corrections, were induced with trial lenses (1.0D differences) and with matched onscreen Gaussian blurring. The task was to report which set of bars appeared closer in depth. Psychometric functions were measured with binocular disparity as the independent variable. The reverse Pulfrich effect occurred in 84% of the general population (lenses: mean=-1.49 ms, SD=2.47 ms; Gaussian blur: mean=-1.27 ms, SD=1.53 ms) and in 100% of presbyopic subjects (lenses: mean=-1.86 ms, SD=1.73 ms; Gaussian blur: mean=-3.64 ms, SD=2.86 ms). The classic Pulfrich effect, which we also measured, occurred in 89% of the general population and 94% of presbyopic subjects. The reverse Pulfrich effect is reliably induced by optical power differences smaller than the 1.5D difference typically prescribed to presbyopes.
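
The millisecond delay estimates reported above follow from standard Pulfrich geometry. The notation is ours and not necessarily the authors' fitting procedure: for a target moving horizontally at speed v, an interocular processing delay corresponds to an equivalent binocular disparity, so the delay can be read off the point of subjective equality (PSE) of the psychometric function.

```latex
% v: horizontal target speed; \Delta t: interocular processing delay;
% \delta: equivalent (neural) disparity; \delta_PSE: onscreen disparity
% that nulls the illusory depth difference (the PSE of the psychometric function).
\[
  \delta = v\,\Delta t
  \quad\Longrightarrow\quad
  \Delta t = \frac{\delta_{\mathrm{PSE}}}{v}.
\]
% Even a delay of about 2 ms corresponds to a substantial equivalent
% disparity when v is large, which is why fast-moving objects yield
% large misperceptions of depth.
```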

This work was supported by the National Eye Institute and the Office of Behavioral and Social Sciences Research, National Institutes of Health Grant R01-EY028571 to J.B.