Visual-auditory integration in simulated and native ultra-low vision
Poster Presentation: Saturday, May 17, 2025, 2:45 – 6:45 pm, Pavilion
Session: Multisensory Processing: Audiovisual integration
Yi Xu1, Navya Sreeram1, Rob Chun1, Roksana Sadeghi2, Chau Tran3, Will Gee3, Gislin Dagnelie4, Ione Fine5,6, Arathy Kartha1; 1Department of Biological and Vision Sciences, SUNY College of Optometry, 2University of California, Berkeley, Herbert Wertheim School of Optometry, 3BaltiVirtual, 4Wilmer Eye Institute, Johns Hopkins School of Medicine, 5University of Leeds, Leeds, UK, 6University of Washington, Seattle
Individuals with normal vision (NV) integrate information from their senses near-optimally. People with ultra-low vision (ULV) traditionally undergo blind rehabilitation, in which they are taught to rely on tactile and auditory cues to compensate for their vision loss. This rehabilitation might help blind individuals integrate their senses optimally. Alternatively, being taught to ignore vision and rely on the other senses might lead to sub-optimal integration when useful vision remains. Fourteen NV participants wearing ULV simulation filters (sULV; 2.05 ± 0.17 logMAR) and one ULV participant with advanced retinitis pigmentosa (2.0 logMAR, visual field < 10 degrees) completed a spatial localization task in a virtual reality environment under unimodal (V = visual, A = auditory) and bimodal (VA = visual-auditory) conditions. Two flashing and/or ringing phones appeared in sequence at different locations in the central field (-4.5 to 4.5 degrees), and participants reported whether the first was to the left or right of the second. Percent correct as a function of stimulus separation was fit with a cumulative Gaussian distribution. The standard deviations, σ, from the unimodal conditions (V and A) were used to predict optimal performance in the bimodal (VA) condition. For the sULV group, mean σ was V = 1.5 ± 0.4, A = 7.6 ± 6.8, and VA = 1.9 ± 0.5, with an optimal predicted VA = 1.5. The difference between predicted and empirical VA was significant (paired t-test; t(13) = 3.3; p = 0.003), showing that sULV participants integrated visual and auditory cues sub-optimally. For the ULV participant, σ was V = 3.1, A = 6.1, and VA = 2.2, with an optimal predicted value of 2.7. These results suggest that, through years of living with her condition, the native ULV participant may have learned to integrate optimally, unlike the sULV participants, who had little time to adapt to the filters.
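The optimal predictions above follow the standard maximum-likelihood cue-combination rule, in which bimodal precision is the sum of the unimodal precisions (1/σ_VA² = 1/σ_V² + 1/σ_A²). A minimal sketch, assuming this rule and the unimodal σ values reported in the abstract, reproduces the predicted VA thresholds:

```python
import math

def optimal_sigma(sigma_v: float, sigma_a: float) -> float:
    """Maximum-likelihood prediction for the bimodal standard deviation.

    Under optimal (inverse-variance-weighted) cue integration:
        1 / sigma_VA**2 = 1 / sigma_v**2 + 1 / sigma_a**2
    so the predicted bimodal sigma is always <= the better unimodal sigma.
    """
    return math.sqrt(1.0 / (1.0 / sigma_v**2 + 1.0 / sigma_a**2))

# Group mean unimodal sigmas reported in the abstract:
svlv = optimal_sigma(1.5, 7.6)  # sULV group: close to the reported prediction of 1.5
ulv = optimal_sigma(3.1, 6.1)   # native ULV participant: close to the reported 2.7
```

Because the auditory σ is large relative to the visual σ for the sULV group, the optimal prediction (≈1.5) is dominated by vision, so the empirical VA of 1.9 exceeding it indicates the added auditory cue was not weighted optimally.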