Spatiotemporal task parameters modulate multisensory response enhancement in saccadic latency

Poster Presentation: Sunday, May 18, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Eye Movements: Saccades, remapping

Jessica Chalissery1,2, Anthony Herdman1,2,3,4, Miriam Spering1,2,3,4,5, Philipp Kreyenmeier1,2; 1University of British Columbia, 2Graduate Program in Neuroscience, University of British Columbia, 3School of Audiology, University of British Columbia, 4Djavad Mowafaghian Centre for Brain Health, University of British Columbia, 5Institute for Computing, Information, and Cognitive Systems, University of British Columbia

Saccadic latencies are typically shorter in response to multisensory than to unisensory stimuli. However, studies using different saccade protocols report varying degrees of multisensory response enhancement. This study investigates which configuration of spatial and temporal task parameters yields the greatest multisensory response enhancement in saccadic latency. Human observers (n=9) made saccades to visual targets (a small black dot presented 10 degrees to the left or right of screen center), auditory targets (a burst of white noise played to the left or right ear through headphones), or combined, congruent audiovisual targets. We measured saccadic latency with an EyeLink 1000 eye tracker as an indicator of sensory processing speed and to determine the efficacy of multisensory integration across three task configurations: (1) gap & placeholders, (2) no-gap & no-placeholders, and (3) no-gap & placeholders. As expected, a gap reduced saccadic latency to visual targets, whereas placeholders sped up saccades to auditory targets. Across all task configurations, we observed shorter latencies to audiovisual targets than to visual or auditory targets, indicating multisensory response enhancement (all p<0.001). To assess whether this enhancement could be explained by statistical facilitation of the redundant audiovisual signal, we compared observers’ audiovisual latencies to the upper bound of a race model based on the unisensory target conditions. Task configuration strongly modulated the degree to which audiovisual latencies outperformed the race model (F(2,16)=6.42, p<0.01); only the no-gap & placeholders condition produced reliable and strong multisensory integration. Our findings demonstrate that the spatiotemporal parameters of a simple saccade task modulate the magnitude of multisensory enhancement of saccadic latency. These findings provide a reliable foundation for exploring the mechanisms underlying multisensory perception and orienting behavior.
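The race-model comparison described above is typically implemented via a race model inequality in the style of Miller (1982): under statistical facilitation alone, the cumulative latency distribution for audiovisual targets cannot exceed the sum of the unisensory cumulative distributions. A minimal sketch of that test, using synthetic reaction-time data (the distributions, sample sizes, and time grid below are illustrative assumptions, not values from the study):

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times at each time point."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_v, rt_a, rt_av, t_grid):
    """Difference between the audiovisual ECDF and the race-model upper bound.

    Positive values mean audiovisual latencies are faster than statistical
    facilitation of the two unisensory signals can explain.
    """
    # Miller's inequality: F_AV(t) <= min(1, F_V(t) + F_A(t))
    bound = np.minimum(1.0, ecdf(rt_v, t_grid) + ecdf(rt_a, t_grid))
    return ecdf(rt_av, t_grid) - bound

# Synthetic latencies in ms, for illustration only
rng = np.random.default_rng(0)
rt_v = rng.normal(180, 25, 200)   # visual-only trials
rt_a = rng.normal(200, 30, 200)   # auditory-only trials
rt_av = rng.normal(155, 20, 200)  # audiovisual trials, faster than either

t = np.linspace(100, 300, 101)
violation = race_model_violation(rt_v, rt_a, rt_av, t)
print(violation.max() > 0)  # any positive value indicates a race-model violation
```

In practice the violation curve is computed per observer and condition, and its positive portion (e.g., its maximum or area) is compared across task configurations with an ANOVA, as in the F-test reported above.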