Arbitrary sounds facilitate visual search for congruent objects
53.556, Tuesday, May 14, 8:30 am - 12:30 pm, Vista Ballroom
L. Jacob Zweig1, Satoru Suzuki2, Marcia Grabowecky2; 1Department of Psychology, Northwestern University, 2Department of Psychology and Interdepartmental Neuroscience Program, Northwestern University
Multisensory correspondences are thought to form through Hebbian-type learning, whereby repeated exposure to coincident auditory and visual signals promotes the formation of auditory-visual associations. Previous research has demonstrated that feature-based multisensory correspondences influence perceptual processing (Iordanescu et al., 2010). For example, when searching for keys among clutter, participants localize the keys faster when the characteristic sound of keys jingling is played at the onset of search. It remains unclear, however, whether such facilitation of perceptual processing relies on natural correspondences or whether newly learned associations can facilitate processing in the same way. In the present study, we demonstrate that learned arbitrary sounds facilitate visual processing of an associated object, despite the absence of spatial information in the auditory signal. We trained participants to associate each of two pairs of visual stimuli with a single arbitrary sound. The visual stimuli were kaleidoscope images that varied in spatial frequency, shape, size, and color. Learning was verified with a two-alternative forced-choice task in which participants matched a presented sound to its associated kaleidoscope image. Following training, participants performed a visual search for target kaleidoscopes from one of the previously learned sets while a target-congruent or target-incongruent sound was presented simultaneously. A spatially uninformative target-congruent sound speeded visual search for the associated target object. Accuracy did not differ significantly between the two sound-congruency conditions, and there was no evidence of a speed-accuracy trade-off. These results suggest that audiovisual integration may facilitate visual processing and detection by increasing the salience of target objects.