Mapping the Idiosyncratic Recognition of Facial Expressions of Emotion

Poster Presentation: Saturday, May 17, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Face and Body Perception: Emotion

Anita Paparelli1, Lisa Stacchi1, Inês Mares2, Louise Ewing3, Marie L. Smith4, Roberto Caldara1; 1Eye and Brain Mapping Lab, Department of Psychology, University of Fribourg, Fribourg, Switzerland, 2William James Center for Research, Ispa – Instituto Universitário, Lisboa, Portugal, 3School of Psychology, University of East Anglia, Norwich, UK, 4School of Psychological Sciences, Birkbeck College, University of London, UK

The recognition of facial expressions of emotion (FER) is a critical biological skill of human social cognition. Recently, data-driven work from our laboratory revealed that Western adults sample facial information through idiosyncratic fixation patterns during FER, while maintaining comparable categorization accuracy (Paparelli et al., 2024). Importantly, we also reported that the same observer consistently adopts the same sampling strategy independently of the facial expression of emotion (FEE) being categorized. However, the factors underlying these idiosyncratic fixation patterns remain poorly understood. The current study provides further insight into this issue by assessing whether the individual differences observed in sampling strategies during FER relate to differences in information use. To probe this hypothesis, we tested healthy adult Western observers on an FEE-categorization task, first with an eye-tracking (ET) paradigm and subsequently with a Bubbles reverse correlation (RC) paradigm. Both experiments investigated the six basic FEEs, in addition to a neutral condition. In the ET paradigm, observers freely viewed images while their eye movements were recorded. In the Bubbles paradigm, stimuli were presented through masks that revealed different facial features across trials. Our results replicate previous findings, showing robust idiosyncratic fixation patterns with minimal FEE-dependency. In contrast, the results from the Bubbles RC paradigm show that information use is largely FEE-specific, with less variation across observers. By investigating for the first time information fixation and information use at the single-subject level, our data show that while individuals sample facial information differently during FER, they use the same facial features to categorize FEEs. This indicates that FER is achieved through a complex interplay of foveal and parafoveal visual signals integrated across fixations. FEE recognition can thus be achieved efficiently through distinct idiosyncratic biological tunings.

Acknowledgements: AP was supported by the Swiss National Science Foundation grant n° P000PS_227296 / 1. RC was supported by the Swiss National Science Foundation grant n° 10001C_201145 / 1.