Investigating the Independence of Face Shape and Motion Through Reverse Correlation

Poster Presentation: Monday, May 19, 2025, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Face and Body Perception: Parts and wholes

Raphael Tordjman1, Emily Martin2, Fabian Soto3; 1Florida International University

Face perception involves processing both static shape and dynamic motion cues, yet the degree of independence between these dimensions remains unresolved. Using reverse correlation, we investigated whether the information used to recognize face shape and natural face motion is separable across contexts. Dynamic, naturalistic stimuli were generated using 3D face-modeling software (MakeHuman) extended with a novel model of face motion (implemented in FaReT), with Gaussian noise applied to either shape or motion parameters. Participants classified stimuli by shape or motion in multiple contexts: participants in shape groups classified face shape in the context of different motions (natural vs. unnatural), and participants in motion groups classified face motion in the context of different shapes (i.e., identities). We then tested whether the information used during each task was altered by changes in context. The information participants used to classify a motion as natural depended strongly on the accompanying face shape; in other words, participants changed their expectations of how a face should move depending on the face's shape. This challenges the assumption of strict independence between these perceptual dimensions. In contrast, the information participants used to classify faces by shape showed no comparably strong dependence on the motion the face displayed. These results underscore the adaptive nature of face perception, and our novel methodology opens new avenues for studying the context-specific nature of face perception.
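The reverse-correlation logic described above can be sketched as a minimal simulation. This is an illustrative toy, not the study's actual pipeline: the parameter counts, the simulated linear observer, and the `template` vector are all hypothetical assumptions. The idea is that Gaussian noise is added to a baseline parameter vector (shape or motion), and the classification image is the mean noise on one response class minus the mean on the other.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_params = 2000, 20          # hypothetical trial and parameter counts
base = np.zeros(n_params)              # baseline shape/motion parameter vector

# Hypothetical observer with an internal template (unknown to the analysis)
template = rng.normal(size=n_params)

# Gaussian noise applied to the stimulus parameters on each trial
noise = rng.normal(scale=1.0, size=(n_trials, n_params))
stimuli = base + noise

# Simulated linear observer: responds "yes" when the noise matches the template
responses = (noise @ template) > 0

# Classification image: mean noise on "yes" trials minus mean on "no" trials
ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# For a linear observer, the classification image recovers the template
r = np.corrcoef(ci, template)[0, 1]
```

Comparing classification images estimated in different contexts (e.g., different accompanying face shapes) is one way to test whether the information used changes with context, as the study does.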