Individual differences in eye movements to faces are stable but malleable

Poster Presentation: Saturday, May 17, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Face and Body Perception: Individual differences

William G. Hayward1, Nianzeng Zhong1; 1Lingnan University

Although all faces have a highly similar spatial structure, previous studies have found individual differences in eye-movement patterns, showing that some people prefer to scan the upper region of faces (e.g. the eyes) whereas others prefer to look at the lower region (e.g. the nose or mouth). An unresolved question is whether these different fixation patterns are sensitive to the information that is encoded about a face. In this study, we explored whether the facial information available for encoding would affect idiosyncratic eye-movement patterns. Specifically, two groups of participants (an upper-focused group and a lower-focused group) performed two learning/recognition tasks. In one task they learned intact faces, and in the other they learned scrambled faces; in both tasks they were then given an old/new test for the studied faces, but always in the intact format. We expected that participants would primarily fixate on the eyes of scrambled faces during the study phase, and we predicted that this would lead to a more upper-focused eye-movement strategy for intact faces at test. We found that following both study tasks, the upper-focused group continued to show a more upper-focused pattern than the lower-focused group, suggesting that individuals’ looking preferences across the tasks are relatively consistent. In addition, both the upper- and lower-focused groups used a more upper-focused pattern when learning scrambled faces than when learning intact faces, suggesting that these stable idiosyncratic fixation patterns were nonetheless sensitive to the information encoded about a face. Taken as a whole, these results show that an observer’s fixation patterns when viewing a face are the product of an interplay between stable individual differences and context-specific task optimization.

Acknowledgements: This work was supported by a grant from the Hong Kong Research Grants Council (LU13605523) to William G. Hayward.