Exploring Facial Distinctiveness Through Deep Learning: Insights Across Image Formats and Gender

Poster Presentation: Saturday, May 17, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Face and Body Perception: Individual differences

Artem Pilzak1, Arda Erbayav2, Alice O'Toole3, Isabelle Boutet4; 1University of Ottawa, 2School of Psychology, 3SCOPE Lab

Introduction: Facial recognition relies heavily on distinctiveness: atypical faces are more recognizable than typical faces (e.g., Light et al., 1979). Objectively measurable face spaces are available from deep convolutional neural networks (DCNNs), which are highly accurate at face recognition. At VSS2024, we reported that FaceNet (Schroff et al., 2015, 2018), a pre-trained DCNN, can provide a model of human ratings of distinctiveness. Here, we asked (i) whether FaceNet can effectively quantify facial distinctiveness across variations in illumination and viewing angle, and (ii) whether we can replicate the findings reported at VSS2024 with another face database and across image formats.

Methods: We used 64 male and 64 female identities from the FEI database (de Oliveira Junior & Thomaz, 2006). For each identity, three image formats were tested: half-profile, front-profile under lighter illumination, and front-profile under darker illumination. For each identity and each image format, we computed a distinctiveness score based on the total cosine distance between that face's embedding and the embeddings of all other faces.

Results: DCNN-derived distinctiveness scores correlated strongly across image formats (r = 0.96 between half-profile and front-profile lighter illumination; r = 0.88 between half-profile and front-profile darker illumination; r = 0.91 between the two front-profile formats, lighter vs. darker illumination; all p < 0.001), indicating that DCNN embeddings robustly capture invariant aspects of face typicality. Distinctiveness scores averaged across formats correlated with human ratings of commonality (r = 0.31, p = 0.01) and typicality (r = 0.27, p = 0.03) for male faces, and with human ratings of memorability (r = 0.34, p < 0.01) and sociability (r = 0.29, p = 0.02) for female faces.

Conclusion: FaceNet effectively models facial distinctiveness across variable image conditions and correlates meaningfully with human-rated aspects of distinctiveness. This underscores the utility of DCNN embeddings for quantifying distinctiveness in studies of face recognition. Further analyses will explore associations between DCNN-derived distinctiveness scores and human ratings separately for each image format tested.
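As a minimal sketch of the scoring step described above (not the authors' actual pipeline, whose details are not given in the abstract), a per-identity distinctiveness score of the kind described — the total cosine distance between a face's embedding and the embeddings of all other faces — could be computed from a matrix of DCNN embeddings like this; the 128-dimensional size and the random stand-in vectors are assumptions for illustration:

```python
import numpy as np

def distinctiveness_scores(embeddings: np.ndarray) -> np.ndarray:
    """Score each face by its total cosine distance to all other faces.

    embeddings: (n_faces, dim) array, e.g. 128-D FaceNet-style vectors.
    Returns one distinctiveness score per face.
    """
    # L2-normalize rows so cosine similarity reduces to a dot product.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T            # pairwise cosine similarities
    dist = 1.0 - sim               # pairwise cosine distances
    np.fill_diagonal(dist, 0.0)    # exclude each face's distance to itself
    return dist.sum(axis=1)        # total distance to all other faces

# Toy usage with random stand-ins for real DCNN embeddings:
rng = np.random.default_rng(0)
fake_embeddings = rng.normal(size=(5, 128))
scores = distinctiveness_scores(fake_embeddings)
print(scores.shape)  # one score per identity
```

Faces far from all others in the embedding space (atypical faces) receive high scores, while faces near the center of the set (typical faces) receive low scores, matching the distinctiveness construct in the abstract.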

Acknowledgements: Supported by a Natural Sciences and Engineering Research Council of Canada Discovery Grant (2022-03998) to IB.