As the head turns: decoding head orientation in dynamic contexts

Poster Presentation: Tuesday, May 20, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Temporal Processing: Neural mechanisms, models

Sebastian Montesinos1, Lina Teichmann1, Shruti Japee1, Chris Baker1; 1National Institute of Mental Health

Studies of visual perception frequently use rapidly presented static images, even though natural visual input is dynamic. The current study investigates the extent to which representations evoked by static stimuli generalize to dynamic movies involving those same stimuli. To address this question, we used magnetoencephalography (MEG) to compare the time course of brain activity as participants viewed static images and dynamic movies of human faces. The static images depicted faces at varying head orientations, while the movies showed these faces transitioning between orientations, passing through the same orientations presented in the static image trials. We used time-resolved multivariate analyses (classification and regression) to compare the MEG signal patterns evoked by the different face orientations in the two contexts. Both analysis approaches indicate that head orientation information can be reliably decoded during static image viewing from about 100 ms after stimulus onset, with model performance peaking around 120–140 ms. Training models on MEG data from static trials and testing them on MEG data from movie trials, we found that head orientation information generalizes from static images to movies. However, we observed a temporal asynchrony between the two trial types: models trained on later parts of the static trials (300–400 ms) generalized best to movie trials. Together, these findings illuminate the similarities and differences in the temporal dynamics of processing across static and dynamic contexts and allow for testing predictive processing models of perception.
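As a rough illustration of the analysis logic described above, here is a minimal sketch of time-resolved decoding plus static-to-movie temporal generalization in Python with scikit-learn. Everything in it is an illustrative assumption rather than the authors' pipeline: the array shapes, the synthetic data, the five orientation labels, and the logistic-regression classifier are all placeholders (real analyses would start from preprocessed MEG epochs, e.g., from MNE-Python).

```python
# Illustrative sketch only: synthetic data and hypothetical shapes,
# not the authors' actual analysis pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical MEG epochs: (trials, sensors, time points).
n_static, n_movie, n_sensors, n_times = 200, 100, 272, 60
X_static = rng.standard_normal((n_static, n_sensors, n_times))
y_static = rng.integers(0, 5, n_static)   # assumed 5 head orientations
X_movie = rng.standard_normal((n_movie, n_sensors, n_times))
y_movie = rng.integers(0, 5, n_movie)     # orientation label per movie epoch (simplified)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 1) Time-resolved decoding within static trials: fit and cross-validate
#    a classifier independently at every time point.
static_acc = np.array([
    cross_val_score(clf, X_static[:, :, t], y_static, cv=5).mean()
    for t in range(n_times)
])

# 2) Static-to-movie generalization: train at each static time point,
#    test at each movie time point, yielding a train x test accuracy matrix.
gen = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf.fit(X_static[:, :, t_train], y_static)
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X_movie[:, :, t_test], y_movie)
```

In a real analysis, `static_acc` and the generalization matrix `gen` would be compared against chance (here 1/5) with appropriate statistics; an off-diagonal band of above-chance generalization in `gen` is the kind of pattern that would reflect the temporal asynchrony reported in the abstract.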