Representation of goal-modulated navigational affordances in the human brain
Poster Presentation: Tuesday, May 20, 2025, 2:45 – 6:45 pm, Pavilion
Session: Scene Perception: Neural mechanisms
Jinkook Yu1, Soojin Park1; 1Department of Psychology, Yonsei University, Seoul, Korea
Orientations of visible paths, often directly computable from visual input, play a fundamental role in guiding our navigation. The Occipital Place Area (OPA) has been shown to encode navigationally relevant features of scene environments, such as path orientation and distance. However, real-world navigation incorporates familiarity and goals beyond physical features. For example, two commuters standing at the same intersection may choose different routes to their workplaces based on their respective destinations and familiarity with the routes. This study investigated whether the OPA represents visually identical paths differently depending on participants’ biased navigational experiences. In a free exploration phase, participants explored four virtual environments to become familiar with them. Each environment included intersections with different storefronts (e.g., bookstore, theater) placed at the ends of the paths. In the subsequent training phase, participants were instructed to navigate toward a specified storefront. Importantly, these instructions biased participants’ navigation goals: within each environment, they were directed to either the left or the right path far more often than the other (9:1 ratio), producing goal-driven, asymmetric navigational experiences at the intersection. During the test phase in the fMRI scanner, participants viewed snapshots depicting the intersections of the explored environments while performing a color-dot detection task to maintain attention. Half of the snapshots depicted intersections where participants had been biased to navigate left during training; the other half showed intersections where they had been biased to navigate right. Visually, all snapshots included both left and right path orientations, and conditions were counterbalanced across participants. Results showed that the multivoxel pattern of the OPA distinguished between scenes according to participants’ biased navigation goals, as revealed by linear SVM classification (N=16). These results suggest that the OPA encodes navigational affordances beyond those computable directly from visual path orientations and can distinguish scenes with visually identical paths based on goal-directed navigational experiences.
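For readers unfamiliar with the decoding approach, a minimal sketch of a run-wise cross-validated linear SVM analysis is shown below. This is not the authors' pipeline; it assumes scikit-learn, and the names opa_patterns, labels, and runs are hypothetical placeholders for trial-wise OPA voxel patterns, bias-condition labels (left- vs. right-biased intersections), and scanner-run indices.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Placeholder data standing in for real fMRI estimates:
# opa_patterns: (n_trials, n_voxels) response patterns within the OPA ROI
# labels: 0 = intersection biased left in training, 1 = biased right
# runs: scanner run index per trial, used as the cross-validation group
rng = np.random.default_rng(0)
opa_patterns = rng.standard_normal((80, 200))
labels = np.tile([0, 1], 40)
runs = np.repeat(np.arange(8), 10)

# Train a linear SVM on all runs but one, test on the held-out run,
# and average decoding accuracy across folds (chance = 0.5).
clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, opa_patterns, labels,
                         groups=runs, cv=LeaveOneGroupOut())
print(f"Mean decoding accuracy: {scores.mean():.3f} (chance = 0.5)")

Above-chance accuracy in such a scheme would indicate that OPA patterns carry information about the bias condition even though the snapshots are matched on visible path orientations.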
Acknowledgements: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2023R1A2C1006673)