Suboptimal visual cue integration in a location estimation paradigm
Poster Presentation: Saturday, May 17, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Theory
Waragon Phusuwan1,2, Arp-Arpa Kasemsantitham2,3, Chaipat Chunharas2,4; 1Medical Science, Faculty of Medicine, Chulalongkorn University, 2Cognitive Clinical & Computational Neuroscience Lab, King Chulalongkorn Memorial Hospital, 3Faculty of Medicine, Chulalongkorn University, 4Chula Neuroscience Center, King Chulalongkorn Memorial Hospital
Humans rely on multiple sources of information to guide their actions and decisions in complex environments. Multisensory cue integration (e.g., using visual and proprioceptive inputs to guide action) has been extensively studied, mostly within the Bayesian information integration framework. However, little attention has been given to cue integration within a single modality. This type of information integration is commonly deployed in navigation, where humans estimate goal locations and directions using visual cues such as faraway landmarks, signs, or even celestial objects like the brightest star in the sky. In this study, 9 participants played a simple computerized treasure-hunting game. On each trial, they had to estimate the treasure's location from 2 distinct visual shapes within 1 second. After the response was made or the time limit expired, the actual treasure location appeared as feedback. The locations of the two visual cues were generated by applying a horizontal shift to the randomized treasure location and adding jitter sampled from a normal distribution, with a different standard deviation assigned to each shape: the triangle and the circle were assigned standard deviations of 2 degrees and 8 degrees, respectively. This setup allowed participants to make decisions based on the learned statistics of the cues. To quantify how strongly predictions depended on each cue, we performed multiple linear regression with the two visual cue locations as predictors and the estimated location as the dependent variable. The results showed a strong dependency of estimated locations on the reliable cue (2-degree jitter), with a coefficient of 0.88 (SD 0.06). This dependency, however, was weaker than that predicted by maximum likelihood estimation (MLE), which yields a coefficient of 0.95. These findings highlight suboptimal visual cue integration and raise further questions about the factors influencing optimal performance in single-modality and multisensory integration.
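For reference, the MLE-optimal weight on the reliable cue follows from standard inverse-variance weighting of the two jitter distributions (a textbook derivation, not spelled out in the abstract itself):

w_{\text{reliable}} = \frac{1/\sigma_{\text{tri}}^2}{1/\sigma_{\text{tri}}^2 + 1/\sigma_{\text{cir}}^2} = \frac{1/2^2}{1/2^2 + 1/8^2} = \frac{16}{17} \approx 0.94,

approximately matching the MLE coefficient of 0.95 reported above.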
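Below is a minimal simulation sketch of the stimulus generation and regression analysis described above. All specifics not stated in the abstract are assumptions: the one-dimensional location space and its range, the trial count, the size of the horizontal shift, and the ideal-observer responses that stand in for participant data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1000            # hypothetical; the abstract does not report trial counts
sd_tri, sd_cir = 2.0, 8.0  # cue jitter SDs from the abstract (degrees)
shift = 5.0                # hypothetical horizontal cue offset (value not reported)

# --- Stimulus generation, following the abstract's description ---
treasure = rng.uniform(-20.0, 20.0, n_trials)                 # randomized locations
cue_tri = treasure + shift + rng.normal(0, sd_tri, n_trials)  # reliable cue
cue_cir = treasure - shift + rng.normal(0, sd_cir, n_trials)  # unreliable cue

# --- MLE prediction: inverse-variance weight on the reliable cue ---
w_tri = (1 / sd_tri**2) / (1 / sd_tri**2 + 1 / sd_cir**2)     # = 16/17, about 0.94

# --- Simulated responses from an ideal observer (illustration only;
#     the study used human responses, which showed a weight of 0.88) ---
responses = w_tri * (cue_tri - shift) + (1 - w_tri) * (cue_cir + shift)

# --- Multiple linear regression: estimated location ~ cue positions + intercept ---
X = np.column_stack([cue_tri, cue_cir, np.ones(n_trials)])
betas, *_ = np.linalg.lstsq(X, responses, rcond=None)
print(f"weight on reliable cue: {betas[0]:.2f} (MLE predicts {w_tri:.2f})")
```

Regressing the responses on the two cue positions recovers each cue's weight directly, which is how a participant's empirical coefficient (0.88 here) can be compared against the MLE-optimal value.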
Acknowledgements: I would like to express my gratitude to Assoc. Prof. Dr. Xuexin Wei for guidance on the data analyses, and to Prof. Konrad Kording for sharing his expertise in experimental design for Bayesian-framework research. Lastly, I appreciate the contributions of the other CCCN lab members during this study.