Reconstructing moving object locations in an immersive 3D virtual environment from EEG oscillatory activity

Poster Presentation: Sunday, May 18, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Attention: Neural mechanisms

Tom Bullock1, Emily Machniak1, Neil Dundon1, Justin Kasowski1, You-Jin Kim1, Radha Kumaran1, Julia Ram1, Melissa Hernandez1, Stina Johansson1, Tobias Höllerer1, Barry Giesbrecht1; 1University of California, Santa Barbara

Many everyday activities require interaction with moving objects, such as catching a ball or navigating a busy street. To create a cohesive mental model of the world, we select goal-relevant objects and maintain stable representations of their locations as they move through time and space. Location-selective representations of static objects are supported by EEG oscillations in the alpha and theta frequency bands, but how these representations are constructed and maintained for dynamic objects remains unknown. To address this issue, we recorded scalp EEG while participants (n=34) performed an immersive virtual reality (VR) task in which colored spheres appeared at a distant location (30 m in the VR environment) and moved towards the participant. Participants were given a pair of virtual lightsabers and required to use one to strike a color-defined target sphere. Participants completed trial blocks in which targets were either visible throughout their entire trajectory (control) or briefly disappeared mid-trajectory (500 ms) before reappearing in either a predictable or an unpredictable location (1200 ms), just before the lightsaber strike (~1600 ms). Accuracy was high overall but dropped in the non-predictive condition (mean±SEM: .87±.01) relative to the predictive (.95±.01) and control (.96±.01) conditions (p<.05). We applied inverted encoding modeling to alpha-band activity and successfully reconstructed target locations throughout the trajectory, beginning at ~250 ms and persisting even when stimuli disappeared mid-trajectory. Reconstructions were diminished mid-trajectory in the non-predictive relative to the predictive condition, which might be interpreted as attention becoming more diffuse when the target is likely to reappear at a different location. We also reconstructed target locations from theta-band activity immediately before the lightsaber strike in both the predictive and non-predictive conditions. Together, these results provide insight into how the brain represents predictable and unpredictable goal-relevant moving objects, and they validate a new framework for studying dynamic attention in immersive VR.
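The abstract does not specify the details of the inverted encoding model (IEM), but the general two-step procedure (estimate a weight matrix mapping location-tuned channels to electrodes, then invert it to reconstruct channel responses from held-out data) can be illustrated with a minimal sketch. Everything below is assumed for illustration: a 1-D circular location space with 8 raised-cosine basis channels, 32 electrodes, simulated alpha-band power, and ordinary least squares for the fit — none of these specifics come from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)

n_electrodes, n_channels, n_locs = 32, 8, 8
centers = np.arange(n_channels) * (360 / n_channels)   # channel centers (deg)

def basis(theta_deg):
    """Raised-cosine tuning of each channel to a stimulus location."""
    d = np.deg2rad(theta_deg - centers)
    return np.maximum(np.cos(d / 2), 0) ** 7           # shape: (n_channels,)

# Simulate electrode data as a linear mix of channel responses plus noise
W_true = rng.normal(size=(n_electrodes, n_channels))
def simulate(thetas):
    C = np.stack([basis(t) for t in thetas])           # trials x channels
    return C @ W_true.T + 0.1 * rng.normal(size=(len(thetas), n_electrodes))

train_thetas = np.repeat(centers, 20)
B_train = simulate(train_thetas)                       # trials x electrodes
C_train = np.stack([basis(t) for t in train_thetas])   # trials x channels

# Step 1 (training): B = C @ W.T, so solve for W with least squares
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0].T

# Step 2 (test): invert the model to reconstruct channel responses
test_theta = 135.0
B_test = simulate([test_theta])
C_test = B_test @ np.linalg.pinv(W_hat).T              # 1 x n_channels

decoded = centers[np.argmax(C_test)]                   # peak of reconstruction
print(decoded)
```

In the actual study this inversion would be applied to alpha-band (and theta-band) power at each time point as the target moves, yielding a location reconstruction across the trajectory; the sketch collapses that to a single static location for clarity.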

Acknowledgements: Research was sponsored by the U.S. Army Research Office and accomplished under contract W911NF-09-D-0001 for the Institute for Collaborative Biotechnologies.