A platform-independent method for studying vision science in AR/VR environments
Poster Presentation: Tuesday, May 20, 2025, 8:30 am – 12:30 pm, Pavilion
Session: 3D Processing: Space, coordinate frames, virtual environments
Zoltan Derzsi1,2, Robert Volcic1,2,3; 1New York University Abu Dhabi, 2Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, 3Center for Brain and Health, New York University Abu Dhabi
In vision science, experimental software development has been shaped by packages like Psychtoolbox and PsychoPy, which have become de facto standards. While these open-source tools may benefit from active development communities, often only a handful of individuals bear the burden of support and must navigate changes beyond their control. The growing adoption of augmented and virtual reality (AR/VR) in vision science has introduced greater development challenges and drastically shortened the product life cycle. While computer graphics back-ends may have a life cycle of decades, comparable support for virtual reality devices is often much shorter. To circumvent this problem, two approaches have been proposed in the past: back-porting game engine features to computer graphics software, and implementing psychophysics features into game engines as add-ons. Given the massive diversity of AR and VR hardware, neither approach offers a general solution that can be easily ported across hardware platforms, so both become outdated disproportionately quickly. Here, we introduce a universal, platform-independent communication framework for integrating VR hardware with the researcher's preferred software. We treat the virtual reality hardware and the game engine as a separate interactive volumetric display rather than an extension of other software packages, and we use simple, standardized communication with external devices. This approach allows researchers to create stimuli and control experiments in AR/VR while keeping the software they are already familiar with. We demonstrate the flexibility of our method by running the very same code on different platforms simultaneously in the same virtual space. Additionally, we highlight the ease of integrating well-known software with various hardware (motion trackers, game controllers, and even custom electronics), while maintaining adaptability for future technologies.
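To make the "separate interactive volumetric display" idea concrete, the sketch below shows how an existing experiment script (e.g., in Python alongside PsychoPy) might stream object poses to the AR/VR side over a network socket. The abstract does not specify the framework's actual protocol; the UDP transport, JSON message format, address, port, and field names used here are illustrative assumptions only.

```python
# Minimal sketch of an experiment script driving an AR/VR "volumetric display".
# Assumptions (hypothetical, not the framework's actual protocol): the game
# engine listens on a UDP port and accepts JSON messages describing object poses.
import json
import socket
import time

VR_DISPLAY_ADDR = ("192.168.0.10", 9000)  # hypothetical address of the headset/game-engine host

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_pose(object_id, x, y, z):
    """Send one object's position (world coordinates, metres) to the VR display."""
    message = {"id": object_id, "pos": [x, y, z], "t": time.time()}
    sock.sendto(json.dumps(message).encode("utf-8"), VR_DISPLAY_ADDR)

# Example: the experiment script moves a target along x at ~60 Hz,
# independently of the headset's own render loop.
for frame in range(120):
    send_pose("target_sphere", 0.005 * frame, 1.2, -0.5)
    time.sleep(1 / 60)
```

Because the experiment side only emits standardized messages, the same script can, in principle, drive different headsets or game engines simultaneously, which is the platform independence the abstract describes.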
Acknowledgements: We acknowledge the support of the NYUAD Center for Artificial Intelligence and Robotics and the NYUAD Center for Brain and Health, funded by Tamkeen under the NYUAD Research Institute Awards CG010 and CG012.