Action
Talk Session: Monday, May 19, 2025, 8:15 – 10:00 am, Talk Room 1
Talk 1, 8:15 am
Decoding tool actions regardless of the observed acting body part
Kyungji Moon1, Florencia Martinez-Addiego1, Yuqi Liu1,2, Maximilian Riesenhuber1,3, Jody C. Culham4,5, Ella Striem-Amit1; 1Georgetown University Medical Center, 2Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Sciences and Intelligence Technology, Chinese Academy of Sciences, 3Center for Neuroengineering, Georgetown University, 4Department of Psychology, University of Western Ontario, 5Brain and Mind at Western, Western Interdisciplinary Research Building, University of Western Ontario
What are the mechanisms for understanding tool-use actions? To what extent do we generalize across parameters of an observed action, such as the acting body part? Here, we tested whether there is a shared neural substrate for observed tool-use actions regardless of whether they are performed with the hand or the foot. We leveraged functional neuroimaging in typically developed controls and individuals born without hands to ask whether tool-use representations are shared across the observed executing body part and across differences in one's own motor experience. fMRI data from control subjects (n=18) and people born without hands (n=7) were collected while participants passively viewed complex and simple tool-use actions performed with either the hand or the foot. We found shared neural responses across the observed body part in both groups, suggesting that observed tool-use actions are represented independently of both the observed body part and personal sensorimotor experience. Specifically, univariate analyses revealed a consistent preference for action type (simple or complex tool use) regardless of the observed executing body part in the left superior parietal lobe (SPL) in both typically developed individuals and people born without hands. Further, multi-voxel pattern analysis discriminated between observed simple and complex tool-use actions consistently across body parts in the bilateral SPL, inferior parietal lobe (IPL), and lateral occipitotemporal cortex (LOTC). Together, the results suggest shared neural substrates for action understanding, independent of the observed body part, that are consistently differentiated in typically developed individuals and people born without hands. This supports generalization across body parts in action perception and implies that motor experience is not necessary for core action understanding.
This work was supported by the Edwin H. Richard and Elisabeth Richard von Matsch Distinguished Professorship in Neurological Diseases (to E.S.A.).
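As an illustration of the cross-decoding logic described above, the sketch below (Python/scikit-learn, not the authors' actual pipeline) trains a classifier on action type using hand trials and tests it on foot trials, and vice versa; `patterns`, the labels, and the ROI extraction step are assumed inputs.

```python
# Hypothetical sketch of cross-body-part MVPA decoding; array names and
# preprocessing are assumptions, not the authors' published pipeline.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def cross_bodypart_decoding(patterns, action_labels, bodypart_labels):
    """Train on one observed body part, test on the other; above-chance
    accuracy implies an action-type code that generalizes across body parts."""
    patterns = np.asarray(patterns)                # (n_trials, n_voxels) ROI data
    action_labels = np.asarray(action_labels)      # "simple" vs. "complex"
    bodypart_labels = np.asarray(bodypart_labels)  # "hand" vs. "foot"
    accs = []
    for train_bp, test_bp in [("hand", "foot"), ("foot", "hand")]:
        train, test = bodypart_labels == train_bp, bodypart_labels == test_bp
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(patterns[train], action_labels[train])
        accs.append(clf.score(patterns[test], action_labels[test]))
    return float(np.mean(accs))  # chance = 0.5 for the two action types
```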
Talk 2, 8:30 am
Working memory impairments contribute toward poor visually guided reaching in schizophrenia
Jose Reynoso1,2, Maya Glasman1, Duje Tadin1,2,3, Brian Keane1,2,3; 1Department of Brain and Cognitive Sciences, University of Rochester, 2Center for Visual Science, University of Rochester, 3University of Rochester Medical Center
Common actions such as reaching for a bottle require a robust sense of proprioception and/or working memory (WM). Vision aids these movements, and improperly integrating visual signals leads to inaccurate reaches. Although impairments in visual perception, motor function, and visual WM have been documented in schizophrenia, it is unclear to what extent vision interacts with proprioception and WM when patients execute a reach. We hypothesized that patients would show lower multisensory enhancement and reduced reaching accuracy after a memory delay. We used a virtual reality paradigm to assess reaching accuracy guided primarily by proprioception, WM, or vision. A total of 10 patients and 12 age- and sex-matched healthy controls were tasked with reaching out and tapping a virtual target with the index finger. To assess the role of proprioception, in a randomly selected half of the trials the hand became invisible at the start of the trial. To assess the role of WM, the target became invisible shortly after appearing at the start of the trial, followed by a 1-second delay before the reach was initiated. Groups did not differ in overall accuracy for proprioception-guided or memory-guided reaching (both ps > .2). The multisensory enhancement gained by using both vision and proprioception to guide a reach was marginally worse in patients (one-tailed p = 0.095, d = 0.58). However, adding a memory delay worsened patients' accuracy more than controls' (one-tailed p = 0.032, d = 0.84). While data collection is still ongoing, these data suggest that impaired WM, and perhaps impaired multisensory integration, contribute to poor dexterity in patients. They also showcase a visual WM impairment in schizophrenia within the context of action. Further work may elucidate interactions between vision and proprioception (or the lack thereof) and help further characterize deficits in these domains within psychotic disorders.
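The group contrasts reported above (one-tailed p values with Cohen's d) follow a standard recipe; a minimal sketch, assuming per-participant reach-error scores and using SciPy's Welch t-test, is shown below. The variable names are hypothetical.

```python
# Illustrative one-tailed Welch t-test plus Cohen's d for a group contrast;
# the input arrays are hypothetical per-participant error scores.
import numpy as np
from scipy import stats

def compare_groups(patients, controls):
    """Test the directional hypothesis that patients' reach error is larger."""
    res = stats.ttest_ind(patients, controls, equal_var=False,
                          alternative="greater")  # one-tailed Welch test
    n1, n2 = len(patients), len(controls)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(patients, ddof=1) +
                         (n2 - 1) * np.var(controls, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(patients) - np.mean(controls)) / pooled_sd  # Cohen's d
    return res.pvalue, d
```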
Talk 3, 8:45 am
Implicit adaptation’s effect on motor awareness and confidence
Marissa H. Evans1, Jordan A. Taylor2, Michael S. Landy1; 1New York University, 2Princeton University
Sensorimotor adaptation is essential for maintaining movement accuracy; it minimizes the effects of external perturbations. It can occur explicitly, by adjusting the intended motor plan to overcome task errors, or implicitly, by automatically and incrementally recalibrating the sensorimotor mapping while the motor plan remains stable. While explicit adaptation has been shown to reduce sensorimotor confidence (confidence in the success of a motor action with a sensory-directed goal), it remains unknown whether implicit adaptation also affects confidence in one's motor awareness. Participants made a slicing reach through a visual target with an unseen hand. We provided “error-clamped” feedback: visual feedback of radial hand position moved forward with the hand but in a fixed direction independent of hand position, which participants were instructed to ignore. Error-clamp direction varied over trials (square wave, amplitude ±6 deg, 12 cycles/session, plus zero-mean noise per trial, range ±6 deg). Participants indicated perceived hand direction and reported confidence by adjusting the length of an arc centered on the indicated endpoint direction, with longer arcs reflecting lower confidence. Points were awarded if the arc encompassed the true reach direction, with fewer points for longer arcs; this incentivized attentive confidence reports and accurate direction reports. A leaderboard was presented every 50 trials. No other feedback was provided. A significant 12-cycle/session Fourier component in reach direction provides strong evidence of implicit adaptation to the error clamp. The same frequency component was present in the indicated reach direction. Thus, although adaptation is unconscious, it creates a mismatch between the motor plan and the proprioceptive signal, resulting in a judgment of an unsuccessful reach. Confidence judgments of motor awareness, however, were unaffected by adaptation, indicating that sensorimotor confidence and confidence in one's own proprioceptive estimates are computed differently.
Funding: NIH EY08266
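The key statistic above is the Fourier component of reach direction at the perturbation frequency. A minimal sketch of that computation, assuming a per-trial reach-direction series spanning exactly one session (names and trial counts are illustrative):

```python
# Sketch of the frequency-domain analysis: amplitude of the reach-direction
# component at the error-clamp frequency (12 cycles per session). The input
# is an assumed per-trial time series of reach directions in degrees.
import numpy as np

def component_amplitude(reach_dirs, cycles=12):
    """Amplitude (deg) of the `cycles`-per-session Fourier component; a
    reliable peak here indicates adaptation tracking the square-wave clamp."""
    reach_dirs = np.asarray(reach_dirs, dtype=float)
    n = len(reach_dirs)
    spectrum = np.fft.rfft(reach_dirs - reach_dirs.mean())
    return 2.0 * np.abs(spectrum[cycles]) / n  # index = cycles per record
```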
Talk 4, 9:00 am
Leveraging pupil diameter to track explicit control processes in visuomotor adaptation
Sean R O'Bryan1, Joo-Hyun Song1; 1Brown University
Visuomotor adaptation (VMA) enables us to recalibrate our sensorimotor mappings to overcome unexpected perturbations. While VMA has been traditionally characterized as an implicit process driven by sensory prediction error, recent work suggests that explicit cognitive control (e.g., working memory) is involved in many VMA tasks, where learning is achieved through a dynamic interplay of implicit adaptation and effortful, explicit strategies to minimize error. However, current approaches to measure explicit VMA (e.g., reported aiming direction) are cumbersome. As an alternative, we predicted that task-evoked pupil diameter (PD) could provide an index of explicit control during VMA. Across three experiments, participants reached to visual targets while the direction of a cursor was unexpectedly rotated 45° relative to the hand or target. To dissociate explicit and implicit learning, we provided continuous (mixed; N = 30), delayed (explicit; N = 28), or error-clamped feedback (implicit; N = 30). For both mixed and explicit tasks, we found that PD rapidly increased in response to perturbations, consistent with the expectation that PD may track effortful, explicit processes recruited to reduce high initial error. Moreover, PD was significantly associated with individual differences in adaptation, where high performers exhibited larger task-evoked responses. In contrast, for the implicit task—which yields large sensory prediction errors that cannot be controlled—PD was insensitive to both the onset of the perturbation and to individual differences in adaptation. Collectively, our results point to PD as a promising tool to study the interplay of explicit and implicit learning mechanisms in VMA.
This work was supported by NSF BCS-2043328
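To make the pupillometry measure concrete, here is a minimal sketch of a task-evoked pupil response of the kind described above: baseline-corrected pupil diameter following an event such as perturbation onset. The sampling rate and window lengths are assumptions, not taken from the abstract.

```python
# Hypothetical task-evoked pupil response: baseline-subtract pupil diameter
# around an event (e.g., perturbation onset). Sampling rate and windows are
# assumed values for illustration only.
import numpy as np

def evoked_pupil(pd_trace, event_idx, fs=500, baseline_s=0.5, window_s=3.0):
    """Return the baseline-corrected pupil trace after one event."""
    pd_trace = np.asarray(pd_trace, dtype=float)
    baseline = pd_trace[event_idx - int(baseline_s * fs):event_idx].mean()
    response = pd_trace[event_idx:event_idx + int(window_s * fs)]
    return response - baseline  # larger values = greater task-evoked dilation
```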
Talk 5, 9:15 am
The effects of sensorimotor uncertainty during natural locomotion in early childhood
Sara E Schroer1, Mary M Hayhoe1; 1University of Texas at Austin
While walking, vision is used to efficiently gather information to navigate, avoid obstacles, maintain balance, and find footholds. Adults flexibly adapt gaze allocation and gait when walking on challenging terrains, suggesting high-level control of walking that accounts for visual information, energetic costs, and stability. Although locomotion develops throughout early childhood, how children use vision to control their walking has been minimally studied. Children 2 to 6 years old and adults wore a head-mounted eye tracker while they walked on two natural terrains – a sidewalk and loose stones. Motor uncertainty is higher on the stones because each potential foothold is wobbly, increasing motor noise. In response to the increased uncertainty, walkers modified their behavior in three ways: more fixations were directed towards the ground, gaze was held closer to the body, and speed decreased. On the sidewalk, an average of 61.2% of fixations were directed towards the ground; on the stones, this increased to 90.5%. Participants looked further ahead on the ground when walking on the sidewalk (ps<0.001; distance measured in units of the participant's own leg length). Adults looked 3 leg lengths ahead on the sidewalk (approximately 3 steps ahead) and only 2 leg lengths ahead on the stones. By 6 years old, children's gaze allocation on the sidewalk was similar to adults' (3 leg lengths ahead), but younger children looked further ahead on the ground (3-4 leg lengths, ps<0.014). Younger children's lookahead distance was also more broadly distributed, whereas older children's and adults' gaze peaked at 3 leg lengths ahead. On the stones, children of all ages looked closer to their body (and closer than adults), just 1-2 leg lengths ahead. Lastly, children walked faster on the sidewalk than on the stones (p=0.027). Our work suggests that locomotion in young children, like that in adults, is controlled by complex decision mechanisms that take sensorimotor uncertainty into account.
NIH R01 EY05729 and T32 EY021462
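The lookahead measure above is distance along the ground normalized by body size; a minimal sketch, assuming the eye tracker yields a gaze-ground intersection point and a body position in the same coordinates (field names hypothetical):

```python
# Illustrative lookahead computation: horizontal distance from the body to
# the gaze point on the ground, in units of the walker's own leg length.
import numpy as np

def lookahead_leg_lengths(gaze_ground_xy, body_xy, leg_length_m):
    """A return value of 3.0 means looking three leg lengths ahead."""
    d = np.linalg.norm(np.asarray(gaze_ground_xy) - np.asarray(body_xy))
    return d / leg_length_m
```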
Talk 6, 9:30 am
Age-Related Differences in Gaze Distribution During Locomotion: Balancing Safety and Exploration in Real and Virtual Environments
Sophie Meissner1, Jochen Miksch2, Sascha Feder3, Sabine Grimm2, Wolfgang Einhäuser2, Jutta Billino1; 1Justus Liebig University Giessen, Experimental Psychology, 2Chemnitz University of Technology, Physics of Cognition Group, 3Chemnitz University of Technology, Cognitive Systems Lab
During locomotion, attention guidance must balance gathering relevant information from the environment against ensuring stable gait. Gaze allocation is key to this trade-off between exploration and safety. For older adults in particular, avoiding falls is critical during locomotion, and prioritizing gaze towards the ground has been proposed as a compensatory strategy by which older adults maintain gait stability. In this study, we investigated age-related differences in gaze allocation during locomotion with and without an additional task. We considered locomotion in a real environment and in virtual reality (VR) to evaluate the comparability of behavior across settings. We studied locomotion in younger (N=24, M=26.1 years) and older (N=24, M=68.8 years) adults. Participants traversed a real hallway and a highly realistic virtual version of the same hallway, with or without an additional search task requiring them to locate and manipulate small target objects on the wall. Gaze behavior was assessed using mobile eye-tracking glasses and a VR headset, respectively. Our findings show a strong age-related bias of gaze allocation towards the floor that holds in the real world as well as in VR. Older adults focus on gait-related information, putatively stabilizing their postural control, at the cost of exploring their environment. However, when an explicit second task requires allocating attention away from the floor, older adults adapt their gaze behavior so that their allocation patterns resemble those of younger adults, potentially deprioritizing safety in favor of exploration. We conclude that task demands during locomotion may critically put older adults' gait stability at risk. Gaze allocation appears similarly modulated in the real and virtual worlds, supporting virtual approaches as an appropriate proxy for investigating real-world locomotion behavior across the lifespan.
This work was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), Collaborative Research Centre SFB/TRR 135: Cardinal Mechanisms of Perception, project number 222641018
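The floor-bias measure discussed above reduces to the proportion of gaze samples assigned to a floor area of interest, computed per condition; a minimal sketch with hypothetical labels:

```python
# Illustrative floor-bias measure: fraction of gaze samples labeled "floor"
# in each condition (e.g., walking only vs. walking + search task). The
# label structure is a hypothetical stand-in for coded eye-tracking data.
import numpy as np

def floor_gaze_proportion(aoi_labels_by_condition):
    """Map each condition to its proportion of floor-directed gaze samples."""
    return {cond: float(np.mean(np.asarray(labels) == "floor"))
            for cond, labels in aoi_labels_by_condition.items()}
```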