Development
Talk Session: Sunday, May 18, 2025, 8:15 – 10:00 am, Talk Room 2
Talk 1, 8:15 am
Objectively Measuring Sight Rescue in Severely Vision-Impaired Young Children Following Gene Therapy
Marc Pabst1,2,3, Kim Stäubli1,2,3, Yannik Laich3, Roni Maimon-Mor1,2,3, Steven Scholte4, Peter Jones5, Michel Michaelides1,3, James Bainbridge1,3, Tessa Dekker1,2,3; 1Institute of Ophthalmology, University College London, 2Experimental Psychology, University College London, 3Moorfields Eye Hospital NHS Foundation Trust, 4Faculty of Social and Behavioural Sciences, University of Amsterdam, 5Department of Optometry and Visual Sciences, City St George's, University of London
Recent breakthroughs in ocular gene therapy hold significant promise for treating inherited retinal diseases (IRDs), the most prevalent cause of blindness in children and young people. IRDs compromise the retina's structure and function, with severe forms leading to complete loss of light sensitivity in early childhood. However, significant challenges remain in objectively characterising the effects of new therapies, particularly for very young children. Vision-impaired toddlers and children typically struggle to keep their gaze focused on visual stimuli or to provide consistent responses, and traditional assessments rely heavily on subjective evaluations by highly trained clinical specialists. To develop objective measures of therapeutic effects that complement existing assessments, we used child-friendly neuroimaging and gamified testing approaches. Using steady-state visual evoked potentials (ssVEPs) measured with EEG, we non-invasively recorded cortical responses to flickering sinusoidal gratings of varying spatial frequencies. To ensure engagement, we embedded the gratings in personalised, age-appropriate videos. Additionally, we used a novel reaching-behaviour test embedded in a child-friendly iPad game that involved searching for and tapping moving Gabor patches of varying spatial frequencies. These protocols were applied to eight young children (ages 3–6) diagnosed with a severe form of Leber Congenital Amaurosis (LCA) who had received novel gene therapy. Visual function was assessed either by comparing treated and untreated eyes or by conducting pre- and post-treatment evaluations, depending on the patient. Our tasks revealed substantially stronger visual cortex responses and better behavioural task performance for treated than for untreated eyes. The observed improvements were remarkable, especially when considering the typical progression of the disease and the benefits reported for existing ocular gene therapies. This is likely facilitated by the exceptionally early timing of intervention, which minimises retinal degeneration and maximises neural plasticity. Establishing sensitive, child-friendly, and objective measures for evaluating early treatment effects is critical for advancing the field of ocular therapy.
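As an illustration of the ssVEP readout described above, here is a minimal sketch of how a cortical response at the stimulus flicker frequency might be quantified from EEG; the array shapes, sampling rate, and function names are hypothetical assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: estimating an ssVEP response as spectral amplitude at
# the stimulus flicker frequency. Assumes a (trials x samples) EEG array
# from one occipital channel; all values here are illustrative.
import numpy as np

def ssvep_amplitude(eeg, sfreq, flicker_hz):
    """Mean amplitude spectrum across trials, read out at flicker_hz."""
    n = eeg.shape[1]
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) / n   # amplitude per trial
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    idx = np.argmin(np.abs(freqs - flicker_hz))       # nearest FFT bin
    return spectrum[:, idx].mean()

# Toy example: 20 trials of 2 s at 500 Hz with a weak 6 Hz flicker response.
# Comparing this readout between treated and untreated eyes would follow
# the logic described in the abstract.
rng = np.random.default_rng(0)
t = np.arange(1000) / 500.0
eeg = rng.normal(size=(20, 1000)) + 0.5 * np.sin(2 * np.pi * 6 * t)
print(ssvep_amplitude(eeg, sfreq=500, flicker_hz=6))
```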
Funding was partially provided through the NIHR Moorfields Biomedical Research Centre.
Talk 2, 8:30 am
Cortical feedforward-recurrent circuit alignment matures following experience
Augusto Abel Lempel1, Sigrid Trägenap2, Clara Tepohl, Matthias Kaschube2, David Fitzpatrick1; 1Max Planck Florida Institute for Neuroscience, 2Frankfurt Institute for Advanced Studies
Sensory cortical areas guide behavior by transforming stimulus-driven inputs into selective responses representing relevant features. A classic example is the representation of edge orientations in the visual cortex, which displays a functional connectivity alignment whereby layer 4 (L4) neurons co-activated by an orientation provide feedforward inputs to specific functional modules in layer 2/3 (L2/3) that share strong recurrent connections. Such an aligned state of feedforward-recurrent interactions is critical for amplifying selective cortical responses, but how it develops remains unclear. Using simultaneous electrophysiology and calcium imaging, we probed the trial-to-trial correlation (coactivity) between single-unit spiking responses to oriented gratings in L4 and L2/3 and millimeter-scale modular responses in the L2/3 network, before and after visual experience. We then compared each unit's orientation preference with that of coactive L2/3 modules. In experienced animals, units in both layers display coactivity with modules matching their preferred orientation. In naïve animals, despite high trial-to-trial response variability, L2/3 units are coactive with modules displaying similar orientation preference, consistent with a well-structured recurrent network. In contrast, L4 units are coactive with L2/3 modules that correspond poorly with their orientation preference, consistent with poor alignment between the feedforward inputs from L4 elicited by oriented stimuli and the activity patterns amplified by L2/3 recurrent interactions. One factor that could contribute to this lack of functionally specific coactivity is the high variability of naïve L4 neuron responses, which decreases significantly following experience; however, a computational model of feedforward-recurrent interactions suggests that high variability alone is insufficient to explain the naïve state. This model also provides a biological signature of misalignment that we have confirmed with in vivo whole-cell recordings: dynamic changes in the orientation preference of L2/3 subthreshold responses. In conclusion, we provide diverse evidence for a realignment of feedforward-recurrent interactions following experience that is critical for building reliable and temporally consistent sensory representations.
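To make the coactivity measure concrete, the following minimal sketch correlates one unit's trial-by-trial spike counts with the responses of each imaged L2/3 module across the same trials; the array names, shapes, and Poisson toy data are assumptions for illustration, not the authors' analysis code.

```python
# Minimal sketch of trial-to-trial "coactivity": Pearson correlation
# between one unit's spiking and each L2/3 module across trials.
import numpy as np

def coactivity(spike_counts, module_responses):
    """
    spike_counts: (n_trials,) spiking response of one unit per trial.
    module_responses: (n_trials, n_modules) response of each L2/3
        module on the same trials.
    Returns (n_modules,) Pearson correlations.
    """
    s = spike_counts - spike_counts.mean()
    m = module_responses - module_responses.mean(axis=0)
    num = s @ m
    den = np.sqrt((s @ s) * (m * m).sum(axis=0))
    return num / den

# Toy data: 200 trials, 8 modules, one of which is genuinely coactive.
# One would then compare the orientation preference of the most coactive
# module with the unit's own preference, as in the abstract.
rng = np.random.default_rng(1)
spikes = rng.poisson(5.0, size=200).astype(float)
modules = rng.normal(size=(200, 8))
modules[:, 3] += 0.3 * (spikes - spikes.mean())
print(coactivity(spikes, modules).round(2))
```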
National Eye Institute funding: K99EY034936-01A, 2R01EY011488
Talk 3, 8:45 am
The developmental visual input across environments
Philip McAdams1, Alexis Colwell1, Linda B. Smith1; 1Indiana University
Visual development is experience dependent; however, little is known about the low-level visual experience of infants. One might assume that, at the scale of daily life, the statistics of visual experience are similar for all perceivers. However, the visual input depends on the perceiver's perspective, looking behavior, and environment. For example, younger and older infants show different looking biases and have different egocentric perspectives and motor behaviors, and perceivers live in different environments with different visual properties. Using head-mounted cameras, we captured 100,000 egocentric images from the daily-life experiences of infants aged 1-3, 6-8, and 10-12 months, from a small town in the USA (N=24) and from a dense urban fishing village in India (N=24). We extracted a range of spatial image statistics relating to early vision and complexity (e.g., edge density, edge orientations and their predictive relations, and fractal dimension) to characterize and compare infants' visual input across development and environment. Overall, across locations, the youngest infants' visual input was characterized by greater simplicity and more predictive edge patterns than that of older infants, with both locations showing similar developmental trends. However, complexity and the predictive properties among edges differed across locations. For example, US infants' input had overall lower fractal complexity than Indian infants' input, unrelated to the amount of time spent outdoors. By 10-12 months, fractal complexity had decreased for Indian infants but increased for US infants. Our results suggest that young infants' visual systems are biased toward simplicity, and that as the visual and motor systems develop, infants can select their own egocentric perspectives to create a curriculum for learning. Our findings are consistent with developmental changes in looking biases found in laboratory studies, showing these same changing biases at the scale of everyday input. The cross-environmental differences suggest both universal and context-dependent regularities in the visual input.
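For concreteness, here is a minimal sketch of two of the image statistics named above, edge density and a box-counting estimate of fractal dimension; the gradient-magnitude edge detector, threshold, and box sizes are illustrative assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch: edge density (fraction of edge pixels) and a
# box-counting estimate of fractal dimension for a grayscale image.
import numpy as np

def edge_map(img, thresh=0.2):
    """Binary edges from gradient magnitude (img: 2-D array in [0, 1])."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def edge_density(img):
    return edge_map(img).mean()

def fractal_dimension(edges, sizes=(2, 4, 8, 16, 32)):
    """Box-counting dimension: slope of log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h = (edges.shape[0] // s) * s
        w = (edges.shape[1] // s) * s
        blocks = edges[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy example on a random 128 x 128 "image".
rng = np.random.default_rng(2)
img = rng.random((128, 128))
print(edge_density(img), fractal_dimension(edge_map(img)))
```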
Talk 4, 9:00 am
Visual shape processing for action does not depend on visual experience: Evidence from late-sighted children
Shlomit Ben-Ami1,2,3, Roy Mukamel2, Sana Khan7, Chetan Ralekar1, Mrinalini Yadav4, Pragya Shah4, Suma Ganesh5, Priti Gupta4, Abhinav Gandhi6, Pawan Sinha1; 1MIT Department of Brain and Cognitive Sciences, MA, USA, 2Sagol School of Neuroscience, School of Psychological Sciences, Tel-Aviv University, Israel, 3Minducate Science of Learning Research and Innovation Center, Tel-Aviv, Israel, 4The Project Prakash Center, Delhi, India, 5Department of Ophthalmology, Dr. Shroff's Charity Eye Hospital, Delhi, India, 6Worcester Polytechnic Institute, MA, USA, 7Wellesley College, MA, USA
Does our capacity to process the visual properties of an object and manipulate it accordingly depend on visual experience? Efficient object interaction requires visual estimation of to-be-grasped object properties such as size, weight, and shape to guide pre-programming of grip aperture, force, and positioning. While prior visual experience is widely assumed to be critical for such estimates, empirical evidence remains sparse. We investigated this assumption in visual shape processing among 14 late-sighted children who were born with bilateral dense cataracts and gained vision only after surgery in late childhood (provided by Project Prakash, India). Patients were tested pre- and post-surgery, alongside 10 sighted controls assessed under normal vision and under acuity-matched simulated visual loss. Participants performed a visually guided pincer grasping task (vision-for-action) and a delayed-match-to-sample visual discrimination task (vision-for-perception), as well as visual acuity testing. Grip efficiency, measured by object stability scores based on grasp positioning, was impaired in patients relative to normally sighted controls but comparable to acuity-matched controls as early as the first post-surgery assessment. Conversely, visual discrimination accuracy remained suboptimal even years after surgery, beyond the limitations explained by reduced visual acuity. These results reveal a developmental dissociation of visual shape processing for action and perception. Action-oriented visual shape processing depends on acuity but not on early-life or post-surgical visual exposure, whereas shape perception requires early-life visual experience. This distinction aligns with theories and evidence differentiating the dorsal (action-oriented) and ventral (perception-oriented) visual streams. Unlike previous findings on size and weight processing, where grasp performance did not recover after cataract removal but perception was preserved, these results suggest distinct mechanisms for shape processing. Our findings underscore the developmental trajectories and limitations of late-acquired vision, highlighting implications for tailored rehabilitation strategies. Further studies on the processing of specific object properties (e.g., size, weight, shape) will be essential to unravel their distinct developmental pathways and mechanisms.
(1) NEI (NIH) grant R01 EY020517 to PS; (2) Global seed funding from the Broshy Brain and Cognitive Sciences Fund for MIT-Israel collaborative studies; (3) Minducate Science of Learning Research and Innovation Center, Tel Aviv University.
Talk 5, 9:15 am
Early development of navigationally relevant location information in retrosplenial complex
Yaelan Jung1, Daniel Dilks1; 1Emory University
Representing the locations of places is critical for finding our way from a specific place to some distant, out-of-sight place (e.g., from our house to our favorite restaurant in another part of town) – a process referred to as map-based navigation. Neuroimaging work in adults reveals that this ability involves the retrosplenial complex (RSC) – a scene-selective region in the medial parietal cortex. Despite this understanding of the neural basis of map-based navigation in adults, however, nothing is known about how the system develops. Does map-based navigation only develop after a decade or more of experience, as generally assumed? Or is it, perhaps counterintuitively, present even in the first few years of life? To directly test this question, we used fMRI multivoxel pattern analysis and a virtual town paradigm to investigate the representation of location information in the RSC of 5-year-olds. We found that i) the RSC in 5-year-olds already represents the locations of particular places, and ii) this neural representation is correlated with the children's performance on a location task. We also found that the RSC represents not only the locations of particular places but also the distances between them – another kind of information necessary for map-based navigation. Finally, the parahippocampal place area (PPA) – a scene-selective region hypothesized to be involved in scene categorization, not map-based navigation – did not represent location information but instead represented category information, the exact opposite of the RSC. Taken together, these findings reveal the early development of navigationally relevant location information in the RSC and thus the early development of map-based navigation.
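A minimal sketch of the multivoxel pattern analysis logic follows: decode which place was viewed from voxel patterns with leave-one-run-out cross-validation; the data shapes, random stand-in data, and the linear SVM are assumptions, not the authors' pipeline.

```python
# Minimal sketch: decoding location identity from simulated RSC voxel
# patterns. Above-chance accuracy (here, above 25% for 4 locations)
# would indicate that location information is present in the region.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_trials, n_voxels = 80, 300
X = rng.normal(size=(n_trials, n_voxels))   # voxel pattern per trial
y = rng.integers(0, 4, size=n_trials)       # which of 4 places was viewed
runs = np.repeat(np.arange(8), 10)          # scanner run labels

scores = cross_val_score(LinearSVC(max_iter=5000), X, y,
                         cv=LeaveOneGroupOut(), groups=runs)
print(scores.mean())
```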
The work was supported by a grant from the National Eye Institute (R01 EY29724 to DDD).
Talk 6, 9:30 am
The Homology and Development of the Proto-Word Area in Macaques and the Visual Word Form Area in Humans
Jia Yang1, Yipeng Li1, Wenfang Zhang3, Haoxuan Yao1, Jingqiu Luo1, Hongyu Li1, Xiaoya Chen2, Shiming Tang1, Pinglei Bao1; 1Peking University, 2Vanderbilt University, 3China Women's University
The Visual Word Form Area (VWFA) is believed to develop by repurposing a pre-existing area through literacy, but which specific area is repurposed, and why it is chosen, remain unclear. Given the presence of similar category-selective regions (e.g., face, body, scene) in both macaques and humans, could the human VWFA develop from a proto-word area that can also be identified in macaques? Using fMRI, we identified word-selective regions spanning from the posterior to the anterior inferotemporal (IT) cortex in word-naïve macaques. Widefield calcium imaging and high-density electrophysiological recordings confirmed a high concentration of word-selective neurons in these regions by measuring responses to thousands of words and non-word objects. Additionally, objects similar to words in the object space elicited stronger activity, suggesting that proto-word areas in primates may develop through exposure to such objects; this idea is further supported by simulations using deep neural networks. To examine the homology between the macaque word patch and the human VWFA, we conducted human fMRI experiments showing that the same object-space model could explain human VWFA responses. Additionally, a strong correlation between macaque word patch responses and human VWFA responses to 1000 NSD images further supported the homology between these two areas. While the word-selective areas of both species follow the object-space model, notable differences persist. By measuring responses to nearby objects, faces, and words in human adults and macaques using fMRI, we observed that the macaque word area favors nearby objects over words, whereas the human VWFA shows the opposite preference. Furthermore, responses from preschool (N = 13) and primary school children (N = 17) revealed a shift from a preference for nearby objects to a preference for words as reading experience increased. This study highlights how the human brain repurposes a homologous word-selective area, identified in macaques, to specifically represent words through literacy.
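As a sketch of the object-space account, one can embed images in a low-dimensional space via PCA on deep-network features and ask how well a region's responses are predicted from those coordinates; the feature matrix below is random stand-in data, and the network, dimensionality, and fitting details are not taken from the study.

```python
# Minimal sketch: project images into a 2-D "object space" from deep
# features, then linearly predict a region's responses from those
# coordinates. All data here are illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
features = rng.normal(size=(1000, 512))   # deep features for 1000 images
responses = rng.normal(size=1000)         # a region's response per image

coords = PCA(n_components=2).fit_transform(features)   # object-space axes
r2 = cross_val_score(LinearRegression(), coords, responses,
                     cv=5, scoring="r2")
print(r2.mean())   # how well object-space position explains the responses
```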
This work was supported by National Science and Technology Innovation 2030 Major Project 2022ZD0204803 to P.B., and by Natural Science Foundation of China Grant 32200857 and China Postdoctoral Science Foundation Grants 2023M740125 and 2022T150021 to J.Y.