Spatial-scale invariant properties of visual cortex in mammals

Poster Presentation: Saturday, May 17, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Theory

Raj Magesh Gauthaman1, Brice Ménard1, Michael Bonner1; 1Johns Hopkins University

How does visual cortex encode information about the visual world in the coordinated activity of hundreds of millions of neurons? Our recent work analyzing a large-scale fMRI dataset containing neural responses to natural scene images has demonstrated that cortical representations of natural images are high-dimensional and exhibit scale-free covariance structure. Surprisingly, this characteristic statistical signature is not only universal across low- and high-level regions of human visual cortex, but is also observed in the population activity of single neurons in mouse primary visual cortex (V1). What properties of cortical population codes allow us to observe the same statistical structure at the level of single neurons in mice and voxels in humans? To investigate this question, we analyze two datasets: the Natural Scenes human fMRI dataset and a large-scale mouse calcium imaging dataset, both containing V1 responses to natural images, but measured at dramatically different resolutions ranging from single neurons (~20 μm) to voxels containing ~10⁵ neurons (1.8 mm). Using a cross-decomposition estimator, we confirm that stimulus-related variance is distributed as a power law along all available latent dimensions (>10³) in both humans and mice. In fact, these latent dimensions are patterned on the cortical surface with characteristic spatial scaling: high-variance dimensions vary on coarse scales while low-variance dimensions vary on fine scales. Crucially, we discover a stable power-law relationship between variance and spatial scale that is identical across both mammalian species. Together, this remarkable universality in the covariance statistics of human and mouse V1 population activity suggests a generic encoding principle of visual cortex. 
More broadly, this result explains why neuroimaging at spatial scales far removed from single neurons can nonetheless reveal principles of visual encoding: the statistics of visual responses are self-similar across many orders of magnitude of spatial scale.
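The cross-decomposition estimate of stimulus-related variance mentioned above can be illustrated with a minimal sketch. This is not the authors' actual pipeline; it is a hypothetical toy version on synthetic data, in the spirit of cross-validated PCA: latent dimensions are estimated from one stimulus repeat, and variance along each dimension is read out from the cross-covariance between two repeats, so that trial-to-trial noise (which is independent across repeats) averages out and only stimulus-related variance survives. The synthetic responses are built with a known power-law signal spectrum (exponent alpha), which the estimator should recover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population responses: n_stim stimuli x n_units units, with a
# built-in power-law signal spectrum (variance ~ k^-alpha) plus repeat noise.
# All sizes and noise levels here are illustrative choices, not from the study.
n_stim, n_units, alpha = 3000, 400, 1.0
k = np.arange(1, n_units + 1)
signal = rng.standard_normal((n_stim, n_units)) * np.sqrt(k ** -alpha)
repeat1 = signal + 0.25 * rng.standard_normal((n_stim, n_units))
repeat2 = signal + 0.25 * rng.standard_normal((n_stim, n_units))

def cross_spectrum(x, y):
    """Stimulus-related variance per latent dimension via cross-decomposition:
    dimensions come from repeat x alone; variance is the covariance of the two
    repeats' projections, so independent noise cancels in expectation."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)  # latent dims from repeat 1
    proj_x = x @ vt.T
    proj_y = y @ vt.T
    return (proj_x * proj_y).mean(axis=0)             # cross-variance per dim

spec = cross_spectrum(repeat1, repeat2)

# Fit the power-law exponent on the log-log spectrum over middle dimensions,
# avoiding the few largest dimensions and the noisy tail.
dims = np.arange(1, spec.size + 1)
keep = (dims >= 5) & (dims <= 100) & (spec > 0)
slope = np.polyfit(np.log(dims[keep]), np.log(spec[keep]), 1)[0]
print(f"estimated power-law exponent: {-slope:.2f}")  # should be near alpha
```

Projecting the eigenvector-defining repeat and the held-out repeat onto the same dimensions, then multiplying, is what makes the estimate robust to measurement noise; a naive eigenspectrum of a single repeat would instead flatten at the noise floor.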

Acknowledgements: This research was supported in part by a Johns Hopkins Catalyst Award to MFB, Institute for Data Intensive Engineering and Science Seed Funding to MFB and BM, and grant NSF PHY-2309135 to the Kavli Institute for Theoretical Physics (KITP).