Weight-similarity Topographic Networks Improve Retinotopy and Noise Robustness
Talk Presentation: Sunday, May 18, 2025, 10:45 am – 12:30 pm, Talk Room 2
Session: Object Recognition: Models
Nhut Truong1, Uri Hasson1; 1University of Trento
Typical deep neural networks (DNNs) lack spatial organization and a concept of unit adjacency. In contrast, topographic DNNs (TDNNs) spatially organize units, and are therefore potential spatio-functional models of cortical organization. In previous work, this spatial organization was achieved by adding a loss term that encourages adjacent neurons to exhibit similar activation patterns (activation-similarity, AS-TDNN). However, this optimization is not biologically grounded, and ideally, these correlations should arise naturally as a consequence of biologically motivated constraints. This led us to develop a new type of TDNN, whose training is grounded in the biologically inspired principle that spatially adjacent units should have similar afferent (incoming) synaptic strengths, modeled by similar incoming weight profiles (weight-similarity, WS-TDNN). Using hand-written digit classification (MNIST) as a test domain, we compared the properties of AS-TDNNs, WS-TDNNs, and a control (non-topographic) DNN. Both AS-TDNNs and WS-TDNNs were tested under six different weighting levels for the spatial loss term. While all models achieved nearly identical classification accuracy, WS-TDNNs showed several advantages, including greater robustness to several types of noise, greater resistance to node ablation, and higher unit-level activation variance. Interestingly, WS-TDNNs produced higher correlations between adjacent units than AS-TDNNs, even though the latter were explicitly trained on this objective. Importantly, when tested using standard retinotopy protocols (i.e., rotating wedge and eccentric ring stimuli), WS-TDNNs, but not AS-TDNNs, naturally produced angular and eccentricity-based spatial tuning. This was evident in the smooth transitions in units’ preferred angles and spatial grouping by preferred eccentricity.
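The weight-similarity constraint described above can be sketched as a spatial loss over incoming weight vectors of units arranged on a 2D grid. This is a minimal illustrative implementation, not the authors' exact formulation; the function name, the squared-difference penalty, and the grid layout are all assumptions.

```python
import numpy as np

def weight_similarity_loss(W_in, grid_shape):
    """Hypothetical sketch of a WS-TDNN spatial loss: penalize squared
    differences between the incoming weight vectors of grid-adjacent units.

    W_in: (n_units, n_inputs) incoming weight matrix; unit i sits at
          grid position (i // grid_w, i % grid_w).
    grid_shape: (grid_h, grid_w), with grid_h * grid_w == n_units.
    """
    h, w = grid_shape
    W = W_in.reshape(h, w, -1)
    # Differences between horizontally and vertically adjacent units.
    dx = np.sum((W[:, 1:] - W[:, :-1]) ** 2)
    dy = np.sum((W[1:] - W[:-1]) ** 2)
    # Normalize by the number of units so the loss weighting is
    # comparable across layer sizes.
    return (dx + dy) / W_in.shape[0]

# Identical weight profiles everywhere -> zero spatial loss.
uniform = np.tile(np.array([1.0, 2.0, 3.0]), (4, 1))
print(weight_similarity_loss(uniform, (2, 2)))  # 0.0
```

In training, this term would be added to the classification loss with one of the six weighting levels the abstract mentions; the gradient then pulls neighboring units toward shared afferent profiles, rather than directly optimizing activation correlations as in AS-TDNNs.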
Moreover, these properties emerged naturally through end-to-end training, without the separate pre-optimization steps required in recent studies. These results were also replicated using the CIFAR-10 dataset for object recognition. Overall, our results suggest that TDNNs trained with weight-similarity constraints are viable computational models of visual cortical organization.
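The rotating-wedge retinotopy probe mentioned above can be sketched as follows: present wedge stimuli at a sweep of polar angles, record each unit's response, and assign each unit the angle that drives it most strongly. All names, the wedge parameterization, and the argmax readout are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np

def wedge_mask(size, angle_deg, width_deg=45.0):
    """Binary wedge stimulus centered on the image, pointing at angle_deg
    (hypothetical parameterization of the rotating-wedge stimulus)."""
    ys, xs = np.mgrid[:size, :size]
    cy = cx = (size - 1) / 2
    theta = np.degrees(np.arctan2(ys - cy, xs - cx)) % 360
    # Angular distance from the wedge center, wrapped to [0, 180].
    dist = np.abs((theta - angle_deg + 180) % 360 - 180)
    return (dist <= width_deg / 2).astype(float)

def preferred_angles(responses, angles):
    """Assign each unit the wedge angle evoking its maximal response.

    responses: (n_angles, n_units) mean activation per wedge position.
    angles: (n_angles,) tested polar angles in degrees.
    """
    return angles[np.argmax(responses, axis=0)]

# Toy readout: unit 0 responds most at 90 deg, unit 1 at 0 deg.
angles = np.array([0, 90])
responses = np.array([[0.0, 1.0],
                      [2.0, 0.0]])
print(preferred_angles(responses, angles))  # [90  0]
```

Smooth transitions in the resulting per-unit preferred angles across the grid, as reported for WS-TDNNs, would then correspond to the angular tuning maps seen in cortical retinotopy; an analogous readout with eccentric ring stimuli yields preferred eccentricities.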