Similarities and differences between characteristics of feedforward convolutional neural network models and human visual perceptual learning
Poster Presentation: Monday, May 19, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Plasticity and Learning: Perceptual learning
Tyler Barnes-Diana1, Yuka Sasaki1, Takeo Watanabe1; 1Brown University
Convolutional Neural Networks (CNNs) have been used to replicate a variety of behavioral phenomena in Visual Perceptual Learning (VPL), including learning and location specificity (Cohen & Weinshall, 2017; Wenliang & Seitz, 2018). Here, to examine in which respects CNN models resemble or diverge from human VPL, we compared the learning characteristics of CNNs with those of human VPL. In the present study, we tested CNNs on a classic behavioral contrast discrimination protocol inspired by Yu et al. (2004). Specifically, we examined whether CNNs replicate important characteristics of human VPL: 1) location specificity, 2) feature (i.e., reference-contrast) specificity, and 3) stimulus-ordering effects, whereby randomly interleaving different reference contrasts abolishes the learning that would occur with the same amount of training in a blocked design. Two CNN architectures were used (one deep and one shallow), and a hyperparameter search was performed for both. Model selection criteria were defined on the basis of a human-like time course of learning. In regions of the hyperparameter space where learning met these criteria, both similarities and differences between CNN models and human performance were observed. Similar to human VPL, all CNNs demonstrated partial location specificity. However, in contrast to human VPL, no CNN demonstrated either the feature specificity or the stimulus-ordering effects observed in human VPL. The failure of these CNNs to replicate feature specificity and stimulus-ordering effects could be due to the absence of a temporal chunking system, whereby temporally adjacent trials are grouped together into meaningful categories.
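The sketch below illustrates, in schematic form, the kind of setup described above: a shallow CNN trained on a two-alternative contrast discrimination task at a fixed reference contrast, loosely modeled on Yu et al. (2004). It is not the authors' code; the architecture, image size, contrasts, noise level, and training settings are all assumptions chosen only to make the example runnable.

```python
# Minimal sketch (assumed details, not the authors' implementation):
# a shallow CNN judging whether a test grating's contrast is higher or lower
# than a fixed reference contrast, trained in a blocked design.
import numpy as np
import torch
import torch.nn as nn

def grating(contrast, size=32, sf=4, noise_sd=0.05):
    """Vertical sinusoidal grating at a given contrast, plus pixel noise."""
    x = np.linspace(0, 2 * np.pi * sf, size)
    img = 0.5 + 0.5 * contrast * np.sin(x)[None, :].repeat(size, axis=0)
    img = img + np.random.normal(0, noise_sd, img.shape)
    return torch.tensor(img, dtype=torch.float32)[None]  # shape (1, size, size)

def make_batch(reference=0.32, delta=0.04, n=64):
    """Each trial: test contrast is reference +/- delta; label = 1 if higher."""
    labels = torch.randint(0, 2, (n,))
    imgs = torch.stack([grating(reference + delta if y == 1 else reference - delta)
                        for y in labels])
    return imgs, labels

class ShallowCNN(nn.Module):
    """One conv layer plus a linear readout (an assumed 'shallow' variant)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(8 * 16 * 16, 2),  # higher vs. lower than reference
        )
    def forward(self, x):
        return self.net(x)

model = ShallowCNN()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Blocked training at a single reference contrast; interleaving different
# reference values across trials would probe the stimulus-ordering effect.
for step in range(200):
    x, y = make_batch(reference=0.32)
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Feature specificity could then be probed, under the same assumptions, by evaluating the trained model at an untrained reference contrast, and location specificity by shifting the grating to an untrained image region.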
Acknowledgements: NIH R01EY019466, R01EY027841, R01EY031705, NSF-BSF BCS2241417