Wide Learning: Building Semantic Networks and Differentiating Between Semantic and Visual Representations in Long-Term Memory

Poster Presentation: Monday, May 19, 2025, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Visual Memory: Imagery, long-term

Yael Schems Maimon1,2, Yoed Kenett3, Adva Shoham1,4, Ayala Allon, Galit Yovel1,5, Roy Luria1,5; 1Tel Aviv University, 2Minducate Science of Learning Research and Innovation Center, 3Faculty of Data & Decision Sciences, Technion, 4Sieratzki Institute for Advances in Neuroscience, 5Sagol School of Neuroscience, Tel Aviv University

It has long been established that representations in long-term memory (LTM) are semantically organized, with related words represented 'closer' in LTM. Importantly, previous LTM research either tapped its episodic properties (e.g., using lists of unrelated words) or relied on the existing semantic network (by presenting a list of related words). The current research introduced Wide Learning, a novel paradigm that exposed participants to rich details about objects from an unfamiliar category (beekeeping or skippering), using texts, videos, and images. Learning was assessed by measuring object naming, familiarity ratings, semantic similarity ratings between each word pair, visual similarity ratings between each image pair, and story writing. Critically, this type of learning effectively created a semantic network connecting all learned objects. Additionally, this network was connected to existing related LTM representations, while remaining disconnected from irrelevant concepts. This semantic network was absent before learning and for unfamiliar words that were not learned. Moreover, the results revealed a strong reliance on the emerging semantic network in the story-writing task. Although much of the learning phase was visual (e.g., images and videos), we further demonstrated that learning was semantic rather than visual. We generated pure visual representations of the objects using a visual deep neural network (DNN) based on their images, and pure semantic representations using a language DNN based on their dictionary definitions. When examining the similarity ratings, the semantic contribution (assessed using the language model) to the semantic ratings significantly increased after learning, whereas the visual similarity ratings remained unchanged. Overall, this study introduced a novel paradigm, Wide Learning, which revealed the formation of semantic networks in LTM after learning, further demonstrating how we rely on these semantic networks in natural behavior.

Acknowledgements: Funding for this research was provided by the Minducate Science of Learning Research and Innovation Center of the Sagol School of Neuroscience, Tel Aviv University, and by the Ariane de Rothschild Women Doctoral scholarship to K.T.