The Effect of Scene Clutter on Visual Representations

Poster Presentation: Tuesday, May 20, 2025, 2:45 – 6:45 pm, Pavilion
Session: Scene Perception: Categorization, memory, clinical, intuitive physics, models

Stefania Bracci1, Davide Cortinovis2, Enrico Guarnuto3; 1CIMeC, Trento University

In the field of object perception, the focus has traditionally been on explicit object dimensions such as object category, animacy, or real-world size, often overlooking the complexity of the visual environment in which objects are embedded. For example, to control for possible confounds, most studies use images of objects without any background. However, our visual perception always deals with complex and extremely cluttered visual environments. This neuroimaging study explores how variations in scene clutter influence object-related dimensions such as animacy and real-world size, typically represented in the ventral visual pathway. For this purpose, we created a set of images in which each stimulus was presented either as a single object on a background (e.g., a butterfly) or as an object ensemble (e.g., many butterflies). In addition, animacy and real-world size were orthogonalized, allowing us to test the influence of scene ensembles on each feature space separately. Results revealed an interesting dissociation between regions encoding objects and scenes. In object-selective areas, the animacy dimension was strongly represented in the single object condition but did not reach significance in the object ensemble condition. By contrast, in scene-selective areas, object size was encoded in the object ensemble condition but not in the single object condition. Together, this double dissociation suggests that the feature spaces encoded in visual cortex are shaped by the interaction of (1) the regional computational goal (e.g., scene processing) and (2) the visual properties of the images.