Perceptual Organization: Objects, events, ensembles
Talk Session: Sunday, May 18, 2025, 2:30 – 4:30 pm, Talk Room 1
Talk 1, 2:30 pm
The Crowd Size Illusion
Gabriel Waterhouse1, Sami Yousif1; 1University of North Carolina at Chapel Hill
After several relaxing days on the beach, you step up to the podium to give your talk. Looking out on the audience, you find that some, but not all, of the seats are occupied. The crowd looks empty. But what if your impression is an illusion? What if the number you perceive is influenced not just by the number of filled seats, but also by the number of empty ones? We tested this putative “crowd size illusion” by having participants compare and estimate the numbers of dots in displays with and without visible grids: Some displays contained random arrangements of dots, whereas others contained dots arranged within “cells” in a grid (like people, in seats). When only about 15-30% of the “seats” were occupied, people tended to judge the display with the grid as having fewer dots (consistent with the intuition described above). In a second experiment, we replicated this finding in a direct estimation task. But does the presence of a grid always result in underestimation? Imagine the same scenario as before, except the audience is full to the brim. In that case, might you perceive more people? In a third experiment, we found that when occupancy of the grids was high, those displays were perceived as more numerous. This effect is continuous: Observers underestimated the number of dots when more “seats” were empty and overestimated when more “seats” were full. Both illusions are powerful enough that they are readily appreciated in simple demonstrations. Furthermore, the fact that the direction of the illusion depends on the percentage of occupied cells indicates that this illusion cannot be explained by confounds with continuous spatial properties. In this way, the crowd size illusion is more than a curiosity: It points to a number system that represents number in a part-whole format.
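The two display types can be sketched as follows. This is a minimal illustration of the design, not the authors' stimulus code; the grid size, dot placement within cells, and coordinate conventions are all assumptions for the sketch.

```python
import random

def make_display(n_dots, grid=None, extent=1.0, seed=0):
    """Return (x, y) dot positions for one display.

    grid=None  -> dots placed uniformly at random in the square.
    grid=(r,c) -> dots placed one per cell in an r x c grid of visible
                  "seats", so occupancy = n_dots / (r * c).
    (Layout details are hypothetical; the abstract does not specify them.)
    """
    rng = random.Random(seed)
    if grid is None:
        return [(rng.uniform(0, extent), rng.uniform(0, extent))
                for _ in range(n_dots)]
    rows, cols = grid
    assert n_dots <= rows * cols, "more dots than cells"
    cells = rng.sample(range(rows * cols), n_dots)  # which seats are occupied
    w, h = extent / cols, extent / rows
    # center each dot within its cell
    return [((i % cols + 0.5) * w, (i // cols + 0.5) * h) for i in cells]

# ~20% occupancy of a 10x10 grid: the regime in which the gridded display
# reportedly looks *less* numerous than a matched random display
gridded = make_display(20, grid=(10, 10))
scattered = make_display(20)
```

At high occupancy (e.g., 85 of 100 cells filled) the same function produces the regime in which the gridded display was instead overestimated.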
Talk 2, 2:45 pm
Aftereffects of numerosity are caused by density and size adaptation
Frank H. Durgin1, Abigail Love1, Grace Taylor1; 1Swarthmore College
DeSimone et al. (2021) sought to show that number aftereffects could be distinguished from density aftereffects by using an adapter that was more numerous, but lower in density than the target display. (It was also larger.) They interpreted downward aftereffects in the perception of numerosity as evidence that number was directly adapted, independent of density. However, size adaptation is also known to affect perceived number (Zimmermann & Fink, 2016), and the size ratio used by DeSimone et al. was nearly 3 to 1, while the density ratio of their adapter was only 0.7 to 1, a ratio which produces essentially no upward density adaptation (Sun et al., 2017). Thus, the downward numerosity aftereffect observed may have been due to size adaptation alone. We used the same test stimulus that DeSimone et al. used (30 dots in a circle of 10 deg², centered 3.5° from fixation), but we varied the adapters more systematically and directly measured matches for perceived size and perceived density as well as for perceived number after adaptation. In the critical conditions, in which adapter size was 3x that of the test stimulus and adapter density was 1/3 that of the test stimulus, perceived area decreased by about 15%, while perceived density increased about 20%, and perceived number increased slightly, but reliably, as well (by about 5%, as predicted by combining size and density effects). Thus, DeSimone et al.’s (2021) evidence of downward “number” adaptation was likely due to downward size adaptation (to the 3:1 size ratio) in the absence of any upward density adaptation. This new preregistered observation shows that number adaptation is not easily dissociated from aftereffects of adaptation to patch size and to dot density.
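The predicted ~5% combined effect follows if perceived number is treated as the product of perceived density and perceived area, so that to first order the fractional changes add (a back-of-the-envelope reading of the reported numbers, not the authors' stated model):

```latex
N = \rho A
\quad\Rightarrow\quad
\frac{\Delta N}{N} \;\approx\; \frac{\Delta \rho}{\rho} + \frac{\Delta A}{A}
\;\approx\; +20\% - 15\% \;=\; +5\%.
```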
Talk 3, 3:00 pm
When are dot arrays perceived as shapes? Evidence from configural superiority paradigms
Nicholas Baker1, William Friebel1, Alexa Vushaj1, Peyton Daly1, Madeline Geittmann1, Mihika Tewari1; 1Loyola University of Chicago
The visual system is so attuned to shape that even sparse arrays of disconnected dot elements (such as star constellations) are sometimes perceived as contours. Because they contain so little information, arrays of dots may be the minimum signal necessary to give rise to a shape percept. Understanding what relations between dots result in a perceived shape offers insight into the process by which the visual system forms shape representations from sensory elements. One of the challenges of this research is that it is phenomenological: Whether an array of dots appears to be a shape is a question of an observer’s subjective experience. We made use of a long history of perceptual organization research showing configural advantage effects to quantify the degree to which dot arrays are perceived as shapes using objective tasks. We used two well-known paradigms: object-attracted attention in visual search (Kimchi et al., 2016) and configural superiority effects (CSE) (Pomerantz et al., 1977). We began by testing these methods on a manipulation known to affect shape perception: a shape’s angularity (Baker & Kellman, 2024). In Experiment 1, participants identified the orientation of a target within or outside an array of dots sampled from a smooth vs. angular shape. In Experiment 2, participants identified a target presented either as 12 dots alone or with a noninformative context added that completed the shape of the target. Both paradigms revealed greater configural advantages for smooth contours. We then tested another possible factor in arrays’ perceived shapehood: the ratio of a contour’s area to its perimeter. We repeated both experiments, this time comparing dots sampled from shapes with high area:perimeter ratios with dots sampled from shapes with low area:perimeter ratios. Both experiments showed a greater configural advantage for shapes with larger ratios of area to perimeter.
Talk 4, 3:15 pm
'Visual verbs' drive adaptive predictions: Perception of dynamic event types spontaneously changes visual working memory encoding
Huichao Ji1, Brian Scholl1; 1Yale University
We see the world not only in terms of specific features (such as the color or shape of a ball), but also in terms of a foundational set of abstract/categorical 'event types' (such as a ball bouncing vs. rolling). Recent work has demonstrated that such categorical perception occurs spontaneously during passive viewing of visual scenes, even when verbal encoding is discouraged or disrupted: observers are better able to detect changes across different event types, even when the magnitudes of within-type changes (e.g. across two different animations of bouncing) are objectively greater. Why might this occur? Here we explored the possibility that such spontaneous categorical encoding is adaptive, insofar as it enables differential predictions about likely future states, and so changes what is encoded into memory. This was inspired by the idea that the purpose of perception is not only to characterize the present ("What's out there?") but also to predict the future ("What's about to happen?"). We studied this in a single-trial memory task, e.g. when contrasting bouncing vs. rolling animations: observers viewed a single animation of a ball moving, and then simply reported its final position (after the video had ended and the display had disappeared). This placement was systematically biased by the underlying event type: rolling balls tended to be localized as further back in their actual trajectories horizontally (but not vertically), compared to bouncing balls -- presumably because a bouncing ball can only move forward in the coming moments, while a rolling ball could roll backwards down a ramp. And careful controls showed that this depended on the event-type itself, rather than any lower-level properties (such as the details of the trajectories). This shows how representations of 'visual verbs' might drive adaptive predictions about how a dynamic world is likely to unfold.
Talk 5, 3:30 pm
Dissociating external features from internal structures in visual segmentation of actions
Zekun Sun1, Samuel McDougle1,2; 1Department of Psychology, Yale University, 2Wu Tsai Institute, Yale University
The human mind tends to represent continuous experience as discrete events, imposing “event boundaries” on incoming streams of sensory data. This phenomenon, known as event segmentation, is not just a function of top-down decisions about where events begin and end – event boundaries appear to structure attention and perception as well. How are event boundaries represented perceptually? Previous studies of event structure have typically employed pronounced changes in physical features at event boundaries, e.g., walking through a doorway, large shifts in objects, figures and scenes, salient motion cues, and disruptions of visual statistics. This raises an important question: Is the perceptual representation of an event boundary exclusively driven by processing salient, lower-level physical changes and motion dynamics? Or might higher-level semantic structure also shape how we perceive event boundaries? Here we attempt to disentangle low-level spatiotemporal features of continuous visual input versus high-level representations of natural action structure. Across six pre-registered experiments, we asked participants to detect subtle disruptions in 20 short, individual actions (e.g., kicking a ball, stepping over an obstacle, throwing a frisbee, etc., generated as motion-capture-based simple animations, static images, and point-light biological motion displays), which were presented either in a recognizable intact form or in a distorted manner that only preserved low-level spatiotemporal dynamics and visual features. Results consistently demonstrated that visual detection of subtle disruptions was weaker at action boundaries (i.e., the transitions between discrete steps within the action) relative to non-boundaries, extending previous findings to naturalistic action perception. Crucially, these perceptual effects were driven both by lower-level visual features and by high-level information about action structure. Thus, automatic and rapid perceptual segmentation of actions is likely structured in time by both external cues inherent to the stimulus and our internal models of the world.
This work is supported by NIH grant R01 NS13292
Talk 6, 3:45 pm
Discrete vs. continuous timer bars: How visual segmentation shapes the perception of time "running out"
Jasmindeep Kaur1, Jiaying Zhao1, Joan Danielle K. Ongchoco1; 1The University of British Columbia
Our lives are flooded with visual reminders of time slipping away — from ticking clocks to countdown timers that depict a sense of time “running out”. In time perception, the same duration can feel longer or shorter as a function of various factors (e.g., attention, predictability) — but we know less about the factors that influence the perception of how much time is left. In visual processing, a key discovery is that while sensory input may be a continuous wash of light, what we experience — what the mind parses — are discrete objects and events. Here we explored how discreteness structures our sense of time running out. Observers completed a multi-item localization (MILO) task, where they clicked on multiple targets in a sequence. In every trial, there was a black-bordered rectangular ‘timer-bar’ initially filled with a color that emptied over a period (e.g., 3 seconds) to visually depict the passage of time. The color diminished either *continuously*, gradually and evenly depleting throughout, or *discretely*, with the bar segmented into discrete chunks that disappeared at regular intervals. To measure perceived urgency of time ‘running out’, we examined inter-click latencies (i.e., the time between clicks). Results revealed longer inter-click latencies for discrete (compared to continuous) timer-bars, suggesting greater urgency in the continuous case. This difference disappeared in a separate experiment, where the bar was instead filled over time continuously or discretely, with a reliable interaction between experiments — suggesting that the effects could not simply have been a function of one condition being more distracting than another. Thus, discreteness may have distinct effects on our sense of time running out versus time accumulating. Segmentation in visual depictions of time depletion may make time feel more “manageable,” altering our sense of urgency in time-sensitive tasks.
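The two depletion schedules can be written as a single function of elapsed time. This is a minimal sketch of the manipulation as described, assuming linear depletion and equal-sized chunks; the parameter names and chunk count are hypothetical, not taken from the study.

```python
import math

def fill_fraction(t, duration=3.0, chunks=None):
    """Fraction of the timer bar still filled at elapsed time t.

    chunks=None -> continuous depletion: fill falls linearly to zero.
    chunks=k    -> discrete depletion: the bar is split into k equal
                   segments that vanish whole at regular intervals.
    (Linear depletion and equal chunks are assumptions of this sketch.)
    """
    remaining = max(0.0, 1.0 - t / duration)
    if chunks is None:
        return remaining
    # round up to the nearest whole segment still visible
    return math.ceil(remaining * chunks) / chunks

# halfway through a 3 s trial, a continuous bar shows exactly half,
# while a chunked bar always shows a whole number of segments
half_continuous = fill_fraction(1.5)
half_discrete = fill_fraction(1.5, chunks=6)
```

Note that the discrete bar never shows less fill than the continuous one at the same moment, which is one candidate low-level reason it might convey less urgency.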
Talk 7, 4:00 pm
Processing Fluency Mediates Trust in Data Visualizations
Hamza Elhamdadi1, Suyeon Seo2, Lace Padilla3, Cindy Xiong Bearfield2; 1University of Massachusetts Amherst, 2Georgia Institute of Technology, 3Northeastern University
Trust plays a significant role in how people perceive scientific information and make critical decisions. Information can be discounted or dismissed without mutual trust between the audience and the presenter. Therefore, establishing trust is a critical first step in visual data communication. Drawing on theories of visual perception, we investigate the role of processing fluency—the ease with which visual stimuli are encoded and processed—in shaping trust in visualizations, using scatter plots as our case study. Through two empirical studies, we demonstrate that visualization design can impact processing fluency, leading to altered trust judgments. In Experiment 1, we validated perceptual fluency manipulations using scatterplots through design manipulations based on prior perception and visualization research, such as adding gridlines, introducing blur, or varying data mark transparency. Participants completed a perceptual task estimating the proportion of points in a specific range and rated task difficulty. We found that fluent visualizations yielded higher accuracy and lower difficulty ratings, while manipulations that created disfluent visualizations led to worse performance. In Experiment 2, we created a decision task based on trust games adapted from behavioral economics. Participants allocated resources between two hypothetical companies, each presenting their investment strategies using a scatter plot. We manipulated the relative processing fluency of the plots and found that participants tended to allocate fewer resources to the company presenting data with a disfluent plot. These findings highlight the critical role of perceptual processing in trust and suggest that optimizing the processing fluency of data visualizations can enhance their perceived trustworthiness and their ability to communicate effectively.
NSF IIS-223758 and IIS-2311575