Reliance on Anchor Objects as Spatial Cues Increases Under Low Visibility
Poster Presentation: Sunday, May 18, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Attention: Spatial
Makayla Souza-Wiggins1, Joy J. Geng1; 1University of California, Davis
Much of what we know about visual search in naturalistic scenes comes from images taken under high-visibility conditions. However, real-world searches often occur when visibility is low—such as at night, in rain, or in dimly lit rooms. We hypothesized that in these situations, the ability to use target information to guide visual search is limited, forcing observers to rely more on “anchor” objects (i.e., large objects that predict the locations of smaller related objects). If so, targets should be difficult to find when visibility is low and they appear in unexpected locations relative to their anchors. In Study 1 (N=161), we validated anchor-target spatial predictions by asking participants to indicate where target objects (e.g., a dish sponge) belong within scene images created in Unity. Their responses defined spatially congruent locations (e.g., a sponge on a sink) and incongruent ones (e.g., a sponge on a stove). In Study 2 (N=38), targets appeared in four conditions: congruent or incongruent spatial locations within high- or low-visibility scenes. Visibility was controlled by adjusting the lighting within the Unity models. Each trial concluded with a spatial memory probe, in which participants clicked on a blank screen to indicate the target's location. As predicted, target search times were slower in low-visibility scenes, with the slowest times occurring when targets also appeared in incongruent locations. The effect of spatial congruency on the memory probe was also more pronounced in low-visibility scenes. These findings suggest that when low visibility hinders target detection, observers rely more on anchor objects as proxies to guide attention. This dependence produces a larger spatial congruency effect during both search and memory recall, as expectations based on prior knowledge compensate for perceptual limitations.
These results highlight how attentional strategies adapt to visibility constraints, advancing our understanding of how visual search occurs in real-world settings.
Acknowledgements: NEI T32 Vision Training Grant