Feature- versus object-based attentional templates during feature, conjunction, and object search

Poster Presentation: Monday, May 19, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Attention: Features, objects

Rai Samar Ghulam Bari1, Ziyi Wang1, Anna Grubert1; 1Durham University

Visual search is guided by attentional templates, i.e., target representations held in visual working memory. The content of such templates is still under debate, especially when targets are defined by combinations of task-relevant features. Previous research showed that target templates during conjunction search are feature- rather than object-based. However, these studies often compared guidance during feature versus conjunction search but failed to include (true) object-based control conditions. In this study, we systematically compared attentional guidance (as indexed by the N2pc component of the event-related potential) during feature, conjunction, and object search. We developed a stimulus set of 81 naturalistic object images (9 different object categories, each with 9 members defined by combinations of three colours and three shapes). Target identities were kept constant in each block in Experiment 1 but were cued on a trial-by-trial basis in Experiment 2 to increase the perceptual strength of the target representations and minimise potential long-term memory effects. In different tasks, participants searched for targets with a specific colour or shape (feature search), colour/shape combination (conjunction search), or object category (object search). Results were consistent across both experiments. Reaction times were fastest in feature search, slowest in object search, and intermediate in conjunction search. N2pc amplitudes and latencies were identical in feature and conjunction search, and partially matching distractors captured attention in the conjunction task. This mirrors previous findings demonstrating initial feature-based guidance during conjunction search. In contrast, and even though the stimuli were physically identical across tasks, N2pc amplitudes were attenuated and N2pc latencies were delayed during object search. Furthermore, partially matching distractors were completely ignored. This suggests that attentional templates can contain holistic object representations if required by task demands, but that attentional guidance by such object-based templates is less efficient than feature-based guidance.

Acknowledgements: This work was funded by a research grant from the Leverhulme Trust (RPG-2020-319) awarded to AG.