Measuring Trust in Artificial Intelligence with the contralateral delay activity (CDA)
Poster Presentation: Tuesday, May 20, 2025, 2:45 – 6:45 pm, Pavilion
Session: Visual Memory: Neural mechanisms of working memory
Tobias Feldmann-Wüstefeld1, Eva Wiese1; 1Technische Universität Berlin
Visual working memory is crucial for processing information in dynamic and challenging environments, making it a key factor in human-machine interaction. One increasingly significant form of such interaction is with artificial intelligence (AI). Offloading cognitive workload to AI has great potential for enhancing human performance in complex tasks. However, a critical question in human-AI interaction is the extent to which humans trust the AI. We tested a novel approach to implicitly measure trust in AI with the contralateral delay activity (CDA), an established neural marker of working memory load. In solo-blocks, participants performed a change detection task by themselves: they had to encode items from one hemifield, indicated by a cue, and only that hemifield was probed at the end of a trial. In team-blocks, participants monitored one hemifield while a simple algorithm, framed as an AI, monitored the other. If the human side was probed, participants responded immediately (change / no change). If the AI side was probed, the AI suggested a response (90% accuracy) and the participant confirmed or overruled it. In team-blocks, the CDA was generally reduced compared to solo-blocks, showing that participants encoded items from both their own and the AI’s side, i.e., they did not offload as much memory load as they could have, indicating some level of distrust. Importantly, those participants with the highest self-reported trust showed the smallest CDA amplitudes (the lowest offload) in team-blocks. In a second experiment, the AI’s performance dropped after an initial phase of high accuracy. This led to a general decrease in CDA amplitude, reflecting trust dissolution. When the AI’s performance later recovered, CDA amplitudes varied: some participants regained trust, while others did not. In sum, our study shows that the encoding imbalance reflected in the CDA amplitude can serve as an implicit neural marker of trust in AI.
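For readers unfamiliar with the measure, the CDA is conventionally quantified as the contralateral-minus-ipsilateral difference in event-related potential amplitude over posterior electrodes during the memory retention interval; a reduced difference therefore indicates more bilateral (less lateralized) encoding. A minimal sketch of this standard definition follows; the electrode pair (e.g., PO7/PO8) and the exact time window are illustrative assumptions, not details taken from the abstract:

\[
\mathrm{CDA}(t) \;=\; \bar{V}_{\text{contra}}(t) \;-\; \bar{V}_{\text{ipsi}}(t),
\qquad
\mathrm{CDA\ amplitude} \;=\; \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} \mathrm{CDA}(t)\,dt,
\]

where contralateral and ipsilateral are defined relative to the cued (memorized) hemifield, voltages are averaged over a posterior electrode pair such as PO7/PO8, and \([t_1, t_2]\) spans the retention interval.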
Acknowledgements: This research was supported by the Alexander von Humboldt Foundation.