The impact of font on typo detection: a novel visual search paradigm

Poster Presentation: Saturday, May 17, 2025, 2:45 – 6:45 pm, Banyan Breezeway
Session: Object Recognition: Reading

Emily Heffernan1, Benjamin Wolfe1, Anna Kosovicheva1; 1University of Toronto

In visual search tasks, we scan our environment to identify a target (e.g., looking for your suitcase at the Tampa airport baggage claim). A better understanding of the factors that lead to success and failure in visual search requires a task that mimics our lived experiences but permits a high degree of experimental control. Here, we developed a novel typo detection paradigm to uncover how attentional processes interact with the visual properties of stimuli in a word-based visual search task. Participants (N=9) scanned pseudo-paragraphs comprising random 5-, 6-, and 7-letter words to find typos (i.e., incorrectly spelled words), which were present on 50% of trials. Five types of typos were included: transpositions (two letters swapped), insertions (a letter added to a word), deletions (a letter removed from a word), repetitions (a letter repeated), and substitutions (a letter replaced with another letter). In addition, half of the trials were presented in an “easy-to-read” font (Arial) and half in a “hard-to-read” font (a version of Roboto Flex with a narrow width and high stroke contrast). Font had a main effect on reaction time: participants responded more slowly when stimuli were presented in the hard-to-read font. Font had no overall effect on accuracy. However, participants were worst at identifying transposition errors, and on these trials font did have a significant effect: performance was lower for the hard- versus the easy-to-read font. These findings also highlight substantial individual differences in performance and in sensitivity to the font manipulation. Taken together, these results indicate that the appearance of text does impact visual search for typos, but only for specific types of errors. This paradigm can elucidate how constraints in peripheral vision (e.g., crowding) affect visual search performance for text-based stimuli.

Acknowledgements: This work was supported by a SSHRC Insight Grant to BW and AK.