Quantifying the similarity of neural representations using decision variable correlation

Poster Presentation: Tuesday, May 20, 2025, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Object Recognition: Models

Yu Qian1, Wilson Geisler1, Xuexin Wei1; 1University of Texas at Austin

Previous studies have compared the representations of macaque visual cortex and deep vision-based neural networks. Intriguingly, while some suggest that their representations are highly similar, others have argued the opposite. To investigate this question, we develop a method to quantify the trial-by-trial similarity between a neural network and the brain, leveraging the behavior predicted from their internal representations. Our technique is based on decision variable correlation (DVC). DVC was originally developed to infer, from behavior in binary choice tasks, how correlated the decision variables of two observers are. We generalize the method to neural representations. The key idea is to first use an optimal linear classifier to convert population activity into a decision variable, and then compute the Pearson correlation between the decision variables of the two systems. To address the underestimation of DVC caused by noise, we further developed a technique to normalize the estimate by the noise ceiling. We compared our method with a previously used method based on Cohen’s kappa. We apply our method to study the similarity of monkey inferior temporal cortex (IT) and deep networks, using a public IT dataset collected while monkeys viewed images of objects. We find that the DVC between the two monkeys is high, whereas the DVC between each monkey and the networks is generally lower. Interestingly, the similarity between different networks is not as large as previously reported. Additionally, better-performing networks appear to be less similar to the monkeys, based on the inferred DVC. Our study provides a new way to evaluate the similarity of two neural representations based on their implied behavior under a linear readout. The technique is general and should be applicable to other datasets as well.
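
For concreteness, the pipeline described above admits a direct implementation sketch. The code below is a minimal illustration under assumptions the abstract does not specify: a binary discrimination task with trial-matched responses from both systems, logistic regression standing in for the "optimal linear classifier", and a split-half reliability estimate standing in for the noise ceiling. All function names (`dvc`, `split_half_ceiling`, `normalized_dvc`) and these modeling choices are hypothetical, not the authors' exact procedure.

```python
# Sketch of decision variable correlation (DVC) between two neural
# representations, with a split-half noise-ceiling normalization.
# Assumptions (not from the abstract): binary task, logistic-regression
# readout, geometric-mean ceiling across the two systems.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def decision_variable(X_train, y_train, X_test):
    """Fit a linear readout, then project held-out trials onto it."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.decision_function(X_test)

def dvc(Xa, Xb, y):
    """Raw DVC: Pearson correlation of the two systems' decision
    variables on the same held-out trials."""
    idx_train, idx_test = train_test_split(
        np.arange(len(y)), test_size=0.5, stratify=y, random_state=0)
    dv_a = decision_variable(Xa[idx_train], y[idx_train], Xa[idx_test])
    dv_b = decision_variable(Xb[idx_train], y[idx_train], Xb[idx_test])
    return np.corrcoef(dv_a, dv_b)[0, 1]

def split_half_ceiling(X, y, n_splits=20):
    """Within-system reliability: average DVC between random halves of
    the same population (one plausible noise-ceiling estimate)."""
    r = []
    for _ in range(n_splits):
        perm = rng.permutation(X.shape[1])
        half = X.shape[1] // 2
        r.append(dvc(X[:, perm[:half]], X[:, perm[half:]], y))
    return np.mean(r)

def normalized_dvc(Xa, Xb, y):
    """Raw DVC divided by the geometric mean of the two systems'
    split-half ceilings, to correct for noise-driven underestimation."""
    return dvc(Xa, Xb, y) / np.sqrt(
        split_half_ceiling(Xa, y) * split_half_ceiling(Xb, y))

# Toy usage: two noisy populations sharing a common task signal.
n_trials, n_units = 400, 60
y = rng.integers(0, 2, n_trials)
signal = np.outer(y - 0.5, rng.normal(size=n_units))
Xa = signal + 0.8 * rng.normal(size=(n_trials, n_units))
Xb = signal + 0.8 * rng.normal(size=(n_trials, n_units))
print(f"raw DVC: {dvc(Xa, Xb, y):.2f}, "
      f"normalized: {normalized_dvc(Xa, Xb, y):.2f}")
```

In this sketch the normalized value exceeds the raw DVC because independent trial-by-trial noise in each population caps the raw correlation below the shared-signal correlation; dividing by the within-system reliability is one standard way to estimate what the correlation would be in the noiseless limit.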