Friday, May 13, 2022, 5:00 – 7:00 pm EDT, Talk Room 1
Organizers: Ömer Dağlar Tanrıkulu1, Arni Kristjansson2; 1Williams College, 2University of Iceland
Presenters: Ömer Dağlar Tanrıkulu, Dobromir Rahnev, Andrey Chetverikov, Robbe Goris, Uta Noppeney, Cristina Savin
The presence of image noise and the absence of a one-to-one inverse mapping from images back to scene properties have led to the idea that visual perception is inherently probabilistic. The visual system is thought to deal with this uncertainty by representing sensory information in a probabilistic fashion. Despite the prevalence of this view in vision science, providing empirical evidence for such probabilistic representations in the visual system can be very challenging. First, probabilistic perception is difficult to operationalize and has therefore been interpreted differently by various researchers. Second, experimental results can typically be accounted for, in principle, by both probabilistic and non-probabilistic representational schemes. Our goal in this symposium is to evaluate the empirical evidence for (or against) the probabilistic description of visual processing by discussing the potential advantages (and disadvantages) of the different methodologies used within vision science to address this question. The symposium brings together speakers from diverse perspectives, including computational modeling, neuroscience, psychophysics, and philosophy. Our speakers include promising junior researchers as well as established scientists. In the first talk, Ömer Dağlar Tanrıkulu will provide an introduction summarizing the main challenges in providing evidence for probabilistic visual representations, as well as his proposal for sidestepping these obstacles. Next, Dobromir Rahnev will focus on the difficulties in operationalizing the term “probabilistic perception” and suggest a tractable research direction, illustrated with studies from his lab. In the third talk, Andrey Chetverikov will explain and illustrate empirical methodologies for distinguishing between representations of probabilities and probabilistic representations in vision.
In the fourth talk, Robbe Goris will present a recently developed methodology and discuss the implications of observers’ estimates of their own visual uncertainty. In the fifth talk, Uta Noppeney will approach the issue from a multisensory perspective and discuss the success of Bayesian Causal Inference models in explaining how the brain integrates visual and auditory information to create a representation of the world. Finally, Cristina Savin will consider probabilistic representations at a mechanistic level and present a novel neural network model implementing Bayes-optimal decisions to account for certain sequential effects in perceptual judgments. Each 15-minute talk will be followed by 5 minutes of Q&A and discussion. The speaker line-up highlights the multidisciplinary nature of this symposium, reflecting that our target audience comprises researchers from all areas of vision science. We are confident that researchers at all career stages, as well as the broad VSS audience, will benefit from this symposium. Students and early-career researchers will gain a better understanding of the evidence for, or against, probabilistic visual perception, equipping them with a perspective for evaluating other research they will encounter at VSS. More importantly, the discussion will help both junior and senior scientists bring their implicit assumptions about this important topic to the surface. This, in turn, will allow the vision community to identify research directions that are more likely to advance our understanding of the probabilistic nature of visual processing.
Presentations
How can we provide stronger empirical evidence for probabilistic representations in visual processing?
Ömer Dağlar Tanrıkulu1; 1Cognitive Science Program, Williams College, MA, USA
Probabilistic approaches to cognition have had great empirical success, especially in building computational models of perceptual processes. This success has led researchers to propose that the visual system represents sensory information probabilistically, which has resulted in high-profile studies exploring the role of probabilistic representations in visual perception. Yet there is still substantial disagreement over the conclusions that can be drawn from this work. In the first part of this talk, I will outline the critical views on the probabilistic nature of visual perception. Some critics underline the inability of experimental methodologies to distinguish between perceptual processes and perceptual decisions, while others point to the successful use of non-probabilistic representational schemes in explaining these experimental results. In the second part of the talk, I will propose two criteria that must be satisfied to provide empirical evidence for probabilistic visual representations. The first criterion requires experiments to demonstrate that representations involving probability distributions are actually generated by the visual system, rather than being imposed on the task by the experimenter. The second criterion requires demonstrating structural correspondence (as opposed to mere correlation) between the internal states of the visual system and stimulus uncertainty. Finally, I will illustrate how these two criteria can be met through a psychophysical methodology using priming effects in visual search tasks.
The mystery of what probabilistic perception means and why we should focus on the complexity of the internal representations instead
Dobromir Rahnev1; 1School of Psychology, Georgia Institute of Technology, Atlanta, GA
Two years ago, I joined an adversarial collaboration on whether perception is probabilistic. The idea was to quickly agree on a precise definition of the term “probabilistic perception” and then focus on designing experiments that can reveal whether it exists. Two years later, we are still debating the definition of the term, and I now believe that it cannot be defined. Why the pessimism? At the heart of probabilistic perception is the idea that the brain represents information as probability distributions. Probability distributions, however, are mathematical objects derived from set theory that do not easily apply to the brain. In practice, probabilistic perception is typically equated with “having a representation of uncertainty.” This phrase ultimately seems to mean “having a representation of any information beyond a point estimate.” Defined this way, the claim that perception is probabilistic borders on the trivial, and the connection to the notion of probability distributions appears remote. I no longer think that there is a way forward on this definitional question. Indeed, in empirical work, the term probabilistic perception seems to serve as a litmus test of how researchers feel about Bayesian theories of the brain rather than a precise hypothesis about the brain itself. What then? I argue that the question that is both well-posed and empirically tractable is “How complex is the perceptual representation?” I will briefly review what we know about this question and present recent work from my lab suggesting that perceptual representations available for decision-making are simple and impoverished.
Representations of probabilities and probabilistic representations
Andrey Chetverikov1, Arni Kristjansson2; 1Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands, 2Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland.
Both the proponents and the opponents of probabilistic perception draw a distinction between representations of probabilities (e.g., the object I see is more likely to have orange hues than green) and probabilistic representations (this object is probably an orange and not an apple). The former corresponds to the probability distribution of sensory observations given the stimulus, while the latter corresponds to the opposite, the probabilities of potential stimuli given the observations. This dichotomy is important as even plants can respond to probabilistic inputs presumably without making any inferences about the stimulus. It is also important for the computational models of perception as the Bayesian observer aims to infer the stimulus, not the observations. It is then essential to evaluate the empirical evidence for probabilistic representations and not the representation of probabilities to answer the question posed by this symposium. However, is it possible to empirically distinguish between the two? We will discuss this question using the data from our recent work on probabilistic perception as an illustration.
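The distinction above maps onto the two sides of Bayes’ rule. A minimal illustrative sketch in Python (the object labels, likelihood values, and prior are purely hypothetical numbers chosen for illustration, not material from the talk):

```python
# Toy contrast: p(observation | stimulus) vs. p(stimulus | observation).
stimuli = ["orange", "apple"]

# "Representation of probabilities": how likely an orange-ish hue observation
# is under each candidate object -- the likelihood p(x | s). (Illustrative values.)
likelihood = {"orange": 0.9, "apple": 0.3}
prior = {"orange": 0.5, "apple": 0.5}   # flat prior over objects

# A "probabilistic representation" inverts this with Bayes' rule to obtain
# the posterior over stimuli given the observation, p(s | x).
unnorm = {s: likelihood[s] * prior[s] for s in stimuli}
Z = sum(unnorm.values())
posterior = {s: unnorm[s] / Z for s in stimuli}

print(posterior)  # posterior favors "orange" given the orange-ish observation
```

Both schemes can yield graded responses to uncertain input, which is exactly why distinguishing them behaviorally is difficult: the question is whether the system ever performs the inversion, not merely whether it is sensitive to probabilities.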
Quantifying perceptual introspection
Robbe Goris1; 1Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
Perception is fallible, and humans are aware of this. When we experience a high degree of confidence in a perceptual decision, it is more likely to be correct. I will argue that our sense of confidence arises from a computation that requires direct knowledge of the uncertainty of perception, and that it is possible to quantify the quality of this knowledge. I will introduce a new method to assess the reliability of a subject’s estimate of their perceptual uncertainty (i.e., uncertainty about uncertainty, which I term “meta-uncertainty”). Application of this method to a large set of previously published confidence studies reveals that a subject’s level of meta-uncertainty is stable over time and across at least some domains. Meta-uncertainty can be manipulated experimentally: it is higher in tasks that involve more levels of stimulus reliability across trials or more volatile stimuli within trials. Meta-uncertainty appears to be largely independent of task difficulty, task structure, response bias, and attentional state. Together, these results suggest that humans intuitively understand the probabilistic nature of perception and automatically evaluate the reliability of perceptual impressions.
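The idea of uncertainty about one’s own uncertainty can be made concrete with a toy simulation (an illustrative sketch, not the method from the talk: the lognormal noise on the observer’s noise estimate and all parameter values are assumptions):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

true_sigma = 1.0        # actual sensory noise (illustrative)
meta_uncertainty = 0.4  # sd of log noise-estimates; 0 would mean perfect introspection

n_trials = 10_000
stimulus = 0.5          # signed stimulus strength; its sign defines the correct response
evidence = stimulus + rng.normal(0.0, true_sigma, n_trials)

# The observer does not know true_sigma exactly: on each trial it works with
# a noisy (lognormal) estimate whose spread is the meta-uncertainty.
sigma_est = true_sigma * np.exp(rng.normal(0.0, meta_uncertainty, n_trials))

# Confidence = probability the chosen sign is correct, computed from the
# *estimated* noise level (standard normal CDF).
phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))
confidence = phi(np.abs(evidence) / sigma_est)
correct = np.sign(evidence) == np.sign(stimulus)

print(confidence[correct].mean(), confidence[~correct].mean())
```

In this toy model, accuracy depends only on the sign of the evidence and is unaffected by meta-uncertainty, while larger meta-uncertainty tends to blur the confidence-accuracy relationship: a way of seeing why introspective quality can be measured separately from perceptual sensitivity.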
Constructing a representation of the world across the senses
Uta Noppeney1; 1Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
Our senses are constantly bombarded with a myriad of diverse signals. Transforming this sensory cacophony into a coherent percept of our environment relies on solving two computational challenges. First, we need to solve the causal inference problem – deciding whether signals come from a common cause and should therefore be integrated, or come from different sources and should be treated independently. Second, when there is a common cause, we should integrate signals across the senses weighted in proportion to their sensory precisions. I will discuss recent research at the behavioural, computational, and neural-systems levels investigating how the brain combines sensory signals in the face of uncertainty about the world’s causal structure. Our results show that the brain constructs a multisensory representation of the world approximately in line with Bayesian Causal Inference.
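The two computational steps can be sketched in a small Bayesian Causal Inference toy model (a minimal illustration assuming Gaussian sensory noise and a zero-mean Gaussian prior over source location; all parameter values are hypothetical):

```python
import numpy as np

def fuse(x_v, x_a, sigma_v, sigma_a):
    """Step 2: precision-weighted integration under a common cause."""
    w_v = sigma_v**-2 / (sigma_v**-2 + sigma_a**-2)
    return w_v * x_v + (1.0 - w_v) * x_a

def p_common(x_v, x_a, sigma_v, sigma_a, sigma_p=10.0, prior_c=0.5):
    """Step 1: posterior probability that the visual and auditory signals
    share a cause, with a zero-mean Gaussian prior (sd sigma_p) over location."""
    # Marginal likelihood of the signal pair under one common source (C = 1) ...
    var1 = (sigma_v * sigma_a)**2 + (sigma_v * sigma_p)**2 + (sigma_a * sigma_p)**2
    like1 = np.exp(-0.5 * ((x_v - x_a)**2 * sigma_p**2
                           + x_v**2 * sigma_a**2
                           + x_a**2 * sigma_v**2) / var1) / (2 * np.pi * np.sqrt(var1))
    # ... and under two independent sources (C = 2).
    var_v = sigma_v**2 + sigma_p**2
    var_a = sigma_a**2 + sigma_p**2
    like2 = np.exp(-0.5 * (x_v**2 / var_v + x_a**2 / var_a)) \
            / (2 * np.pi * np.sqrt(var_v * var_a))
    return like1 * prior_c / (like1 * prior_c + like2 * (1.0 - prior_c))

# Nearby signals: probably one source, so integrate; distant signals: probably two.
print(p_common(1.0, 2.0, 1.0, 2.0))   # close pair -> high probability of a common cause
print(p_common(1.0, 20.0, 1.0, 2.0))  # far pair -> low probability of a common cause
print(fuse(1.0, 2.0, 1.0, 2.0))      # fused estimate, weighted toward the more precise cue
```

The full model then combines the two steps, e.g. by averaging the integrated and segregated estimates weighted by the causal posterior; the sketch above only exposes the two ingredients named in the abstract.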
Sampling-based decision making
Cristina Savin1; 1Center for Neural Science, Center for Data Science, New York University, New York, NY
There is substantial debate about the neural correlates of probabilistic computation (as evidenced in a Computational Cognitive Neuroscience – GAC 2020 workshop). Among competing theories, neural sampling provides a compact account of how variability in neural responses can be used to flexibly represent probability distributions, one that accounts for a range of V1 response properties. Since samples encode uncertainty implicitly, distributed across time and neurons, it remains unclear how such representations can be used for decision making. Here we present a simple model of how a spiking neural network can integrate posterior samples to support Bayes-optimal decision making. We use this model to study the behavioral and neural consequences of sampling-based decision making. Because the integration of posterior samples in the decision circuit is continuous in time, it leads to systematic biases after abrupt changes in the stimulus. These biases are reflected in behavioral biases towards recent history, similar to documented sequential effects in human decision making, and in stimulus-specific neural transients. Overall, our work provides a first mechanistic model of decision making with sampling-based codes, and a stepping stone towards unifying sampling and parametric perspectives on Bayesian inference.
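The mechanism described above can be caricatured in a few lines (a toy rate-based sketch, not the spiking network from the talk; the Gaussian “posterior sampler” and the integration time constant are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def posterior_sample(stimulus, sigma=1.0):
    # Stand-in for neural sampling: one draw from a toy Gaussian posterior
    # over the latent feature given the current stimulus.
    return rng.normal(stimulus, sigma)

# Decision variable: leaky temporal integration of incoming posterior samples.
tau = 20.0                                  # integration time constant (in steps)
dv = 0.0
trace = []
stimuli = [-1.0] * 100 + [+1.0] * 100       # abrupt stimulus switch halfway through
for s in stimuli:
    dv += (posterior_sample(s) - dv) / tau  # continuous-in-time integration
    trace.append(dv)

# Just after the switch, the decision variable still reflects the old stimulus:
# a systematic bias toward recent history, as in sequential effects.
print(trace[99], trace[102], trace[-1])
```

Because the integrator cannot distinguish samples generated before the change from samples generated after it, decisions made shortly after an abrupt stimulus change are transiently pulled toward the previous stimulus, which is the behavioral signature the abstract describes.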