Bayesian models applied to perceptual behavior
Friday, May 9, 2008, 3:30 - 5:30 pm
Royal Palm 4
Organizer: Peter Battaglia (University of Minnesota)
Presenters: Alan Yuille (University of California, Los Angeles), David Knill
(University of Rochester), Paul Schrater (University of Minnesota), Tom
Griffiths (University of California, Berkeley), Konrad Koerding (Northwestern
University), Peter Battaglia (University of Minnesota)
This symposium will
provide information and methodological tools for researchers who are
interested in modeling perception as probabilistic inference, but are
unfamiliar with the practice of such techniques.
In the last 20 years, scientists characterizing perception as
Bayesian inference have produced a number of robust models that explain
observed perceptual behaviors and predict new, unobserved behaviors.
Such successes are due to the formal, universal language of Bayesian
models and the powerful hypothesis-evaluation tools they allow.
Yet many researchers who attempt to build and test Bayesian models
feel overwhelmed by the potentially steep learning curve and abandon their
attempts after stumbling over unintuitive obstacles.
It is important that those scientists who recognize the explanatory
power of Bayesian methods and wish to implement the framework in their own
research have at their disposal both the tools and the know-how to use them.
This symposium will provide a gentle introduction to the most
important elements of Bayesian models of perception, while avoiding the
nuances and subtleties that are not critical.
The symposium will be geared toward senior faculty and students
alike: understanding the major concepts requires no technical prerequisites,
and applying the methods requires only basic probability theory and
experimental statistics.
Those comfortable with Bayesian modeling may find the symposium interesting,
but the target audience will be the uninitiated.
The formalism of Bayesian models permits a principled description of the
processes that allow organisms to recover scene properties from sensory
measurements, thereby enabling a clear statement of experimental hypotheses
and their connections with related theories. Many people believe Bayesian
modeling is primarily a means of fitting unpleasant data by invoking a
prior; this misconception will be dealt with!
In previous attempts to correct such notions, instruction about
probabilistic models of perception has mostly fallen into one of two
categories: qualitative, abstract description, or quantitative, technical
application. This symposium
constitutes a hybrid of these categories by phrasing qualitative
descriptions in quantitative formalism.
Intuitive and familiar examples will be used so the connection
between abstract and practical issues remains clear.
The goals of this symposium are twofold: to present the most current and
important ideas involving probabilistic perceptual models, and to provide
hands-on experience working with them.
To accomplish these goals, our speakers will address topics such as
the history of and motivation for probabilistic models of perception, the
relation between sensory uncertainty and probability-theoretic
representations of variability, the brain's assumptions about how the world
causes sensory measurements, how to investigate the brain's internal
knowledge of probability, and how to frame psychophysical tasks as
perceptually-guided decisions. Hands-on modeling tutorials will be presented
as Matlab scripts, made available for download beforehand so that those
with laptops can follow along.
Each talk will link the conceptual material to the scientific interests of
the audience by presenting primary research and suggesting perceptual
problems that are ripe for the application of Bayesian methods.
Modeling Vision as Bayesian Inference: Is
it Worth the Effort?
The idea of perception
as statistical inference grew
out of work in the 1950s in the context of a general theory of auditory and
visual signal detectability.
From the start, signal detection theory used concepts and tools from
Bayesian statistical decision theory that remain with us today:
(1) a generative model that specifies the probability of sensory data
conditioned on signal states; (2) prior probabilities of those states; and
(3) the utility of decisions or actions as they depend on those states.
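These three ingredients can be made concrete in a few lines. The following is a minimal sketch of a Bayesian detection decision; the prior, the Gaussian generative model, and the utility table are all invented numbers for illustration, not values from the talk:

```python
import numpy as np

# Hypothetical two-state detection problem: signal absent (s=0) or present (s=1).
prior = np.array([0.7, 0.3])            # (2) prior probabilities P(s=0), P(s=1)

def likelihood(x, s):
    """(1) Generative model: P(x | s), a Gaussian around a state-dependent mean."""
    mean = [0.0, 1.0][s]
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2 * np.pi)

# (3) Utility of responding "absent"/"present" in each true state (rows: true state).
utility = np.array([[ 1.0, -2.0],       # s=0: correct reject = +1, false alarm = -2
                    [-1.0,  4.0]])      # s=1: miss = -1, hit = +4

def decide(x):
    post = prior * np.array([likelihood(x, s) for s in (0, 1)])
    post /= post.sum()                  # posterior P(s | x) via Bayes' rule
    expected_u = post @ utility         # expected utility of each response
    return int(np.argmax(expected_u))   # choose the response that maximizes it

print(decide(1.5))   # strong evidence for the signal -> respond "present" (1)
print(decide(-1.0))  # evidence for noise -> respond "absent" (0)
```

The asymmetric utility table shows why an optimal observer's criterion shifts away from simple posterior maximization when some errors are costlier than others.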
By the 1990s, statistical inference models
were being extended to an increasingly wide range of problems,
including object and motion perception, perceptual organization, attention,
reading, learning, and motor control. These applications have relied in part
on the development of new
concepts and computational methods to analyze and model more
realistic visual tasks. I
will provide an overview of current
work, describing some of the success stories. I will try to identify
future challenges for testing and modeling theories of visual
behavior--research that will require learning and computing probabilities
on more complex, structured representations.
Bayesian modeling in the context of
robust cue integration
Building Bayesian models
of visual perception is becoming increasingly popular in our field.
Those of us who make a living constructing and testing Bayesian
models are often asked the question, "What good are models that can be fit
to almost any behavioral data?" I will address this question in two ways:
first by acknowledging the ways in which Bayesian modeling can be
misused, and second by outlining how Bayesian modeling, when properly
applied, can enhance our understanding of perceptual processing. I will use
robust cue integration as an example to illustrate some ways in which
Bayesian modeling helps organize our understanding of the factors that
determine perceptual performance, makes predictions about performance, and
generates new and interesting questions about perceptual processes.
Robust cue integration characterizes the problem of how the brain
integrates information from different sensory cues that have unnaturally
large conflicts. To build a
Bayesian model of cue integration, one must explicitly model the world
processes that give rise to such conflicting cues.
When combined with models of internal sensory noise, such models
predict behaviors that are consistent with human performance.
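For the simplest case, two Gaussian cues to the same scene property under a flat prior, the optimal combination is the familiar reliability-weighted average. A minimal sketch, with assumed cue means and variances:

```python
# Minimal sketch: two noisy cues to the same scene property, each modeled as
# a Gaussian likelihood. With a flat prior, the Bayes-optimal estimate is the
# reliability-weighted average of the single-cue estimates. The means and
# variances below are assumed values for illustration only.

def integrate(mu_a, var_a, mu_b, var_b):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)   # weight = relative reliability
    mu = w_a * mu_a + (1 - w_a) * mu_b            # fused estimate
    var = 1 / (1 / var_a + 1 / var_b)             # fused variance (never larger
                                                  # than either cue alone)
    return mu, var

mu, var = integrate(mu_a=10.0, var_a=1.0, mu_b=14.0, var_b=3.0)
print(mu, var)   # approximately 11.0 and 0.75 -- pulled toward reliable cue A
```

Robust cue integration extends this linear scheme: with unnaturally large conflicts, an observer who also models the possibility of discrepant causes will down-weight or veto one cue rather than average blindly.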
While we can "retro-fit" the models to the data, the real test of our
models is whether they agree with what we know about sensory processing and
the structure of the environment (though mismatches may invite questions
ripe for future research). At
their best, such models help explain how perceptual behavior relates to the
computational structure of the problems observers face and the constraints
imposed by sensory mechanisms.
Bayesian models for sequential decisions
Executing perceptually-guided actions, like saccades and reaches, requires
our brains to overcome uncertainty about the objects and geometry relevant
to our actions (the world state), the potential consequences of our actions,
and the individual rewards attached to these consequences.
A principled approach to such problems is termed "stochastic-optimal
control", and uses Bayesian inference to simultaneously update beliefs about
the world state, action consequences, and individual rewards.
Rational agents seek rewards; since rewards depend on the
consequences of actions, and those consequences depend on the world state,
updating beliefs about all three is necessary to acquire the most reward.
Consider the example of
reaching to grasp your computer mouse while viewing your monitor.
Some strategies and outcomes for guiding your reach include:
(1) keeping your eyes fixed, moving quickly, and probably missing the mouse;
(2) keeping your eyes fixed, moving slowly, and wasting time reaching;
(3) turning your head, staring at the mouse, and wasting time moving your
head; or (4) quickly saccading toward the mouse, giving you enough
positional information to make a fast reach without wasting much time.
This example highlights the kind of balance perceptually-guided
actions strike thousands of times a day: trading off information-gathering
against action-execution when costs (e.g., time, missing the target) are
attached.
Using the language of stochastic-optimal control, tradeoffs like these can
be formally characterized, explaining otherwise opaque behavioral decisions.
My presentation will introduce stochastic-optimal control theory and
show how applying its basic principles offers a powerful framework for
describing and evaluating perceptually-guided action.
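The mouse-reaching example above can be sketched as a toy expected-reward comparison. All numbers below (rewards, time costs, hit probabilities) are invented for illustration, not values from the talk:

```python
# Toy sketch in the spirit of stochastic-optimal control: each strategy has a
# time cost and a probability of hitting the target; the agent picks the
# strategy with the highest expected reward. All values are assumptions.

REWARD_HIT = 10.0        # assumed reward for grasping the mouse
COST_PER_SECOND = 2.0    # assumed cost of elapsed time

# (strategy, movement time in seconds, probability of hitting the target)
strategies = [
    ("eyes fixed, fast reach", 0.4, 0.50),
    ("eyes fixed, slow reach", 2.0, 0.95),
    ("turn head, then reach", 1.5, 0.99),
    ("saccade, then fast reach", 0.7, 0.95),
]

def expected_reward(time_s, p_hit):
    # reward weighted by success probability, minus the cost of time spent
    return p_hit * REWARD_HIT - COST_PER_SECOND * time_s

best = max(strategies, key=lambda s: expected_reward(s[1], s[2]))
print(best[0])   # "saccade, then fast reach" wins under these numbers
```

A full stochastic-optimal controller would additionally update beliefs about the world state online as each eye or hand movement delivers new information; this sketch only compares fixed strategies by expected reward.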
Exploring subjective probability
distributions using Bayesian statistics
Bayesian models of
cognition and perception express the expectations of learners and observers
in terms of subjective probability distributions - priors and likelihoods.
This raises an interesting psychological question: if human inferences
adhere to the principles of Bayesian statistics, how can we identify the
subjective probability distributions that guide these inferences? I will
discuss two methods for exploring subjective probability distributions. The
first method is based on evaluating human judgments against distributions
provided by the world. The second substitutes people for elements in
randomized algorithms that are commonly used to generate samples from
probability distributions in Bayesian statistics. I will show how these
methods can be used to gather information about the priors and likelihoods
that seem to characterize human judgments.
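One way to picture the second method, as a simplified sketch rather than the presenters' actual procedure: treat a person's binary choices between the current stimulus and a proposed one as the acceptance step of a Markov chain Monte Carlo sampler. Below, a simulated chooser with a hidden Gaussian "subjective density" stands in for the participant; all parameters and function names are assumptions for this demo:

```python
import random, math

random.seed(0)

def subjective_density(x):
    # Hidden subjective density the experimenter wants to recover
    # (assumed here: Gaussian with mean 5, unit spread).
    return math.exp(-0.5 * (x - 5.0) ** 2)

def choose(current, proposal):
    """Simulated participant: prefer each option in proportion to its
    subjective density (Barker acceptance rule). If real choices follow
    this rule, the accepted stimuli form a Markov chain whose stationary
    distribution is the subjective density itself."""
    p_prop = subjective_density(proposal)
    p_cur = subjective_density(current)
    return proposal if random.random() < p_prop / (p_prop + p_cur) else current

x, samples = 0.0, []
for _ in range(20000):
    x = choose(x, x + random.gauss(0, 1))   # propose a nearby stimulus
    samples.append(x)

burned = samples[2000:]                      # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 1))                        # chain mean lands near the hidden 5
```

The experimenter never queries the density directly; the histogram of accepted stimuli reveals it, which is what makes the trick useful for exposing subjective priors.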
Causal inference in multisensory perception
Perceptual events derive their significance to an animal from what they
mean about the world, that is, from the information they carry about their
causes. The brain should thus be able to efficiently infer the causes
underlying sensory events. Here we
use multisensory cue combination to study causal inference in perception. We
formulate an ideal-observer model that infers whether two sensory cues
originate from the same location and that also estimates their location(s).
This model accurately predicts the nonlinear integration of cues by human
subjects in two auditory-visual localization tasks. The results show that
indeed humans can efficiently infer the causal structure as well as the
location of causes. By combining insights from the study of causal inference
with the ideal-observer approach to sensory cue combination, we show that
the capacity to infer causal structure is not limited to conscious,
high-level cognition; it is also performed continually and effortlessly in
perception.
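The ideal-observer model described above can be sketched as follows. This is a minimal version in the spirit of common-cause models of cue combination; the noise parameters, prior spread, and common-cause prior are assumed values, not the study's fitted parameters:

```python
import math

SIG_V, SIG_A, SIG_P = 1.0, 2.0, 10.0   # visual/auditory noise, prior spread (assumed)
P_COMMON = 0.5                          # prior probability of a common cause (assumed)

def gauss(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def like_common(x_v, x_a):
    """P(x_v, x_a | one cause): marginalize over the shared source location."""
    vv, va, vp = SIG_V ** 2, SIG_A ** 2, SIG_P ** 2
    denom = vv * va + vv * vp + va * vp
    num = (x_v - x_a) ** 2 * vp + x_v ** 2 * va + x_a ** 2 * vv
    return math.exp(-0.5 * num / denom) / (2 * math.pi * math.sqrt(denom))

def like_separate(x_v, x_a):
    """P(x_v, x_a | two causes): each cue has its own independent source."""
    vv, va, vp = SIG_V ** 2, SIG_A ** 2, SIG_P ** 2
    return gauss(x_v, 0.0, vv + vp) * gauss(x_a, 0.0, va + vp)

def p_common(x_v, x_a):
    """Posterior probability that both cues came from a single source."""
    l1 = like_common(x_v, x_a) * P_COMMON
    l2 = like_separate(x_v, x_a) * (1 - P_COMMON)
    return l1 / (l1 + l2)

print(p_common(0.5, 1.0) > 0.5)   # nearby cues: likely one cause (True)
print(p_common(0.5, 8.0) < 0.5)   # distant cues: likely two causes (True)
```

This structure produces the nonlinear integration mentioned above: cues are fused when the common-cause posterior is high and segregated when it is low, rather than being averaged at every conflict size.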
Applying a Bayesian model to a perceptual question
Bayesian models provide
a powerful language for describing and evaluating hypotheses about
perceptual behaviors. When implemented properly, they allow strong
conclusions about the brain's perceptual solutions for determining what
caused incoming sensory measurements. Yet constructing a Bayesian model may
seem challenging, and perhaps "not worth the trouble," to those who are not
intimately familiar with the practice.
Even with a clear Bayesian model, it is not always obvious how experimental
data should be used to evaluate the model's parameters.
This presentation will demystify the process by walking through the
modeling and analysis of a simple, relevant perceptual example.
First I will introduce a
familiar perceptual problem and describe the choices involved in formalizing
it as a Bayesian model. Next, I
will explain how standard experimental data can be exploited to reveal model
parameter values and how the results of multiple experiments may be unified
to fully evaluate the model. The
presentation will be structured as a tutorial that will use Matlab scripts
to simulate the generation of sensory data, the brain's hypothetical
inference procedure, and the quantitative analysis of this hypothesis.
The scripts will be made available beforehand so the audience has the
option of downloading and following along to enhance the hands-on theme.
My goal is that interested audience members will be able to explore
the scripts at a later time to familiarize themselves more thoroughly with a
tractable modeling and analysis process.
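In the same spirit as the tutorial (the actual materials are Matlab scripts, not this sketch), here is a toy Python version of the simulate-then-fit pipeline. All parameter values and names are assumptions for illustration:

```python
import random

random.seed(1)

# Toy pipeline: (1) simulate noisy sensory measurements of known stimuli,
# (2) simulate an observer who reports the posterior mean under a Gaussian
# prior, (3) recover the observer's prior from stimulus-response regression.

PRIOR_MEAN, PRIOR_VAR, NOISE_VAR = 5.0, 4.0, 1.0   # observer internals (assumed)

def respond(stimulus):
    x = random.gauss(stimulus, NOISE_VAR ** 0.5)    # noisy sensory measurement
    w = PRIOR_VAR / (PRIOR_VAR + NOISE_VAR)         # shrinkage weight (here 0.8)
    return w * x + (1 - w) * PRIOR_MEAN             # report the posterior mean

stimuli = [random.uniform(0, 10) for _ in range(5000)]
responses = [respond(s) for s in stimuli]

# Regressing responses on stimuli gives slope ~ w and intercept ~ (1 - w) * m,
# so the experimenter can read the prior mean off the fitted line.
n = len(stimuli)
ms, mr = sum(stimuli) / n, sum(responses) / n
slope = sum((s - ms) * (r - mr) for s, r in zip(stimuli, responses)) / sum(
    (s - ms) ** 2 for s in stimuli)
intercept = mr - slope * ms
prior_mean_hat = intercept / (1 - slope)

print(slope, prior_mean_hat)   # slope near 0.8, recovered prior mean near 5
```

The point of the exercise mirrors the tutorial's structure: because the generative model is explicit, the analysis step is just inverting it, and the same script doubles as a check that the fitting procedure recovers the parameters used in simulation.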