Monday, May 19, 2025, 2:30 – 5:30 pm, Royal Tern
Organizers: William Broderick (Flatiron Institute)
Speakers: William Broderick (Flatiron Institute); Sarah Jo Venditto (Flatiron Institute)
plenoptic is an open-source Python package for model-based synthesis of perceptual stimuli, intended for researchers in neuroscience, psychology, and machine learning. The stimuli plenoptic generates make model properties interpretable by revealing which features a model enhances, suppresses, or discards, and they can be used in further experiments to validate or compare models. In addition to its synthesis methods, plenoptic includes a selection of vision science models and is compatible with external models written in PyTorch, such as those found in torchvision.
This event is a hands-on tutorial in which participants will learn how to use plenoptic. After a brief introductory presentation, participants will work through a Jupyter notebook that explains how to use the package to better understand computational visual models. They are expected to follow along on their laptops, either running the code locally or using a provided Binder instance in the cloud. Participants will learn which kinds of scientific questions plenoptic can address and how to apply it in their own research.
Specifically, participants will be introduced to model metamers [1] and eigendistortions [2], and will learn how these can be used to understand and compare a handful of simple visual models (e.g., a linear Gaussian convolutional model, a linear center-surround convolutional model, and a simple model of gain control using divisive normalization).
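To give a flavor of what "model metamer" means, here is a minimal conceptual sketch (in plain NumPy, not plenoptic's actual API): for a linear model, synthesizing a metamer amounts to finding a new image whose model responses match those of a target image, e.g., by gradient descent. The random-projection "model" and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "model": a random 16 x 64 projection standing in
# for, e.g., a linear convolutional model. It maps a 64-pixel image to
# 16 responses, so many distinct images share identical responses.
A = rng.standard_normal((16, 64))

target = rng.standard_normal(64)   # reference image (as a flat vector)
target_resp = A @ target           # model responses to reproduce

# Metamer synthesis: starting from noise, run gradient descent on
# ||A x - A target||^2. For a linear model the gradient is analytic.
x = rng.standard_normal(64)
lr = 0.001
for _ in range(2000):
    grad = 2 * A.T @ (A @ x - target_resp)
    x -= lr * grad

# x is now a model metamer of target: (near-)identical model responses,
# but visibly different pixel values.
resp_err = np.linalg.norm(A @ x - target_resp)
pixel_dist = np.linalg.norm(x - target)
```

In plenoptic itself, the same idea is carried out on real images and differentiable PyTorch models, with the optimization handled by the package's synthesis objects rather than a hand-written loop.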
The session will be interactive, and attendees are encouraged to ask questions throughout. Attendance is capped at 30 participants.
[1]: e.g., as used in "A parametric texture model based on joint statistics of complex wavelet coefficients," Portilla and Simoncelli, 2000.
[2]: e.g., as used in "Eigen-distortions of hierarchical representations," Berardino et al., 2017.