Vision and Visualization: Inspiring Novel Research Directions in Vision Science

Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 2
Organizer(s): Christie Nothelfer, Northwestern University; Madison Elliott, UBC; Zoya Bylinskii, MIT; Cindy Xiong, Northwestern University; & Danielle Albers Szafir, University of Colorado Boulder
Presenters: Ronald A. Rensink, Aude Oliva, Steven Franconeri, Danielle Albers Szafir


Symposium Description

Data is ubiquitous in the modern world, and its communication, analysis, and interpretation are critical scientific issues. Visualizations leverage the capabilities of the visual system, allowing us to intuitively explore and generate novel understandings of data in ways that fully-automated approaches cannot. Visualization research builds an empirical framework around design guidelines, perceptual evaluation of design techniques, and a basic understanding of the visual processes associated with viewing data displays. Vision science offers the methodologies and phenomena that can provide foundational insight into these questions. Challenges in visualization map directly to many vision science topics, such as finding data of interest (visual search), estimating data means and variance (ensemble coding), and determining optimal display properties (crowding, salience, color perception). Given the growing interest in psychological work that advances basic knowledge and allows for immediate translation, visualization provides an exciting new context for vision scientists to confirm existing hypotheses and explore new questions. This symposium will illustrate how interdisciplinary work across vision science and visualization simultaneously improves visualization techniques while advancing our understanding of the visual system, and inspire new research opportunities at the intersection of these two fields.

Historically, the crossover between visualization and vision science relied heavily on canonical findings, but this has changed significantly in recent years. Visualization work has recently incorporated and iterated on newer vision research, and the results have been met with great excitement from both sides (e.g., Rensink & Baldridge, 2010; Haroz & Whitney, 2012; Harrison et al., 2014; Borkin et al., 2016; Szafir et al., 2016). Unfortunately, very little of this work is presented regularly at VSS, and there is currently no dedicated venue for collaborative exchanges between the two research communities. This symposium showcases the current state of vision science and visualization research integration, and aspires to make VSS a home for future exchanges. Visualization would benefit from sampling a wider set of vision topics and methods, while vision scientists would gain a new real-world context that simultaneously provokes insight about the visual system and holds translational impact.

This symposium will first introduce the benefits of collaboration between the vision science and visualization communities, including the discussion of a specific example: correlation perception (Ronald Rensink). Next, we will discuss the properties of salience in visualizations (Aude Oliva), how we extract patterns, shapes, and relations from data points (Steven Franconeri), and how color perception is affected by the constraints of visualization design (Danielle Albers Szafir). Each talk will be 25 minutes long. The speakers, representing both fields, will demonstrate how studying these topics in visualizations has uniquely advanced our understanding of the visual system, show what research in these cross-disciplinary projects looks like, and propose open questions to propel new research in both communities. The symposium will conclude with an open discussion about how the vision science and visualization communities can mutually benefit from deeper integration. We expect these topics to be of interest to VSS members working across a multitude of vision science areas, including pattern recognition, salience, shape perception, color perception, and ensemble coding.

Presentations

Information Visualization and the Study of Visual Perception

Speaker: Ronald A. Rensink, Departments of Psychology and Computer Science, UBC

Information visualization and vision science can interact in three different (but compatible) ways. The first uses knowledge of human vision to design more effective visualizations. The second adapts measurement techniques originally developed for experiments to assess performance on given visualizations. And a third way has also been recently proposed: the study of restricted versions of existing visualizations. These can be considered “fruit flies”, i.e., systems that exist in the real world, but are still simple enough to study. This approach can help us discover why a visualization works, and can give us new insights into visual perception as well. An example of this is the perception of Pearson correlation in scatterplots. Performance here can be described by two linked laws: a linear one for discrimination and a logarithmic one for perceived magnitude (Rensink & Baldridge, 2010). These laws hold under a variety of conditions, including when properties other than spatial position are used to convey information (Rensink, 2014). Such behavior suggests that observers can infer probability distributions in an abstract two-dimensional parameter space (likely via ensemble coding), and can use these to estimate entropy (Rensink, 2017). These results show that interesting aspects of visual perception can be discovered using restricted versions of real visualization systems. It is argued that the perception of correlation in scatterplots is far from unique in this regard; a considerable number of these “fruit flies” exist, many of which are likely to cast new light on the intelligence of visual perception.
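The two linked laws lend themselves to a small worked sketch. The simulation below generates a scatterplot's points at a target Pearson correlation and states the hypothesized laws as functions; the constants k and b, and the exact functional forms of `jnd` and `perceived`, are illustrative stand-ins, not Rensink & Baldridge's fitted values.

```python
import numpy as np

def scatterplot_samples(r, n=100, rng=None):
    """Draw n points from a bivariate normal with Pearson correlation r."""
    if rng is None:
        rng = np.random.default_rng(0)
    cov = [[1.0, r], [r, 1.0]]
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

def estimated_r(points):
    """Pearson correlation recovered from the scatterplot's points."""
    return np.corrcoef(points[:, 0], points[:, 1])[0, 1]

# Hypothetical forms of the two linked laws (constants are made up):
def jnd(r, k=0.2, b=0.9):
    """Discrimination law: just-noticeable difference, linear in r."""
    return k * (b - r)

def perceived(r, b=0.9):
    """Magnitude law: logarithmic compression toward r = 1."""
    return np.log(1.0 - b * r) / np.log(1.0 - b)

pts = scatterplot_samples(0.8, n=500)
print(round(estimated_r(pts), 2))  # close to 0.8
```

Only `estimated_r` reflects the actual statistic a scatterplot depicts; the point of the sketch is the qualitative shape of the two laws — thresholds shrink linearly as r approaches 1, while perceived magnitude is compressed logarithmically.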

Where do people look on data visualizations?

Speaker: Aude Oliva, Massachusetts Institute of Technology
Additional Authors: Zoya Bylinskii, MIT

What guides a viewer’s attention when she catches a glimpse of a data visualization? What happens when the viewer studies the visualization more carefully, to complete a cognitively-demanding task? In this talk, I will discuss the limitations of computational saliency models for predicting eye fixations on data visualizations (Bylinskii et al., 2017). I will present perception and cognition experiments to measure where people look in visualizations during encoding to, and retrieval from, memory (Borkin, Bylinskii, et al., 2016). Motivated by clues that eye fixations give about higher-level cognitive processes like memory, we sought a way to crowdsource attention patterns at scale. I will introduce BubbleView, our mouse-contingent interface to approximate eye tracking (Kim, Bylinskii, et al., 2017). BubbleView presents participants with blurred visualizations and allows them to click to expose “bubble” regions at full resolution. We show that up to 90% of eye fixations on data visualizations can be accounted for by the BubbleView clicks of online participants completing a description task. Armed with a tool to efficiently and cheaply collect attention patterns on images, which we call “image importance” to distinguish it from “saliency”, we collected BubbleView clicks for thousands of visualizations and graphic designs to train computational models (Bylinskii et al., 2017). Our models run in real-time to predict image importance on new images. This talk will demonstrate that our models of attention for natural images do not transfer to data visualizations, and that using data visualizations as stimuli for perception studies can open up fruitful new research directions.
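The core BubbleView mechanism — blur the image, then restore full resolution inside circular “bubbles” at clicked locations — can be sketched as follows. This is a toy reimplementation under simplifying assumptions (a box blur on a grayscale array, a fixed bubble radius); the published tool's blur type, parameters, and interface details differ.

```python
import numpy as np

def box_blur(img, k=5):
    """Crude box blur: a k-tap moving average applied along each axis."""
    out = img.astype(float)
    kern = np.ones(k) / k
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kern, mode="same"), axis, out)
    return out

def bubble_view(img, clicks, radius=10, k=5):
    """Blurred image with full-resolution disks at the clicked positions."""
    view = box_blur(img, k)
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    for cy, cx in clicks:
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        view[mask] = img[mask]  # reveal the clicked bubble unblurred
    return view

img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0                              # a small bright mark
view = bubble_view(img, clicks=[(32, 32)], radius=8)
print(view[32, 32])  # 1.0: the clicked region is shown at full resolution
```

Logging the `clicks` sequence is what yields the attention data: each click marks where the participant needed detail, which is how the clicks come to approximate fixation maps.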

Segmentation, structure, and shape perception in data visualizations

Speaker: Steven Franconeri, Northwestern University

The human visual system evolved and develops to perceive scenes, faces, and objects in the natural world, and this is where vision scientists justly focus their research. But humans have adapted that system to process artificial worlds on paper and screens, including data visualizations. I’ll demonstrate two examples of how studying the visual system within such worlds can provide vital cross-pollination for our basic research. First, a complex line or bar graph can be alternately powerful or vexing for students and scientists. What is the suite of our available tools for extracting the patterns within it? Our existing research is a great start: I’ll show how the commonly encountered ‘magical number 4’ (Choo & Franconeri, 2013) limits processing capacity, and how the literature on shape silhouette perception could predict how we segment such graphs. But even more questions are raised: what is our internal representation of the ‘shape’ of data – what types of changes to the data can we notice, and what changes would leave us blind? Second, artificial displays require that we recognize relationships among objects (Lovett & Franconeri, 2017), as when you quickly extract two main effects and an interaction from a 2×2 bar graph. We can begin to explain these feats through multifocal attention or ensemble processing, but soon fall short. I will show how these real-world tasks inspire new research on relational perception, highlighting eyetracking work that reveals multiple visual tools for extracting relations based on global shape vs. contrasts between separate objects.

Color Perception in Data Visualizations

Speaker: Danielle Albers Szafir, University of Colorado Boulder

Many data visualizations use color to convey values. These visualizations commonly rely on vision science research to match important properties of data to colors, ensuring that people can, for example, identify differences between values, select data subsets, or match values against a legend. Applying vision research to color mappings also creates new questions for vision science. In this talk, I will discuss several studies that address knowledge gaps in color perception raised through visualization, focusing on color appearance, lightness constancy, and ensemble coding. First, conventional color appearance models assume colors are applied to 2° or 10° uniformly-shaped patches; however, visualizations map colors to small shapes (often less than 0.5°) that vary in their size and geometry (e.g., bar graphs, line charts, or maps), degrading difference perceptions inversely with a shape’s geometric properties (Szafir, 2018). Second, many 3D visualizations embed data along surfaces where shadows may obscure data, requiring lightness constancy to accurately resolve values. Synthetic rendering techniques used to improve interaction or emphasize aspects of surface structure manipulate constancy, influencing people’s abilities to interpret shadowed colors (Szafir, Sarikaya, & Gleicher, 2016). Finally, visualizations frequently require ensemble coding of large collections of values (Szafir et al., 2016). Accuracy differences between different visualizations for value identification (e.g., extrema) and summary tasks (e.g., mean) suggest differences in ensemble processing for color and position (Albers, Correll, & Gleicher, 2014). I will close by discussing open challenges for color perception arising from visualization design, use, and interpretation.
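One way to see why mark size matters: conventional color-difference metrics such as CIE76 assign a single ΔE* to a pair of CIELAB colors regardless of the marks that carry them, whereas the results above imply the effective discriminability threshold grows as marks shrink. The sketch below makes that contrast concrete; the linear size adjustment and its constants are illustrative assumptions, not Szafir's fitted model.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return math.dist(lab1, lab2)

def discriminable(lab1, lab2, size_deg, base_nd=1.0, c=0.5):
    """Can two mark colors be told apart at a given mark size?

    Illustrative model: the noticeable-difference threshold rises
    linearly as the mark's visual angle shrinks below 2 degrees.
    The constants base_nd and c are made up for the sketch.
    """
    threshold = base_nd * max(1.0, 1.0 + c * (2.0 - size_deg))
    return delta_e76(lab1, lab2) > threshold

gray_a = (50.0, 0.0, 0.0)
gray_b = (51.5, 0.0, 0.0)  # Delta E of 1.5 from gray_a
print(discriminable(gray_a, gray_b, size_deg=2.0))  # True for 2-degree patches
print(discriminable(gray_a, gray_b, size_deg=0.5))  # False for tiny marks
```

The same color pair is thus predicted to be distinguishable on large patches but not on thin lines or small points — the kind of size dependence a fixed ΔE* threshold cannot express.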


2018 Symposia

Clinical insights into basic visual processes

Organizer(s): Paul Gamlin, University of Alabama at Birmingham; Ann E. Elsner, Indiana University; Ronald Gregg, University of Louisville
Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 1

This year’s biennial ARVO at VSS symposium features insights into human visual processing at the retinal and cortical levels arising from clinical and translational research. The speakers will present recent work based on a wide range of state-of-the-art techniques including adaptive optics, brain and retinal imaging, psychophysics, and gene therapy.

Vision and Visualization: Inspiring novel research directions in vision science

Organizer(s): Christie Nothelfer, Northwestern University; Madison Elliott, UBC; Zoya Bylinskii, MIT; Cindy Xiong, Northwestern University; & Danielle Albers Szafir, University of Colorado Boulder
Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 2

Visualization research seeks design guidelines for efficient visual displays of data. Vision science topics, such as pattern recognition, salience, shape perception, and color perception, all map directly to challenges encountered in visualization, raising new vision science questions and creating a space ripe for collaboration. Four speakers representing both vision science and visualization will discuss recent cross-disciplinary research, closing with a panel discussion about how the vision science and visualization communities can mutually benefit from deeper integration. This symposium will demonstrate that contextualizing vision science research in visualization can expose novel gaps in our knowledge of how perception and attention work.

Prediction in perception and action

Organizer(s): Katja Fiehler, Department of Psychology and Sports Science, Giessen University, Giessen, Germany
Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 1

Prediction is an essential mechanism enabling humans to prepare for future events. This is especially important in a dynamically changing world, which requires rapid and accurate responses to external stimuli. While it is unquestionable that predictions play a fundamental role in perception and action, their underlying mechanisms and neural basis are still poorly understood. The goal of this symposium is to integrate recent findings from psychophysics, sensorimotor control, and electrophysiology to provide a novel and comprehensive view on predictive mechanisms in perception and action spanning from behavior to neurons and from strictly laboratory tasks to (virtual) real world scenarios.

When seeing becomes knowing: Memory in the form perception pathway

Organizer(s): Caitlin Mullin, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 2

The established view of perception and memory is that they are dissociable processes that recruit distinct brain structures, with visual perception focused on the ventral visual stream and memory subserved by independent deep structures in the medial temporal lobe. Recent work in cognitive neuroscience has challenged this traditional view by demonstrating interactions and dependencies between perception and memory at nearly every stage of the visual hierarchy. In this symposium, we will present a series of cutting edge studies that showcase cross-methodological approaches to describe how visual perception and memory interact as part of a shared, bidirectional, interactive network.

Visual remapping: From behavior to neurons through computation

Organizer(s): James Mazer, Cell Biology & Neuroscience, Montana State University, Bozeman, MT & Fred Hamker, Chemnitz University of Technology, Chemnitz, Germany
Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 1

In this symposium we will discuss the neural substrates responsible for maintaining stable visual and attentional representations during active vision. Speakers from three complementary experimental disciplines, psychophysics, neurophysiology, and computational modeling, will discuss recent advances in clarifying the role of spatial receptive field “remapping” in stabilizing sensory representations across saccadic eye movements. Participants will address new experimental and theoretical methods for characterizing spatiotemporal dynamics of visual and attentional remapping, both behavioral and physiological, during active vision and relate these data to recent computational efforts towards modeling oculomotor and visual system interactions.

Advances in temporal models of human visual cortex

Organizer(s): Jonathan Winawer, Department of Psychology and Center for Neural Science, New York University, New York, NY
Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 2

How do multiple areas in the human visual cortex encode information distributed over time? We focus on recent advances in modeling the temporal dynamics in the human brain: First, cortical areas have been found to be organized in a temporal hierarchy, with increasingly long temporal windows from earlier to later visual areas. Second, responses in multiple areas can be accurately predicted with temporal population receptive field models. Third, quantitative models have been developed to predict how responses in different visual areas are affected by both the timing and content of the stimulus history (adaptation).

2018 Keynote – Kenneth C. Catania

Kenneth C. Catania

Stevenson Professor of Biological Sciences
Vanderbilt University
Department of Biological Sciences

More than meets the eye: the extraordinary brains and behaviors of specialized predators.

Saturday, May 19, 2018, 7:15 pm, Talk Room 1-2

Predator-prey interactions are high stakes for both participants and have resulted in the evolution of high-acuity senses and dramatic attack and escape behaviors.  I will describe the neurobiology and behavior of some extreme predators, including star-nosed moles, tentacled snakes, and electric eels.  Each species has evolved special senses and each provides unique perspectives on the evolution of brains and behavior.

Biography

A neuroscientist by training, Ken Catania has spent much of his career investigating the unusual brains and behaviors of specialized animals.  These have included star-nosed moles, tentacled snakes, water shrews, alligators, crocodiles, and most recently electric eels. His studies often focus on predators that have evolved special senses and weapons to find and overcome elusive prey.  He is considered an expert in extreme animal behaviors and studies specialized species to reveal general principles about brain organization and sensory systems. Catania was named a MacArthur Fellow in 2006, a Guggenheim Fellow in 2014, and in 2013 he received the Pradel Research Award in Neurosciences from the National Academy of Sciences.  Catania received a BS in zoology from the University of Maryland (1989), a Ph.D. (1994) in neurosciences from the University of California, San Diego, and is currently a Stevenson Professor of Biological Sciences at Vanderbilt University.

Vision Sciences Society