S2 - Models of Perceptual Learning: Combining Psychophysics, Computation and Neuroscience
Friday, May 6, 12:00 - 2:00 pm, Royal Ballroom 4-5
Organizer: Alexander A. Petrov, Department of Psychology, Ohio State University
Presenters: Zhong-Lin Lu, Department of Psychology, University of Southern California; Alexander A. Petrov, Department of Psychology, Ohio State University; Joshua Gold, Department of Neuroscience, University of Pennsylvania; Peggy Seriès, Institute for Adaptive and Neural Computation, University of Edinburgh; Dov Sagi, The Weizmann Institute of Science, Israel
Perceptual learning refers to improvements in perceptual abilities through training. It has been a topic of growing interest over the last two decades. Perceptual learning is a valuable tool for studying the organization of the visual system and the mechanisms of brain plasticity. It also has great potential for practical applications such as the training of visual experts and the rehabilitation of persons with disabilities. Meeting these challenges requires an integrated, multidisciplinary approach. There is a wealth of behavioral data documenting the occurrence, speed, specificity, and other properties of perceptual learning under various conditions. There is also a growing stream of human neuroimaging and animal neurophysiological data. What continues to elude the field, however, is an integrated theoretical understanding of these disparate findings. Computational and mathematical modeling is an important tool in this regard. Models help us formulate explicit and consistent principles and mechanisms, generate novel predictions, and bridge the explanatory gap between brain and behavior. A number of models of perceptual learning with increasing scope and sophistication have been developed in recent years.
This symposium brings together an international panel of experts in perceptual learning, with particular emphasis on computational and/or formal approaches. These speakers have made important contributions to the field of perceptual learning using a mixture of psychophysical, computational, and neuroscientific approaches. Here they will each present computational models of perceptual learning that advance our understanding of the underlying brain mechanisms. Zhong-Lin Lu will start with a broad overview of the functions and mechanisms of perceptual learning. Alex Petrov will explore one particular mechanism -- selective reweighting -- in some detail. Joshua Gold will present a novel analytical model of population coding that allows us to quantify how various changes in neuronal firing rates can affect perceptual performance. Peggy Seriès will present a reweighting account for patterns of disruption and transfer of perceptual learning for visual hyperacuity. Finally, Dov Sagi will discuss some unexpected consequences of the hypothesis that perceptual learning involves statistical modeling of the task at hand.
The symposium is designed to serve both as a tutorial on established ideas and techniques and as a venue for introducing new advances at the cutting edge of this active research area. Perceptual learning impacts all aspects of vision, and thus the symposium will interest VSS attendees across disciplines and at all levels, from students to experts. An earlier symposium on perceptual learning attracted an audience beyond room capacity at VSS 2006. The current proposal builds on this success by adding an emphasis on modeling and by reporting the exciting developments of the intervening years.
Functions and Mechanisms of Perceptual Learning
Zhong-Lin Lu, Department of Psychology, University of Southern California
Perceptual learning -- the improvement of performance through practice or training -- has been observed in a wide range of perceptual tasks in adult humans. The high degree of plasticity of adult perceptual systems suggests that perception and perceptual learning cannot be studied separately. In this talk, we will review some major functions and mechanisms of perceptual learning, including the specificity of perceptual learning, the law of practice, the mechanisms of learning, the level and mode at which learning occurs, optimal training procedures, and computational models of perceptual learning. Studies of these various aspects of perceptual learning have greatly enhanced our understanding of the information-processing limitations of the human observer and of how the state of the observer changes with training, with strong implications for the development of noninvasive training methods for perceptual expertise in normal populations and for the amelioration of deficits in challenged populations.
A Selective-Reweighting Model of Perceptual Learning
Alexander A. Petrov, Department of Psychology, Ohio State University
Growing evidence suggests that selective reweighting of the read-out connections from the sensory representations plays a major role in perceptual learning. Here we instantiate this idea in a computational model that takes grayscale images as inputs and learns on a trial-by-trial basis. The model builds on the multi-channel perceptual template model (PTM, Dosher & Lu, 1998, PNAS) and extends it with a biologically plausible learning rule. The stimuli are processed by standard orientation- and frequency-tuned representational units, divisively normalized. Learning occurs only in the read-out connections to a decision unit; the stimulus representations never change. An incremental Hebbian rule tracks the task-dependent predictive value of each unit, thereby improving the signal-to-noise ratio of their weighted combination. The model accounts for a complex pattern of context-induced switch costs in a non-stationary training environment: each abrupt change in the environmental statistics induces a switch cost in the learning curves as the system temporarily works with suboptimal weights. In this situation, self-generated feedback seems sufficient for learning.
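To make the reweighting scheme concrete, the following minimal Python sketch illustrates the general idea; it is not the presented model, and the number of channels, the tuning and normalization functions, the task, the learning rate, and the feedback signal are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels = 16                                  # orientation-tuned units (assumed number)
orientations = np.linspace(-90, 90, n_channels, endpoint=False)

def channel_responses(stim_orientation, sigma=30.0, noise_sd=0.1):
    """Gaussian-tuned channel activations with a simple divisive normalization
    (tuning width, noise level, and normalization form are assumptions)."""
    r = np.exp(-0.5 * ((orientations - stim_orientation) / sigma) ** 2)
    r = r + noise_sd * rng.standard_normal(n_channels)     # internal noise
    return r / (1.0 + r.sum())                              # divisive normalization

w = np.zeros(n_channels)      # read-out weights: the only part of the model that learns
eta = 0.05                    # learning rate (assumed)
accuracy = []

for trial in range(2000):
    target = rng.choice([-10.0, 10.0])           # two-alternative orientation task (assumed)
    r = channel_responses(target)
    decision = 1.0 if w @ r >= 0 else -1.0       # decision unit: sign of the weighted sum
    teacher = np.sign(target)                    # external feedback; self-generated feedback
                                                 # would use `decision` here instead
    w += eta * teacher * r                       # incremental Hebbian update of read-out only
    accuracy.append(decision == teacher)

print("proportion correct, first vs last 200 trials:",
      np.mean(accuracy[:200]), np.mean(accuracy[-200:]))
```

In this toy setting the read-out weights converge toward the channels that best predict the correct response, so accuracy rises across trials even though the stimulus representations themselves never change.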
A recent study (Petrov & Hayes, 2010, JOV) found a strongly asymmetric pattern of transfer of learning between first- and second-order motion. Second-order training transferred fully to the first-order test, whereas first-order training did not transfer significantly to the second-order test. This strong asymmetry challenges the simple reweighting model but is compatible with an augmented version in which the Fourier and non-Fourier processing channels are integrated by taking the maximum of the carrier-specific signals within a given direction of motion.
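As a hypothetical illustration of that integration stage (the function name and example numbers are assumptions, not taken from the published model), the combined signal per direction of motion could be computed as follows.

```python
import numpy as np

def integrate_channels(fourier_resp, non_fourier_resp):
    """For each direction of motion, pass on the stronger of the first-order
    (Fourier) and second-order (non-Fourier) carrier-specific signals."""
    return np.maximum(fourier_resp, non_fourier_resp)

# Made-up responses of leftward/rightward channels for each carrier type:
print(integrate_channels(np.array([0.2, 0.9]), np.array([0.6, 0.1])))   # -> [0.6 0.9]
```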
A neural-coding theory of perceptual learning-related plasticity
Joshua Gold, Department of Neuroscience, University of Pennsylvania; Ching-Ling Teng, University of Virginia; Chi-Tat Law, Stanford University
A striking feature of perceptual learning is the diversity of neural mechanisms that have been implicated in different studies. For example, some forms of perceptual learning appear to involve changes in how sensory information is represented in early sensory areas of the brain. In contrast, other forms appear to involve improved read-out of information from unchanged sensory representations. Little is known about the principles that govern when these different forms of plasticity occur. Here we propose and test the theory that these different forms of plasticity represent the most effective ways to optimize task performance under different conditions. We test this idea using a novel analytical model of population coding that allows us to quantify how various changes in properties of a sensory representation and its readout can affect perceptual performance. The results indicate that diverse neural mechanisms of perceptual learning can reflect common principles of task optimization.
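The authors' analytical model is not reproduced here, but the style of question it addresses can be illustrated with a textbook linear-Fisher-information calculation: under assumed Gaussian tuning curves and independent Poisson-like variability, one can compare how a crude versus a locally optimal read-out, or a change in response gain, alters discriminability.

```python
import numpy as np

thetas = np.linspace(0.0, 180.0, 64, endpoint=False)     # preferred orientations (assumed)

def tuning(stim, gain=10.0, width=20.0):
    """Mean firing rates of the population for a stimulus (assumed Gaussian tuning)."""
    return gain * np.exp(-0.5 * ((thetas - stim) / width) ** 2)

def dprime(weights, stim=90.0, dstim=1.0, gain=10.0):
    """Discriminability of stim vs stim + dstim for a linear read-out, assuming
    independent Poisson-like variability (variance equal to the mean rate)."""
    f0, f1 = tuning(stim, gain=gain), tuning(stim + dstim, gain=gain)
    signal = weights @ (f1 - f0)
    noise_var = (weights ** 2) @ ((f0 + f1) / 2.0)
    return signal / np.sqrt(noise_var)

f0, f1 = tuning(90.0), tuning(91.0)
w_crude = np.sign(f1 - f0)                # untrained, all-or-none read-out
w_opt = (f1 - f0) / ((f0 + f1) / 2.0)     # locally optimal read-out under independence

print("d', crude read-out           :", round(float(dprime(w_crude)), 3))
print("d', optimal read-out         :", round(float(dprime(w_opt)), 3))
print("d', optimal read-out, 2x gain:", round(float(dprime(w_opt, gain=20.0)), 3))
```

In this toy setting, improving the read-out and increasing the representational gain both raise d'; the theory asks which kind of change is the most effective way to optimize performance under a given set of conditions.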
Disruption and Transfer of Perceptual Learning for Visual Hyperacuity
Peggy Seriès, Institute for Adaptive and Neural Computation, University of Edinburgh; Grigorios Sotiropoulos, University of Edinburgh; Aaron Seitz, University of California at Riverside
Improvements in visual hyperacuity are a key focus of perceptual learning research. Of particular interest have been the specificity of hyperacuity learning to the features of the trained stimuli, as well as the disruption of learning that occurs in some cases when different stimulus features are trained together. The implications of these phenomena for the underlying learning mechanisms are still open to debate; however, there is a marked absence of computational models that explore them in a unified way. Here we present a computational learning model based on reweighting and extend it to enable direct comparison, by means of simulations, with a variety of psychophysical data. We find that this very simple model can account for several findings, such as the disruption of learning of one task by practice on a similar task, as well as the transfer of learning across both tasks and stimulus configurations under certain conditions. These simulations help explain existing results in the literature and provide important insights and predictions regarding the reliability of different hyperacuity tasks and stimuli.
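The toy simulation below is not the presented model; under made-up assumptions it merely illustrates how a single set of shared read-out weights can yield either transfer or disruption of task-A performance when a second task is subsequently trained, depending on how similar the two tasks are.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(w, w_task, n_trials=500, eta=0.02, noise=0.5):
    """Nudge the shared read-out weights toward a task-optimal template using
    noisy exemplars (a toy update rule, not the model's actual learning rule)."""
    for _ in range(n_trials):
        exemplar = w_task + noise * rng.standard_normal(w_task.size)
        w += eta * (exemplar - w)
    return w

def alignment(w, w_task):
    """Cosine similarity between the current read-out and the task-optimal template,
    used here as a crude proxy for task performance."""
    return float(w @ w_task / (np.linalg.norm(w) * np.linalg.norm(w_task) + 1e-12))

d = 20
w_A = rng.standard_normal(d)                         # "optimal" weights for task A (made up)
w_B_similar = w_A + 0.3 * rng.standard_normal(d)     # task B resembles task A -> transfer
w_B_different = rng.standard_normal(d)               # unrelated task B -> disruption

for label, w_B in [("similar task B", w_B_similar), ("different task B", w_B_different)]:
    w = train(np.zeros(d), w_A)                      # learn task A first
    before = alignment(w, w_A)
    w = train(w, w_B)                                # then train task B on the same weights
    after = alignment(w, w_A)
    print(f"{label}: task-A alignment before B = {before:.2f}, after B = {after:.2f}")
```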
Perceptual learning viewed as a statistical modeling process -- Is it all overfitting?
Dov Sagi, The Weizmann Institute of Science, Israel; Hila Harris, The Weizmann Institute of Science, Israel
Performance gains obtained through perceptual learning are, surprisingly, specific to the trained condition. Recent research shows that specificity increases with training and with task precision (Jeter et al., 2009/10), and that learning generalizes across tasks and features trained in temporal proximity (Yu and colleagues). Such results are expected if perceptual learning involves statistical modeling of the task at hand, with variations in brain anatomy (Mollon & Danilova, 1996) or in neuronal response limiting the reliability of the fitted data. When training is carried out with a limited set of stimuli (e.g., a single contrast), overfitting may gradually arise, predicting failures when new conditions are presented. In the contrast domain, learning is specific to the trained contrast and is much reduced when different contrasts are mixed during training (Adini et al., 2004; Yu et al., 2004), consistent with the idea that learning amounts to overfitting. Overfitting may arise when learning involves the readout of sensory neurons (Lu & Dosher), reweighting responses according to the peculiarities of the trained condition. To test the generality of this theoretical approach, we re-examined the specificity of learning to retinal location. Using the texture discrimination task (Censor & Sagi, 2009), we had observers practice a target positioned either at a fixed location (the traditional way) or at one of two locations. Contrary to overfitting, we find equal learning in both conditions; most surprisingly, however, and in agreement with overfitting, whereas the one-location training was location-specific as expected, the two-location training transferred completely to locations that were neither trained nor tested. Theoretical implications will be presented.
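The overfitting intuition can be illustrated with a deliberately simple, hypothetical analogy (not the authors' experiment or analysis): a flexible model fit to trials from a single training contrast can latch onto the peculiarities of that condition and then fail at an untrained contrast, whereas mixing contrasts during training constrains the fit. All numbers and functional forms below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(contrast, n=40):
    """Noisy internal responses to stimuli of a given contrast (assumed transducer)."""
    x = contrast + 0.02 * rng.standard_normal(n)      # stimulus contrast with jitter
    y = x ** 0.7 + 0.05 * rng.standard_normal(n)      # "true" response plus noise
    return x, y

def fit_and_test(train_contrasts, test_contrast, degree=3):
    """Fit an over-flexible polynomial read-out to the training trials and return
    its mean squared error on trials at an untrained test contrast."""
    xs, ys = zip(*(simulate(c) for c in train_contrasts))
    coefs = np.polyfit(np.concatenate(xs), np.concatenate(ys), degree)
    x_test, y_test = simulate(test_contrast)
    return float(np.mean((np.polyval(coefs, x_test) - y_test) ** 2))

print("single training contrast, new test contrast:", fit_and_test([0.3], 0.6))
print("mixed training contrasts, new test contrast:", fit_and_test([0.2, 0.4, 0.8], 0.6))
```

The single-contrast fit typically extrapolates poorly to the untrained contrast, while the mixed-contrast fit generalizes, paralleling the contrast-specificity results described above.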