Flow parsing as a causal source separation process allows for fast and parallel object and self-motion estimation
Poster Presentation: Saturday, May 17, 2025, 2:45 – 6:45 pm, Pavilion
Session: Motion: Biological, self-motion
Malte Scherff1, Markus Lappe1; 1University of Münster
When moving through a static environment, optic flow is a reliable source of information about one's own movement. Additional sources of motion in the scene locally confound the global flow pattern generated by observer motion. Nonetheless, even without extraretinal information, self-motion direction can be estimated, albeit with bias, and independently moving objects can be detected and their scene-relative movement assessed, allowing an observer to interact with or avoid moving objects while on the go. The mechanism proposed to enable this is called flow parsing: the brain's sensitivity to the flow patterns produced by self-motion is used to separate retinal motion into components arising from different sources. The computational mechanism behind this process, however, remains unclear. We present a computational model that implements flow parsing based on a mid-level representation of the consistency of global flow patterns with self-motion parameters. This approach allows for fast, parallel, and direct estimation of self-motion and object parameters while reproducing key findings in human behavior. Object motion biases perceived heading in a direction that depends on the object's motion in depth. The magnitude of the bias also depends strongly on the object's speed: slow and fast objects cause smaller biases than objects moving at intermediate speeds. Finally, scene-relative object motion can be assessed solely from relative motion, without relying on a prior estimate of heading.
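The abstract does not spell out how the consistency representation is computed, but the general idea of scoring observed flow against candidate self-motion parameters and treating inconsistent flow as object motion can be illustrated with a minimal sketch. Everything below is an assumption made for illustration only: pure observer translation over a random-dot scene, a cosine-based radial-consistency measure, and hypothetical helper names such as heading_consistency(); it is not the authors' implementation.

```python
import numpy as np

# Minimal sketch of flow parsing as consistency-based source separation.
# Assumptions (not from the abstract): pure observer translation, a random-dot
# scene, and a simple radial-consistency (cosine) measure per candidate heading.

rng = np.random.default_rng(0)

def simulate_flow(n_dots=300, heading=(0.05, 0.0), tz=1.0):
    """Simulate retinal flow for translation toward a cloud of static dots.
    'heading' is the focus of expansion (FOE) in image coordinates."""
    xy = rng.uniform(-1, 1, size=(n_dots, 2))        # dot positions in the image
    depth = rng.uniform(2.0, 10.0, size=n_dots)      # dot depths
    foe = np.asarray(heading)
    flow = tz * (xy - foe) / depth[:, None]          # radial expansion from the FOE
    return xy, flow

def add_moving_object(xy, flow, center=(0.4, 0.2), radius=0.2, vel=(-0.05, 0.02)):
    """Add independent object motion to the dots inside a small region."""
    mask = np.linalg.norm(xy - np.asarray(center), axis=1) < radius
    flow = flow.copy()
    flow[mask] += np.asarray(vel)
    return flow, mask

def heading_consistency(xy, flow, candidate_foes):
    """Mid-level map: cosine similarity between observed flow and the radial
    flow direction predicted by each candidate heading (FOE)."""
    scores = []
    for foe in candidate_foes:
        radial = xy - foe                            # predicted flow direction
        cos = np.einsum('ij,ij->i', radial, flow) / (
            np.linalg.norm(radial, axis=1) * np.linalg.norm(flow, axis=1) + 1e-9)
        scores.append(cos)
    return np.array(scores)                          # shape (n_candidates, n_dots)

# Evaluate a grid of candidate headings on all dots at once.
xy, flow = simulate_flow()
flow, true_obj = add_moving_object(xy, flow)
grid = np.array([(gx, gy) for gx in np.linspace(-0.5, 0.5, 21)
                           for gy in np.linspace(-0.5, 0.5, 21)])
scores = heading_consistency(xy, flow, grid)

best = scores.mean(axis=1).argmax()                  # heading = most consistent candidate
est_heading = grid[best]

# Dots inconsistent with the winning heading are flagged as independently moving;
# their residual flow stands in for scene-relative object motion.
obj_mask = scores[best] < 0.9

print("estimated heading (FOE):", est_heading)
print("object dots flagged:", (obj_mask & true_obj).sum(), "of", true_obj.sum())
```

In this toy version, scoring every candidate heading against every flow vector in one pass is what makes the estimate fast, parallel, and direct, and the low-consistency residual provides object motion without a separate, prior heading computation per dot.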