How sharp is the (average) retinal image?
Poster Presentation: Monday, May 19, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Color, Light and Materials: Optics, models
Charlie S. Burlingham1, Ian M. Erkelens2, Oliver S. Cossairt1, Phillip Guan1; 1Reality Labs Research, Meta, 2Reality Labs, Meta
Most models of visual perception assume a fully in-focus retinal image, yet nearly the entire retinal image is blurred by optical aberrations. The magnitude of this optical blur in everyday life, and how it varies across the retina, remain unclear. Some off-axis aberrations (e.g., astigmatism, coma, and spherical aberration) are well characterized and thought to be relatively invariant to scene depth. Defocus, in contrast, depends strongly on scene depth, focal distance, and pupil size, all of which can vary substantially across environments and tasks. Sprague et al. (2016) previously used mobile eye tracking and a geometric model to estimate the natural statistics of defocus blur in human observers performing everyday tasks. However, their analysis examined only the central 20° due to camera field-of-view and eyebox limits. In the farther periphery, where other off-axis aberrations may be larger, the natural statistics of blur are unknown. In this work, we aim to estimate these statistics using a recently developed set of wide-field (80° × 50°) eye models (Hastings et al., 2024) in realistically modeled indoor and outdoor 3D environments. We use Blender and Zemax to simulate a fixating, accommodating observer whose fixation depths statistically match those of human observers measured in nine everyday tasks (Burlingham et al., 2024). Using this platform, we model the average blur field across the retina, compare it with previous estimates in the central 20°, and examine variability in blur magnitudes arising from differences in environment, focal distance, pupil size, and individual refractive error. Our simulations of optical aberrations in everyday life can help identify optical bottlenecks on visual perception and guide the development of novel displays and rendering methods.
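For intuition about the depth dependence of defocus, the standard geometric (thin-lens) approximation used in work like Sprague et al. (2016) can be sketched in a few lines: defocus in diopters is the difference between the dioptric distances of the scene point and the accommodative focus, and the blur circle subtends roughly pupil diameter times defocus. This is an illustrative simplification, not the wide-field Zemax pipeline described above; the function name and example values are hypothetical.

```python
import numpy as np

def defocus_blur_arcmin(scene_depth_m, focal_distance_m, pupil_diameter_mm):
    """Angular diameter (arcmin) of the geometric blur circle for a thin-lens eye.

    Under the small-angle approximation, the blur circle subtends roughly
    pupil diameter (m) * defocus (diopters) radians.
    """
    defocus_D = np.abs(1.0 / scene_depth_m - 1.0 / focal_distance_m)  # diopters
    blur_rad = (pupil_diameter_mm * 1e-3) * defocus_D                 # radians
    return np.degrees(blur_rad) * 60.0                                # arcminutes

# Eye focused at 0.5 m with a 4 mm pupil, viewing a point at 2 m:
# defocus = |1/2 - 1/0.5| = 1.5 D, so blur ~ 0.004 * 1.5 rad ~ 20.6 arcmin.
print(defocus_blur_arcmin(2.0, 0.5, 4.0))
```

Because blur scales with both pupil diameter and the dioptric depth difference, the same scene can produce very different retinal blur across tasks and lighting conditions, which is the variability the simulations above are designed to quantify.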