Seeing invisible material from visible image features
Poster Presentation: Saturday, May 17, 2025, 2:45 – 6:45 pm, Pavilion
Session: Color, Light and Materials: Surfaces and materials
Masataka Sawayama, The University of Tokyo, Japan
Humans are generally thought to perceive only the range of real-world information that their sensory systems can detect. For instance, visual receptors are sensitive to only a limited range of light wavelengths, and spatial patterns finer than the limits of visual acuity cannot be directly resolved. However, in natural environments, information beyond sensory resolution, i.e., “invisible” information, does not exist in isolation from detectable sensory signals. Instead, the two co-vary statistically, and computer graphics techniques, such as microfacet reflection models, exploit these statistical relationships to render realistic materials. This observation suggests that humans might likewise leverage the relationships between detectable sensory signals and invisible properties to perceive such information in natural environments. This study uses self-supervised learning to identify visible image features that co-vary with invisible properties and asks to what extent humans can estimate invisible information from these features. First, we trained two encoders with contrastive objectives: one encoder processed sensory signals, while the other took as input “invisible” super-resolution signals defined by spatial frequencies in images. Training on a large-scale image dataset yielded a shared latent representation between the two signal types. Using a neural style-transfer method, this shared representation was then used to create style-transferred sensory signals that inherit the styles of the “invisible” signals. Next, we conducted psychophysical experiments to investigate whether humans could infer invisible styles from these transferred sensory signals. Participants performed discrimination tasks, detecting a target invisible style among non-target invisible styles. Results showed that participants correctly detected the target styles transferred onto the sensory signals. Furthermore, when participants subsequently viewed the invisible signals from a distance at which they became resolvable, they could detect the previously “unseen” target styles. These findings suggest that humans infer invisible information from sensory signals that co-vary with invisible properties.
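To make the two-encoder contrastive stage concrete, below is a minimal PyTorch sketch of how such training could look. It assumes a simple blur-based split of each image into a low-frequency “visible” component and a high-frequency “invisible” residual, small convolutional encoders, and a standard symmetric InfoNCE objective; every name and value here (Encoder, split_frequencies, info_nce, the cutoff and temperature) is an illustrative assumption, not the study's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Small convolutional net mapping an image to an L2-normalized embedding.
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def split_frequencies(images, cutoff=8):
    # Blur-based stand-in for the spatial-frequency split: the low-pass
    # component plays the role of the detectable sensory signal, and the
    # residual plays the role of the "invisible" super-resolution signal.
    k = 2 * cutoff + 1
    low = F.avg_pool2d(images, k, stride=1, padding=cutoff)
    return low, images - low

def info_nce(z_a, z_b, temperature=0.1):
    # Symmetric InfoNCE: visible/invisible embeddings of the same image
    # are positives; all other pairings in the batch are negatives.
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

enc_visible, enc_invisible = Encoder(), Encoder()
opt = torch.optim.Adam(
    list(enc_visible.parameters()) + list(enc_invisible.parameters()),
    lr=1e-4)

loader = [torch.randn(16, 3, 64, 64) for _ in range(10)]  # stand-in dataset
for images in loader:
    visible, invisible = split_frequencies(images)
    loss = info_nce(enc_visible(visible), enc_invisible(invisible))
    opt.zero_grad()
    loss.backward()
    opt.step()

In a sketch like this, the shared latent space learned by the two encoders is what the style-transfer stage would draw on, e.g., by optimizing a sensory-resolution image so that its representation under the invisible-signal encoder matches that of a target invisible style.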