Visualizing the Unseen: Perceptographer, an AI Engine for Visualizing Brain-Stimulation-Induced Perceptual Events
Poster Presentation: Saturday, May 17, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Theory
Elia Shahbazi1, Drew Nguyen1, Rasel Ahmed Bhuiyan2, Adam Czajka2, Arash Afraz1; 1National Institutes of Health, 2University of Notre Dame
We recently developed a novel paradigm called perceptography1 to visualize complex perceptual distortions induced by local stimulation of the inferotemporal (IT) cortex. Perceptography uses machine learning to create and optimize specific complex image distortions that the animal finds hard to distinguish from the experience of being cortically stimulated. This paradigm opens the door to scientific measurement of subjective perceptual events, but it comes with a serious image generation challenge. In the absence of a theory linking neuronal activity to visual perception, the visual distortions perceived after brain stimulation may be of any nature. Thus, to avoid bias, an image generation engine that aims to mimic stimulation-induced visual distortions must be able to create any possible distortion of the image. State-of-the-art AI offers two fundamentally different approaches to image generation (for example, face generation): generative adversarial networks (GANs), which can create virtually any natural face but cannot produce off-manifold distortions of it, and diffusion models (DMs), which can generate any image from a text prompt but have difficulty fine-tuning image/face identity continuously through prompts2,3. We introduce Perceptographer, a novel architecture designed to solve this problem. It combines a GAN (StyleGANEX), an autoencoder, and a DM (pix2pix-instructor & LLM) into a customizable engine for navigating this dense, multidimensional space. We invert each GAN-DM output image into a perturbable latent space, enabling Perceptographer to generate off-manifold distortions, apply graded distortion levels, and reconstruct any point in a continuous feature space. Perceptographer thus offers a novel, customizable framework for visualizing brain-stimulation-induced perceptual events in different parts of the visual cortex. It overcomes the limitations of current image generation models in handling complex, off-manifold image distortions, providing new opportunities for visualizing and understanding stimulation-induced perceptual phenomena across multiple cortical regions.
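To illustrate the latent-perturbation idea described above, the sketch below encodes an image into a latent vector, nudges it along a candidate distortion direction at several graded levels, and decodes each perturbed latent back to an image. The ToyEncoder and ToyDecoder modules are hypothetical stand-ins for the StyleGANEX inversion and the GAN-DM generators; this is a minimal PyTorch sketch of the concept under those assumptions, not the authors' implementation.

```python
# Minimal sketch of graded, off-manifold latent perturbation.
# ToyEncoder/ToyDecoder are hypothetical stand-ins, not the Perceptographer components.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Maps an image to a flat latent vector (stand-in for GAN inversion)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class ToyDecoder(nn.Module):
    """Maps a latent vector back to an image (stand-in for the generator)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 32 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 16 -> 32
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 32, 8, 8)
        return self.net(h)

def perturb_latent(z, direction, levels):
    """Apply a graded distortion along one latent direction.

    The direction may point off the natural-image manifold; `levels` sets the
    distortion magnitude, yielding a continuous family of distorted images.
    """
    direction = direction / direction.norm()
    return torch.stack([z + alpha * direction for alpha in levels], dim=0)

if __name__ == "__main__":
    encoder, decoder = ToyEncoder(), ToyDecoder()
    image = torch.rand(1, 3, 256, 256)          # placeholder input image
    z = encoder(image)                           # invert the image into latent space
    direction = torch.randn_like(z)              # candidate distortion direction
    levels = [0.0, 0.5, 1.0, 2.0]                # graded distortion levels
    z_family = perturb_latent(z, direction, levels)
    distorted = decoder(z_family.view(len(levels), -1))
    print(distorted.shape)  # (4, 3, 32, 32): one toy image per distortion level
```

Keeping the distortion direction explicit and the level continuous is what allows such an engine to produce arbitrary, graded image changes rather than being confined to the generator's natural-image manifold.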
Acknowledgements: This research was supported by the Intramural Research Program of the NIMH (ZIAMH002958).