Coding visuo-spatial information in a vibrotactile belt: Perceived egocentric direction from patterns of vibration

Poster Presentation: Sunday, May 18, 2025, 8:30 am – 12:30 pm, Pavilion
Session: Action: Perception and recognition

Sina Feldmann1, Qiwu Zhang2, Chiang-Heng Chien2, Brian Free1, Benjamin B. Kimia2, William H. Warren1; 1Department of Cognitive & Psychological Sciences, Brown University, 2School of Engineering, Brown University

How might we design more useful assistive devices for individuals with impaired vision? Although traditional tools such as the long cane are simple and effective for short-range guidance (2-3 steps), there are few practical aids for navigation and collision avoidance at intermediate distances. We are investigating vibrotactile belts as a potential sensory substitution interface, and are currently comparing alternative methods of encoding visuo-spatial information in patterns of vibration. In the present experiment, we tested perceived egocentric direction from single-tactor and distributed (Gaussian) patterns of vibration. Participants (N=16) wore a vibrotactile belt containing 16 pager motors spaced 22.5° apart, driven by an Arduino controller. Eight directions at 45° intervals were stimulated for 2 seconds at a fixed intensity (225 Hz), and participants indicated the perceived direction of vibration by clicking on a circle surrounding an icon of a person on a computer screen. Four vibration patterns were compared: (a) a single motor, (b) 3 adjacent motors in a narrow Gaussian distribution (spanning 45°), and (c) 3 or (d) 5 motors in a wide Gaussian distribution (spanning 90°). Each stimulus was presented ten times, in randomized order within each vibration-pattern block. The results show that mean reported direction was a linear, highly accurate function of stimulated direction (mean constant error = 0.35°). The mean variable error depended on direction, with the smallest within-subject SD in the anterior and posterior directions (mean SD = 6.22°) and the largest in the lateral directions (mean SD = 13.77°) (see also Cholewiak & Schwab, 2004). Furthermore, responses were unaffected by the vibration pattern. We conclude that single-tactor vibrations are sufficient to specify egocentric direction quite accurately, simplifying the encoding and reducing the controller computation. Next, we plan to evaluate different encoding strategies for guiding walking to a series of spatial targets.
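As an illustration of the encoding scheme (not the authors' implementation), the following minimal C++ sketch shows one way per-tactor intensities could be computed for a 16-motor belt: either a single active motor, or a Gaussian falloff of intensity across adjacent motors. The sigma values, the 0-255 PWM intensity scale, and the tail-clipping threshold are assumptions for illustration; the abstract specifies only the motor spacing (22.5°) and the angular spans of the patterns (45° and 90°).

// Minimal sketch (assumed, not the authors' code): compute per-tactor
// vibration intensities for a 16-tactor belt, using either a single-tactor
// pattern or a Gaussian distribution over adjacent tactors.
#include <cmath>
#include <cstdio>

constexpr int kNumTactors = 16;
constexpr double kSpacingDeg = 360.0 / kNumTactors;  // 22.5 deg per tactor

// Smallest signed angular difference between two directions, in degrees.
double angleDiff(double a, double b) {
    return std::fmod(a - b + 540.0, 360.0) - 180.0;
}

// Fill intensity[0..15] with PWM values (0-255, an assumed scale) for a
// target direction. sigmaDeg <= 0 selects the single-tactor pattern.
void encodeDirection(double targetDeg, double sigmaDeg,
                     int intensity[kNumTactors]) {
    for (int i = 0; i < kNumTactors; ++i) {
        double tactorDeg = i * kSpacingDeg;          // tactor's egocentric direction
        double d = angleDiff(targetDeg, tactorDeg);  // offset from target
        if (sigmaDeg <= 0.0) {
            // Single-tactor: drive only the motor nearest the target.
            intensity[i] = (std::fabs(d) <= kSpacingDeg / 2.0) ? 255 : 0;
        } else {
            // Gaussian: intensity falls off with angular distance from target.
            double w = std::exp(-(d * d) / (2.0 * sigmaDeg * sigmaDeg));
            int v = static_cast<int>(std::lround(255.0 * w));
            intensity[i] = (v < 8) ? 0 : v;  // clip negligible tails (assumed threshold)
        }
    }
}

int main() {
    int pwm[kNumTactors];
    // Example: target at 90 deg (right), narrow Gaussian pattern.
    encodeDirection(90.0, 15.0, pwm);  // sigma = 15 deg is an assumed value
    for (int i = 0; i < kNumTactors; ++i)
        std::printf("tactor %2d (%5.1f deg): %3d\n", i, i * kSpacingDeg, pwm[i]);
    return 0;
}

With the 22.5° spacing, an assumed sigma near 15° leaves three motors above the clipping threshold (a span of about 45°, matching the narrow pattern), while a sigma near 22° activates five motors (about 90°, matching the 5-motor wide pattern). The 3-motor wide variant, which spans 90° with only three motors, would presumably drive every other tactor, a detail this contiguous sketch does not reproduce.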

Acknowledgements: This research is supported by NIH R01EY029745.