
VASC Seminar

Hyunsung Cho, Ph.D. Student, Human-Computer Interaction Institute (HCII), Carnegie Mellon University
Monday, October 28
3:30 pm to 4:30 pm
3305 Newell-Simon Hall
Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality
Abstract: Spatial audio in Extended Reality (XR) provides users with better awareness of where virtual elements are placed and efficiently guides them to events such as notifications, system alerts from different windows, or approaching avatars. Humans, however, are inaccurate at localizing sound cues, especially with multiple sources, due to limitations in human auditory perception such as angular discrimination error and front-back confusion. This decreases the efficiency of XR interfaces because users misidentify which XR element a sound is coming from. To address this, we propose Auptimize, a novel computational approach for placing XR sound sources, which mitigates such localization errors by utilizing the ventriloquist effect. Auptimize disentangles the sound source locations from the visual elements and relocates the sound sources to optimal positions for unambiguous identification of sound cues, avoiding errors due to inter-source proximity and front-back confusion. Our evaluation shows that Auptimize decreases spatial audio-based source identification errors compared to playing sound cues at the paired visual-sound locations. We demonstrate the applicability of Auptimize for diverse spatial audio-based interactive XR scenarios.
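To give a concrete flavor of the kind of optimization the abstract describes, the minimal sketch below is purely illustrative and is not the authors' published algorithm: it brute-forces an audio azimuth for each visual element within an assumed ventriloquist-effect tolerance, penalizing cue pairs that end up too close together or near each other's front-back mirror position. The tolerance, separation threshold, cost terms, and all names are assumptions made for this example (azimuths in degrees, 0 = front, 90 = right).

# Hypothetical sketch, not the Auptimize algorithm: pick one audio azimuth per
# visual element so that sound cues stay identifiable. All thresholds are assumed.
from itertools import product

VENTRILOQUIST_TOLERANCE = 30.0  # assumed max audio-visual offset the effect can bridge (degrees)
MIN_SEPARATION = 20.0           # assumed angular gap needed to tell two cues apart (degrees)

def angular_distance(a, b):
    """Smallest absolute difference between two azimuths, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def front_back_mirror(azimuth):
    """Azimuth that is front-back confusable with the input (mirrored across the interaural axis)."""
    return (180.0 - azimuth) % 360.0

def placement_cost(audio_azimuths):
    """Penalize cue pairs that are too close or that sit near each other's mirror position."""
    cost = 0.0
    for i in range(len(audio_azimuths)):
        for j in range(i + 1, len(audio_azimuths)):
            a, b = audio_azimuths[i], audio_azimuths[j]
            cost += max(0.0, MIN_SEPARATION - angular_distance(a, b))
            cost += max(0.0, MIN_SEPARATION - angular_distance(front_back_mirror(a), b))
    return cost

def optimize_placement(visual_azimuths, step=10.0):
    """Brute-force the lowest-cost combination of per-element candidates within the tolerance."""
    candidates = []
    for v in visual_azimuths:
        offsets = range(-int(VENTRILOQUIST_TOLERANCE), int(VENTRILOQUIST_TOLERANCE) + 1, int(step))
        candidates.append([(v + o) % 360.0 for o in offsets])
    return list(min(product(*candidates), key=placement_cost))

if __name__ == "__main__":
    visuals = [10.0, 25.0, 170.0]       # example visual element azimuths
    print(optimize_placement(visuals))  # relocated audio cue azimuths

A real system would of course also need elevation and distance, a perceptual model calibrated to human localization accuracy, and a faster solver than brute force; this sketch only conveys the core trade-off between audio-visual offset and inter-cue ambiguity that the abstract describes.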
 
Bio: Hyunsung Cho is a fourth-year Ph.D. student in the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University, advised by Prof. David Lindlbauer. Her research focuses on designing, implementing, and evaluating context-aware Extended Reality (XR) interfaces and multimodal interaction techniques in XR to enable seamless, unobtrusive human-computer interaction. Her work combines computational modeling of human perception and behavior, user-centered design, and intelligent systems to create adaptive interfaces for diverse user contexts. Her research has received Best Paper Awards and Methods Recognition at ACM CSCW and ACM ISS. She holds an M.S. and a B.S. in Computer Science from KAIST. She has previously worked as a Research Scientist Intern at Meta’s Reality Labs Research and in Nokia Bell Labs’ Pervasive Systems research group.
 
Sponsored in part by: Meta Reality Labs Pittsburgh