Instant Visual 3D Worlds Through Split-Lohmann Displays - Robotics Institute Carnegie Mellon University

VASC Seminar

Yingsi Qin, PhD Candidate, Carnegie Mellon University
Monday, September 16
3:30 pm to 4:30 pm
3305 Newell-Simon Hall
Instant Visual 3D Worlds Through Split-Lohmann Displays
Abstract:
Split-Lohmann displays provide a novel approach to creating instant visual 3D worlds that support realistic eye accommodation. Unlike commercially available VR headsets that show content at a fixed depth, the proposed display can optically place each pixel region at a different depth, instantly creating eye-tracking-free 3D worlds without time multiplexing. This enables real-time streaming of 3D content over a large depth range at high spatial resolution, an exciting step towards a more immersive real-time 3D experience. We demonstrate the technology’s capabilities through a lab prototype, showcasing high-quality visuals across various static, dynamic, and interactive 3D scenes.
Bio:
Yingsi is a PhD candidate in Electrical and Computer Engineering at Carnegie Mellon University, advised by Aswin C. Sankaranarayanan and Matthew P. O’Toole. Her research focuses on designing and building next-generation computational 3D displays for Virtual, Augmented, and Mixed Reality. Her interdisciplinary work draws on computer vision, optics, signal processing, and machine learning. Yingsi received the Best Paper Award at SIGGRAPH 2023 and the Best Demo Award at ICCP 2023.
Yingsi holds a B.S. in Computer Science from Columbia University and a B.A. in Physics from Colgate University. She was a research intern at Meta Reality Labs in the Display Systems Research team (2024) and Snap Research in the Computational Imaging team (2020). She was also a software engineering intern at Google Search (2019).
Sponsored in part by: Meta Reality Labs Pittsburgh