3:30 pm to 4:30 pm
Newell-Simon Hall 3305
Abstract: By leveraging light in various ways, we can observe and model physical phenomena or states that would otherwise be impossible to observe. In this talk, I will introduce our recent explorations of digital human modeling with different types of light. First, I will present our recent work on modeling relightable human heads, hands, and accessories. In particular, we will take a deep dive into our advances in capture systems and learning algorithms that enable real-time, photorealistic rendering of dynamic humans with global light transport. I will then present our recent work on 3D hair reconstruction with X-rays. Image-based hair reconstruction is an extremely challenging task due to the limited visibility of the hair's interior. To address this, we propose a fully automatic hair reconstruction method based on computed tomography (CT). We show that our approach achieves high-fidelity reconstruction of 3D hair strands for a wide variety of hairstyles, ready for downstream applications such as rendering and simulation.
Bio: Shunsuke Saito is a Research Scientist at Meta Reality Labs Research in Pittsburgh. He obtained his PhD at the University of Southern California. Prior to USC, he was a Visiting Researcher at the University of Pennsylvania in 2014. He obtained his BE (2013) and ME (2014) in Applied Physics at Waseda University. His research lies at the intersection of computer graphics, computer vision, and machine learning, especially centered around digital humans, 3D reconstruction, and performance capture. His work has been published at SIGGRAPH, SIGGRAPH Asia, NeurIPS, ECCV, ICCV, and CVPR, two of which were nominated for the CVPR Best Paper Award (2019, 2021). His real-time volumetric teleportation work also won the Best in Show award at SIGGRAPH 2020 Real-Time Live!.
Homepage: https://shunsukesaito.github.io/
Sponsored in part by: Meta Reality Labs Pittsburgh