2:00 pm to 3:00 pm
Event Location: NSH 3305
Bio: Xiaoming Liu is an Assistant Professor in the Department of Computer Science and Engineering at Michigan State University. He received the Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University in 2004. Before joining MSU in Fall 2012, he was a research scientist at General Electric (GE) Global Research. His research interests include computer vision, pattern recognition, biometrics, and machine learning. At GE, he performed research and led a team of researchers on projects sponsored by various government agencies and GE businesses, aimed at developing computer vision algorithms to support facial and body analysis, multi-camera video surveillance systems, and medical imaging. As a co-author, he is a recipient of the Best Paper Honorable Mention Award at the IEEE Workshop on Biometrics 2009 and the Best Student Paper Awards at WACV 2012 and 2014. He has authored more than 80 scientific publications with an h-index of 24, and has filed 22 U.S. patents.
Abstract: This talk will present an algorithm for unconstrained 3D face reconstruction. The input to our algorithm is an "unconstrained" collection of face images captured under diverse variations in pose, expression, and illumination, without metadata about cameras or timing. The output of our algorithm is a true 3D face surface model represented as a watertight triangulated surface with albedo data or texture information. 3D face reconstruction from a collection of unconstrained 2D images is a long-standing computer vision problem. Motivated by the success of state-of-the-art methods, we develop a novel photometric stereo-based method with two distinct novelties. First, working with a true 3D model allows us to enjoy the benefits of using images from all possible poses, including profiles. Second, by leveraging emerging face alignment techniques and our novel normal field-based Laplace editing, a combination of landmark constraints and photometric stereo-based normals drives our surface reconstruction. Given large photo collections and a ground truth 3D surface, we demonstrate the effectiveness and strength of our algorithm both qualitatively and quantitatively.
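As background for the photometric stereo component mentioned in the abstract, the sketch below shows the classical Lambertian photometric stereo formulation (Woodham-style): given images of a static scene under known distant light directions, per-pixel surface normals and albedo are recovered by least squares. This is a minimal illustration of the general technique only, not the speaker's algorithm, which handles unknown lighting and pose in unconstrained collections; the function name and array shapes are assumptions for this sketch.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classical Lambertian photometric stereo (illustrative sketch).

    images:     (m, h, w) array of m grayscale images of a static scene
    light_dirs: (m, 3) array of known, distant unit light directions
    Returns per-pixel unit normals (h, w, 3) and albedo (h, w).
    """
    m, h, w = images.shape
    I = images.reshape(m, -1)  # stack pixels: (m, h*w)
    # Lambertian model: I = L @ G, where G = albedo * normal per pixel.
    # Solve for G in the least-squares sense across all lights at once.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)  # albedo is the magnitude of G
    normals = G / np.maximum(albedo, 1e-12)  # unit normals (guard div-by-zero)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

With three or more non-coplanar lights the per-pixel system is overdetermined and the normals are recovered uniquely wherever the surface is lit; the talk's method instead fuses such normals with facial landmark constraints through Laplace editing to produce a watertight mesh.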