VASC Seminar

Stephen Lombardi, Research Scientist, Facebook Reality Labs
Monday, October 21
3:30 pm to 4:00 pm
GHC 6501
Neural Volumes: Learning Dynamic Renderable Volumes from Images

Abstract: Modeling and rendering of dynamic scenes is challenging, as natural scenes often contain complex phenomena such as thin structures, evolving topology, translucency, scattering, occlusion, and biological motion. Mesh-based reconstruction and tracking often fail in these cases, and other approaches (e.g., light field video) typically rely on constrained viewing conditions, which limit interactivity. To circumvent these difficulties, we present a learning-based approach to representing dynamic objects, inspired by the integral projection model used in tomographic imaging. The approach is supervised directly from 2D images in a multi-view capture setting and does not require explicit reconstruction or tracking of the object. Our method has two primary components: an encoder-decoder network that transforms input images into a 3D volume representation, and a differentiable ray-marching operation that enables end-to-end training. By virtue of its 3D representation, our construction extrapolates better to novel viewpoints than screen-space rendering techniques. The encoder-decoder architecture learns a latent representation of a dynamic scene that enables us to produce novel content sequences not seen during training. To overcome the memory limitations of voxel-based representations, we learn a dynamic irregular grid structure, implemented with a warp field applied during ray-marching. This structure greatly improves the apparent resolution and reduces grid-like artifacts and jagged motion. Finally, we demonstrate how to incorporate surface-based representations into our volumetric-learning framework for applications where the highest resolution is required, using facial performance capture as a case in point.
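For readers curious about the mechanics, the ray-marching component described above can be sketched in a few lines of PyTorch. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: it trilinearly samples a decoded RGBA voxel volume along each camera ray, applies a learned warp field to each sample point before lookup (standing in for the dynamic irregular grid mentioned in the abstract), and composites front to back with standard alpha blending rather than the paper's exact accumulation scheme. All function names, tensor shapes, and hyperparameters are assumptions made for illustration.

import torch
import torch.nn.functional as F

def render_rays(rgba_volume, warp_volume, origins, directions,
                num_steps=64, near=0.5, far=1.5):
    # rgba_volume: (1, 4, D, H, W) decoder output; channels = RGB + opacity logit.
    # warp_volume: (1, 3, D, H, W) per-voxel offsets implementing the warp field.
    # origins, directions: (N, 3) camera rays in the volume's [-1, 1]^3 frame.
    # Returns (N, 3) composited colors, differentiable w.r.t. both volumes.
    n = origins.shape[0]
    ts = torch.linspace(near, far, num_steps, device=origins.device)
    # Sample positions along each ray: (N, num_steps, 3).
    pts = origins[:, None, :] + ts[None, :, None] * directions[:, None, :]
    # grid_sample on a 5-D volume expects a sampling grid of shape (1, D', H', W', 3).
    grid = pts.view(1, n, num_steps, 1, 3)
    # Look up the warp field and deform the sample points before the color
    # lookup; the learned offsets act as a dynamic irregular grid.
    offsets = F.grid_sample(warp_volume, grid, align_corners=True)
    warped = grid + offsets.permute(0, 2, 3, 4, 1)
    rgba = F.grid_sample(rgba_volume, warped, align_corners=True)
    rgb = rgba[0, :3, :, :, 0].permute(1, 2, 0)     # (N, num_steps, 3)
    alpha = torch.sigmoid(rgba[0, 3, :, :, 0])      # (N, num_steps), in (0, 1)
    # Front-to-back compositing: each sample is weighted by its opacity times
    # the transmittance (the product of one-minus-opacity of earlier samples).
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1]], dim=1),
        dim=1)
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)

Because every operation here (grid_sample, cumprod, the weighted sum) is differentiable, a photometric loss between rendered and captured 2D images can be backpropagated through the renderer into the encoder-decoder, which is what makes the end-to-end training described in the abstract possible.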

Bio: Stephen Lombardi is a Research Scientist at Facebook Reality Labs. He received his Bachelor's in Computer Science from The College of New Jersey in 2009 and his Ph.D. in Computer Science from Drexel University in 2016. His doctoral work aimed to infer physical properties, such as the reflectance of objects and the illumination of scenes, from small sets of photographs. He originally joined FRL as a postdoctoral researcher in 2016. At FRL, he created realistic, drivable models of human faces by combining deep generative modeling techniques with 3D morphable models. His current research interests are the unsupervised learning of 3D representations of objects, scenes, and people.