High-Fidelity Neural Radiance Fields - Robotics Institute Carnegie Mellon University

VASC Seminar

Christian Richardt, Research Scientist Lead, Meta Reality Labs Research
Monday, October 14
3:30 pm to 4:30 pm
3305 Newell-Simon Hall
High-Fidelity Neural Radiance Fields

Abstract:

I will present three recent projects that focus on high-fidelity neural radiance fields for walkable VR spaces:

VR-NeRF (SIGGRAPH Asia 2023) is an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to densely capture walkable spaces in high fidelity, with multi-view high dynamic range images of unprecedented quality and density. To represent highly detailed scenes, we introduce a novel perceptual color space for learning accurate HDR appearance, and an efficient mip-mapping mechanism for level-of-detail rendering with anti-aliasing. Our multi-GPU renderer enables high-fidelity volume rendering at the full VR resolution of dual 2K×2K at 36 Hz on our custom demo machine.
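
To make the rendering side of this concrete, here is a minimal NumPy sketch of the standard NeRF volume-rendering quadrature, with a generic mu-law-style transfer standing in for the learned perceptual HDR color space; the actual VR-NeRF color space, mip-mapping mechanism, and multi-GPU renderer are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def perceptual_encode(hdr_rgb, mu=5000.0):
    # Generic mu-law-style transfer mapping linear HDR radiance to a more
    # perceptually uniform range (an illustrative stand-in for the learned
    # perceptual color space described in the talk, not its definition).
    return np.log1p(mu * hdr_rgb) / np.log1p(mu)

def volume_render(densities, colors, deltas):
    # Standard NeRF quadrature: alpha-composite samples along one ray.
    #   densities: (N,)   non-negative volume densities sigma_i
    #   colors:    (N, 3) per-sample linear HDR radiance
    #   deltas:    (N,)   distances between consecutive samples
    alphas = 1.0 - np.exp(-densities * deltas)                      # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]  # transmittance T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                  # composited color

# Example: 64 random samples along a single ray.
n = 64
rgb = volume_render(np.random.rand(n), np.random.rand(n, 3), np.full(n, 0.05))
print(perceptual_encode(rgb))
```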

HybridNeRF (CVPR 2024 Highlight) leverages the strengths of NeRF-style volumetric rendering and SDF-style surface representations by rendering most objects as surfaces while modeling the (typically) small fraction of challenging regions volumetrically. We evaluate HybridNeRF on the challenging Eyeful Tower dataset as well as other commonly used view-synthesis datasets. Compared to state-of-the-art baselines, including recent rasterization-based approaches, HybridNeRF reduces error rates by 15–30% while achieving real-time framerates (at least 36 FPS) at virtual-reality resolutions (2K×2K).
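
As a rough illustration of the surface/volume trade-off (a sketch under assumptions, not HybridNeRF's exact formulation), the snippet below uses a VolSDF-style conversion from signed distance to volume density in which a spatially varying sharpness parameter beta renders most samples as a crisp surface while leaving a difficult region soft and volumetric.

```python
import numpy as np

def laplace_cdf(x, beta):
    # CDF of a zero-mean Laplace distribution with scale beta.
    return np.where(x <= 0.0, 0.5 * np.exp(x / beta), 1.0 - 0.5 * np.exp(-x / beta))

def sdf_to_density(sdf, beta):
    # VolSDF-style conversion: density is high just inside the surface and
    # decays outside. Small beta -> near-step transition (surface-like);
    # large beta -> soft falloff (volumetric). Making beta spatially varying
    # is an assumption used here only to illustrate the hybrid idea.
    return laplace_cdf(-sdf, beta) / beta

def render_ray(sdf, colors, deltas, beta):
    densities = sdf_to_density(sdf, beta)
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# A flat surface crossed at t = 1, rendered sharply everywhere except a
# small "difficult" region around the crossing that stays volumetric.
n = 64
t = np.linspace(0.0, 2.0, n)
sdf = 1.0 - t
beta = np.where(np.abs(t - 1.0) < 0.1, 0.05, 0.005)
print(render_ray(sdf, np.random.rand(n, 3), np.full(n, 2.0 / n), beta))
```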

SpecNeRF (CVPR 2024 Highlight) proposes a learnable Gaussian directional encoding to better model view-dependent effects under near-field lighting conditions. Importantly, our new directional encoding captures the spatially varying nature of near-field lighting and emulates the behavior of prefiltered environment maps. As a result, it enables the efficient evaluation of preconvolved specular color at any 3D location with varying roughness coefficients. We further introduce a data-driven geometry prior that helps alleviate the shape-radiance ambiguity in reflection modeling.
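
The snippet below is an assumption-level sketch of what a spatially varying Gaussian directional encoding can look like: each feature is the response of a hypothetical learnable 3D Gaussian evaluated near the reflected ray and broadened by the surface roughness, so rougher surfaces see a smoother, prefiltered-looking signal; the exact encoding and integration used by SpecNeRF are not reproduced here.

```python
import numpy as np

def gaussian_directional_encoding(x, d, roughness, centers, scales):
    # Feature k is the response of a (hypothetical) learnable 3D Gaussian
    # (centers[k], scales[k]) evaluated at the point on the ray x + t*d
    # (t >= 0) closest to that Gaussian, with its lobe widened by the surface
    # roughness so that rougher surfaces receive a prefiltered-looking signal.
    # Because the features depend on x, the encoding is spatially varying,
    # which is what makes it suitable for near-field lighting.
    d = d / np.linalg.norm(d)
    t = np.clip((centers - x) @ d, 0.0, None)          # closest point per Gaussian
    closest = x[None, :] + t[:, None] * d[None, :]     # (K, 3) points on the ray
    dist2 = np.sum((closest - centers) ** 2, axis=1)
    var = scales ** 2 + roughness ** 2                 # roughness widens each lobe
    return np.exp(-0.5 * dist2 / var)                  # (K,) feature vector

# Example: 8 hypothetical Gaussian lobes queried at one surface point along a
# reflected direction; the features would feed the specular color branch.
rng = np.random.default_rng(0)
centers = rng.normal(size=(8, 3))
scales = rng.uniform(0.1, 0.5, size=8)
features = gaussian_directional_encoding(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                          0.2, centers, scales)
print(features)
```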

Bio:

Christian Richardt is a Research Scientist at Meta Reality Labs Research in Pittsburgh, PA. His research combines insights from vision, graphics, and perception to reconstruct visual information from images and videos and to create high-quality visual experiences, with a focus on virtual reality. Christian was previously an Associate Professor and EPSRC-UKRI Innovation Fellow in the Visual Computing Group and the CAMERA Centre at the University of Bath, UK. Before that, he was a postdoc at the Intel Visual Computing Institute at Saarland University and the Max-Planck-Institut für Informatik in Saarbrücken, Germany, and before that a postdoc in the REVES team at Inria Sophia Antipolis, France. Christian graduated with a PhD and BA from the University of Cambridge in 2012 and 2007, respectively. His doctoral research investigated the full life cycle of RGBD videos: from their acquisition, via filtering and processing, to the evaluation of stereoscopic displays.

Homepage:  https://richardt.name

Sponsored in part by:   Meta Reality Labs Pittsburgh