Analyzing the Effectiveness of Neural Radiance Fields for Geometric Modeling of Lunar Terrain
Abstract
The geometric accuracy of digital elevation models built from neural radiance fields (NeRFs) is assessed by comparison to multi-view stereo reconstruction, using stereo pairs collected during a simulated rover traverse under lunar polar lighting conditions. While NeRF-based methods are more sensitive to the viewpoints in the training data and produce more artifacts at the edges of the scene, they are capable of producing denser models in occluded regions with limited additional error when the light source is not visible to the cameras. With a visible light source, the NeRF models fail to learn the scene geometry correctly, even though the rendered images still appear visually plausible. This effect is mitigated somewhat by depth supervision, although depth supervision introduces higher error elsewhere in the scene. Since the volumetric rendering used by NeRF relies on probabilistic reasoning along the ray used to observe the scene, the standard deviation and the gradient of the cumulative distribution function along each ray can be used as indicators of how sharply a NeRF model resolves a surface, and both are correlated with height error.
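To make the ray-based indicators concrete, the sketch below computes, for a single ray, the standard NeRF termination weights from sampled densities, the expected depth, its standard deviation, and the maximum gradient of the cumulative distribution function. This is a minimal NumPy illustration of the standard volumetric rendering formulation, not the authors' implementation; the function name ray_surface_sharpness and the small epsilon terms are assumptions for the example.

import numpy as np

def ray_surface_sharpness(sigmas, t_vals):
    """Depth statistics along one ray from NeRF densities (illustrative sketch).

    sigmas : (N,) predicted volume densities at the samples along the ray
    t_vals : (N,) sample distances along the ray, monotonically increasing
    """
    # Spacing between consecutive samples (last interval treated as very large).
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    # Per-sample opacity and accumulated transmittance (standard NeRF compositing).
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1] + 1e-10)))
    weights = trans * alphas  # probability the ray terminates at each sample

    # Expected depth and its standard deviation: a narrow termination
    # distribution (low std) means the model resolves the surface sharply.
    w_sum = weights.sum() + 1e-10
    depth = np.sum(weights * t_vals) / w_sum
    depth_std = np.sqrt(np.sum(weights * (t_vals - depth) ** 2) / w_sum)

    # Cumulative distribution of termination probability; its steepest rise
    # per unit distance is another sharpness indicator.
    cdf = np.cumsum(weights) / w_sum
    cdf_grad_max = np.max(np.diff(cdf) / (np.diff(t_vals) + 1e-10))

    return depth, depth_std, cdf_grad_max

Under this formulation, rays where the model has not resolved a surface show a large depth standard deviation and a shallow CDF slope, which is why these quantities can serve as proxies for height error.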
BibTeX
@conference{Hansen-2024-140772,
author = {Margaret Hansen and Caleb Adams and Terrence Fong and David Wettergreen},
title = {Analyzing the Effectiveness of Neural Radiance Fields for Geometric Modeling of Lunar Terrain},
booktitle = {Proceedings of IEEE Aerospace Conference},
year = {2024},
month = {May},
keywords = {neural radiance field, NeRF, lunar, crater, graphics, perception, computer vision},
}