Abstract:
Neural scene representations have transformed the way we model and understand the visual world, enabling stunningly realistic reconstructions from image data. However, these advances often come at a significant computational cost, particularly due to the inefficiency of volume rendering. In this talk, I'll present GL-NeRF, a new approach that tackles this challenge from a mathematical perspective.
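To give a flavor of the idea (assuming, from the name, that "GL" refers to Gauss-Laguerre quadrature), here is a minimal NumPy sketch of how the volume-rendering integral can be evaluated with only a handful of samples. `sigma_fn` and `color_fn` are hypothetical stand-ins for a NeRF's density and color queries, not GL-NeRF's actual interface:

```python
import numpy as np

def volume_render_gl(sigma_fn, color_fn, n_points=4, t_max=6.0, n_grid=256):
    """Sketch: evaluate C = int_0^inf T(t) sigma(t) c(t) dt,
    with T(t) = exp(-int_0^t sigma), via Gauss-Laguerre quadrature.
    Substituting u(t) = int_0^t sigma(s) ds rewrites the integral as
    C = int_0^inf e^{-u} c(t(u)) du, exactly the weight function that
    Gauss-Laguerre quadrature handles with very few nodes."""
    # Nodes u_i and weights w_i for int_0^inf e^{-u} f(u) du ~ sum w_i f(u_i)
    u_nodes, weights = np.polynomial.laguerre.laggauss(n_points)

    # Tabulate u(t) on a coarse grid (trapezoid rule) so it can be inverted;
    # assumes sigma > 0 along the ray so u(t) is strictly increasing.
    t_grid = np.linspace(0.0, t_max, n_grid)
    sigma = sigma_fn(t_grid)
    du = 0.5 * (sigma[1:] + sigma[:-1]) * np.diff(t_grid)
    u_grid = np.concatenate([[0.0], np.cumsum(du)])

    # Map each quadrature node u_i back to a ray location t_i = t(u_i).
    t_nodes = np.interp(u_nodes, u_grid, t_grid)

    # C ~ sum_i w_i * c(t_i): the color network is queried only n_points times.
    colors = color_fn(t_nodes)  # expected shape: (n_points, 3)
    return weights @ colors
```

The appeal, under these assumptions, is that the expensive color queries happen only at the few Gauss-Laguerre nodes per ray rather than at dozens of stratified samples; the sketch elides how u(t) is obtained efficiently in practice.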
I'll also dive into the other side of world modeling: appearance modeling, which involves disentangling geometry, materials, and lighting. With the rise of real-time methods like 3D Gaussian Splatting, we now have powerful tools for this task, but many models still rely on physically inaccurate assumptions. I'll talk about OMG, a principled solution that draws on radiative transfer theory to introduce physically accurate constraints.
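The abstract does not spell out OMG's constraint, so purely to illustrate what a radiative-transfer-grounded constraint can look like, here is a sketch contrasting the usual unconstrained opacity in 3D Gaussian Splatting with an opacity derived from the Beer-Lambert attenuation law. The function names are mine, not OMG's API, and OMG's actual formulation may differ:

```python
import numpy as np

def alpha_free(logit):
    """Common 3DGS practice: per-Gaussian opacity is an unconstrained
    learned parameter squashed by a sigmoid -- no physical meaning."""
    return 1.0 / (1.0 + np.exp(-logit))

def alpha_radiative(sigma, path_length):
    """Radiative-transfer view (Beer-Lambert law): light crossing a medium
    of density sigma over a segment of given length is attenuated by
    T = exp(-sigma * length), so the fraction absorbed or scattered is
        alpha = 1 - exp(-sigma * length).
    This ties opacity to a physical density and to the ray's path through
    the primitive, instead of leaving it a free parameter."""
    return 1.0 - np.exp(-sigma * path_length)

def composite(alphas, colors):
    """Front-to-back alpha compositing shared by both opacity models:
    C = sum_i T_i * alpha_i * c_i, with T_i = prod_{j<i} (1 - alpha_j)."""
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    return (trans * alphas) @ colors
```

The design point this is meant to illustrate: once opacity is a function of density and path length, quantities like material albedo and lighting can be optimized against a physically meaningful transmittance rather than an arbitrary learned scalar.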
I hope these contributions bring us closer to generalizable neural models that can accurately and efficiently represent both the structure and appearance of complex scenes. Whether you're interested in rendering or in vision, there should be something exciting for you in this talk on neural scene representations.
Committee:
Prof. Katia Sycara (advisor)
Prof. Shubham Tulsiani
Tianyi Zhang