Title: LightSpeed: Light and Fast Neural Light Fields on Mobile Devices
Abstract:
Real-time novel-view image synthesis on mobile devices is challenging due to limited on-device computational power and storage. Volumetric rendering methods, such as NeRF and its derivatives, are unsuitable for mobile devices because of the high computational cost of volumetric rendering. On the other hand, recent advances in neural light field representations have shown promising real-time view synthesis results on mobile devices. Neural light field methods learn a direct mapping from a ray representation to the pixel color. The current choice of ray representation is either stratified ray sampling or Plücker coordinates, overlooking the classic light slab (two-plane) representation, the representation classically preferred for interpolating between light field views.
In this thesis, we find that the light slab representation is an efficient representation for learning a neural light field. More importantly, it is a lower-dimensional ray representation, enabling us to learn the 4D ray space using feature grids, which are significantly faster to train and render. Although the light-slab representation was designed primarily for frontal views, we show that it can be extended to non-frontal scenes using a divide-and-conquer strategy. Our method offers superior rendering quality compared to previous light field methods and achieves a significantly improved trade-off between rendering quality and speed.
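To make the two-plane idea concrete: a light slab parameterizes each ray by its intersection points with two parallel planes, giving a 4D coordinate (u, v, s, t). Below is a minimal sketch of that parameterization; the function name, plane placement (z = z_near and z = z_far), and tolerance are illustrative assumptions, not the thesis's implementation.

```python
def light_slab_coords(origin, direction, z_near=0.0, z_far=1.0):
    """Map a ray to 4D light-slab coordinates (u, v, s, t): the (x, y)
    points where the ray crosses the planes z = z_near and z = z_far."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dz) < 1e-12:
        # A ray parallel to the slab planes never intersects them.
        raise ValueError("ray is parallel to the slab planes")
    t0 = (z_near - oz) / dz  # ray parameter at the near plane
    t1 = (z_far - oz) / dz   # ray parameter at the far plane
    u, v = ox + t0 * dx, oy + t0 * dy
    s, t = ox + t1 * dx, oy + t1 * dy
    return (u, v, s, t)

# A ray from the origin heading diagonally hits z=0 at (0, 0)
# and z=1 at (1, 0), giving the 4D coordinate (0, 0, 1, 0).
print(light_slab_coords((0.0, 0.0, 0.0), (1.0, 0.0, 1.0)))
```

Because every ray in a frontal scene reduces to this 4D tuple, a neural light field over the slab can be stored in a 4D feature grid rather than a higher-dimensional ray space such as 6D Plücker coordinates.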
Committee:
Prof. László A. Jeni (advisor)
Prof. Fernando De La Torre Frade
Prof. Shubham Tulsiani
Mosamkumar Dabhi
Zoom Link: https://cmu.zoom.us/j/
Meeting ID: 91220874161
Passcode: 545445