VASC Seminar
Shedding Light on 3D Cameras
Abstract: The advent (and commoditization) of low-cost 3D cameras is revolutionizing many application domains, including robotics, autonomous navigation, human-computer interfaces, and recently even consumer devices such as cell phones. Most modern 3D cameras (e.g., LiDAR) are active; they consist of a light source that emits coded light into the scene, i.e., its intensity is modulated over [...]
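The truncated sentence points at intensity-modulated (continuous-wave time-of-flight) sensing. As a rough illustration of that general principle, not of any specific system from the talk, the standard four-bucket CW-ToF depth recovery can be sketched as follows; the function name, modulation frequency, and test values are all illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(i0, i90, i180, i270, mod_freq_hz):
    """Depth from four samples of the correlation between the emitted
    and received signals at 0/90/180/270-degree phase offsets (the
    standard four-bucket continuous-wave ToF scheme)."""
    # Phase accumulated by the modulation over the round trip.
    phase = math.atan2(i90 - i270, i0 - i180) % (2 * math.pi)
    # Phase maps to round-trip distance; the 4*pi halves it to one-way depth.
    return C * phase / (4 * math.pi * mod_freq_hz)

# Synthetic check: a point 1.5 m away under 20 MHz modulation.
f = 20e6
d_true = 1.5
phi = 4 * math.pi * f * d_true / C
buckets = [math.cos(phi - k * math.pi / 2) for k in range(4)]
print(tof_depth(*buckets, f))  # ≈ 1.5
```

Note the unambiguous range of such a sensor is C / (2 f), about 7.5 m here; real devices resolve the resulting phase wrapping, e.g., with multiple modulation frequencies.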
Neural Field Representations of Mobile Computational Photography
Abstract: Burst imaging pipelines allow cellphones to compensate for less-than-ideal optical and sensor hardware by computationally merging multiple lower-quality images into a single high-quality output. The main challenge for these pipelines is compensating for pixel motion, estimating how to align and merge measurements across time while the user's natural hand tremor involuntarily shakes the camera. In [...]
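The align-and-merge idea the abstract describes can be sketched in miniature, reduced to 1D signals and integer shifts; this is not the talk's pipeline (which must handle subpixel, spatially varying motion and robust merging), and all names and values are illustrative:

```python
def ssd(ref, frame, shift):
    """Mean squared difference between ref[i] and frame[i - shift]
    over the samples where both are defined."""
    total, n = 0.0, 0
    for i in range(len(ref)):
        j = i - shift
        if 0 <= j < len(frame):
            total += (ref[i] - frame[j]) ** 2
            n += 1
    return total / n

def align_and_merge(frames, max_shift=3):
    """Merge a burst: estimate each frame's integer shift against the
    first frame by brute-force SSD search, then average the aligned
    samples that overlap the reference grid."""
    ref = frames[0]
    sums = [0.0] * len(ref)
    counts = [0] * len(ref)
    for frame in frames:
        s = min(range(-max_shift, max_shift + 1),
                key=lambda k: ssd(ref, frame, k))
        for i in range(len(ref)):
            j = i - s
            if 0 <= j < len(frame):
                sums[i] += frame[j]
                counts[i] += 1
    return [total / c for total, c in zip(sums, counts)]

ref = [1.0, 2.0, 8.0, 3.0, 1.0, 0.0, 0.0]
shaken = [0.0] + ref[:-1]  # same scene, camera moved by one sample
print(align_and_merge([ref, shaken]))  # recovers ref in this noise-free case
```

With noisy frames, the averaging step is where the quality gain appears: aligned averaging of N frames reduces noise variance by roughly 1/N.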
Passive Ultra-Wideband Single-Photon Imaging
Abstract: High-speed light sources, fast cameras, and depth sensors have made it possible to image dynamic phenomena occurring in ever smaller time intervals with the help of actively controlled light sources and synchronization. Unfortunately, while these techniques do capture ultrafast events, they cannot simultaneously capture slower ones. I will discuss our recent work on passive ultra-wideband [...]
From Understanding to Interacting with the 3D World
Abstract: Understanding the 3D structure of real-world environments is a fundamental challenge in machine perception, critical for applications spanning robotic navigation, content creation, and mixed reality. In recent years, machine learning has advanced rapidly; in the 3D domain, however, such data-driven learning remains challenging due to the limited availability of 3D/4D data. In this talk, [...]
Learned Imaging Systems
Abstract: Computational imaging systems are based on the joint design of optics and associated image reconstruction algorithms. Of particular interest in recent years has been the development of end-to-end learned “Deep Optics” systems that use differentiable optical simulation in combination with backpropagation to simultaneously learn optical design and deep network post-processing for applications such as hyperspectral [...]
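The end-to-end idea — backpropagating through a differentiable optical simulation so the optic and the reconstruction are learned jointly — can be illustrated with a deliberately tiny toy: linear "optics" (one coded measurement of a 2-pixel scene), a linear decoder, and hand-derived gradients standing in for autodiff. This is an assumption-laden sketch of the principle, not any system from the talk:

```python
import random

random.seed(0)

# Toy "scene prior": 2-pixel signals that all lie along one direction,
# standing in for the statistical structure of natural images.
V = (1.0, 2.0)
data = [(t * V[0], t * V[1]) for t in (random.uniform(-1, 1) for _ in range(64))]
n = len(data)

# "Optics": one linear measurement y = w.x (e.g., a coded mask summing
# two pixels onto one sensor element).
# "Reconstruction network": a linear decoder x_hat = b * y.
w = [0.5, -0.3]
b = [0.1, 0.1]
lr = 0.05

for step in range(500):
    gw, gb, loss = [0.0, 0.0], [0.0, 0.0], 0.0
    for x in data:
        y = w[0] * x[0] + w[1] * x[1]            # differentiable optics sim
        r = [b[0] * y - x[0], b[1] * y - x[1]]   # reconstruction residual
        loss += r[0] ** 2 + r[1] ** 2
        # Hand-derived backprop through decoder AND optics jointly.
        gb[0] += 2 * r[0] * y
        gb[1] += 2 * r[1] * y
        g_y = 2 * (r[0] * b[0] + r[1] * b[1])    # dL/dy, chained into w
        gw[0] += g_y * x[0]
        gw[1] += g_y * x[1]
    w = [w[i] - lr * gw[i] / n for i in range(2)]
    b = [b[i] - lr * gb[i] / n for i in range(2)]

print(loss / n)  # mean loss of the jointly learned optic + decoder
```

Because the toy scenes are rank-1, the learned measurement aligns with the signal direction and the loss driven near zero; real deep-optics systems replace the scalar measurement with a physically parameterized optical model and the linear decoder with a deep network.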
Unlocking Magic: Personalization of Diffusion Models for Novel Applications
Abstract: Since the recent advent of text-to-image diffusion models for high-quality realistic image generation, a plethora of creative applications have suddenly come within reach. I will present my work at Google, where I have attempted to unlock magical applications by proposing simple techniques that act on these large text-to-image diffusion models. In particular, a large class of [...]
Instant Visual 3D Worlds Through Split-Lohmann Displays
Abstract: Split-Lohmann displays provide a novel approach to creating instant visual 3D worlds that support realistic eye accommodation. Unlike commercially available VR headsets that show content at a fixed depth, the proposed display can optically place each pixel region at a different depth, instantly creating eye-tracking-free 3D worlds without time-multiplexing. This enables real-time streaming [...]
Remote Rendering and 3D Streaming for Resource-Constrained XR Devices
Abstract: An overview of the motivation and challenges for remote rendering and real-time 3D video streaming on XR headsets.
Bio: Edward is a third-year PhD student in the ECE department interested in computer systems for VR/AR devices. Homepage: https://users.ece.cmu.edu/~elu2/
Sponsored in part by: Meta Reality Labs Pittsburgh
Vectorizing Raster Signals for Spatial Intelligence
Abstract: This seminar will focus on how vectorized representations can be generated from raster signals to enhance spatial intelligence. I will discuss the core methodology behind this transformation, with a focus on applications in AR/VR and robotics. The seminar will also briefly cover follow-up work that explores rigging and re-animating objects from casual single videos [...]