PhD Thesis Defense
Carnegie Mellon University
Towards Reconstructing Non-rigidity from Single Camera
Abstract: In this talk we will discuss how to infer 3D from images captured by a single camera, without assuming that the target scenes or objects are static. The non-static setting makes our problem ill-posed and challenging to solve, but it is vital in practical applications where the target of interest is non-static. To solve ill-posed problems, the current trend [...]
Large Scale Dense 3D Reconstruction via Sparse Representations
Abstract: Dense 3D scene reconstruction is in high demand today for view synthesis, navigation, and autonomous driving. A practical reconstruction system takes as input multi-view scans of the target captured with RGB-D cameras, LiDARs, or monocular cameras, computes sensor poses, and outputs scene reconstructions. These algorithms are computationally expensive and memory-intensive due to the volume of 3D data. [...]
From Reinforcement Learning to Robot Learning: Leveraging Prior Data and Shared Evaluation
Abstract: Unlike most machine learning applications, robotics involves physical constraints that make off-the-shelf learning challenging. Difficulties in large-scale data collection and training present a major roadblock to applying today’s data-intensive algorithms. Robot learning has an additional roadblock in evaluation: every physical space is different, making results across labs inconsistent. Two common assumptions of the robot [...]
Building 4D Models of Objects and Scenes from Monocular Videos
Abstract: We explore how to infer the time-varying 3D structures of generic deformable objects and dynamic scenes from monocular videos. A solution to this problem is essential for virtual reality and robotics applications. However, inferring 4D structures from 2D observations is challenging due to its under-constrained nature. In a casual setup where there is neither [...]
Learning via Visual-Tactile Interaction
Abstract: Humans learn by interacting with their surroundings using all of their senses. The first of these senses to develop is touch, and it is the first way that young humans explore their environment, learn about objects, and tune their cost functions (via pain or treats). Yet, robots are often denied this highly informative and [...]
Redefining the Perception-Action Interface: Visual Action Representations for Contact-Centric Manipulation
Abstract: In robotics, understanding the link between perception and action is pivotal. Typically, perception systems process sensory data into state representations like segmentations and bounding boxes, which a planner uses to plan actions. However, this state estimation approach can fail in environments with partial observability, and in cases with challenging object properties like transparency and deformability. [...]
Multi-Human 3D Reconstruction from Monocular Videos
Abstract: We study the problem of multi-human 3D reconstruction from videos captured in the wild. Human movements are dynamic, and accurately reconstructing them in various settings is crucial for developing immersive social telepresence, assistive humanoid robots, and augmented reality systems. However, creating such a system requires addressing fundamental issues with previous works regarding the data [...]
How I Learned to Love Blobs: The Power of Gaussian Representations in Differentiable Rendering and Optimization
Abstract: In this thesis, we explore the use of Gaussian representations in multiple application areas of computer vision and robotics. In particular, we design a ray-based differentiable renderer for 3D Gaussians that can be used to solve multiple classic computer vision problems in a unified manner. For example, we can reconstruct 3D shapes from color, [...]
Towards Photorealistic Dynamic Capture and Animation of Human Hair and Head
Abstract: Realistic human avatars play a key role in immersive virtual telepresence. To reach a high level of realism, a human avatar needs to faithfully reflect human appearance. A human avatar should also be drivable and express natural motions. Existing works have made significant progress in building drivable realistic face avatars, but they rarely include [...]
Modeling Dynamic Clothing for Data-Driven Photorealistic Avatars
Abstract: In this thesis, we aim to build photorealistic animatable avatars of humans wearing complex clothing in a data-driven manner. Such avatars will be a critical technology to enable future applications such as immersive telepresence in Virtual Reality (VR) and Augmented Reality (AR). Existing full-body avatars that jointly model geometry and view-dependent texture using Variational [...]