PhD Thesis Defense
Carnegie Mellon University
System Identification and Control of Multiagent Systems Through Interactions
Abstract: This thesis investigates the problem of inferring the underlying dynamic model of the individual agents in a multiagent system (MAS) and using these models to shape the MAS's behavior with robots extrinsic to the MAS. We investigate (a) how an observer can infer the latent task and inter-agent interaction constraints from the agents' motion and [...]
Parallelized Search on Graphs with Expensive-to-Compute Edges
Abstract: Search-based planning algorithms enable robots to produce well-reasoned, long-horizon plans that achieve a given task objective. They formulate planning as a shortest-path problem on a graph embedded in the state space of the domain. Much research has been dedicated to achieving greater planning speeds to enable robots to respond quickly [...]
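The shortest-path formulation mentioned in the abstract can be illustrated with a minimal Dijkstra search over a toy grid graph. Everything below is a generic sketch, not material from the thesis: the graph, the unit edge costs, and the lazy `cost(u, v)` callback (which is where an expensive-to-compute edge evaluation would occur) are all illustrative assumptions.

```python
import heapq

def dijkstra(neighbors, cost, start, goal):
    """Minimal shortest-path search on an explicit graph.

    neighbors: node -> iterable of successor nodes
    cost: (u, v) -> non-negative edge cost (evaluated lazily, on expansion)
    Returns (path, path_cost), or (None, inf) if the goal is unreachable.
    """
    frontier = [(0.0, start)]          # min-heap of (cost-so-far, node)
    best = {start: 0.0}                # cheapest known cost to each node
    parent = {start: None}             # back-pointers for path recovery
    while frontier:
        g, u = heapq.heappop(frontier)
        if u == goal:
            path = []
            while u is not None:       # walk back-pointers to the start
                path.append(u)
                u = parent[u]
            return list(reversed(path)), g
        if g > best[u]:
            continue                   # stale heap entry; skip it
        for v in neighbors(u):
            new_g = g + cost(u, v)     # edge cost computed only when needed
            if v not in best or new_g < best[v]:
                best[v] = new_g
                parent[v] = u
                heapq.heappush(frontier, (new_g, v))
    return None, float("inf")

# Toy example: a 3x3 four-connected grid with unit edge costs.
grid = {(x, y) for x in range(3) for y in range(3)}

def nbrs(u):
    x, y = u
    return [v for v in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            if v in grid]

path, length = dijkstra(nbrs, lambda u, v: 1.0, (0, 0), (2, 2))
print(path, length)
```

Because the `cost` callback is invoked only when a node is expanded, expensive edge evaluations are deferred until the search actually needs them, which is the usual starting point for lazy-evaluation variants of graph search.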
Visual Dataset Pipeline: From Curation to Long-Tail Learning
Abstract: Computer vision models have proven tremendously capable of recognizing and detecting many real-world objects: cars, people, pets. These models are only possible thanks to a meticulous pipeline in which a task and application are first conceived, followed by curation of an appropriate dataset that collects and labels all the necessary data. Commonly, studies are focused [...]
Optimization of Small Unmanned Ground Vehicle Design using Reconfigurability, Mobility, and Complexity
Abstract: Unmanned ground vehicles are being deployed in increasingly diverse and complex environments. With modern developments in sensing and planning, the field of ground vehicle mobility presents rich possibilities for mechanical innovations that may be especially relevant for unmanned systems. In particular, reconfigurability may enable vehicles to traverse a wider set of terrains with greater [...]
Towards Reconstructing Non-rigidity from a Single Camera
Abstract: In this talk we will discuss how to infer 3D structure from images captured by a single camera, without assuming that the target scenes or objects are static. The non-static setting makes our problem ill-posed and challenging to solve, but it is vital in practical applications where the target of interest is non-static. To solve ill-posed problems, the current trend [...]
Large Scale Dense 3D Reconstruction via Sparse Representations
Abstract: Dense 3D scene reconstruction is in high demand today for view synthesis, navigation, and autonomous driving. A practical reconstruction system takes as input multi-view scans of the target captured with RGB-D cameras, LiDARs, or monocular cameras, computes sensor poses, and outputs a scene reconstruction. These algorithms are computationally expensive and memory-intensive due to the volume of 3D data. [...]
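One common family of sparse representations for dense reconstruction stores a truncated signed distance function (TSDF) in a hash map keyed by voxel coordinates, so memory is allocated only for observed space. The sketch below is a generic illustration under assumed parameters (voxel size, truncation distance, running-average fusion), not the system presented in this talk.

```python
import math
from collections import defaultdict

VOXEL_SIZE = 0.05  # meters; illustrative value, not from the talk

def voxel_key(point):
    """Map a 3D point to the integer coordinates of its containing voxel."""
    return tuple(int(math.floor(c / VOXEL_SIZE)) for c in point)

class SparseTSDF:
    """Truncated signed-distance values stored only where observations exist."""

    def __init__(self, trunc=0.15):
        self.trunc = trunc                 # truncation band around surfaces
        self.tsdf = {}                     # voxel key -> averaged distance
        self.weight = defaultdict(float)   # voxel key -> observation weight

    def integrate(self, point, signed_dist):
        """Fuse one signed-distance observation via a running weighted average."""
        d = max(-self.trunc, min(self.trunc, signed_dist))  # clamp to band
        k = voxel_key(point)
        w = self.weight[k]
        self.tsdf[k] = (self.tsdf.get(k, 0.0) * w + d) / (w + 1.0)
        self.weight[k] = w + 1.0

# Two observations of the same surface point fuse into one voxel;
# unobserved space costs no memory at all.
vol = SparseTSDF()
vol.integrate((0.10, 0.02, 0.33), 0.01)
vol.integrate((0.10, 0.02, 0.33), 0.03)
print(len(vol.tsdf))
```

The design choice this illustrates is the trade at the heart of sparse methods: a dense grid of this resolution over even a small room would allocate millions of voxels up front, while the hash-mapped version grows only with the observed surface area.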
From Reinforcement Learning to Robot Learning: Leveraging Prior Data and Shared Evaluation
Abstract: Unlike most machine learning applications, robotics involves physical constraints that make off-the-shelf learning challenging. Difficulties in large-scale data collection and training present a major roadblock to applying today’s data-intensive algorithms. Robot learning has an additional roadblock in evaluation: every physical space is different, making results across labs inconsistent. Two common assumptions of the robot [...]
Building 4D Models of Objects and Scenes from Monocular Videos
Abstract: We explore how to infer the time-varying 3D structures of generic deformable objects and dynamic scenes from monocular videos. A solution to this problem is essential for virtual reality and robotics applications. However, inferring 4D structures given 2D observations is challenging due to its under-constrained nature. In a casual setup where there is neither [...]
Learning via Visual-Tactile Interaction
Abstract: Humans learn by interacting with their surroundings using all of their senses. The first of these senses to develop is touch, and it is the first way that young humans explore their environment, learn about objects, and tune their cost functions (via pain or treats). Yet, robots are often denied this highly informative and [...]
Redefining the Perception-Action Interface: Visual Action Representations for Contact-Centric Manipulation
Abstract: In robotics, understanding the link between perception and action is pivotal. Typically, perception systems process sensory data into state representations like segmentations and bounding boxes, which a planner then uses to plan actions. However, this state-estimation approach can fail in environments with partial observability and in cases with challenging object properties like transparency and deformability. [...]