PhD Speaking Qualifier
A Multi-view Synthetic and Real-world Human Activity Recognition Dataset
Abstract: Advancements in Human Activity Recognition (HAR) rely in part on the creation of datasets that cover a broad range of activities under various conditions. Unfortunately, obtaining and labeling datasets containing human activity is complex, laborious, and costly. One way to mitigate these difficulties with sufficient generality to provide robust activity recognition on unseen data is [...]
Dense 3D Representation Learning for Geometric Reasoning in Manipulation Tasks
Abstract: When solving a manipulation task like "put away the groceries" in real environments, robots must understand what *can* happen in these environments, as well as what *should* happen in order to accomplish the task. This knowledge can enable downstream robot policies to directly reason about which actions they should execute, and rule out behaviors [...]
Learning novel objects during robot exploration via human-informed few-shot detection
Abstract: Autonomous mobile robots exploring unfamiliar environments often need to detect target objects during exploration. The most prevalent approach is to use conventional object detection models, training the detector in advance of robot deployment on a large annotated image dataset with a fixed, predefined set of object categories. However, this approach lacks the capability [...]
Continually Improving Robots
Abstract: General-purpose robots should be able to perform arbitrary manipulation tasks and get better at performing new ones as they obtain more experience. The current paradigm in robot learning involves training a policy, in simulation or directly in the real world, with engineered rewards or demonstrations. However, for robots that need to keep learning [...]
3D-aware Conditional Image Synthesis
Abstract: We propose pix2pix3D, a 3D-aware conditional generative model for controllable photorealistic image synthesis. Given a 2D label map, such as a segmentation or edge map, our model learns to synthesize a corresponding image from different viewpoints. To enable explicit 3D user control, we extend conditional generative models with neural radiance fields. Given widely-available posed [...]
Robotic Climbing for Extreme Terrain Exploration
Abstract: Climbing robots can investigate scientifically valuable sites that are inaccessible to conventional rovers due to steep terrain features. Robots equipped with microspine grippers are particularly well-suited to ascending rocky cliff faces, but existing designs are either large and slow, or limited to relatively flat surfaces such as buildings. We have developed a novel free-climbing [...]
Multi-Objective Ergodic Search for Dynamic Information Maps
Abstract: Robotic explorers are essential tools for gathering information about regions that are inaccessible to humans. For applications like planetary exploration or search and rescue, robots use prior knowledge about the area to guide their search. Ergodic search methods find trajectories that effectively balance exploring unknown regions and exploiting prior information. In many search based [...]
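As background for the ergodic search methods mentioned above, the spectral ergodic metric of Mathew and Mezić is the standard way to quantify how well a trajectory's time-averaged statistics match an information map; the formulation below is offered only as context and is not necessarily the exact objective used in this talk:

\[
\mathcal{E}\bigl(x(\cdot)\bigr) = \sum_{k} \Lambda_k \,\bigl| c_k - \phi_k \bigr|^2,
\qquad
c_k = \frac{1}{T} \int_0^T F_k\bigl(x(t)\bigr)\, dt,
\]

where \( \phi_k \) are the Fourier coefficients of the (possibly dynamic) information map over the search domain, \( F_k \) are the Fourier basis functions, \( c_k \) are the corresponding coefficients of the trajectory's spatial distribution, and \( \Lambda_k = (1 + \lVert k \rVert^2)^{-s} \) are weights that de-emphasize high-frequency terms. Minimizing this metric drives the robot to spend time in regions in proportion to the expected information there, which is the exploration-exploitation balance the abstract refers to.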
Observing Assistance Preferences via User-controlled Arbitration in Shared Control
Abstract: What factors influence people’s preferences for robot assistance during human-robot collaboration tasks? Answering this question can help roboticists formalize definitions of assistance that lead to higher user satisfaction and increased user acceptance of assistive technology. Often in the human-robot collaboration literature, we see assistance paradigms that aim to optimize task success metrics and/or measures [...]
Safely Influencing Humans in Human-Robot Interaction
Abstract: Robots are becoming more common in industrial manufacturing because of their speed and precision on repetitive tasks, but they lack the flexibility of human collaborators. In order to take advantage of both humans’ and robots’ abilities, we investigate how to improve the efficiency of human-robot collaborations by making sure that robots both 1. stay [...]
Inductive Biases for Learning Long-Horizon Manipulation Skills
Abstract: Enabling robots to execute temporally extended sequences of behaviors is a challenging problem for learned systems, due to the difficulty of learning both high-level task information and low-level control. In this talk, I will discuss three approaches that we have developed to address this problem. Each of these approaches centers on an inductive bias [...]