PhD Speaking Qualifier
Solving Constraint Tasks with Memory-Based Learning
Abstract: In constraint tasks, the current task state heavily limits what actions are available to an agent. Mechanical constraints exist in many common tasks such as construction, disassembly, and rearrangement, and task-space constraints exist in an even broader range of tasks. Deep reinforcement learning algorithms have typically struggled with constraint tasks for two main [...]
Head-Worn Assistive Teleoperation of Mobile Manipulators
Abstract: Mobile manipulators in the home can provide increased autonomy to individuals with severe motor impairments, who often cannot complete activities of daily living (ADLs) without the help of a caregiver. Teleoperation of an assistive mobile manipulator could enable an individual with motor impairments to independently perform self-care and household tasks, yet limited motor function [...]
Text Classification with Class Descriptions Only
Abstract: In this work, we introduce KeyClass, a weakly-supervised text classification framework that learns from class-label descriptions only, without the need to use any human-labeled documents. It leverages the linguistic domain knowledge stored within pre-trained language models and data programming to automatically label documents. We demonstrate its efficacy and flexibility by comparing it to state-of-the-art [...]
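KeyClass itself pairs pretrained language models with data programming, which is beyond a short sketch. As generic background, the core idea of labeling documents from class descriptions alone can be illustrated with a toy example (the `label_by_description` helper is hypothetical; real systems use language-model embeddings rather than raw word overlap):

```python
# Toy illustration: assign each unlabeled document to the class whose
# textual description it overlaps with most (cosine over word counts).
# This is a generic sketch, not the KeyClass pipeline, which uses
# pretrained language-model embeddings and data programming.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def label_by_description(doc: str, class_descriptions: dict) -> str:
    doc_vec = Counter(doc.lower().split())
    scores = {name: cosine(doc_vec, Counter(desc.lower().split()))
              for name, desc in class_descriptions.items()}
    return max(scores, key=scores.get)

descriptions = {
    "sports": "games athletes teams scores matches tournament",
    "politics": "government election policy senate vote law",
}
print(label_by_description("the team won the championship matches", descriptions))
```

Embedding-based variants replace the word-count vectors with dense sentence embeddings, which lets semantically related but non-overlapping words match.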
Multi-Object Tracking in the Crowd
Abstract: In this talk, I will focus on the problem of multi-object tracking in crowded scenes. Tracking within crowds is particularly challenging due to heavy occlusion and frequent crossover between tracking targets. The problem becomes more difficult when we only have noisy bounding boxes due to background and neighboring objects. Existing tracking methods try to [...]
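The talk's crowd-specific method is only summarized above; as general background, a common baseline for frame-to-frame association in multi-object tracking is greedy matching of tracks to detections by bounding-box IoU (a standard sketch, not the speaker's approach):

```python
# Generic IoU-based greedy association between existing tracks and new
# detections. Boxes are (x1, y1, x2, y2). This baseline degrades under
# heavy occlusion and crossover, which motivates crowd-specific methods.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def greedy_match(tracks, detections, threshold=0.3):
    # Consider all pairs, highest IoU first; each track and detection
    # may be used at most once.
    pairs = sorted(((iou(t, d), i, j)
                    for i, t in enumerate(tracks)
                    for j, d in enumerate(detections)),
                   reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, i, j in pairs:
        if score < threshold:
            break
        if i not in used_t and j not in used_d:
            matches.append((i, j))
            used_t.add(i)
            used_d.add(j)
    return matches
```

With noisy boxes, the IoU scores of a target and its neighbors can become nearly indistinguishable, which is one way crossover errors arise.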
Magnification-invariant retinal distance estimation using a laser aiming beam
Abstract: Retinal surgery procedures like epiretinal membrane peeling and retinal vein cannulation require surgeons to manipulate very delicate structures in the eye with little room for error. Many robotic surgery systems have been developed to help surgeons and enforce safeguards during these demanding procedures. One essential piece of information that is required to create and [...]
Bridging Humans and Generative Models
Abstract: Deep generative models make visual content creation more accessible to novice and professional users alike by automating the synthesis of diverse, realistic content based on a collected dataset. People often use generative models as data-driven sources, making it challenging to personalize a model easily. Currently, personalizing a model requires careful data curation, which is [...]
Impulse considerations for reasoning about intermittent contacts
Abstract: Many of our interactions with the environment involve making and breaking contacts. However, it is not always obvious how one should reason about these intermittent contacts (sequence, timings, locations) in an online and adaptive way. This is particularly relevant in gait generation for legged locomotion control, where it is standard to simply predefine and [...]
Robust Incremental Smoothing and Mapping
Abstract: In this work, we present a method for robust optimization in online incremental Simultaneous Localization and Mapping (SLAM). Due to the NP-hardness of data association in the presence of perceptual aliasing, tractable (approximate) approaches to data association will produce erroneous measurements. We require SLAM back-ends that can converge to accurate solutions in the presence [...]
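The abstract is truncated, so the specific back-end is not shown; as standard background, robust SLAM back-ends commonly bound the influence of erroneous measurements with a robust cost such as the Huber loss (a textbook formula, not necessarily the method of this talk):

```python
# Huber robust cost: quadratic for small residuals, linear for large
# ones, so a single bad data association cannot dominate the objective
# the way it would under a pure least-squares cost.
def huber(residual: float, delta: float = 1.0) -> float:
    r = abs(residual)
    if r <= delta:
        return 0.5 * r * r
    return delta * (r - 0.5 * delta)
```

Because the cost grows only linearly beyond `delta`, outlier loop closures contribute a bounded gradient during optimization.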
Robotic Interestingness via Human-Informed Few-Shot Object Detection
Abstract: Interestingness recognition is crucial for decision making in autonomous exploration for mobile robots. Previous work proposed an unsupervised online learning approach that adapts to environments and detects interesting scenes quickly, but lacks the ability to adapt to human-informed interesting objects. To solve this problem, we introduce a human-interactive framework, AirInteraction, that can detect [...]
FRIDA: Supporting Artistic Communication in Real-World Image Synthesis Through Diverse Input Modalities
Abstract: FRIDA, a Framework and Robotics Initiative for Developing Arts, is a robot painting system designed to translate an artist's high-level intentions into real world paintings. FRIDA can paint from combinations of input images, text, style examples, sounds, and sketches. Planning is performed in a differentiable, simulated environment created using real data from the robot [...]
Robust and Context-Aware Real-Time Collaborative Robot Handling with Dynamic Gesture Commands
Abstract: Real-time collaborative robot (cobot) handling is a task where the cobot maneuvers an object under human dynamic gesture commands. Enabling dynamic gesture commands is useful when the human needs to avoid direct contact with the robot or the object handled by the robot. However, the key challenge lies in the heterogeneity in human behaviors [...]
Dynamic Route Guidance in Vehicle Networks by Simulating Future Traffic Patterns
Abstract: Roadway congestion leads to wasted time and money and environmental damage. Since adding more roadway capacity is often not possible in urban environments, it is becoming more important to use existing road networks more efficiently. Toward this goal, recent research in real-time, schedule-driven intersection control has shown an ability to significantly reduce the delays [...]
Controllable Visual-Tactile Synthesis
Abstract: Deep generative models have various content creation applications such as graphic design, e-commerce, and virtual try-on. However, current works mainly focus on synthesizing realistic visual outputs, often ignoring other sensory modalities, such as touch, which limits physical interaction with users. The main challenges for multi-modal synthesis lie in the significant scale discrepancy between vision [...]
Perceiving Particles Inside a Container using Dynamic Touch Sensing
Abstract: Dynamic touch sensing has shown potential for multiple tasks. In this talk, I will present how we utilize dynamic touch sensing to perceive particles inside a container through two tasks: classifying the particles and estimating their properties. First, we try to recognize what is inside [...]
Examining the Role of Adaptation in Human-Robot Collaboration
Abstract: Human and AI partners increasingly need to work together to perform tasks as a team. In order to act effectively as a teammate, a collaborative AI should reason about how its behaviors interplay with the strategies and skills of human team members as they coordinate on achieving joint goals. This talk will discuss a formalism for [...]
A Multi-view Synthetic and Real-world Human Activity Recognition Dataset
Abstract: Advancements in Human Activity Recognition (HAR) rely partially on the creation of datasets that cover a broad range of activities under various conditions. Unfortunately, obtaining and labeling datasets containing human activity is complex, laborious, and costly. One way to mitigate these difficulties with sufficient generality to provide robust activity recognition on unseen data is [...]
Dense 3D Representation Learning for Geometric Reasoning in Manipulation Tasks
Abstract: When solving a manipulation task like "put away the groceries" in real environments, robots must understand what *can* happen in these environments, as well as what *should* happen in order to accomplish the task. This knowledge can enable downstream robot policies to directly reason about which actions they should execute, and rule out behaviors [...]
Learning novel objects during robot exploration via human-informed few-shot detection
Abstract: Autonomous mobile robots exploring unfamiliar environments often need to detect target objects during exploration. The most prevalent approach is to use conventional object detection models, training the detector on a large image-annotation dataset with a fixed, predefined set of object categories in advance of robot deployment. However, this approach lacks the capability [...]
Continually Improving Robots
Abstract: General purpose robots should be able to perform arbitrary manipulation tasks, and get better at performing new ones as they obtain more experience. The current paradigm in robot learning involves training a policy, in simulation or directly in the real world, with engineered rewards or demonstrations. However, for robots that need to keep learning [...]
3D-aware Conditional Image Synthesis
Abstract: We propose pix2pix3D, a 3D-aware conditional generative model for controllable photorealistic image synthesis. Given a 2D label map, such as a segmentation or edge map, our model learns to synthesize a corresponding image from different viewpoints. To enable explicit 3D user control, we extend conditional generative models with neural radiance fields. Given widely-available posed [...]
Robotic Climbing for Extreme Terrain Exploration
Abstract: Climbing robots can investigate scientifically valuable sites that are inaccessible to conventional rovers due to steep terrain features. Robots equipped with microspine grippers are particularly well-suited to ascending rocky cliff faces, but existing designs are either large and slow, or limited to relatively flat surfaces such as buildings. We have developed a novel free-climbing [...]