PhD Speaking Qualifier
Robotic Interestingness via Human-Informed Few-Shot Object Detection
Abstract: Interestingness recognition is crucial for decision making in autonomous exploration for mobile robots. Previous work proposed an unsupervised online learning approach that can adapt to environments and detect interesting scenes quickly, but it lacks the ability to adapt to human-informed interesting objects. To solve this problem, we introduce a human-interactive framework, AirInteraction, that can detect [...]
FRIDA: Supporting Artistic Communication in Real-World Image Synthesis Through Diverse Input Modalities
Abstract: FRIDA, a Framework and Robotics Initiative for Developing Arts, is a robot painting system designed to translate an artist's high-level intentions into real-world paintings. FRIDA can paint from combinations of input images, text, style examples, sounds, and sketches. Planning is performed in a differentiable, simulated environment created using real data from the robot [...]
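For readers unfamiliar with differentiable planning, the toy sketch below shows the core idea: because the simulated painting environment is differentiable, stroke parameters can be optimized by gradient descent toward a target. The Gaussian-blob brush model, loss, and all names here are illustrative assumptions, not FRIDA's actual components.

```python
# Minimal sketch of gradient-based stroke planning in a differentiable
# simulator (illustrative only; not FRIDA's actual brush model).
import torch

def render(strokes, H=64, W=64):
    """Render strokes as soft Gaussian blobs: a toy stand-in for a
    brush model learned from real robot data."""
    ys = torch.linspace(0, 1, H).view(H, 1)
    xs = torch.linspace(0, 1, W).view(1, W)
    canvas = torch.zeros(H, W)
    for cx, cy, width, intensity in strokes:
        sigma = width.abs() * 0.1 + 1e-2          # keep blobs well-posed
        blob = torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                         / (2 * sigma ** 2))
        canvas = canvas + intensity * blob
    return canvas.clamp(0, 1)

target = torch.rand(64, 64)                       # stand-in target image

# Each stroke: (x, y, width, intensity), optimized end to end because
# the renderer above is differentiable.
strokes = torch.rand(20, 4, requires_grad=True)
opt = torch.optim.Adam([strokes], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(strokes), target)
    loss.backward()
    opt.step()
```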
Robust and Context-Aware Real-Time Collaborative Robot Handling with Dynamic Gesture Commands
Abstract: Real-time collaborative robot (cobot) handling is a task where the cobot maneuvers an object under a human's dynamic gesture commands. Enabling dynamic gesture commands is useful when the human needs to avoid direct contact with the robot or the object handled by the robot. However, the key challenge lies in the heterogeneity in human behaviors [...]
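As a rough illustration of the interface (not the talk's method), the sketch below maps a dynamic gesture, classified from a sliding window of hand keypoints, to a cobot velocity command; the gesture vocabulary, classifier, and velocities are all hypothetical.

```python
# Hypothetical sketch: recognized dynamic gestures -> cobot velocity
# commands. Labels and magnitudes are assumptions for illustration.
from collections import deque
import numpy as np

GESTURE_TO_VELOCITY = {                 # m/s commands, illustrative only
    "push_left":  np.array([-0.1, 0.0, 0.0]),
    "push_right": np.array([ 0.1, 0.0, 0.0]),
    "raise":      np.array([ 0.0, 0.0, 0.1]),
    "stop":       np.array([ 0.0, 0.0, 0.0]),
}

class GestureCommander:
    def __init__(self, classifier, window=30):
        self.classifier = classifier     # any model over keypoint windows
        self.buffer = deque(maxlen=window)

    def step(self, hand_keypoints):
        """Feed one frame of hand keypoints; return a velocity command."""
        self.buffer.append(hand_keypoints)
        if len(self.buffer) < self.buffer.maxlen:
            return GESTURE_TO_VELOCITY["stop"]    # not enough context yet
        label = self.classifier(np.stack(self.buffer))
        return GESTURE_TO_VELOCITY.get(label, GESTURE_TO_VELOCITY["stop"])
```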
Dynamic Route Guidance in Vehicle Networks by Simulating Future Traffic Patterns
Abstract: Roadway congestion wastes time and money and damages the environment. Since adding more roadway capacity is often not possible in urban environments, it is becoming more important to use existing road networks more efficiently. Toward this goal, recent research in real-time, schedule-driven intersection control has shown an ability to significantly reduce the delays [...]
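To make the idea of routing against simulated future conditions concrete, here is a minimal sketch of a time-dependent shortest-path search in which each edge's cost depends on the predicted traffic at the time the vehicle enters it. The `delay` predictor stands in for a real traffic simulation, and the search assumes FIFO (no-overtaking) edge delays.

```python
# Minimal sketch of time-dependent route choice: pick routes using
# *predicted* future edge delays rather than current conditions.
import heapq

def fastest_route(graph, delay, source, goal, depart_t):
    """graph: node -> list of neighbor nodes.
    delay(u, v, t): predicted travel time on edge (u, v) entered at t."""
    best = {source: depart_t}
    queue = [(depart_t, source, [source])]
    while queue:
        t, u, path = heapq.heappop(queue)
        if u == goal:
            return path, t - depart_t
        if t > best.get(u, float("inf")):
            continue
        for v in graph[u]:
            arrive = t + delay(u, v, t)   # cost depends on entry time
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(queue, (arrive, v, path + [v]))
    return None, float("inf")

# Usage with a toy rush-hour delay model (times in minutes):
graph = {"A": ["B"], "B": ["C"], "C": []}
rush_hour = lambda u, v, t: 5 + (3 if 480 <= t <= 540 else 0)
path, travel_time = fastest_route(graph, rush_hour, "A", "C", depart_t=490)
```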
Controllable Visual-Tactile Synthesis
Abstract: Deep generative models have various content creation applications such as graphic design, e-commerce, and virtual try-on. However, current methods mainly focus on synthesizing realistic visual outputs, often ignoring other sensory modalities, such as touch, which limits physical interaction with users. The main challenges for multi-modal synthesis lie in the significant scale discrepancy between vision [...]
Perceiving Particles Inside a Container using Dynamic Touch Sensing
Abstract: Dynamic touch sensing has shown potential for multiple tasks. In this talk, I will present how we use dynamic touch sensing to perceive particles inside a container through two tasks: classifying the particles and estimating their properties. First, we try to recognize what is inside [...]
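A minimal sketch of the classification task, under the assumption that the robot records a 1-D vibration signal while shaking the container; the band-energy features and random-forest classifier are generic stand-ins, not the method presented in the talk.

```python
# Hedged sketch: classify container contents from a vibration recording
# captured during a shaking motion. Feature choice is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def band_energies(signal, n_bands=16):
    """Summarize a 1-D vibration recording by log energy per freq band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

def fit_particle_classifier(recordings, labels):
    """recordings: list of 1-D arrays; labels: e.g. "rice", "screws"."""
    X = np.stack([band_energies(r) for r in recordings])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```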
Examining the Role of Adaptation in Human-Robot Collaboration
Abstract: Human and AI partners increasingly need to work together to perform tasks as a team. To act effectively as teammates, collaborative AI agents should reason about how their behaviors interplay with the strategies and skills of human team members as they coordinate on achieving joint goals. This talk will discuss a formalism for [...]
A Multi-view Synthetic and Real-world Human Activity Recognition Dataset
Abstract: Advancements in Human Activity Recognition (HAR) rely in part on the creation of datasets that cover a broad range of activities under various conditions. Unfortunately, obtaining and labeling datasets containing human activity is complex, laborious, and costly. One way to mitigate these difficulties with sufficient generality to provide robust activity recognition on unseen data is [...]
Dense 3D Representation Learning for Geometric Reasoning in Manipulation Tasks
Abstract: When solving a manipulation task like "put away the groceries" in real environments, robots must understand what *can* happen in these environments, as well as what *should* happen in order to accomplish the task. This knowledge can enable downstream robot policies to directly reason about which actions they should execute, and rule out behaviors [...]
Learning Novel Objects During Robot Exploration via Human-Informed Few-Shot Detection
Abstract: Autonomous mobile robots exploring unfamiliar environments often need to detect target objects during exploration. The most prevalent approach is to use conventional object detection models: the detector is trained on a large image-annotation dataset covering a fixed, predefined set of object categories, in advance of robot deployment. However, it lacks the capability [...]
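As a generic illustration of few-shot adaptation (not the talk's actual pipeline), the sketch below freezes a pretrained backbone and fits only a small classification head on a handful of human-labeled object crops, so new categories can be added at deployment time.

```python
# Illustrative few-shot recipe: frozen pretrained features + a small
# trainable head fit on a few human-labeled crops. All names here are
# assumptions for the sketch, not the talk's method.
import torch
import torch.nn as nn
import torchvision

def adapt_detector(crops, labels, n_classes, steps=100):
    """crops: (N, 3, 224, 224) images of the new objects, N small;
    labels: (N,) long tensor of class indices."""
    backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    backbone.fc = nn.Identity()          # expose 512-dim features
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False          # keep pretrained features fixed
    head = nn.Linear(512, n_classes)     # only this layer is trained
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    with torch.no_grad():
        feats = backbone(crops)          # embed the few examples once
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(head(feats), labels)
        loss.backward()
        opt.step()
    return backbone, head
```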