Student Talks
Goal-Expressive Movement for Social Navigation: Where and When to Behave Legibly
Abstract: Robots often need to communicate their navigation goals so that observers can anticipate their future actions. Enabling observers to infer where a robot is going from its movements is particularly important as robots begin to share workplaces, sidewalks, and social spaces with humans. We can use legible motion, or movements that use intentional [...]
Eye Gaze for Intelligent Driving
Abstract: Intelligent vehicles have been proposed as one path to increasing traffic safety and reducing on-road crashes. Driving “intelligence” today takes many forms, ranging from simple blind-spot occupancy and forward-collision warnings to distance-aware cruise control and, in certain situations, full driving autonomy. Primarily, these methods are outward-facing and operate on [...]
Learning to Perceive and Predict Everyday Interactions
Abstract: This thesis aims to build computer systems to understand everyday hand-object interactions in the physical world – both perceiving ongoing interactions in 3D space and predicting possible interactions. This ability is crucial for applications such as virtual reality, robotic manipulation, and augmented reality. The problem is inherently ill-posed due to the challenges of one-to-many [...]
Sensorized Soft Material Systems with Integrated Electronics and Computing
Abstract: The integration of soft and multifunctional materials in emerging technologies is becoming more widespread due to their ability to enhance functionality in ways not possible with typical rigid alternatives. This trend is evident in various fields. For example, wearable technologies are increasingly designed using soft materials to improve modulus compatibility with biological [...]
Deep Learning for Tactile Sensing: Development to Deployment
Abstract: The importance of sensing for robots interacting with the physical environment is widely acknowledged. However, few contemporary sensors have gained widespread use among roboticists. This thesis proposes a framework for incorporating sensors into a robot learning paradigm, from development to deployment, through the lens of ReSkin -- a versatile and scalable magnetic tactile sensor. [...]
Learning and Translating Temporal Abstractions of Behaviour across Humans and Robots
Abstract: Humans are remarkably adept at learning to perform tasks by imitating other people demonstrating those tasks. Key to this is our ability to reason abstractly about the high-level strategy of the task at hand (such as the recipe for cooking a dish) and the behaviours needed to solve this task (such as the behaviour [...]
Towards Underwater 3D Visual Perception
Abstract: With modern robotic technologies, seafloor imagery has become more accessible to both researchers and the public. This thesis leverages deep learning and 3D vision techniques to extract valuable information from seafloor image observations. Despite the widespread use of deep learning and 3D vision algorithms across various fields, underwater imaging presents unique challenges, such as [...]
Assistive value alignment using in-situ naturalistic human behaviors
Abstract: As collaborative robots are increasingly deployed in personal environments, such as the home, it is critical that they complete tasks in ways consistent with personal preferences. Determining personal preferences for completing household chores, however, is challenging. Many household chores, such as setting a table or loading a dishwasher, are sequential and open-vocabulary, creating a [...]
Teaching Robots to Drive: Scalable Policy Improvement via Human Feedback
Abstract: A long-standing problem in autonomous driving is grappling with the long tail of rare scenarios for which little or no data is available. Although learning-based methods scale with data, it is unclear whether simply ramping up data collection will eventually make this problem go away. Approaches which rely on simulation or world modeling offer some [...]
Exploration for Continually Improving Robots
Abstract: Data-driven learning is a powerful paradigm for enabling robots to learn skills. Prominent current approaches collect large datasets of robot behavior via teleoperation or simulation and then train policies on them. For these policies to generalize to diverse tasks and scenes, a large burden is placed on constructing a rich initial dataset, which is [...]