Student Talks
Solving Constraint Tasks with Memory-Based Learning
Abstract: In constraint tasks, the current task state heavily limits what actions are available to an agent. Mechanical constraints exist in many common tasks such as construction, disassembly, and rearrangement, and task-space constraints exist in an even broader range of tasks. Deep reinforcement learning algorithms have typically struggled with constraint tasks for two main [...]
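A minimal toy sketch of the idea that the task state restricts which actions are available (illustrative only, not the method from the talk; the state fields, action names, and mask logic below are assumptions for the demo):

```python
# Illustrative only: a toy example of how a task state can restrict the set of
# valid actions, as in constraint tasks. State encoding and actions are hypothetical.
import numpy as np

ACTIONS = ["place_block", "remove_block", "rotate_block"]

def valid_action_mask(state):
    """Return a boolean mask over ACTIONS for a toy assembly state."""
    mask = np.ones(len(ACTIONS), dtype=bool)
    if state["num_placed"] == 0:
        mask[ACTIONS.index("remove_block")] = False   # nothing to remove yet
        mask[ACTIONS.index("rotate_block")] = False   # nothing to rotate yet
    if not state["supported"]:
        mask[ACTIONS.index("place_block")] = False    # placing would collapse the structure
    return mask

logits = np.array([0.2, 1.5, -0.3])                   # raw policy scores
mask = valid_action_mask({"num_placed": 0, "supported": True})
masked = np.where(mask, logits, -np.inf)              # invalid actions get -inf
probs = np.exp(masked - masked.max())
probs /= probs.sum()
print(dict(zip(ACTIONS, probs.round(3))))             # only valid actions keep probability
```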
Head-Worn Assistive Teleoperation of Mobile Manipulators
Abstract: Mobile manipulators in the home can provide increased autonomy to individuals with severe motor impairments, who often cannot complete activities of daily living (ADLs) without the help of a caregiver. Teleoperation of an assistive mobile manipulator could enable an individual with motor impairments to independently perform self-care and household tasks, yet limited motor function [...]
Text Classification with Class Descriptions Only
Abstract: In this work, we introduce KeyClass, a weakly-supervised text classification framework that learns from class-label descriptions only, without any human-labeled documents. It leverages the linguistic domain knowledge stored within pre-trained language models and data programming to automatically label documents. We demonstrate its efficacy and flexibility by comparing it to state-of-the-art [...]
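A minimal sketch of the general idea of labeling documents from class descriptions alone: embed the descriptions and the documents with a pretrained sentence encoder and assign each document to the most similar class. The encoder choice, example classes, and documents are assumptions, and this is not the actual KeyClass implementation:

```python
# Sketch: weakly label documents by similarity to class descriptions.
import numpy as np
from sentence_transformers import SentenceTransformer

class_descriptions = {
    "sports":   "articles about games, athletes, teams, and competitions",
    "politics": "articles about elections, governments, and public policy",
}
documents = [
    "The striker scored twice as the home team won the final.",
    "Parliament passed the new budget after a long debate.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")      # assumed encoder choice
class_vecs = encoder.encode(list(class_descriptions.values()), normalize_embeddings=True)
doc_vecs = encoder.encode(documents, normalize_embeddings=True)

sims = doc_vecs @ class_vecs.T                          # cosine similarity (unit vectors)
labels = list(class_descriptions.keys())
for doc, row in zip(documents, sims):
    print(labels[int(np.argmax(row))], "<-", doc)       # weak label for each document
```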
Multi-Object Tracking in the Crowd
Abstract: In this talk, I will focus on the problem of multi-object tracking in crowded scenes. Tracking within crowds is particularly challenging due to heavy occlusion and frequent crossover between tracking targets. The problem becomes even harder when only noisy bounding boxes are available because of background clutter and neighboring objects. Existing tracking methods try to [...]
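For context, a generic tracking-by-detection baseline (not the speaker's method) that greedily associates current-frame detections to tracks by IoU; with noisy boxes or crossing targets, neighboring objects get similar scores, which is exactly when identity switches occur. Box coordinates and the threshold are illustrative assumptions:

```python
# Illustrative greedy IoU association, the weakest link in crowded scenes.
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_associate(tracks, detections, iou_thresh=0.3):
    """Return {track_id: detection_index}, taking the best-IoU pairs first."""
    pairs = sorted(
        ((iou(t_box, d_box), t_id, d_idx)
         for t_id, t_box in tracks.items()
         for d_idx, d_box in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = {}, set(), set()
    for score, t_id, d_idx in pairs:
        if score < iou_thresh:
            break
        if t_id in used_t or d_idx in used_d:
            continue
        matches[t_id] = d_idx
        used_t.add(t_id)
        used_d.add(d_idx)
    return matches

tracks = {0: (10, 10, 50, 80), 1: (60, 12, 100, 82)}   # previous-frame boxes
detections = [(62, 14, 102, 84), (12, 11, 52, 81)]     # current-frame boxes
print(greedy_associate(tracks, detections))            # each track keeps its own identity here
```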
Utilizing Panoptic Segmentation and a Locally-Conditioned Neural Representation to Build Richer 3D Maps
Abstract: Advances in deep-learning-based perception and the maturation of volumetric RGB-D mapping algorithms have allowed autonomous robots to be deployed in increasingly complex environments. For robust operation in open-world conditions, however, perceptual capabilities are still lacking. Limitations of commodity depth sensors mean that complex geometries and textures cannot be reconstructed accurately. Semantic understanding is still [...]
Magnification-Invariant Retinal Distance Estimation Using a Laser Aiming Beam
Abstract: Retinal surgery procedures like epiretinal membrane peeling and retinal vein cannulation require surgeons to manipulate very delicate structures in the eye with little room for error. Many robotic surgery systems have been developed to help surgeons and enforce safeguards during these demanding procedures. One essential piece of information that is required to create and [...]
Bridging Humans and Generative Models
Abstract: Deep generative models make visual content creation more accessible to novice and professional users alike by automating the synthesis of diverse, realistic content based on a collected dataset. People often use generative models as data-driven sources, which makes it challenging to personalize a model. Currently, personalizing a model requires careful data curation, which is [...]
Impulse considerations for reasoning about intermittent contacts
Abstract: Many of our interactions with the environment involve making and breaking contacts. However, it is not always obvious how one should reason about these intermittent contacts (sequence, timings, locations) in an online and adaptive way. This is particularly relevant in gait generation for legged locomotion control, where it is standard to simply predefine and [...]
Multi-Human 3D Reconstruction from Monocular RGB Videos
Abstract: We study the problem of multi-human 3D reconstruction from RGB videos captured in the wild. Humans have dynamic motion, and reconstructing them in arbitrary settings is key to building immersive social telepresence, assistive humanoid robots, and augmented reality systems. However, creating such a system requires addressing fundamental issues with previous works regarding the data [...]
Learning and Translating Temporal Abstractions across Humans and Robots
Abstract: Humans possess a remarkable ability to learn to perform tasks from a variety of sources: language, instructions, demonstrations, and so on. In each case, they are able to easily extract the high-level strategy for solving the task, such as the recipe for cooking a dish, while ignoring irrelevant details, such as the precise shape of [...]