Student Talks
Eye Gaze for Intelligent Driving
Abstract: Intelligent vehicles have been proposed as one path to increasing vehicular safety and reducing on-road crashes. Driving intelligence has taken many forms, ranging from simple blind spot occupancy or forward collision warnings to lane keeping and all the way to full driving autonomy in certain situations. Primarily, these methods are outward-facing and operate on [...]
Dense 3D Representation Learning for Geometric Reasoning in Manipulation Tasks
Abstract: When solving a manipulation task like "put away the groceries" in real environments, robots must understand what *can* happen in these environments, as well as what *should* happen in order to accomplish the task. This knowledge can enable downstream robot policies to directly reason about which actions they should execute, and rule out behaviors [...]
Passive Coupling in Robot Swarms
Abstract: In unstructured environments, ant colonies demonstrate remarkable abilities to adaptively form functional structures in response to various obstacles, such as stairs, gaps, and holes. Drawing inspiration from these creatures, robot swarms can collectively exhibit complex behaviors and achieve tasks that individual robots cannot accomplish. Existing modular robot platforms that employ dynamic coupling and decoupling [...]
Learning novel objects during robot exploration via human-informed few-shot detection
Abstract: Autonomous mobile robots exploring unfamiliar environments often need to detect target objects during exploration. The most prevalent approach is to use conventional object detection models, training the object detector in advance of robot deployment on a large image-annotation dataset with a fixed, predefined set of object categories. However, this approach lacks the capability [...]
Learning to Perceive and Predict Everyday Interactions
Abstract: This thesis aims to develop a computer vision system that can understand everyday human interactions with rich spatial information. Such systems can benefit VR/AR, by perceiving reality and modifying its virtual twin, and robotics, by learning manipulation from watching humans. Previous methods have been limited to constrained lab environments or pre-selected objects with [...]
Learning Models and Cost Functions from Unlabeled Data for Off-Road Driving
Abstract: Off-road driving is an important instance of navigation in unstructured environments, which is a key robotics problem with many applications, such as exploration, agriculture, disaster response, and defense. The key challenge in off-road driving is to be able to take in high-dimensional, multi-modal sensing data and use it to make intelligent decisions on [...]
Active Vision for Manipulation
Abstract: Decades of research on computer vision have highlighted the importance of active sensing -- where the agent actively controls parameters of the sensor to improve perception. Research on active perception in the context of robotic manipulation has demonstrated many novel and robust sensing strategies involving a multitude of sensors like RGB and RGBD cameras, a [...]
Continually Improving Robots
Abstract: General-purpose robots should be able to perform arbitrary manipulation tasks, and get better at performing new ones as they obtain more experience. The current paradigm in robot learning involves training a policy, in simulation or directly in the real world, with engineered rewards or demonstrations. However, for robots that need to keep learning [...]
Parallelized Search on Graphs with Expensive-to-Compute Edges
Abstract: Search-based planning algorithms enable robots to come up with well-reasoned long-horizon plans to achieve a given task objective. They formulate the problem as a shortest path problem on a graph embedded in the state space of the domain. Much research has been dedicated to achieving greater planning speeds to enable robots to respond quickly [...]
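To make the shortest-path formulation above concrete, here is a minimal sketch (not the speaker's method) of a Dijkstra-style search in which each edge cost is computed only when its source state is expanded, reflecting the setting of expensive-to-compute edges; the function names `successors` and `edge_cost` are illustrative placeholders.

```python
import heapq


def lazy_shortest_path(start, goal, successors, edge_cost):
    """Best-first search where edge_cost(u, v) may be expensive
    (e.g., a collision check), so it is evaluated only on expansion."""
    frontier = [(0.0, start)]      # (cost-so-far, state)
    best = {start: 0.0}            # cheapest known cost to each state
    parent = {start: None}

    while frontier:
        g, u = heapq.heappop(frontier)
        if u == goal:              # reconstruct the path on success
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return list(reversed(path)), g
        if g > best[u]:            # stale queue entry, skip
            continue
        for v in successors(u):
            g_new = g + edge_cost(u, v)   # expensive evaluation happens here
            if g_new < best.get(v, float("inf")):
                best[v] = g_new
                parent[v] = u
                heapq.heappush(frontier, (g_new, v))
    return None, float("inf")
```

In practice, parallelized variants of this idea evaluate many expensive edges concurrently rather than one at a time; the sketch only illustrates the baseline graph-search formulation.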
MSR Thesis Talk: Chonghyuk Song
Title: Total-Recon: Deformable Scene Reconstruction for Embodied View Synthesis
Abstract: We explore the task of embodied view synthesis from monocular videos of deformable scenes. Given a minute-long RGBD video of people interacting with their pets, we render the scene from novel camera trajectories derived from the in-scene motion of actors: (1) egocentric cameras that simulate the point [...]