Student Talks
Manipulating Objects with Challenging Visual and Geometric Properties
Abstract: Object manipulation is a well-studied domain in robotics, yet manipulation remains difficult for objects with visually and geometrically challenging properties. Visually challenging properties, such as transparency and specularity, break assumptions of Lambertian reflectance that existing methods rely on for grasp estimation. On the other hand, deformable objects such as cloth pose both visual and [...]
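As background (not part of the abstract above), the Lambertian-reflectance assumption it refers to can be written as the view-independent shading model

$$ I(\mathbf{x}) \;=\; \rho(\mathbf{x})\,\max\!\bigl(0,\ \mathbf{n}(\mathbf{x})\cdot\mathbf{l}\bigr)\,L, $$

where $I$ is the observed intensity, $\rho$ the albedo, $\mathbf{n}$ the surface normal, $\mathbf{l}$ the light direction, and $L$ the source intensity. Because the model does not depend on the viewing direction, transparent and specular surfaces, whose appearance changes with viewpoint, violate it, which is what degrades depth sensing and grasp estimation on such objects.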
TIGRIS: An Informed Sampling-based Algorithm for Informative Path Planning
Abstract: In this talk I will present our sampling-based approach to informative path planning that allows us to tackle the challenges of large and high-dimensional search spaces. This is done by performing informed sampling in the high-dimensional continuous space and incorporating potential information gain along edges in the reward estimation. This method rapidly generates a [...]
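A minimal sketch of the edge-reward idea described above, assuming a generic tree-based planner; the names `edge_reward`, `informed_sample`, and the Gaussian sampling bias are illustrative placeholders, not the TIGRIS implementation:

```python
import random

def edge_reward(parent_reward, edge_info_gain, edge_cost, budget_left):
    """Score a candidate tree extension: accumulate the potential
    information gain along the edge, but reject edges that exceed the
    remaining travel budget.  (Illustrative placeholder, not TIGRIS.)"""
    if edge_cost > budget_left:
        return float("-inf")
    return parent_reward + edge_info_gain

def informed_sample(promising_nodes, bounds, bias=0.5, sigma=1.0):
    """Draw a sample in the continuous search space, biased toward the
    neighborhood of high-reward nodes (a stand-in for informed sampling)."""
    if promising_nodes and random.random() < bias:
        anchor = random.choice(promising_nodes)
        return [x + random.gauss(0.0, sigma) for x in anchor]
    return [random.uniform(lo, hi) for lo, hi in bounds]
```

In this toy version, the planner would repeatedly draw informed samples and extend toward the sample whose best incoming edge maximizes `edge_reward`.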
MSR Thesis Talk – Zhe Huang
Title: Distributed Reinforcement Learning for Autonomous Driving
Abstract: Due to the complex and safety-critical nature of autonomous driving, recent works typically test their ideas on simulators designed for the very purpose of advancing self-driving research. Despite the convenience of modeling autonomous driving as a trajectory optimization problem, few of these methods resort to online reinforcement [...]
MSR Thesis Talk – Xinjie Yao
Title: Ride Comfort-Aware Visual Navigation via Self-Supervised Learning
Abstract: Under shared autonomy, wheelchair users expect vehicles to provide safe and comfortable rides while following users’ high-level navigation plans. To find such a path, vehicles negotiate with different terrains and assess their traversal difficulty. Most prior works model surroundings either through geometric representations or semantic classifications, [...]
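One common way to obtain a self-supervised comfort signal is to label traversed terrain with the vehicle's own inertial measurements; this is an assumption made purely for illustration, not a description of the thesis method, and `comfort_label` and its threshold are hypothetical:

```python
import numpy as np

def comfort_label(accel_z, window=200, threshold=0.5):
    """Turn recent vertical acceleration into a binary comfort label.

    Hypothetical self-supervision signal: a terrain patch the wheelchair
    just drove over is labeled 'comfortable' (1.0) if the traversal produced
    little vertical vibration, and 'uncomfortable' (0.0) otherwise."""
    vibration = float(np.std(accel_z[-window:]))
    return 1.0 if vibration < threshold else 0.0
```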
MS Thesis Talk – Shun Iwase
Title: Fast 6D Object Pose Refinement via Deep Texture Rendering
Abstract: We present RePOSE, a fast iterative refinement method for 6D object pose estimation. Prior methods perform refinement by feeding zoomed-in input and rendered RGB images into a CNN and directly regressing an update of a refined pose. Their runtime is slow due to the [...]
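For context, the render-and-compare refinement paradigm the abstract describes for prior methods can be sketched as the loop below; `renderer` and `regressor` are placeholders supplied by the caller, and this is not the RePOSE architecture itself:

```python
def refine_pose(pose, observed_crop, renderer, regressor, n_iters=4):
    """Generic render-and-compare pose refinement (the prior paradigm the
    abstract describes, not RePOSE).

    renderer(pose)            -> image of the object at the current pose estimate
    regressor(obs, rendered)  -> predicted pose update (4x4 matrix)
    Both callables are placeholders."""
    for _ in range(n_iters):
        rendered = renderer(pose)
        delta = regressor(observed_crop, rendered)
        pose = delta @ pose            # left-compose the predicted update
    return pose
```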
Resource-Constrained Learning and Inference for Visual Perception
Abstract: We have witnessed rapid advancement across major computer vision benchmarks in recent years. However, the top solutions' hidden computation cost prevents them from being practically deployable. For example, training large models until convergence may be prohibitively expensive in practice, and autonomous driving or augmented reality may require a reaction time that rivals that [...]
Trajectory Optimization for Thermally-Actuated Soft Planar Robot Limbs
Abstract: Practical use of robotic manipulators made from soft materials requires generating and executing complex motions. We present the first approach for generating trajectories of a thermally-actuated soft robotic manipulator. Based on simplified approximations of the soft arm and its antagonistic shape-memory alloy actuator coils, we justify a dynamics model of a discretized rigid manipulator [...]
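To make the kind of simplified model mentioned above concrete, here is a toy single-link version in which a first-order thermal lag drives the joint torque; the single-link structure and every constant are invented for illustration and do not reflect the talk's discretized multi-link model or its antagonistic SMA actuator pair:

```python
import numpy as np

def simulate(heat_inputs, dt=0.01):
    """Forward-simulate a toy single-link arm whose torque comes from a
    thermally actuated (SMA-like) element with first-order lag.
    All constants are illustrative placeholders."""
    tau_th, gain = 5.0, 0.1            # thermal time constant, torque per unit temperature
    inertia, damping, stiffness = 1e-3, 0.02, 0.05
    theta, omega, temp = 0.0, 0.0, 0.0
    trajectory = []
    for u in heat_inputs:              # u in [0, 1]: heating command
        temp += dt * (u - temp) / tau_th                    # first-order thermal lag
        torque = gain * temp - stiffness * theta - damping * omega
        omega += dt * torque / inertia
        theta += dt * omega
        trajectory.append(theta)
    return np.array(trajectory)
```

A trajectory optimizer would then search over the sequence `heat_inputs` so that the simulated joint motion tracks a desired trajectory.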
Physical Interaction and Manipulation of the Environment using Aerial Robots
Abstract: The physical interaction of aerial robots with their environment has countless potential applications and is an emerging area with many open challenges. Fully-actuated multirotors have been introduced to tackle some of these challenges. They provide complete control over position and orientation and eliminate the need for attaching a multi-DoF manipulation arm to the robot. [...]
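As background on why full actuation gives independent force and torque control (not a claim about the talk's controller), the simplest allocation scheme maps a desired 6-D body wrench to rotor thrusts through the pseudo-inverse of the platform's allocation matrix; the matrix `A` below is assumed to come from the tilted-rotor geometry:

```python
import numpy as np

def allocate_thrusts(wrench, A):
    """Least-squares allocation: wrench = A @ thrusts, so thrusts = pinv(A) @ wrench.

    wrench : desired [fx, fy, fz, tx, ty, tz] in the body frame
    A      : 6 x n allocation matrix from rotor geometry (assumed given)
    Real controllers add thrust saturation and null-space handling."""
    return np.linalg.pinv(A) @ np.asarray(wrench, dtype=float)
```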
Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis
Abstract: Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these [...]
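For context, the radiance-field representation the abstract builds on (NeRF) renders a camera ray $\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}$ by volume rendering the learned density $\sigma$ and color $\mathbf{c}$:

$$ C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma\bigl(\mathbf{r}(t)\bigr)\,\mathbf{c}\bigl(\mathbf{r}(t),\mathbf{d}\bigr)\,dt, \qquad T(t) = \exp\!\Bigl(-\int_{t_n}^{t}\sigma\bigl(\mathbf{r}(s)\bigr)\,ds\Bigr). $$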
Combining Vision-Based Tactile, Proximity, and Global Sensing for Robotic Manipulation
Abstract: I will begin by describing our work on visual servoing of a manipulator and on localizing objects using a robot-mounted suite of vision and vision-based tactile sensors, covering our results, the algorithms used, and lessons learned. We show that by collocating tactile and global (e.g., an RGB(D) camera) sensors, our setup can perform better than using each type [...]
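As background on the visual-servoing component (the standard image-based formulation, not necessarily the controller used in this work), the error between measured image features $\mathbf{s}$ and their desired values $\mathbf{s}^{*}$ is driven to zero with the control law

$$ \mathbf{e} = \mathbf{s} - \mathbf{s}^{*}, \qquad \mathbf{v}_c = -\lambda\,\widehat{\mathbf{L}_{\mathbf{e}}}^{+}\,\mathbf{e}, $$

where $\mathbf{v}_c$ is the commanded camera (end-effector) velocity, $\lambda$ a gain, and $\widehat{\mathbf{L}_{\mathbf{e}}}^{+}$ the pseudo-inverse of an estimate of the interaction matrix relating feature motion to camera motion.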