Abstract:
Robot-assisted dressing could benefit many people, including older adults and individuals with disabilities. In this talk, I will present two pieces of work that apply robot learning to this assistive task. In the first half, I will present our work on a robot-assisted dressing system that uses a learned policy to dress a variety of garments on people with diverse arm poses. We show that, with a proper design of the policy architecture and Q function, reinforcement learning (RL) can learn effective policies from partial point cloud observations that generalize across diverse garments. We further leverage policy distillation to combine multiple policies, each trained on a different range of human arm poses, into a single policy that works over a wide range of arm poses.

In the second half, I will present work that improves the safety and performance of this system by combining the vision and force modalities. Because it is difficult to simulate accurate force data when deformable garments interact with the human body, we learn a force dynamics model directly from real-world data. Our method combines the vision-based policy, trained in simulation, with the force dynamics model, learned in the real world, by solving a constrained optimization problem to infer actions that make progress on the dressing task without applying excessive force to the person, yielding a safer and better-performing robot-assisted dressing system.
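To make the vision-plus-force combination concrete, the following is a minimal sketch of one way such a constrained action selection could look. All names here are hypothetical: the toy `vision_policy_action`, `predicted_force`, the 3-D action space, the force threshold, and the random-shooting solver are illustrative assumptions, not the system's actual components or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two learned components described above:
# a vision-based policy (trained in simulation) and a force dynamics model
# (learned from real-world data). Both are toy functions here.
def vision_policy_action(obs):
    """Nominal end-effector displacement proposed by the vision policy (assumed 3-D)."""
    return np.array([0.05, 0.0, 0.02])

def predicted_force(obs, action):
    """Toy force dynamics model: predicted peak force (N) for a candidate action."""
    return 10.0 * np.linalg.norm(action)

FORCE_LIMIT = 0.6  # N; assumed safety threshold

def safe_action(obs, n_samples=256, noise=0.02):
    """Approximately solve the constrained problem:
         minimize  distance to the vision policy's proposed action
         subject to  predicted force <= FORCE_LIMIT
       via random shooting around the nominal action."""
    nominal = vision_policy_action(obs)
    candidates = np.vstack(
        [nominal, nominal + noise * rng.standard_normal((n_samples, 3))]
    )
    forces = np.array([predicted_force(obs, a) for a in candidates])
    feasible = candidates[forces <= FORCE_LIMIT]
    if len(feasible) == 0:
        return np.zeros(3)  # fall back to no motion if no candidate is safe
    dists = np.linalg.norm(feasible - nominal, axis=1)
    return feasible[np.argmin(dists)]
```

In this toy setup the nominal action already satisfies the force constraint, so it is returned unchanged; when the force model predicts the constraint would be violated, the solver instead picks the nearest safe candidate.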
Committee:
Zackory Erickson, Co-chair
David Held, Co-chair
Oliver Kroemer
Shikhar Bahl