PhD Speaking Qualifier
July 24 (Mon)
An Effective Learning Framework for Active Perception and a Case Study on Liquid Property Estimation
Abstract:
Active perception refers to a perception process in which the robot takes actions to improve its perception. To do this, the robot needs an observation model that predicts what it will observe given the actions it takes. However, existing approaches struggle to learn a good observation model because it must account for all possible robot actions.
In this talk, I will describe approaches we developed to learn a better observation model for active perception. Our key insight is that most actions in the action space are not useful, so we only need to learn the observation model for the actions that actually matter. We propose a family of learning algorithms that can discover useful actions and learn a good observation model for them. We demonstrate our approach with a case study on estimating the viscosity of the liquid in a bottle. Our approach discovers that shaking the bottle at 1.5 Hz and tilting it can accurately predict liquid viscosities ranging from 1 cP (water) to 100,000 cP (toothpaste). Through this study, we show that our approach outperforms all existing methods in both simulation and real-world scenarios.
Committee:
Wenzhen Yuan, Chair
Michael Kaess
Christopher G. Atkeson
Sudharshan Suresh