
In this talk, I will describe approaches we developed to learn a better observation model for active perception. Our key insight is that most actions in the action space are not useful, so we need only learn the observation model for the actions that actually matter. We propose a family of learning algorithms that discover useful actions and learn good observation models for them. We demonstrate our approach with a case study: estimating the viscosity of the liquid in a bottle. Our approach discovers that shaking the bottle at 1.5 Hz and then tilting it yields observations from which liquid viscosity can be accurately predicted, over a range from 1 cP (water) to 100,000 cP (toothpaste). Through this study, we show that our approach outperforms existing methods in both simulation and real-world experiments.
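
As a rough illustration only (not the algorithms presented in the talk), the sketch below captures the underlying idea: score each candidate action by how well the observations it produces predict the latent property, keep only the high-scoring actions, and fit observation models just for those. The action list, the toy sensor model, and all function names here are hypothetical stand-ins.

```python
# Illustrative sketch, not the method from the talk: discover which actions are
# informative about a latent property, then model observations only for those.
# All data below is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical action space: shake the bottle at one of several frequencies (Hz).
candidate_actions = [0.5, 1.0, 1.5, 2.0, 3.0]

def collect_observation(action_hz, log_viscosity):
    """Stand-in sensor: only actions near 1.5 Hz yield a signal correlated
    with viscosity; other actions return mostly noise."""
    informativeness = np.exp(-((action_hz - 1.5) ** 2) / 0.1)
    signal = informativeness * log_viscosity
    return signal + rng.normal(scale=0.5, size=3)  # 3-D noisy observation

# Synthetic latent log10-viscosities spanning 1 cP (0) to 100,000 cP (5).
log_visc = rng.uniform(0.0, 5.0, size=200)

# Step 1: score each action by how predictive its observations are.
scores = {}
for a in candidate_actions:
    X = np.stack([collect_observation(a, y) for y in log_visc])
    scores[a] = cross_val_score(Ridge(), X, log_visc, cv=5, scoring="r2").mean()

# Step 2: keep only the actions that matter and fit observation models
# (here, simple regressors) just for those.
useful_actions = [a for a, s in scores.items() if s > 0.5]
models = {
    a: Ridge().fit(np.stack([collect_observation(a, y) for y in log_visc]), log_visc)
    for a in useful_actions
}

print("per-action predictive scores:", {a: round(s, 2) for a, s in scores.items()})
print("actions kept:", useful_actions)
```

In this toy setup, only the 1.5 Hz shake survives the filtering step, mirroring the kind of action discovery described above; the talk covers how this selection is done in practice.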