Near Optimal Bayesian Active Learning for Decision Making
Abstract
How should we gather information to make effective decisions? We address Bayesian active learning and experimental design problems, where we sequentially select tests to reduce uncertainty about a set of hypotheses. Instead of minimizing uncertainty per se, we consider a set of overlapping decision regions of these hypotheses. Our goal is to drive uncertainty into a single decision region as quickly as possible. We identify necessary and sufficient conditions for correctly identifying a decision region that contains all hypotheses consistent with observations. We develop a novel Hyperedge Cutting (HEC) algorithm for this problem, and prove that it is competitive with the intractable optimal policy. Our efficient implementation of the algorithm relies on computing subsets of the complete homogeneous symmetric polynomials. Finally, we demonstrate its effectiveness on two practical applications: approximate comparison-based learning and active localization using a robot manipulator.
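The abstract mentions that the efficient implementation relies on computing subsets of the complete homogeneous symmetric polynomials. For reference, below is a minimal sketch of the standard dynamic-programming recurrence for the complete homogeneous symmetric polynomial h_k; it is purely illustrative (the function names are hypothetical) and is not the paper's HEC implementation.

from itertools import combinations_with_replacement
from math import prod

def complete_homogeneous(xs, k):
    # Complete homogeneous symmetric polynomial h_k(x_1, ..., x_n):
    # the sum of all degree-k monomials with non-decreasing indices.
    # Uses the standard recurrence
    #   h_j(x_1..x_m) = h_j(x_1..x_{m-1}) + x_m * h_{j-1}(x_1..x_m),
    # where dp[j] holds h_j of the prefix processed so far (h_0 = 1).
    dp = [1.0] + [0.0] * k
    for x in xs:
        for j in range(1, k + 1):
            dp[j] += x * dp[j - 1]
    return dp[k]

def complete_homogeneous_naive(xs, k):
    # Brute-force check directly from the definition.
    return sum(prod(combo) for combo in combinations_with_replacement(xs, k))

xs = [0.2, 0.5, 0.3]
assert abs(complete_homogeneous(xs, 2) - complete_homogeneous_naive(xs, 2)) < 1e-12

The dynamic program evaluates h_k of n variables in O(nk) arithmetic operations, compared with the combinatorial number of monomials in the naive sum.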
Extended version of the paper with the same title published at the International Conference on Artificial Intelligence and Statistics (AISTATS) 2014. Please cite the AISTATS version instead.
BibTeX
@techreport{Javdani-2014-7853,
author = {Shervin Javdani and Yuxin Chen and Amin Karbasi and Andreas Krause and J. Andrew (Drew) Bagnell and Siddhartha Srinivasa},
title = {Near Optimal Bayesian Active Learning for Decision Making},
year = {2014},
month = {April},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-14-03},
}