Myopic Posterior Sampling for Adaptive Goal Oriented Design of Experiments
Abstract
Bayesian methods for adaptive decision-making, such as Bayesian optimisation, active learning, and active search, have seen great success in relevant applications. However, real-world data collection tasks are broader and more complex, as we may need to achieve a combination of the above goals and/or application-specific goals. In such scenarios, specialised methods have limited applicability. In this work, we design a new myopic strategy for a wide class of adaptive design of experiment (DOE) problems, where we wish to collect data in order to fulfil a given goal. Our approach, Myopic Posterior Sampling (MPS), which is inspired by the classical posterior sampling algorithm for multi-armed bandits, enables us to address a broad suite of DOE tasks where a practitioner may incorporate domain expertise about the system and specify her desired goal via a reward function. Empirically, this general-purpose strategy is competitive with more specialised methods in a wide array of synthetic and real-world DOE tasks. More importantly, it enables addressing complex DOE goals where no existing method seems applicable. On the theoretical side, we leverage ideas from adaptive submodularity and reinforcement learning to derive conditions under which MPS achieves sublinear regret against natural benchmark policies.
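As background for the abstract's reference to "the classical posterior sampling algorithm for multi-armed bandits," below is a minimal sketch of posterior (Thompson) sampling for a Bernoulli bandit. This is not the paper's MPS algorithm; the function name, Beta-Bernoulli model, and simulation setup are illustrative assumptions.

```python
import random

def thompson_sampling(arm_means, n_rounds, seed=0):
    """Posterior (Thompson) sampling for a Bernoulli multi-armed bandit.

    Maintains a Beta(successes + 1, failures + 1) posterior per arm; each
    round, samples a plausible mean from every posterior and pulls the arm
    whose sampled mean is largest. `arm_means` are the true (hidden) success
    probabilities used only to simulate pulls.
    """
    rng = random.Random(seed)
    successes = [0] * len(arm_means)
    failures = [0] * len(arm_means)
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one sample per arm from its current Beta posterior.
        samples = [rng.betavariate(s + 1, f + 1)
                   for s, f in zip(successes, failures)]
        arm = max(range(len(arm_means)), key=lambda i: samples[i])
        # Simulate a Bernoulli reward from the chosen arm.
        reward = 1 if rng.random() < arm_means[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total_reward, successes, failures

# Usage: three arms with true means 0.2, 0.5, 0.8; the sampler should
# concentrate its pulls on the best arm as the posteriors sharpen.
reward, succ, fail = thompson_sampling([0.2, 0.5, 0.8], n_rounds=2000)
```

MPS generalises this idea beyond bandits: at each step it samples from the posterior over the system and myopically picks the experiment that looks best for the practitioner-specified reward function under that sample.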
BibTeX
@conference{Kandasamy-2019-119735,
author = {K. Kandasamy and W. Neiswanger and R. Zhang and A. Krishnamurthy and J. Schneider and B. Poczos},
title = {Myopic Posterior Sampling for Adaptive Goal Oriented Design of Experiments},
booktitle = {Proceedings of (ICML) International Conference on Machine Learning},
year = {2019},
month = {June},
volume = {97},
pages = {3222--3232},
}