Multi-fidelity Gaussian Process Bandit Optimisation
Abstract
In many scientific and engineering applications, we are tasked with optimising an expensive-to-evaluate black-box function $f$. Traditional methods for this problem assume access to this single function alone. However, in many cases, cheap approximations to $f$ may be available. For example, the expensive real-world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to eliminate regions with low function values and reserve the expensive evaluations of $f$ for a small, promising region, thereby identifying the optimum quickly. We formalise this task as a \emph{multi-fidelity} bandit problem in which the target function and its approximations are sampled from a Gaussian process. We develop a method based on upper confidence bound techniques and prove that it exhibits precisely the above behaviour, hence achieving better regret than strategies which ignore multi-fidelity information. Our method outperforms such naive strategies in several synthetic and real experiments.
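To make the strategy described above concrete, the sketch below runs a simplified GP-UCB-style loop over two fidelities using plain NumPy. It is only an illustration under assumptions, not the paper's algorithm: the kernel, the bias bound on the cheap approximation, the exploration weight beta, and the fidelity-switching threshold gamma are all values chosen for the example.

# Illustrative sketch of a multi-fidelity GP-UCB-style loop (NumPy only).
# NOT the paper's method; the bias bound, beta, and gamma below are assumptions.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    """GP posterior mean and standard deviation at query points Xq."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xq, Xq)) - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Two fidelities: a cheap biased approximation f_low and the expensive target f_high.
f_high = lambda x: -(x - 0.7) ** 2
f_low  = lambda x: -(x - 0.7) ** 2 + 0.05 * np.sin(8 * x)

Xq = np.linspace(0.0, 1.0, 200)                       # candidate grid
data = {0: ([0.1, 0.9], [f_low(0.1), f_low(0.9)]),    # low-fidelity observations
        1: ([0.5], [f_high(0.5)])}                    # high-fidelity observations

beta, gamma, bias = 2.0, 0.02, 0.1   # exploration weight, switch threshold, bias bound (assumed)
for t in range(10):
    # Build an upper confidence bound on f_high from each fidelity and
    # take the tighter of the two as the acquisition function.
    ucbs = []
    for m in (0, 1):
        X, y = np.array(data[m][0]), np.array(data[m][1])
        mu, sd = gp_posterior(X, y, Xq)
        ucbs.append(mu + beta * sd)
    ucb = np.minimum(ucbs[0] + bias, ucbs[1])
    x_next = Xq[np.argmax(ucb)]

    # Query the cheap fidelity while its confidence at x_next is still wide,
    # otherwise spend an expensive evaluation of f_high.
    X0, y0 = np.array(data[0][0]), np.array(data[0][1])
    _, sd0 = gp_posterior(X0, y0, np.array([x_next]))
    m = 0 if sd0[0] > gamma else 1
    data[m][0].append(x_next)
    data[m][1].append((f_low if m == 0 else f_high)(x_next))

print("best observed high-fidelity point:",
      data[1][0][int(np.argmax(data[1][1]))])

In this sketch the cheap approximation is queried until its posterior is confident around the candidate point, after which evaluations switch to the expensive target, mirroring the behaviour the abstract describes of eliminating low-value regions cheaply before spending expensive evaluations in a small promising region.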
BibTeX
@article{Kandasamy-2019-119712,
  author  = {K. Kandasamy and G. Dasarathy and J. Oliva and J. Schneider and B. Poczos},
  title   = {Multi-fidelity Gaussian Process Bandit Optimisation},
  journal = {Journal of Artificial Intelligence Research},
  year    = {2019},
  month   = {March},
  volume  = {66},
  pages   = {151--196},
}