The Max K-Armed Bandit: A New Model for Exploration Applied to Search Heuristic Selection

Conference Paper, Proceedings of the 20th National Conference on Artificial Intelligence (AAAI '05), Vol. 3, pp. 1355-1361, July 2005

Abstract

The multiarmed bandit is often used as an analogy for the tradeoff between exploration and exploitation in search problems. The classic problem involves allocating trials among the arms of a multiarmed slot machine to maximize the expected sum of rewards. We pose a new variation of the multiarmed bandit---the Max K-Armed Bandit---in which trials must be allocated among the arms to maximize the expected best single-sample reward of the series of trials. The motivation for the Max K-Armed Bandit is the problem of allocating restarts among a set of multistart stochastic search algorithms. We present an analysis of this Max K-Armed Bandit showing, under certain assumptions, that the optimal strategy allocates trials to the observed best arm at a rate that increases double exponentially relative to the other arms. This motivates an exploration strategy in which arm selection follows a Boltzmann distribution with an exponentially decaying temperature parameter. We compare this exploration policy to policies that allocate trials to the observed best arm at rates faster (and slower) than double exponential. The results confirm, for two scheduling domains, that the double exponential increase in the rate of allocations to the observed best heuristic outperforms the other approaches.
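As an illustration of the exploration policy described in the abstract, below is a minimal Python sketch (not the authors' implementation) of Boltzmann arm selection over each arm's observed best reward, with a temperature that decays exponentially after every trial. The function name boltzmann_max_bandit and the parameters t0 and decay are illustrative assumptions, as is modeling each arm as a callable that returns one sampled reward.

import math
import random

def boltzmann_max_bandit(arms, num_trials, t0=1.0, decay=0.95):
    """Allocate num_trials pulls among stochastic arms (callables that
    return a sampled reward) via Boltzmann exploration with an
    exponentially decaying temperature, so pulls concentrate on the
    observed best arm as the temperature cools. Returns the best
    single reward observed and the per-arm pull counts. This is a
    sketch under assumed parameter names, not the paper's code."""
    k = len(arms)
    best = [arm() for arm in arms]   # pull each arm once to initialize observed maxima
    counts = [1] * k
    temperature = t0
    for _ in range(num_trials - k):
        m = max(best)
        # Boltzmann (softmax) weights over the observed best rewards,
        # shifted by m for numerical stability as the temperature shrinks
        weights = [math.exp((b - m) / temperature) for b in best]
        total = sum(weights)
        r, chosen = random.random() * total, k - 1
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                chosen = i
                break
        reward = arms[chosen]()
        best[chosen] = max(best[chosen], reward)
        counts[chosen] += 1
        temperature *= decay         # exponential cooling schedule
    return max(best), counts

For example, with three hypothetical Gaussian arms of different means,

arms = [lambda mu=mu: random.gauss(mu, 1.0) for mu in (0.0, 0.5, 1.0)]
best_reward, counts = boltzmann_max_bandit(arms, 500)

the pull counts shift increasingly toward the arm with the highest observed maximum as the temperature decays.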

Content Area: 18. Search

Notes
Best Paper Award

BibTeX

@conference{Cicirello-2005-9239,
author = {Vincent Cicirello and Stephen Smith},
title = {The Max K-Armed Bandit: A New Model for Exploration Applied to Search Heuristic Selection},
booktitle = {Proceedings of the 20th National Conference on Artificial Intelligence (AAAI '05)},
year = {2005},
month = {July},
volume = {3},
pages = {1355--1361},
}