
SpeedBoost: Anytime Prediction with Uniform Near-Optimality

Alexander Grubb and J. Andrew (Drew) Bagnell
Conference Paper, Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS '12), pp. 458–466, April 2012

Abstract

We present SpeedBoost, a natural extension of functional gradient descent for learning anytime predictors, which automatically trade computation time for predictive accuracy by selecting from a set of simpler candidate predictors. These anytime predictors not only generate approximate predictions rapidly, but can also use extra resources at prediction time, when available, to improve performance. We further demonstrate how our framework can be used to select weak predictors that target certain subsets of the data, concentrating computational resources on difficult examples. Finally, we show that variants of the SpeedBoost algorithm produce predictors that are provably competitive with any possible sequence of weak predictors of the same total complexity.
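
To make the greedy idea in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes squared-error loss and a fixed pool of candidate weak predictors, each tagged with a relative evaluation cost. Each round selects the candidate and step size giving the largest loss reduction per unit of computation, and prediction simply evaluates the resulting sequence of weak predictors until a time budget runs out, which is what gives the ensemble its anytime property. All names and the cost model here are illustrative assumptions.

import numpy as np

def speedboost(X, y, candidates, n_rounds=10):
    # candidates: list of (h, cost) pairs, where h(X) returns a vector of
    # predictions and cost is the (relative) time needed to evaluate h.
    f = np.zeros(len(y))      # current ensemble prediction
    ensemble = []             # ordered list of (alpha, h, cost) triples
    for _ in range(n_rounds):
        residual = y - f      # negative functional gradient of squared loss
        best = None
        for h, cost in candidates:
            pred = h(X)
            denom = pred @ pred
            if denom == 0.0:
                continue
            alpha = (residual @ pred) / denom      # optimal step via line search
            gain = (residual @ pred) ** 2 / denom  # loss reduction at that step
            rate = gain / cost                     # improvement per unit of time
            if best is None or rate > best[0]:
                best = (rate, alpha, h, cost, pred)
        if best is None:
            break
        _, alpha, h, cost, pred = best
        f += alpha * pred
        ensemble.append((alpha, h, cost))
    return ensemble

def predict_anytime(ensemble, X, budget):
    # Evaluate weak predictors in order until the budget is exhausted; the
    # running partial sum is always a valid (anytime) prediction.
    f, spent = 0.0, 0.0
    for alpha, h, cost in ensemble:
        if spent + cost > budget:
            break
        f = f + alpha * h(X)
        spent += cost
    return f

Using loss reduction per unit cost, rather than raw loss reduction, as the selection criterion is what distinguishes this sketch from ordinary functional gradient boosting: cheap weak predictors are preferred early, so interrupting prediction at any budget still yields a useful answer.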

BibTeX

@conference{Grubb-2012-7453,
author = {Alexander Grubb and J. Andrew (Drew) Bagnell},
title = {SpeedBoost: Anytime Prediction with Uniform Near-Optimality},
booktitle = {Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS '12)},
year = {2012},
month = {April},
pages = {458--466},
keywords = {Boosting, Anytime Prediction, Machine Learning, Functional Gradient Methods},
}