Memory based stochastic optimization for validation and tuning of function approximators - Robotics Institute Carnegie Mellon University


Workshop Paper, 6th International Workshop on Artificial Intelligence and Statistics (AISTATS '97), pp. 165–172, January 1997

Abstract

This paper focuses on the optimization of hyper-parameters for function approximators. We describe a racing algorithm for continuous optimization problems that spends less time evaluating poor parameter settings and more time honing its estimates in the most promising regions of the parameter space. The algorithm automatically optimizes the hyper-parameters of a function approximator with less computation than evaluating every candidate setting to full precision. We demonstrate the algorithm on the problem of finding good parameters for a memory-based learner and show the tradeoffs involved in choosing how much computation to spend on each evaluation.
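To illustrate the racing idea the abstract refers to (incrementally evaluating candidate settings and discarding any whose confidence bounds are dominated by a better one), here is a minimal Python sketch. The candidate values, the noisy loss function, and the Hoeffding-style confidence bounds are illustrative assumptions for this sketch, not the algorithm from the paper itself:

```python
import math
import random

def noisy_loss(theta, rng):
    # Hypothetical noisy evaluation: true loss is (theta - 0.3)^2,
    # observed with additive Gaussian noise.
    return (theta - 0.3) ** 2 + rng.gauss(0.0, 0.1)

def race(candidates, budget, rng, delta=0.05, loss_range=1.0):
    """Hoeffding-race-style elimination over candidate parameter settings.

    Each round evaluates every surviving candidate once, then drops any
    candidate whose lower confidence bound exceeds the smallest upper
    bound (i.e., it is statistically dominated).
    """
    stats = {c: [0.0, 0] for c in candidates}  # candidate -> [loss sum, count]
    alive = list(candidates)
    for _ in range(budget):
        for c in alive:
            s = stats[c]
            s[0] += noisy_loss(c, rng)
            s[1] += 1
        # Hoeffding half-width for each surviving candidate's mean estimate
        # (assumes losses bounded within loss_range).
        bounds = {}
        for c in alive:
            total, n = stats[c]
            eps = loss_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
            bounds[c] = (total / n - eps, total / n + eps)
        best_upper = min(hi for lo, hi in bounds.values())
        alive = [c for c in alive if bounds[c][0] <= best_upper]
        if len(alive) == 1:
            break
    # Return the surviving candidate with the lowest empirical mean loss.
    return min(alive, key=lambda c: stats[c][0] / stats[c][1])

rng = random.Random(0)
best = race([0.0, 0.1, 0.3, 0.6, 0.9], budget=200, rng=rng)
```

Clearly bad settings (such as 0.9 here) are eliminated after relatively few evaluations, so the evaluation budget concentrates on the close contenders near the optimum, which is the computational saving the abstract describes.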

BibTeX

@inproceedings{Dubrawski-1997-121913,
  author    = {A. Dubrawski and J. Schneider},
  title     = {Memory based stochastic optimization for validation and tuning of function approximators},
  booktitle = {Proceedings of the 6th International Workshop on Artificial Intelligence and Statistics (AISTATS '97)},
  year      = {1997},
  month     = {January},
  pages     = {165--172},
}