Bayesian Nonparametric Kernel-Learning
Abstract
Kernel methods are ubiquitous tools in machine learning. They have proven to be effective in many domains and tasks. Yet, kernel methods often require the user to select a predefined kernel with which to build an estimator, and there is often little reason for the a priori selection of one kernel over another. Even if a universal approximating kernel is selected, the quality of the finite sample estimator may be greatly affected by the choice of kernel. Furthermore, when directly applying kernel methods, one typically needs to compute an $N \times N$ Gram matrix of pairwise kernel evaluations to work with a dataset of $N$ instances. The computation of this Gram matrix precludes the direct application of kernel methods on large datasets. In this paper we introduce Bayesian nonparametric kernel (BaNK) learning, a generic, data-driven framework for scalable learning of kernels. We show that this framework can be used for performing both regression and classification tasks and scales to large datasets. Furthermore, we show that BaNK outperforms several other scalable approaches for kernel learning on a variety of real-world datasets.
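To make the scaling concern concrete, here is a minimal NumPy sketch (an illustration, not the paper's method) contrasting the exact $N \times N$ Gram matrix of a Gaussian kernel with a random Fourier feature approximation, the standard scalable alternative that kernel-learning frameworks of this kind build on. All names and parameter values below (`N`, `d`, `D`, `sigma`) are assumptions chosen for the example.

```python
import numpy as np

# For the Gaussian (RBF) kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)),
# Bochner's theorem lets us draw frequencies w ~ N(0, sigma^{-2} I) and
# approximate k(x, y) by an inner product of D-dimensional random features.

rng = np.random.default_rng(0)
N, d, D, sigma = 200, 5, 2000, 1.0
X = rng.standard_normal((N, d))

# Exact Gram matrix: O(N^2 d) time and O(N^2) memory -- the bottleneck
# on large datasets.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dists / (2 * sigma**2))

# Random-feature approximation: O(N D d) time and O(N D) memory,
# i.e. linear in N.
W = rng.standard_normal((D, d)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)
K_approx = Z @ Z.T

max_err = np.abs(K_exact - K_approx).max()
print(max_err)  # error shrinks as D grows, roughly O(1/sqrt(D))
```

BaNK's contribution is to place a Bayesian nonparametric prior over the distribution of these random frequencies rather than fixing it up front, so the kernel itself is learned from data.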
BibTeX
@conference{Oliva-2016-119752,
  author    = {J. Oliva and A. Dubey and A. Wilson and B. Poczos and J. Schneider and E. Xing},
  title     = {Bayesian Nonparametric Kernel-Learning},
  booktitle = {Proceedings of 19th International Conference on Artificial Intelligence and Statistics (AISTATS '16)},
  year      = {2016},
  month     = {May},
  volume    = {41},
  pages     = {1078--1086},
}