Linear-Time Learning on Distributions with Approximate Kernel Embeddings

D. Sutherland, J. Oliva, B. Poczos, and J. Schneider
Conference Paper, Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI '16), pp. 2073 - 2079, February 2016

Abstract

Many interesting machine learning problems are best posed by considering instances that are distributions, or sample sets drawn from distributions. Previous work on machine learning tasks with distributional inputs has relied on pairwise kernel evaluations between pdfs (or sample sets). While such an approach is feasible for smaller datasets, the computation of an $N \times N$ Gram matrix is prohibitive for large datasets. Recent scalable estimators that work over pdfs have done so only with kernels that use Euclidean metrics, like the $L_2$ distance. However, there are a myriad of other useful metrics available, such as total variation, Hellinger distance, and the Jensen-Shannon divergence. This work develops the first random features for pdfs whose dot product approximates kernels using these non-Euclidean metrics, allowing estimators using such kernels to scale to large datasets by working in a primal space, without computing large Gram matrices. We provide an analysis of the approximation error in using our proposed random features and show empirically the quality of our approximation both in estimating a Gram matrix and in solving learning tasks in real-world and synthetic data.
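
The abstract does not spell out the construction, but the general recipe it describes can be illustrated with a small sketch. The example below is a simplification under stated assumptions, not the paper's method: it treats each distribution as a fixed-bin histogram, uses the square-root map (under which the squared Hellinger distance becomes half a squared Euclidean distance), and then applies standard random Fourier features (Rahimi and Recht) so that dot products of the resulting features approximate an RBF kernel on the Hellinger distance. The function name hellinger_rff and its parameters are illustrative only.

import numpy as np

def hellinger_rff(histograms, n_features=500, bandwidth=1.0, seed=None):
    """Random features z(p) with z(p) . z(q) ~ exp(-H^2(p, q) / (2 * bandwidth^2)),
    where H is the Hellinger distance between the (row-normalized) histograms."""
    rng = np.random.default_rng(seed)
    # Square-root map: H^2(p, q) = 0.5 * ||sqrt(p) - sqrt(q)||^2 for discrete p, q.
    X = np.sqrt(histograms / histograms.sum(axis=1, keepdims=True))
    # Standard random Fourier features for the Gaussian kernel in the mapped space.
    W = rng.normal(scale=1.0 / bandwidth, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # The 1/sqrt(2) accounts for the factor 0.5 in the Hellinger identity above.
    return np.sqrt(2.0 / n_features) * np.cos((X / np.sqrt(2.0)) @ W + b)

# Toy usage: 1000 random histograms over 20 bins.
rng = np.random.default_rng(0)
hists = rng.dirichlet(np.ones(20), size=1000)
Z = hellinger_rff(hists, n_features=500, bandwidth=0.5, seed=1)
# Z @ Z.T approximates the Hellinger-RBF Gram matrix; fitting a linear model
# (e.g., linear SVM or ridge regression) directly on Z avoids ever forming it.

The paper's actual embeddings additionally handle densities estimated from sample sets and other metrics such as total variation and the Jensen-Shannon divergence, so this sketch should be read only as an illustration of the "embed once, learn linearly in the primal" principle.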

BibTeX

@conference{Sutherland-2016-119758,
author = {D. Sutherland and J. Oliva and B. Poczos and J. Schneider},
title = {Linear-Time Learning on Distributions with Approximate Kernel Embeddings},
booktitle = {Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI '16)},
year = {2016},
month = {February},
pages = {2073--2079},
}