Learning Positive Functions in a Hilbert Space
Workshop Paper, NeurIPS '15 Workshop on Optimization (OPT '15), December 2015
Abstract
We develop a method for learning positive functions by optimizing over SoSK, a reproducing kernel Hilbert space subject to a Sum-of-Squares (SoS) constraint. This constraint ensures that only nonnegative functions are learned. We establish a new representer theorem showing that regularized convex loss minimization subject to the SoS constraint has a unique solution and, moreover, that this solution lies on a finite-dimensional subspace of the RKHS defined by the data. Furthermore, we show how this optimization problem can be formulated as a semidefinite program. We conclude with examples of learning such functions.
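To make the abstract concrete, below is a minimal illustrative sketch, not the authors' code, of one standard way to realize a kernel SoS constraint: parameterize f(x) = Σᵢⱼ A[i,j] k(xᵢ, x) k(xⱼ, x) with A positive semidefinite, so f(x) = v(x)ᵀ A v(x) ≥ 0 pointwise, and fit A by a small semidefinite program. The finite, data-defined parameterization is in the spirit of the representer theorem described above, but the RBF kernel, toy data, squared loss, and trace regularizer here are all our assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of SoS-constrained regression (NOT the paper's code).
# Model: f(x) = v(x)^T A v(x), with v(x) = (k(x_1, x), ..., k(x_n, x)) and
# A >= 0 (PSD), so f is nonnegative everywhere. Fitting A is an SDP.
import numpy as np
import cvxpy as cp

def rbf(a, b, gamma=10.0):
    """Gaussian (RBF) kernel matrix between two batches of points."""
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.sum(d * d, axis=-1))

# Toy data: noisy samples of a nonnegative target function (assumed setup).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 1))
y = np.squeeze(X) ** 2 + 0.05 * rng.standard_normal(30)

K = rbf(X, X)  # K[i, j] = k(x_i, x_j)
# At a training point x_m, v(x_m) is column m of K, so f(x_m) = (K A K)[m, m].
A = cp.Variable((len(X), len(X)), PSD=True)   # PSD constraint = SoS constraint
preds = cp.diag(K @ A @ K)
lam = 1e-2
# Squared loss plus a simple trace penalty (a stand-in for the paper's
# RKHS-norm regularizer); the problem is convex, and solve() dispatches
# to an SDP solver.
prob = cp.Problem(cp.Minimize(cp.sum_squares(preds - y)
                              + lam * cp.trace(K @ A @ K)))
prob.solve()

def f_hat(x_new):
    """Evaluate the learned function; nonnegative since A is PSD."""
    V = rbf(X, x_new)  # V[i, m] = k(x_i, x_new_m)
    return np.einsum('im,ij,jm->m', V, A.value, V)

print(f_hat(np.linspace(-1, 1, 5)[:, None]))  # all printed values are >= 0
```

Because A is constrained to be PSD, the learned function is nonnegative at every input, not just at the training points, which is the point of the SoS constraint over simple pointwise nonnegativity constraints.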
BibTeX
@inproceedings{Bagnell-2015-6048,
  author    = {J. Andrew (Drew) Bagnell and Amir-massoud Farahmand},
  title     = {Learning Positive Functions in a Hilbert Space},
  booktitle = {Proceedings of NeurIPS '15 Workshop on Optimization (OPT '15)},
  year      = {2015},
  month     = {December},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.