
Learning Positive Functions in a Hilbert Space

J. Andrew (Drew) Bagnell and Amir-massoud Farahmand
Workshop Paper, NeurIPS '15 Workshop on Optimization (OPT '15), December 2015

Abstract

We develop a method for learning positive functions by optimizing over SoSK, a reproducing kernel Hilbert space subject to a Sum-of-Squares (SoS) constraint. This constraint ensures that only nonnegative functions are learned. We establish a new representer theorem showing that regularized convex loss minimization subject to the SoS constraint has a unique solution, and that this solution lies in a finite-dimensional subspace of the RKHS determined by the data. Furthermore, we show how this optimization problem can be formulated as a semidefinite program. We conclude with examples of learning such functions.
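
Illustrative Example

To make the semidefinite-programming formulation concrete, below is a minimal sketch (not the authors' code) of fitting a nonnegative function represented as a Sum-of-Squares expression in an RKHS: the function is parameterized over the data-defined subspace as f(x) = k_x^T A k_x with A constrained to be positive semidefinite, and the resulting SDP is solved with cvxpy. The Gaussian kernel, squared loss, trace regularizer (used here as a simple stand-in for the RKHS-norm penalty), and toy data are illustrative assumptions, not details taken from the paper.

# Minimal sketch (assumptions noted above): learn a nonnegative function as an
# SoS expression in an RKHS, f(x) = k_x^T A k_x with A PSD, via an SDP in cvxpy.
import numpy as np
import cvxpy as cp

def rbf_kernel(a, b, gamma=2.0):
    """Gaussian RBF kernel matrix between 1-D sample arrays a and b."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Toy data: noisy samples of a nonnegative target function (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 25)
y = x ** 2 + 0.1 * rng.standard_normal(x.shape)

K = rbf_kernel(x, x)                          # Gram matrix; row m is the vector k_{x_m}
A = cp.Variable((len(x), len(x)), PSD=True)   # PSD coefficients make f(x) = k_x^T A k_x >= 0 everywhere

# Training predictions f(x_m) = k_{x_m}^T A k_{x_m}, collected as the diagonal of K A K.
preds = cp.diag(K @ A @ K)

lam = 1e-2  # regularization weight (trace regularizer as a stand-in)
problem = cp.Problem(cp.Minimize(cp.sum_squares(preds - y) + lam * cp.trace(A)))
problem.solve()
A_hat = A.value

def f_hat(x_new):
    """Evaluate the learned nonnegative function at new points."""
    k_new = rbf_kernel(np.atleast_1d(x_new), x)          # cross-kernel rows k_{x_new}
    return np.einsum('ij,jk,ik->i', k_new, A_hat, k_new)  # k^T A k per query point

print(f_hat(np.array([0.0, 1.0, 1.5])))  # values are nonnegative by construction

Because A is positive semidefinite, every evaluation k_x^T A k_x is nonnegative regardless of the query point, which is the property the SoS constraint is meant to guarantee.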

BibTeX

@inproceedings{Bagnell-2015-6048,
  author    = {J. Andrew (Drew) Bagnell and Amir-massoud Farahmand},
  title     = {Learning Positive Functions in a Hilbert Space},
  booktitle = {Proceedings of NeurIPS '15 Workshop on Optimization (OPT '15)},
  year      = {2015},
  month     = {December},
}