Robotics Institute, Carnegie Mellon University

Learning Multiple Tasks with a Sparse Matrix-Normal Penalty

Y. Zhang and J. Schneider
Conference Paper, Proceedings of Neural Information Processing Systems (NeurIPS), Vol. 2, pp. 2550-2558, December 2010

Abstract

In this paper, we propose a matrix-variate normal penalty with sparse inverse covariances to couple multiple tasks. Learning multiple (parametric) models can be viewed as estimating a matrix of parameters, where the rows and columns of the matrix correspond to tasks and features, respectively. Following the matrix-variate normal density, we design a penalty that decomposes the full covariance of the matrix elements into the Kronecker product of a row covariance and a column covariance, which characterize task relatedness and feature representation, respectively. Several recently proposed methods are variants or special cases of this formulation. To address overfitting and to select meaningful task and feature structures, we incorporate sparse covariance selection into our matrix-normal regularization via l1 penalties on the task and feature inverse covariances. We empirically study the proposed method and compare it with related models on two real-world problems: detecting landmines in multiple fields and recognizing faces across different subjects. Experimental results show that the proposed framework provides an effective and flexible way to model various structures of multiple tasks.

BibTeX

@conference{Zhang-2010-119811,
author = {Y. Zhang and J. Schneider},
title = {Learning Multiple Tasks with a Sparse Matrix-Normal Penalty},
booktitle = {Proceedings of (NeurIPS) Neural Information Processing Systems},
year = {2010},
month = {December},
volume = {2},
pages = {2550-2558},
}