
Projection Penalties: Dimension Reduction without Loss

Y. Zhang and J. Schneider
Conference Paper, Proceedings of the International Conference on Machine Learning (ICML), pp. 1223–1230, June 2010

Abstract

Dimension reduction is popular for learning predictive models in high-dimensional spaces. It can highlight the relevant part of the feature space and avoid the curse of dimensionality. However, it can also be harmful because any reduction loses information. In this paper, we propose the projection penalty framework to make use of dimension reduction without losing valuable information. Reducing the feature space before learning predictive models can be viewed as restricting the model search to some parameter subspace. The idea of projection penalties is that instead of restricting the search to a parameter subspace, we can search in the full space but penalize the projection distance to this subspace. Dimension reduction is used to guide the search, rather than to restrict it. We propose projection penalties for linear dimension reduction, and then generalize to kernel-based reduction and other nonlinear methods. We test projection penalties with various dimension reduction techniques in different prediction tasks, including principal component regression and partial least squares in regression tasks, kernel dimension reduction in face recognition, and latent topic modeling in text classification. Experimental results show that projection penalties are a more effective and reliable way to make use of dimension reduction techniques than learning directly in the reduced space.
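The construction described in the abstract can be made concrete for its simplest instance: linear (PCA-based) reduction with squared loss. The sketch below is illustrative rather than the paper's implementation; the function name, the hyperparameters k and lam, and the synthetic data are all assumptions, and the closed-form solve applies only to this least-squares case.

import numpy as np

def projection_penalty_regression(X, y, k=5, lam=10.0):
    """Least-squares regression with a projection penalty (a sketch of the
    idea in Zhang & Schneider, ICML 2010; names and values are illustrative).

    Instead of restricting the weight vector w to the top-k principal
    subspace (as principal component regression does), fit w in the full
    space but penalize its squared distance to that subspace:

        min_w  ||X w - y||^2  +  lam * ||(I - V V^T) w||^2,

    where the columns of V span the top-k principal subspace and V V^T
    is the orthogonal projector onto it.
    """
    d = X.shape[1]
    # Top-k principal directions of the centered design matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                      # d x k, orthonormal columns
    P_perp = np.eye(d) - V @ V.T      # projector onto the orthogonal complement
    # Closed-form solution of the penalized least-squares problem.
    w = np.linalg.solve(X.T @ X + lam * P_perp, X.T @ y)
    return w

# Usage on synthetic data (shapes and noise level are illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = rng.standard_normal(20)
y = X @ w_true + 0.1 * rng.standard_normal(100)
w_hat = projection_penalty_regression(X, y, k=5, lam=10.0)
print(w_hat.shape)  # (20,)

Principal component regression would force w into the span of V; the penalty instead lets w leave that subspace when the data demand it, which is the sense in which the reduction guides the search rather than restricting it. For kernel-based reduction or latent topic models, the paper generalizes the same construction beyond this linear case.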

BibTeX

@conference{Zhang-2010-119812,
  author = {Y. Zhang and J. Schneider},
  title = {Projection Penalties: Dimension Reduction without Loss},
  booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
  year = {2010},
  month = {June},
  pages = {1223--1230},
}