Dimensionality Reduction Using the Sparse Linear Model
Conference Paper, Proceedings of (NeurIPS) Neural Information Processing Systems, pp. 271-279, December 2011
Abstract
We propose an approach for linear unsupervised dimensionality reduction, based on the sparse linear model that has been used to probabilistically interpret sparse coding. We formulate an optimization problem for learning a linear projection from the original signal domain to a lower-dimensional one in a way that approximately preserves, in expectation, pairwise inner products in the sparse domain. We derive solutions to the problem, present nonlinear extensions, and discuss relations to compressed sensing. Our experiments using facial images, texture patches, and images of object categories suggest that the approach can improve our ability to recover meaningful structure in many classes of signals.
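The sketch below is an illustrative analogue of the stated objective, not the closed-form solution derived in the paper: it learns sparse codes with a dictionary, fits a ridge-regression map from signals to codes, and truncates that map by SVD so that inner products of the projected signals approximate inner products of the sparse codes in a least-squares sense. The dictionary size, regularization weights, toy Gaussian data, and use of scikit-learn's DictionaryLearning are placeholder assumptions for the example only.

# Hedged sketch: a least-squares stand-in for "preserve pairwise inner products
# in the sparse domain", not the paper's probabilistic derivation.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))          # n = 500 toy signals of dimension d = 64

# 1) Fit an overcomplete dictionary and obtain sparse codes A (n x k).
dico = DictionaryLearning(n_components=128, alpha=1.0, max_iter=20,
                          transform_algorithm="lasso_lars", random_state=0)
A = dico.fit_transform(X)

# 2) Ridge regression for a linear map B (k x d) with codes approximately X @ B.T.
lam = 1e-2
d = X.shape[1]
B = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ A).T

# 3) Rank-m reduction: W = diag(s_m) @ V_m^T is the best rank-m PSD approximation
#    of B^T B, so (W x)^T (W y) ~ (B x)^T (B y) ~ a_x^T a_y on average.
m = 16
U, s, Vt = np.linalg.svd(B, full_matrices=False)
W = np.diag(s[:m]) @ Vt[:m]                 # m x d linear projection
Z = X @ W.T                                 # low-dimensional embeddings (n x m)
print(Z.shape)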
BibTeX
@conference{Gkioulekas-2011-113443,
author = {Ioannis Gkioulekas and Todd E. Zickler},
title = {Dimensionality Reduction Using the Sparse Linear Model},
booktitle = {Proceedings of (NeurIPS) Neural Information Processing Systems},
year = {2011},
month = {December},
pages = {271--279},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.