Learning Compressible Models

Y. Zhang, J. Schneider, and A. Dubrawski
Conference Paper, Proceedings of the SIAM International Conference on Data Mining (SDM '10), pp. 872–881, April 2010

Abstract

In this paper, we study the combination of compression and ℓ1-norm regularization in a machine learning context: learning compressible models. By incorporating a compression operation into ℓ1 regularization, the assumption of model sparsity is relaxed to compressibility: model coefficients are compressed before being penalized, and sparsity is achieved in a compressed domain rather than the original space. We focus on the design of different compression operations, through which we can encode various compressibility assumptions and inductive biases, e.g., piecewise local smoothness, compacted energy in the frequency domain, and semantic correlation. We show that the use of a compression operation provides an opportunity to leverage auxiliary information from various sources, e.g., domain knowledge, coding theory, and unlabeled data. We conduct extensive experiments on brain-computer interfacing, handwritten character recognition, and text classification. Empirical results show clear improvements in prediction performance from including compression in ℓ1 regularization. We also analyze the learned model coefficients under appropriate compressibility assumptions, which further demonstrates the advantages of learning compressible models over sparse models.
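The abstract's core idea can be read as placing the ℓ1 penalty on a transformed (compressed) coefficient vector rather than on the coefficients themselves. The sketch below is an illustration of that idea, not the paper's implementation: it assumes a squared-error loss, a hypothetical first-order difference operator D to encode a piecewise-smoothness bias, an illustrative regularization strength, synthetic data, and cvxpy as the solver for min_w (1/2n)||Xw - y||^2 + λ||Dw||_1.

    # Minimal sketch (not the authors' code): l1 penalty on compressed coefficients,
    #   min_w  (1/2n) ||X w - y||^2  +  lam * ||D w||_1
    # D is a first-order difference operator, encoding a piecewise-smooth
    # (compressible) coefficient vector; swapping D for a DCT or wavelet matrix
    # would instead encode compacted energy in a frequency domain.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, d = 200, 60

    # Synthetic piecewise-constant coefficients: dense in the original space,
    # but sparse after differencing, i.e., compressible under D.
    w_true = np.concatenate([np.full(20, 1.0), np.full(20, -0.5), np.full(20, 2.0)])
    X = rng.standard_normal((n, d))
    y = X @ w_true + 0.1 * rng.standard_normal(n)

    # Compression operator D: first-order differences, shape (d-1, d).
    D = np.eye(d, k=1)[:-1] - np.eye(d)[:-1]

    lam = 1.0  # illustrative regularization strength
    w = cp.Variable(d)
    loss = cp.sum_squares(X @ w - y) / (2 * n)
    penalty = lam * cp.norm1(D @ w)      # sparsity in the compressed domain
    cp.Problem(cp.Minimize(loss + penalty)).solve()

    print("recovered coefficients (first 10):", np.round(w.value[:10], 3))

With this particular choice of D the penalty is a fused-lasso-style regularizer; the abstract's broader point is that other compression operations (frequency transforms, operators built from domain knowledge or unlabeled data) encode other compressibility assumptions within the same ℓ1 framework.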

BibTeX

@conference{Zhang-2010-119815,
author = {Y. Zhang and J. Schneider and A. Dubrawski},
title = {Learning Compressible Models},
booktitle = {Proceedings of SIAM International Conference on Data Mining (SDM '10)},
year = {2010},
month = {April},
pages = {872--881},
}