
Implicit Feature Selection with the Value Difference Metric

Terence Payne and Peter Edwards
Conference Paper, Proceedings of the 13th European Conference on Artificial Intelligence (ECAI '98), pp. 450-454, August 1998

Abstract

The nearest neighbour paradigm provides an effective approach to supervised learning. However, it is especially susceptible to the presence of irrelevant attributes. Whilst many approaches have been proposed that select only the most relevant attributes within a data set, these approaches pre-process the data in some way and are often computationally expensive. The Value Difference Metric (VDM) is a symbolic distance metric used by a number of nearest neighbour learning algorithms. This paper demonstrates how the VDM can be used to reduce the impact of irrelevant attributes on classification accuracy without the need to pre-process the data. We illustrate how this metric uses simple probabilistic techniques to weight features in the instance space, and then apply this weighting technique to an alternative symbolic distance metric. The resulting distance metrics are compared in terms of classification accuracy on a number of real-world and artificial data sets.
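
For illustration, the VDM as usually defined (after Stanfill and Waltz) measures the distance between two values of a symbolic attribute as the difference between their class-conditional probability distributions; attributes whose values are distributed independently of the class therefore contribute almost nothing to the overall distance, which is the implicit weighting effect the abstract refers to. The Python sketch below is a minimal illustration under that standard definition, not the paper's own implementation; the function names and the exponent q are assumptions.

from collections import Counter, defaultdict

def vdm_tables(instances, labels):
    # Build per-attribute class-count tables, from which P(c | a = v)
    # is estimated. `instances` is a list of tuples of symbolic values;
    # `labels` is the aligned list of class labels.
    n_attrs = len(instances[0])
    counts = [defaultdict(Counter) for _ in range(n_attrs)]
    for x, c in zip(instances, labels):
        for a, v in enumerate(x):
            counts[a][v][c] += 1
    return counts

def vdm_distance(x1, x2, counts, classes, q=1):
    # Sum, over attributes and classes, |P(c | a=v1) - P(c | a=v2)| ** q.
    # Irrelevant attributes yield near-equal conditional probabilities
    # and so add little to the total.
    total = 0.0
    for a, (v1, v2) in enumerate(zip(x1, x2)):
        n1 = sum(counts[a][v1].values()) or 1  # avoid division by zero
        n2 = sum(counts[a][v2].values()) or 1
        for c in classes:
            total += abs(counts[a][v1][c] / n1 - counts[a][v2][c] / n2) ** q
    return total

On a toy data set where the first attribute determines the class and the second is noise, only the first attribute drives the distance:

X = [("red", "small"), ("red", "large"), ("blue", "small"), ("blue", "large")]
y = ["pos", "pos", "neg", "neg"]
tables = vdm_tables(X, y)
print(vdm_distance(("red", "small"), ("blue", "small"), tables, {"pos", "neg"}))  # 2.0, entirely from the colour attribute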

BibTeX

@conference{Payne-1998-16615,
author = {Terence Payne and Peter Edwards},
title = {Implicit Feature Selection with the Value Difference Metric},
booktitle = {Proceedings of the 13th European Conference on Artificial Intelligence (ECAI '98)},
year = {1998},
month = {August},
editor = {Henri Prade},
pages = {450--454},
publisher = {John Wiley \& Sons},
address = {New York, NY},
}