
Active Search and Bandits on Graphs using Sigma-Optimality

Y. Ma, T. Huang, and J. Schneider
Conference Paper, Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence (UAI '15), pp. 542-551, July 2015

Abstract

Many modern information access problems involve highly complex patterns that cannot be handled by traditional keyword-based search. Active search is an emerging paradigm that helps users quickly find relevant information by efficiently collecting and learning from user feedback. We consider active search on graphs, where the nodes represent the set of instances users want to search over and the edges encode pairwise similarity among the instances. Existing active search algorithms either lack theoretical guarantees or are ill-suited to graph data. Motivated by recent advances in active learning on graphs, namely the Σ-optimality selection criterion, we propose new active search algorithms for graphs that come with theoretical guarantees, and we demonstrate their effectiveness on several real-world datasets. Our problem setting is also similar to multi-armed bandits in that the number of positive instances found under a budget can be seen as the cumulative reward from pulling binary arms (without repetitions). We also discuss the theoretical advantages of applying Σ-optimality as the exploration term for bandits on graphs.
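
To make the setting concrete, the sketch below shows one way a greedy active search loop on a graph can combine an exploitation term (posterior mean of a Gaussian random field over the nodes) with a Σ-optimality-style exploration bonus. This is an illustrative approximation only, not the algorithm from the paper: the regularized-inverse-Laplacian prior, the exact form of the exploration term, and the trade-off parameter alpha are all assumptions made for the example.

```python
import numpy as np

def sigma_optimality_active_search(W, label_oracle, budget, noise=1e-2, alpha=1.0):
    """Illustrative sketch of active search on a graph (not the paper's exact method).

    W            : (n, n) symmetric similarity (adjacency) matrix
    label_oracle : callable mapping a node index to its 0/1 relevance label
    budget       : number of nodes we are allowed to query
    """
    n = W.shape[0]
    # Gaussian-random-field prior over node values: covariance taken as a
    # regularized inverse of the graph Laplacian (an assumed modeling choice).
    L = np.diag(W.sum(axis=1)) - W
    C = np.linalg.inv(L + noise * np.eye(n))   # prior covariance
    mu = np.zeros(n)                           # prior mean
    queried, found = [], 0

    for _ in range(budget):
        # Sigma-optimality-style exploration term: favors nodes whose
        # observation would shrink the summed posterior variance over the graph.
        explore = C.sum(axis=0) / np.sqrt(np.diag(C) + noise)
        score = mu + alpha * explore           # exploit + explore
        score[queried] = -np.inf               # arms are pulled without repetition
        v = int(np.argmax(score))

        y = label_oracle(v)                    # user feedback on node v
        found += y
        queried.append(v)

        # Standard rank-one Gaussian posterior update after observing node v.
        c_v = C[:, v].copy()
        denom = C[v, v] + noise
        mu = mu + (y - mu[v]) / denom * c_v
        C = C - np.outer(c_v, c_v) / denom

    return queried, found
```

In this bandit view, `found` is the cumulative reward (number of relevant nodes discovered under the budget), and the exploration bonus plays the role that Σ-optimality plays in the paper's analysis.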

BibTeX

@conference{Ma-2015-119777,
author = {Y. Ma and T. Huang and J. Schneider},
title = {Active Search and Bandits on Graphs using Sigma-Optimality},
booktitle = {Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence (UAI '15)},
year = {2015},
month = {July},
pages = {542--551},
}