Learning discrete Bayesian models for autonomous agent navigation
Conference Paper, Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '99), pp. 137-143, November 1999
Abstract
Partially observable Markov decision processes (POMDPs) are a convenient representation for reasoning and planning in mobile robot applications. We investigate two algorithms for learning POMDPs from sequences of observation/action pairs by comparing their performance in fourteen synthetic worlds in conjunction with four planning algorithms. Experimental results suggest that the traditional Baum-Welch algorithm better learns the structure of worlds specifically designed to impede the agent, while a best-first model merging algorithm originally due to Stolcke and Omohundro (1993) performs better in more benign worlds, including a model of a typical real-world robot fetching task.
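The paper itself does not include code. As a rough illustration of the Baum-Welch approach it evaluates, below is a minimal NumPy sketch of one EM re-estimation step for an action-conditioned hidden Markov model (the transition/observation part of a discrete POMDP), learned from an observation/action sequence. The function name, array shapes, and smoothing constant are assumptions for this sketch, not the authors' implementation.

```python
# One Baum-Welch (EM) iteration for an action-conditioned HMM:
# transitions depend on the action taken, observations on the resulting state.
import numpy as np

def baum_welch_step(T, O, pi, actions, obs):
    """T: (A, S, S) with T[a, i, j] = P(s'=j | s=i, action=a)
    O: (S, Z)   with O[j, z]    = P(obs=z | s=j)
    pi: (S,)    initial state distribution
    actions: length-N int sequence of actions taken
    obs: length-(N+1) int sequence of observations (one per state visited)
    Returns re-estimated (T, O, pi)."""
    A, S, _ = T.shape
    N = len(actions)

    # Forward pass: alpha[t, j] proportional to P(o_0..o_t, s_t = j),
    # rescaled at each step to avoid numerical underflow.
    alpha = np.zeros((N + 1, S))
    alpha[0] = pi * O[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(N):
        alpha[t + 1] = (alpha[t] @ T[actions[t]]) * O[:, obs[t + 1]]
        alpha[t + 1] /= alpha[t + 1].sum()

    # Backward pass: beta[t, i] proportional to P(o_{t+1}..o_N | s_t = i).
    beta = np.ones((N + 1, S))
    for t in range(N - 1, -1, -1):
        beta[t] = T[actions[t]] @ (O[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()

    # E-step: posterior state occupancies gamma[t, i] = P(s_t = i | data).
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)

    # M-step: expected transition counts, accumulated per action.
    # The 1e-6 pseudocount (an assumption) keeps unseen entries nonzero.
    T_new = np.full_like(T, 1e-6)
    for t in range(N):
        xi = (alpha[t][:, None] * T[actions[t]]
              * (O[:, obs[t + 1]] * beta[t + 1])[None, :])
        T_new[actions[t]] += xi / xi.sum()
    T_new /= T_new.sum(axis=2, keepdims=True)

    # Expected observation counts.
    O_new = np.full_like(O, 1e-6)
    for t in range(N + 1):
        O_new[:, obs[t]] += gamma[t]
    O_new /= O_new.sum(axis=1, keepdims=True)

    return T_new, O_new, gamma[0]
```

In practice this step is iterated from a random or structured initial model until the data likelihood converges; the model merging alternative studied in the paper instead starts from a maximally specific model and greedily merges states.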
BibTeX
@conference{Nikovski-1999-15057,
  author = {Daniel Nikovski and Illah Nourbakhsh},
  title = {Learning discrete Bayesian models for autonomous agent navigation},
  booktitle = {Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '99)},
  year = {1999},
  month = {November},
  pages = {137--143},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.