Solving Uncertain Markov Decision Problems

Tech. Report CMU-RI-TR-01-25, Robotics Institute, Carnegie Mellon University, August 2001

Abstract

The authors consider the fundamental problem of finding good policies in uncertain models. It is demonstrated that although the general problem of finding the best policy with respect to the worst model is NP-hard, the special case of a convex uncertainty set is tractable. A stochastic dynamic game is proposed, and the security equilibrium solution of the game is shown to correspond to the value function under the worst model and the optimal controller. The authors demonstrate that the uncertain-model approach can be used to solve a class of nearly Markovian decision problems, providing lower bounds on performance in stochastic models with higher-order interactions. The framework establishes connections between, and generalizes, the paradigms of stochastic optimal, minimax, and $H_\infty$/robust control. Applications are considered, including robustness in reinforcement learning, planning in nearly Markovian decision processes, and bounding the error due to sensor discretization in noisy, continuous state spaces.
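The computation implied by the abstract is a robust Bellman backup, $V(s) = \max_a \big[ R(s,a) + \gamma \min_{P \in \mathcal{U}(s,a)} \sum_{s'} P(s' \mid s,a)\, V(s') \big]$, where the inner minimization over a convex uncertainty set $\mathcal{U}(s,a)$ is a tractable convex program. The sketch below is not taken from the report; it is a minimal illustration assuming an interval (hence convex) uncertainty set on each transition distribution, and all names and array shapes (`worst_case_distribution`, `robust_value_iteration`, `P_lo`, `P_hi`) are hypothetical.

```python
import numpy as np

def worst_case_distribution(p_lo, p_hi, values):
    """Inner minimization over an interval uncertainty set:
    find p with p_lo <= p <= p_hi and sum(p) == 1 minimizing p @ values.
    Assumes sum(p_lo) <= 1 <= sum(p_hi). Greedy: start at the lower
    bounds, then pour the remaining mass onto the cheapest successors."""
    p = p_lo.astype(float).copy()
    budget = 1.0 - p.sum()
    for s in np.argsort(values):          # lowest-value successors first
        add = min(p_hi[s] - p[s], budget)
        p[s] += add
        budget -= add
        if budget <= 1e-12:
            break
    return p

def robust_value_iteration(P_lo, P_hi, R, gamma=0.95, iters=1000, tol=1e-8):
    """Robust dynamic programming against interval transition bounds.
    P_lo, P_hi: (S, A, S) elementwise bounds on transition probabilities.
    R: (S, A) expected immediate reward.
    Returns the worst-case value function and a greedy robust policy."""
    S, A, _ = P_lo.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                # Adversary picks the worst feasible distribution over V.
                p = worst_case_distribution(P_lo[s, a], P_hi[s, a], V)
                Q[s, a] = R[s, a] + gamma * p.dot(V)
        V_new = Q.max(axis=1)             # controller maximizes
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=1)
```

For interval bounds, the inner linear program reduces to the greedy mass-shifting shown above: every successor starts at its lower bound, and the leftover probability is assigned to the lowest-value successors first, which is what makes the convex special case tractable while the general worst-model problem is NP-hard.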

BibTeX

@techreport{Bagnell-2001-8291,
author = {J. Andrew (Drew) Bagnell and Andrew Y. Ng and Jeff Schneider},
title = {Solving Uncertain Markov Decision Problems},
year = {2001},
month = {August},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-01-25},
keywords = {Uncertainty, MDPs, robust control, stochastic optimal control, dynamic programming, reinforcement learning, risk sensitive control},
}