Inverse Reinforcement Learning with Conditional Choice Probabilities
Abstract
We draw a connection to existing results in econometrics to describe an alternative formulation of inverse reinforcement learning (IRL). In particular, we describe an algorithm that solves the IRL problem using easy-to-compute estimates of the Conditional Choice Probability (CCP) vector, which is the expert's policy function integrated over factors the econometrician cannot observe. Using the language of structural econometrics, we reframe the optimal decision problem and introduce an alternative, CCP-based representation of differences in conditional value functions due to Hotz and Miller (1993). We further show how the use of CCPs reduces the computational cost of solving the IRL problem: under the CCP representation, one can avoid the repeated calls to the dynamic programming subroutine typical of IRL methods. Via extensive experiments on standard IRL benchmarks, we show that CCP-IRL outperforms state-of-the-art methods, with up to a 12x speedup on high-dimensional problems (10^6 states) and no loss in the quality of the recovered reward function.
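To make the CCP idea concrete, the following is a minimal sketch of the two steps the abstract alludes to, under simplifying assumptions not stated there: a tabular state space and i.i.d. Gumbel (Type I extreme value) unobserved shocks, in which case the Hotz-Miller inversion reduces to a log-ratio of choice probabilities, v(s,a) - v(s,a') = log p(a|s) - log p(a'|s). The function names and interfaces below are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def estimate_ccp(trajectories, n_states, n_actions, smoothing=1e-3):
    """Estimate the CCP vector p(a | s) from expert demonstrations by
    empirical frequency counts (with additive smoothing so the
    log-ratios below are well defined for unvisited pairs)."""
    counts = np.full((n_states, n_actions), smoothing)
    for traj in trajectories:           # traj: list of (state, action) pairs
        for s, a in traj:
            counts[s, a] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def value_differences(ccp, ref_action=0):
    """Hotz-Miller inversion under i.i.d. Gumbel shocks (an assumption
    made here for illustration): v(s,a) - v(s,a_ref) equals
    log p(a|s) - log p(a_ref|s), read off directly from the CCPs."""
    log_p = np.log(ccp)
    return log_p - log_p[:, [ref_action]]
```

Because the conditional-value differences come straight from the estimated CCPs, no dynamic-programming solve is needed at this step, which is the source of the speedup claimed above.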
BibTeX
@mastersthesis{Sharma-2018-105993,
author = {Mohit Sharma},
title = {Inverse Reinforcement Learning with Conditional Choice Probabilities},
year = {2018},
month = {May},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-18-23},
keywords = {Inverse Reinforcement Learning, IRL, RL, Imitation Learning, IL, CCP, Conditional Choice Probability, CCP-IRL, CCPIRL, Conditional Choice Probability Inverse Reinforcement Learning},
}