Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning
Abstract
Humans are able to understand and perform complex tasks by strategically structuring them into incremental steps or sub-goals. For a robot attempting to learn to perform a sequential task with critical subgoal states, these subgoal states can provide a natural opportunity for interaction with a human expert. This paper analyzes the benefit of incorporating a notion of subgoals into Inverse Reinforcement Learning (IRL) with a Human-In-The-Loop (HITL) framework. The learning process is interactive, with a human expert first providing input in the form of full demonstrations along with some subgoal states. These subgoal states define a set of sub-tasks for the learning agent to complete in order to achieve the final goal. The learning agent queries for partial demonstrations corresponding to a sub-task only when it struggles with that individual sub-task. The proposed Human Interactive IRL (HI-IRL) framework is evaluated on several discrete path-planning tasks. We demonstrate that subgoal-based interactive structuring of the learning task results in significantly more efficient learning, requiring only a fraction of the demonstration data needed for learning the underlying reward function with a baseline IRL model.
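The interaction protocol described above can be sketched as a simple loop: the expert's full demonstration is split at the given subgoal states into sub-tasks, and the learner queries the expert for a partial demonstration only for the sub-tasks it fails. The sketch below is illustrative only; the function and variable names (`split_into_subtasks`, `hi_irl`, `learner_solves`) are assumptions, not the authors' implementation.

```python
def split_into_subtasks(demonstration, subgoal_states):
    """Split a full expert demonstration into segments, one per sub-task,
    where each segment ends at a subgoal state (or the final goal)."""
    subtasks, segment = [], []
    for state in demonstration:
        segment.append(state)
        if state in subgoal_states:
            subtasks.append(segment)
            segment = [state]  # the next sub-task starts at this subgoal
    if len(segment) > 1:
        subtasks.append(segment)  # final segment ends at the goal
    return subtasks


def hi_irl_queries(demonstration, subgoal_states, learner_solves):
    """Interactive loop: the learner attempts each sub-task and queries the
    expert for a partial demonstration only when it fails. Returns the
    number of expert queries issued."""
    queries = 0
    for segment in split_into_subtasks(demonstration, subgoal_states):
        start, goal = segment[0], segment[-1]
        if not learner_solves(start, goal):
            queries += 1  # expert would supply the partial demo `segment`
    return queries


# Toy example: a 10-state path with subgoals at states 3 and 7, where the
# learner can already solve only the first sub-task.
demo = list(range(10))
print(split_into_subtasks(demo, {3, 7}))  # [[0,1,2,3], [3,4,5,6,7], [7,8,9]]
print(hi_irl_queries(demo, {3, 7}, lambda s, g: g == 3))  # 2
```

The point of the sketch is the data-efficiency claim: only failed sub-tasks trigger a query, so the expert supplies partial demonstrations rather than repeated full ones.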
BibTeX
@conference{Pan-2018-109839,
author = {Xinlei Pan and Eshed Ohn-Bar and Nicholas Rhinehart and Yan Xu and Yilin Shen and Kris M. Kitani},
title = {Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning},
booktitle = {Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '18)},
year = {2018},
month = {July},
pages = {1380 - 1387},
}