Addressing reward bias in Adversarial Imitation Learning with neutral reward functions

Workshop Paper, NeurIPS '20 Workshop on Deep Reinforcement Learning, December 2020

Abstract

Generative Adversarial Imitation Learning (GAIL) suffers from a fundamental problem of reward bias stemming from the choice of reward function used in the algorithm. Different types of bias affect different types of environments, which are broadly divided into survival-based and task-based environments. We provide a theoretical sketch of why existing reward functions fail in imitation learning scenarios in task-based environments with multiple terminal states. We also propose a new reward function for GAIL that outperforms existing GAIL methods on task-based environments with single and multiple terminal states and effectively overcomes both survival and termination bias.
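To make the bias concrete, below is a minimal illustrative sketch of the discriminator-derived rewards commonly used in adversarial imitation learning; it is not the neutral reward function proposed in this paper. We take D(s, a) as the discriminator's estimated probability that a transition came from the expert (conventions vary across implementations), and the function names are ours.

import numpy as np

EPS = 1e-8  # numerical floor to keep log() finite

def survival_biased_reward(d):
    # r(s, a) = -log(1 - D): strictly positive, so longer episodes
    # accumulate more reward regardless of task progress (survival bias).
    d = np.clip(d, EPS, 1.0 - EPS)
    return -np.log(1.0 - d)

def termination_biased_reward(d):
    # r(s, a) = log(D): strictly negative, so the agent is implicitly
    # pushed to end episodes early (termination bias).
    d = np.clip(d, EPS, 1.0 - EPS)
    return np.log(d)

def airl_style_reward(d):
    # r(s, a) = log(D) - log(1 - D): positive when D > 0.5 and negative
    # when D < 0.5, so neither sign is baked in a priori, though the sign
    # still depends on the discriminator's current estimate.
    d = np.clip(d, EPS, 1.0 - EPS)
    return np.log(d) - np.log(1.0 - d)

# Quick check of the sign behaviour on a few discriminator outputs.
for d in (0.2, 0.5, 0.8):
    print(d, survival_biased_reward(d), termination_biased_reward(d),
          airl_style_reward(d))

Because the first two rewards have a fixed sign for every transition, they add a constant per-step incentive to prolong or to end episodes; in task-based environments with multiple terminal states, such a fixed-sign bias can dominate the imitation signal, which is the failure mode the paper analyzes.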

BibTeX

@workshop{Jena-2020-126864,
author = {Rohit Jena and Siddharth Agrawal and Katia Sycara},
title = {Addressing reward bias in Adversarial Imitation Learning with neutral reward functions},
booktitle = {Proceedings of NeurIPS '20 Workshop on Deep Reinforcement Learning},
year = {2020},
month = {December},
}