Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach

Chris Atkeson and Jun Morimoto
Conference Paper, Proceedings of (NeurIPS) Neural Information Processing Systems, pp. 1643 - 1650, December, 2002

Abstract

A longstanding goal of reinforcement learning is to develop nonparametric representations of policies and value functions that support rapid learning without suffering from interference or the curse of dimensionality. We have developed a trajectory-based approach, in which policies and value functions are represented nonparametrically along trajectories. These trajectories, policies, and value functions are updated as the value function becomes more accurate or as a model of the task is updated. We have applied this approach to periodic tasks such as hopping and walking, which required handling discount factors and discontinuities in the task dynamics, and using function approximation to represent value functions at discontinuities. We also describe extensions of the approach to make the policies more robust to modeling error and sensor noise.
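To make the core idea from the abstract concrete, below is a minimal illustrative sketch, not the authors' implementation, of a nonparametric trajectory-based representation: value function and policy information is stored at points along trajectories, and a query state is answered by locally linear extrapolation from the nearest stored point. The locally linear forms V(x) ≈ V_i + g_i·(x − x_i) and u(x) ≈ u_i + K_i(x − x_i), and all names below, are assumptions typical of trajectory-based dynamic programming rather than details drawn from the paper.

import numpy as np

class TrajectoryStore:
    """Stores value and policy information at points along trajectories.

    This is a hypothetical sketch: each stored point carries a value
    estimate, a local value gradient, a nominal action, and a local
    feedback gain matrix. Queries use the nearest stored point.
    """

    def __init__(self):
        self.states = []   # x_i: states visited along stored trajectories
        self.values = []   # V_i: value estimates at those states
        self.grads = []    # g_i: local value gradients dV/dx at x_i
        self.actions = []  # u_i: nominal actions at x_i
        self.gains = []    # K_i: local feedback gain matrices at x_i

    def add_point(self, x, V, g, u, K):
        self.states.append(np.asarray(x, dtype=float))
        self.values.append(float(V))
        self.grads.append(np.asarray(g, dtype=float))
        self.actions.append(np.asarray(u, dtype=float))
        self.gains.append(np.asarray(K, dtype=float))

    def _nearest(self, x):
        # Nearest-neighbor lookup; a k-d tree would be used at scale.
        dists = [np.linalg.norm(x - s) for s in self.states]
        return int(np.argmin(dists))

    def value(self, x):
        # First-order (locally linear) extrapolation of the value function
        # from the nearest stored trajectory point.
        x = np.asarray(x, dtype=float)
        i = self._nearest(x)
        return self.values[i] + self.grads[i] @ (x - self.states[i])

    def policy(self, x):
        # Locally linear feedback policy around the nearest stored point.
        x = np.asarray(x, dtype=float)
        i = self._nearest(x)
        return self.actions[i] + self.gains[i] @ (x - self.states[i])

In this scheme, re-optimizing a trajectory (for example, after a model update) simply replaces the stored points along it, which is one way such a representation can avoid the interference between updates that global parametric function approximators suffer from.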

BibTeX

@conference{Atkeson-2002-16864,
author = {Chris Atkeson and Jun Morimoto},
title = {Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach},
booktitle = {Proceedings of (NeurIPS) Neural Information Processing Systems},
year = {2002},
month = {December},
pages = {1643--1650},
}