Robust Task-based Control Policies for Physics-based Characters

Stelian Coros, Philippe Beaudoin, and Michiel van de Panne
Conference Paper, Proceedings of ACM Transactions on Graphics (TOG), December 2009

Abstract

We present a method for precomputing robust task-based control policies for physically simulated characters. This allows for characters that can demonstrate skill and purpose in completing a given task, such as walking to a target location, while physically interacting with the environment in significant ways. As input, the method assumes an abstract action vocabulary consisting of balance-aware, step-based controllers. A novel constrained state exploration phase is first used to define a character dynamics model as well as a finite volume of character states over which the control policy will be defined. An optimized control policy is then computed using reinforcement learning. The final policy spans the cross-product of the character state and task state, and is more robust than the controllers it is constructed from. We demonstrate real-time results for six locomotion-based tasks on three highly varied bipedal characters. We further provide a game-scenario demonstration.
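The pipeline described in the abstract, choosing among a discrete vocabulary of step-based controllers so as to optimize a task over the cross-product of character and task state, can be illustrated with a tabular value-iteration sketch. This is not the paper's implementation: the state count, action count, transition table `T`, and reward `R` below are toy placeholders standing in for the dynamics model and state volume that the exploration phase would provide.

```python
import numpy as np

# Hedged sketch of policy precomputation via tabular value iteration.
# States 0..N_STATES-1 stand in for discretized (character state x task
# state) cells; actions stand in for the abstract step-based controllers.
N_STATES, N_ACTIONS, GAMMA = 20, 3, 0.9
rng = np.random.default_rng(0)

# Toy deterministic dynamics model: T[s, a] gives the state reached by
# executing controller a for one step from state s (placeholder for the
# model built during constrained state exploration).
T = rng.integers(0, N_STATES, size=(N_STATES, N_ACTIONS))

# Toy task reward: penalize distance of the resulting state from a
# hypothetical goal state (here, state N_STATES - 1).
R = -np.abs(T - (N_STATES - 1)) / N_STATES

def value_iteration(T, R, gamma, tol=1e-8):
    """Return the optimal value function and greedy policy."""
    V = np.zeros(T.shape[0])
    while True:
        Q = R + gamma * V[T]              # Q[s, a] = r(s, a) + gamma * V(s')
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # policy[s] = best controller in s
        V = V_new

V, policy = value_iteration(T, R, GAMMA)
```

At runtime the precomputed `policy` table is simply indexed by the current discretized state to select the next step-based controller, which is what makes real-time execution cheap after the offline optimization.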

Notes
Project Page

BibTeX

@conference{Coros-2009-17078,
author = {Stelian Coros and Philippe Beaudoin and Michiel van de Panne},
title = {Robust Task-based Control Policies for Physics-based Characters},
booktitle = {Proceedings of ACM Transactions on Graphics (TOG)},
year = {2009},
month = {December},
}