Lyapunov Barrier Policy Optimization
Abstract
Deploying Reinforcement Learning (RL) agents in the real world requires that the agents satisfy safety constraints. Current RL agents explore the environment without considering these constraints, which can lead to damage to the hardware or even to other agents in the environment. We propose a new method, LBPO, that uses a Lyapunov-based barrier function to restrict the policy update to a safe set at each training iteration. Our method also allows the user to control the conservativeness of the agent with respect to the constraints in the environment.
LBPO significantly outperforms state-of-the-art baselines in terms of the number of constraint violations during training while remaining competitive in terms of performance. Further, our analysis reveals that baselines like CPO and SDDPG rely mostly on backtracking to ensure safety rather than on safe projection, which provides insight into why previous methods might not effectively limit the number of constraint violations.
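To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how a Lyapunov constraint can be enforced with a log-barrier penalty on the policy objective, as described above. The names `advantage`, `lyapunov_margin`, and `beta` are illustrative assumptions; in the paper, the conservativeness is controlled through a barrier coefficient of this kind.

```python
# Minimal sketch of a barrier-augmented policy loss, assuming PyTorch tensors.
import torch


def barrier_policy_loss(advantage, lyapunov_margin, beta=0.01):
    """Maximize reward advantage while a log-barrier keeps the update in the safe set.

    advantage:       (batch,) reward advantage of the candidate policy's actions.
    lyapunov_margin: (batch,) slack of the Lyapunov safety constraint at each state;
                     positive values mean the constraint is satisfied.
    beta:            barrier coefficient; larger values make the agent more conservative.
    """
    # The barrier grows without bound as the constraint slack approaches zero,
    # repelling gradient steps from the boundary of the safe set.
    barrier = -torch.log(torch.clamp(lyapunov_margin, min=1e-8))
    # Minimize negative advantage (i.e., maximize reward) plus the weighted barrier.
    return (-advantage + beta * barrier).mean()


# Hypothetical usage:
# loss = barrier_policy_loss(adv, margin, beta=0.05)
# loss.backward()
```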
BibTeX
@workshop{Sikchi-2020-125597,
author = {Harshit Sikchi and Wenxuan Zhou and David Held},
title = {Lyapunov Barrier Policy Optimization},
booktitle = {Proceedings of NeurIPS '20 Deep Reinforcement Learning Workshop},
year = {2020},
month = {November},
keywords = {Safety, Reinforcement Learning, Model-free Reinforcement Learning, Optimization},
}