Transparency in Deep Reinforcement Learning Networks - Robotics Institute Carnegie Mellon University

MSR Thesis Defense

Ramitha Sundar, Robotics Institute, Carnegie Mellon University
Friday, July 27
8:00 am to 5:00 pm
Transparency in Deep Reinforcement Learning Networks

In recent years there has been growing interest in explainability for machine learning models in general and deep learning in particular. Deep learning based approaches have made tremendous progress in computer vision, reinforcement learning, and language-related domains, and are increasingly used in application areas such as medicine and finance. But before we fully adopt these models, it is important for us to understand the motivations behind network decisions.

In this work, we explore transparency in deep reinforcement learning networks. We focus on answering why a particular decision was taken by a value-based deep reinforcement learning agent, and on identifying attributes in the input space that positively or negatively influence its future actions in a human-interpretable manner. In particular, we discuss an approach called "object saliency" at length and demonstrate that it can be used as a simple and effective computational tool for this purpose. We compare and contrast it with existing saliency approaches using a quantitative measure, discuss results from a pilot human experiment studying the intuitiveness of object saliency, and show how object saliency can provide insights into differences in the value functions learned by different RL architectures or training approaches that are not highlighted by existing methods. Finally, we show that it is possible to develop rule-based textual descriptions of object saliency maps for easy interpretation by humans, which is difficult to do with existing approaches.
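The abstract does not spell out the computation, but an object-level saliency measure of the kind described can be sketched as a perturbation test: mask each object out of the observation, re-evaluate the agent's value network, and record how much the maximum Q-value changes. The names below (`q_network`, `object_masks`, `background`) are hypothetical placeholders for illustration, not the thesis implementation:

```python
import numpy as np

def object_saliency(q_network, frame, object_masks, background):
    """Perturbation-based object saliency sketch for a value-based agent.

    q_network   : callable mapping a frame to a vector of Q-values
    frame       : observation array
    object_masks: dict of {object_name: boolean mask over the frame}
    background  : array of the same shape as frame, used to erase objects

    A positive score means removing the object lowers the agent's best
    Q-value (the object supports its intended action); a negative score
    means the object detracts from it.
    """
    baseline_q = np.max(q_network(frame))
    saliencies = {}
    for name, mask in object_masks.items():
        # Replace the object's pixels with background and re-evaluate.
        perturbed = np.where(mask, background, frame)
        saliencies[name] = baseline_q - np.max(q_network(perturbed))
    return saliencies
```

A rule-based textual description, as mentioned above, could then be generated by thresholding these per-object scores (e.g. "object X strongly supports the chosen action").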


Committee:

Dr. Katia Sycara, Chair

Dr. Jean Hyaejin Oh

Wenhao Luo