Deep Interpretable Models of Theory of Mind

Ini Oguntola, Dana Hughes, and Katia Sycara
Conference Paper, Proceedings of the 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN '21), pp. 657-664, August 2021

Abstract

When developing AI systems that interact with humans, it is essential to design both a system that can understand humans and a system that humans can understand. Most deep-network-based agent-modeling approaches are 1) not interpretable and 2) only model external behavior, ignoring internal mental states, which potentially limits their capability for assistance, interventions, discovering false beliefs, etc. To this end, we develop an interpretable modular neural framework for modeling the intentions of other observed entities. We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft, and show that incorporating interpretability can significantly increase predictive performance under the right conditions.
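
As a rough, hypothetical illustration of the general idea of an interpretable modular agent model (not the authors' implementation, which is described in the full paper), the PyTorch sketch below routes observations through an explicit intention bottleneck that a separate head consumes to predict the observed agent's actions. The class name, layer sizes, and the choice of a softmax intention layer are illustrative assumptions.

# Minimal sketch (illustrative only): observations -> interpretable
# intention distribution -> action prediction, as two separate modules.
import torch
import torch.nn as nn

class InterpretableToMModel(nn.Module):
    def __init__(self, obs_dim: int, num_intentions: int, num_actions: int, hidden: int = 128):
        super().__init__()
        # Module 1: infer a distribution over hypothesized intentions.
        self.intention_encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_intentions),
        )
        # Module 2: predict behavior from the interpretable bottleneck alone.
        self.action_head = nn.Sequential(
            nn.Linear(num_intentions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, obs: torch.Tensor):
        intention_logits = self.intention_encoder(obs)
        intentions = torch.softmax(intention_logits, dim=-1)  # inspectable mental-state estimate
        action_logits = self.action_head(intentions)
        return intentions, action_logits

if __name__ == "__main__":
    model = InterpretableToMModel(obs_dim=32, num_intentions=4, num_actions=6)
    obs = torch.randn(8, 32)
    intentions, action_logits = model(obs)
    print(intentions.shape, action_logits.shape)  # (8, 4), (8, 6)

Because the action head sees only the intention distribution, that bottleneck can be inspected or intervened on directly; this is the usual motivation for structuring an agent model around explicit mental-state variables rather than an opaque latent vector.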

BibTeX

@conference{Oguntola-2021-129459,
author = {Ini Oguntola and Dana Hughes and Katia Sycara},
title = {Deep Interpretable Models of Theory of Mind},
booktitle = {Proceedings of the 30th IEEE International Conference on Robot \& Human Interactive Communication (RO-MAN '21)},
year = {2021},
month = {August},
pages = {657--664},
}