Natural Language Explanations in Human-Collaborative Systems - Robotics Institute Carnegie Mellon University

Rosario Scalise, Stephanie Rosenthal, and Siddhartha Srinivasa
Conference Paper, Proceedings of the Companion of the ACM/IEEE International Conference on Human-Robot Interaction (HRI '17), pp. 377-378, March 2017

Abstract

As autonomous systems and people collaborate more, there is an increasing need for systems that are transparent and explicable. Especially in critical decision-making applications, such as autonomous vehicles or in-home robotic eldercare, it is important for robots to coherently articulate what decisions they are making as well as why they arrived at those decisions. While research has suggested the need for explanations for years [7], there is now increasing interest in explaining machine learning and autonomous behavior. There have been contributions toward making classification systems more intelligible (e.g., [6, 10]). In robotics, there has been work on enabling agents to explain task failures [3], to generate task plans that are optimized for explainability [13], and to explain why no plan can be found in the first place [2]. Additionally, there have been contributions toward enabling robots to summarize their experiences and generate natural language descriptions of them [9, 11]. These approaches emphasize allowing users to specify their preferred level of detail.

We argue that natural language communication is an appealing medium for articulating decision-making for several reasons. First, as robots are increasingly used by non-expert users rather than computer science experts, we should aim for the interaction modalities they are most comfortable with, including language. Second, natural language affords rich descriptions and explanations of often-complex robot behavior. However, the richness of natural language also makes it challenging to generate "good" explanations. We are interested in developing approaches to generating and evaluating natural language explanations of robot behavior in order to improve human-robot collaboration.

BibTeX

@conference{Scalise-2017-122671,
author = {Rosario Scalise and Stephanie Rosenthal and Siddhartha Srinivasa},
title = {Natural Language Explanations in Human-Collaborative Systems},
booktitle = {Proceedings of the Companion of the ACM/IEEE International Conference on Human-Robot Interaction (HRI '17)},
year = {2017},
month = {March},
pages = {377--378},
}