Enhanced Visual Scene Understanding through Human-Robot Dialog

M. Johnson-Roberson, J. Bohg, D. Kragic, G. Skantze, J. Gustafson, and R. Carlson
Conference Paper, Proceedings of AAAI '10 Fall Symposium: Dialog with Robots, pp. 143-144, November 2010

Abstract

In this paper, we propose a novel human-robot interaction framework for the purpose of rapid visual scene understanding. The task of the robot is to correctly enumerate how many separate objects there are in the scene and to describe them in terms of their attributes. Our approach builds on top of a state-of-the-art 3D segmentation method that segments stereo-reconstructed point clouds into object hypotheses, and combines it with a natural dialog system. By putting a 'human in the loop', the robot gains knowledge about ambiguous situations beyond its own resolution. Specifically, we introduce an entropy-based system to spot the poorest object hypotheses and query the user for arbitration. Based on the information obtained from the human-robot dialog, the scene segmentation can be re-seeded and thereby improved. We present experimental results on real data that show improved segmentation performance compared to segmentation without interaction.
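
The entropy-based arbitration step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes each object hypothesis comes with per-point label probabilities produced by the segmentation, and the function and variable names (`hypothesis_entropy`, `pick_hypothesis_to_query`) are hypothetical. The hypothesis with the highest mean entropy is the most ambiguous one and would be the candidate for a clarification query to the user.

```python
import numpy as np

def hypothesis_entropy(point_label_probs):
    """Mean Shannon entropy (nats) of the per-point label distributions
    belonging to one object hypothesis."""
    p = np.clip(point_label_probs, 1e-12, 1.0)            # avoid log(0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

def pick_hypothesis_to_query(hypotheses):
    """Return the index of the most ambiguous hypothesis, i.e. the one
    the robot would ask the user to arbitrate, plus all entropies."""
    entropies = [hypothesis_entropy(h) for h in hypotheses]
    return int(np.argmax(entropies)), entropies

if __name__ == "__main__":
    # Toy example: one confidently labeled hypothesis, one nearly uniform (ambiguous).
    confident = np.array([[0.95, 0.05], [0.90, 0.10]])
    ambiguous = np.array([[0.55, 0.45], [0.48, 0.52]])
    idx, ents = pick_hypothesis_to_query([confident, ambiguous])
    print(f"query hypothesis {idx}; entropies = {ents}")
```

In this sketch the ambiguous hypothesis is selected; in the paper's framework, the user's answer would then be used to re-seed the 3D segmentation.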

BibTeX

@conference{Johnson-Roberson-2010-130243,
author = {M. Johnson-Roberson and J. Bohg and D. Kragic and G. Skantze and J. Gustafson and R. Carlson},
title = {Enhanced Visual Scene Understanding through Human-Robot Dialog},
booktitle = {Proceedings of AAAI '10 Fall Symposium: Dialog with Robots},
year = {2010},
month = {November},
pages = {143--144},
}