Co-inference for Multi-modal Scene Analysis

Conference Paper, Proceedings of the European Conference on Computer Vision (ECCV), pp. 668-681, October 2012

Abstract

We address the problem of understanding scenes from multiple sources of sensor data (e.g., a camera and a laser scanner) in the case where there is no one-to-one correspondence across modalities (e.g., pixels and 3-D points). This is an important scenario that frequently arises in practice not only when two different types of sensors are used, but also when the sensors are not co-located and have different sampling rates. Previous work has addressed this problem by restricting interpretation to a single representation in one of the domains, with augmented features that attempt to encode the information from the other modalities. Instead, we propose to analyze all modalities simultaneously while propagating information across domains during the inference procedure. In addition to the immediate benefit of generating a complete interpretation in all of the modalities, we demonstrate that this co-inference approach also improves performance over the canonical approach.
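To make the idea of propagating information across modalities during inference concrete, below is a minimal sketch of one plausible realization: two per-modality classifiers (image pixels and 3-D points) are run in alternating rounds, and in each round every modality's features are augmented with the other modality's current label beliefs, transferred through a pixel-to-point association. All names (co_infer, nearest_neighbor_links, the round count) and the specific classifiers are illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical illustration of cross-modal co-inference; not the paper's algorithm.
# Assumes every class appears in both sets of training labels, and that the
# pixel/point association arrays have already been computed (e.g., by projecting
# 3-D points into the image).
import numpy as np
from sklearn.linear_model import LogisticRegression


def nearest_neighbor_links(points_2d, pixels_2d):
    """For each projected 3-D point, return the index of the nearest pixel.
    Brute-force distances for clarity; a KD-tree would be used in practice."""
    dists = np.linalg.norm(points_2d[:, None, :] - pixels_2d[None, :, :], axis=2)
    return dists.argmin(axis=1)


def co_infer(img_feats, img_labels, pc_feats, pc_labels,
             links_img_to_pc, links_pc_to_img, n_rounds=3, n_classes=4):
    """Alternating inference rounds: each modality's classifier sees its own
    features concatenated with label beliefs propagated from the other modality."""
    img_belief = np.full((len(img_feats), n_classes), 1.0 / n_classes)
    pc_belief = np.full((len(pc_feats), n_classes), 1.0 / n_classes)
    for _ in range(n_rounds):
        # Image side: append beliefs of the 3-D point associated with each pixel.
        img_aug = np.hstack([img_feats, pc_belief[links_img_to_pc]])
        clf_img = LogisticRegression(max_iter=200).fit(img_aug, img_labels)
        img_belief = clf_img.predict_proba(img_aug)

        # Point-cloud side: append beliefs of the pixel associated with each point.
        pc_aug = np.hstack([pc_feats, img_belief[links_pc_to_img]])
        clf_pc = LogisticRegression(max_iter=200).fit(pc_aug, pc_labels)
        pc_belief = clf_pc.predict_proba(pc_aug)
    return img_belief, pc_belief
```

Passing predicted label distributions rather than raw features across the association links is one simple way to realize the abstract's notion of simultaneous interpretation in both domains, since each modality ends the final round with its own complete labeling.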

BibTeX

@conference{Munoz-2012-7594,
author = {Daniel Munoz and J. Andrew (Drew) Bagnell and Martial Hebert},
title = {Co-inference for Multi-modal Scene Analysis},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2012},
month = {October},
pages = {668-681},
}