Physical Querying with Multi-Modal Sensing
Abstract
We present Marvin, a system that can search for physical objects using a mobile or wearable device. It integrates HOG-based object recognition, SURF-based localization information, automatic speech recognition, and user feedback with a probabilistic model to recognize the "object of interest" with high accuracy at interactive speeds. Once the object of interest is recognized, the information that the user is querying, e.g. reviews, options, etc., is displayed on the user's mobile or wearable device. We tested this prototype in a real-world retail store during business hours, with varying degrees of background noise and clutter. We show that this multi-modal approach achieves superior recognition accuracy compared to using a vision system alone, especially in cluttered scenes where a vision system would be unable to distinguish which object is of interest to the user without additional input. The system scales computationally to large numbers of objects by focusing compute-intensive resources on the objects most likely to be of interest, as inferred from user speech and implicit localization information. We present the system architecture, the probabilistic model that integrates the multi-modal information, and empirical results showing the benefits of multi-modal integration.
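The abstract describes fusing vision, localization, speech, and user-feedback cues through a probabilistic model. Below is a minimal sketch of one way such a fusion could look, assuming a naive-Bayes-style combination of per-modality likelihoods over a small set of candidate objects; the object names, scores, and independence assumption are illustrative placeholders, not details taken from the paper.

```python
import numpy as np

# Hypothetical candidate objects; scores are illustrative, not from the paper.
objects = ["coffee_maker", "blender", "toaster"]

# Per-modality likelihoods P(evidence_m | object), assuming the modalities
# are conditionally independent given the object of interest.
vision_hog    = np.array([0.60, 0.25, 0.15])  # HOG-based detector scores
location_surf = np.array([0.50, 0.30, 0.20])  # SURF-based localization prior
speech_asr    = np.array([0.70, 0.20, 0.10])  # ASR keyword-match scores
prior         = np.array([1/3, 1/3, 1/3])     # uniform prior over objects

# Fusion: posterior proportional to prior times the product of modality likelihoods.
posterior = prior * vision_hog * location_surf * speech_asr
posterior /= posterior.sum()

for name, p in zip(objects, posterior):
    print(f"{name}: {p:.3f}")
```

In this toy setup, weak or ambiguous evidence from any single modality (e.g., a cluttered scene where several detector scores are similar) can be resolved by the other modalities, which is the intuition the abstract attributes to the multi-modal approach.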
BibTeX
@conference{Baek-2014-7840,
  author    = {Iljoo Baek and Taylor Stine and Denver Dash and Fanyi Xiao and Yaser Ajmal Sheikh and Yair Movshovitz-Attias and Mei Chen and Martial Hebert and Takeo Kanade},
  title     = {Physical Querying with Multi-Modal Sensing},
  booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '14)},
  year      = {2014},
  month     = {March},
  pages     = {183--190},
}