Data-Driven Scene Understanding from 3D Models
Abstract
In this paper, we propose a data-driven approach to leverage repositories of 3D models for scene understanding. Our ability to relate what we see in an image to a large collection of 3D models allows us to transfer information from these models, creating a rich understanding of the scene. We develop a framework for auto-calibrating a camera, rendering 3D models from the viewpoint from which an image was taken, and computing a similarity measure between each 3D model and an input image. We demonstrate this data-driven approach in the context of geometry estimation and show the ability to find the identities and poses of objects in a scene. Additionally, we present a new dataset with annotated scene geometry. This data allows us to measure the performance of our algorithm in 3D, rather than in the image plane.
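
The pipeline summarized above (calibrate a camera, render candidate 3D models from the image's viewpoint, and score each rendering against the image) can be sketched in a few lines. The Python snippet below is only an illustration, not the paper's implementation: the point-splatting "renderer", the intersection-over-union similarity, the assumed inputs (a calibrated camera K, R, t and an object mask extracted from the image), and all function names (project_points, render_silhouette, rank_models) are simplifying assumptions made for this example.

# Illustrative sketch (not the paper's method): rank candidate 3D models by
# rendering each one with an estimated camera and comparing the rendered
# silhouette against a mask from the input image.
import numpy as np


def project_points(points_3d, K, R, t):
    """Project Nx3 world points into the image with a pinhole camera."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
    uv = K @ cam                              # camera -> image plane
    uv = uv[:2] / uv[2]                       # perspective divide
    return uv.T                               # Nx2 pixel coordinates


def render_silhouette(points_3d, K, R, t, shape):
    """Splat projected points into a binary mask (a crude stand-in for a renderer)."""
    mask = np.zeros(shape, dtype=bool)
    uv = np.round(project_points(points_3d, K, R, t)).astype(int)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < shape[1]) & \
            (uv[:, 1] >= 0) & (uv[:, 1] < shape[0])
    mask[uv[valid, 1], uv[valid, 0]] = True
    return mask


def similarity(rendered_mask, image_mask):
    """Intersection-over-union between the rendering and the image evidence."""
    inter = np.logical_and(rendered_mask, image_mask).sum()
    union = np.logical_or(rendered_mask, image_mask).sum()
    return inter / union if union else 0.0


def rank_models(models, K, R, t, image_mask):
    """Score every candidate 3D model and return them best-first."""
    scores = [(similarity(render_silhouette(pts, K, R, t, image_mask.shape),
                          image_mask), name)
              for name, pts in models.items()]
    return sorted(scores, reverse=True)


if __name__ == "__main__":
    # Toy example: a hypothetical calibrated camera and two random point-cloud "models".
    K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
    R, t = np.eye(3), np.array([0.0, 0.0, 4.0])
    rng = np.random.default_rng(0)
    models = {"chair": rng.normal(size=(500, 3)) * 0.5,
              "table": rng.normal(size=(500, 3)) * 1.5}
    image_mask = render_silhouette(models["chair"], K, R, t, (240, 320))
    print(rank_models(models, K, R, t, image_mask))

In practice the paper's similarity measure operates on richer cues than a silhouette overlap, but the ranking loop above captures the data-driven idea: every model in the repository is rendered into the image's viewpoint and compared against the observed scene.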
BibTeX
@conference{Satkin-2012-7573,
author = {Scott Satkin and Jason Lin and Martial Hebert},
title = {Data-Driven Scene Understanding from 3D Models},
booktitle = {Proceedings of British Machine Vision Conference (BMVC '12)},
year = {2012},
month = {September},
}