Many applications in computer graphics, computer vision, and robotics require accurate three-dimensional (3D) models of real-world objects. Current techniques for acquiring 3D models from reality require significant manual assistance or make assumptions about the scene characteristics or data collection procedure. The goal of this project is to fully automate the 3D modeling process without resorting to these restrictive assumptions. Given a set of unordered range images and no additional a priori information about the scene, our system automatically generates an accurate 3D reconstruction. Specifically, it is not necessary to know the relative pose between viewpoints or to indicate which views contain overlapping scene regions.
The automatic modeling system begins by registering all pairs of input views with no knowledge of the relative poses. The results are verified for consistency, but some incorrect matches may be locally undetectable and some correct matches may be missed. We then construct a consistent model from these potentially faulty matches using mixed continuous and discrete optimization algorithms and a global consistency criterion to eliminate incorrect, but locally consistent, matches. We demonstrate the utility of automatic modeling with an application called handheld modeling, in which a 3D model is automatically created from an object held in a person’s hand.
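The pipeline above (register all pairs, keep locally plausible matches, then enforce global consistency) can be sketched in simplified form. Everything here is illustrative rather than the paper's actual method: pairwise registration is stubbed with assumed overlap scores, and the mixed continuous/discrete optimization is approximated by a greedy maximum spanning tree over the graph of match qualities, which keeps the best matches that connect every view while discarding a lower-scoring spurious match.

```python
from itertools import combinations

def register_pair(view_a, view_b):
    """Stub for pairwise registration (e.g., surface matching + ICP).
    Returns a match-quality score in [0, 1]; 0 means no plausible match.
    A real system estimates this from the range data itself."""
    return KNOWN_OVERLAPS.get(frozenset((view_a, view_b)), 0.0)

def build_model_graph(views):
    """Register every pair of views; keep locally plausible matches."""
    edges = []
    for a, b in combinations(views, 2):
        score = register_pair(a, b)
        if score > 0.0:          # simplified local consistency check
            edges.append((score, a, b))
    return edges

def select_consistent_matches(views, edges):
    """Greedy maximum spanning tree (Kruskal) as a stand-in for the
    global discrete optimization: choose high-quality matches that
    connect all views without introducing conflicting cycles."""
    parent = {v: v for v in views}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path compression
            v = parent[v]
        return v
    chosen = []
    for score, a, b in sorted(edges, reverse=True):
        ra, rb = find(a), find(b)
        if ra != rb:                        # skip cycle-forming matches
            parent[ra] = rb
            chosen.append((a, b, score))
    return chosen

# Illustrative data (assumed, not from the paper): four range views;
# the v0-v3 pair is a spurious, lower-scoring match that the global
# step rejects even though it passed the local check.
KNOWN_OVERLAPS = {
    frozenset(("v0", "v1")): 0.9,
    frozenset(("v1", "v2")): 0.8,
    frozenset(("v2", "v3")): 0.85,
    frozenset(("v0", "v3")): 0.3,
}

views = ["v0", "v1", "v2", "v3"]
tree = select_consistent_matches(views, build_model_graph(views))
```

A spanning tree is only a proxy: the actual system also re-optimizes the continuous pose parameters and uses a global consistency criterion rather than raw match scores, but the sketch shows why a locally consistent match can still be rejected globally.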