Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data

Xuehan Xiong, Antonio Adan Oliver, Burcu Akinci, and Daniel Huber
Journal Article, Automation in Construction, Vol. 31, pp. 325-337, May 2013

Abstract

In the Architecture, Engineering, and Construction (AEC) domain, semantically rich 3D information models are increasingly used throughout a facility's life cycle for diverse applications, such as planning renovations, analyzing space usage, and managing building maintenance. These models, known as building information models (BIMs), are often constructed using dense, three-dimensional (3D) point measurements obtained from laser scanners. Laser scanners can rapidly capture the "as-is" conditions of a facility, which may differ significantly from the design drawings. Currently, the conversion from laser scan data to BIM is primarily a manual process, which is labor-intensive and error-prone. This paper presents a method to automatically convert the raw 3D point data from a laser scanner positioned at multiple locations throughout a facility into a compact, semantically rich information model. Our algorithm identifies and models the main visible structural components of an indoor environment (walls, floors, ceilings, windows, and doorways) despite the significant clutter and occlusion that occur frequently in natural indoor environments. The method begins by extracting planar patches from a voxelized version of the input point cloud. The algorithm learns the distinguishing features of different surface types and the contextual relationships between them, and uses this knowledge to automatically label patches as walls, ceilings, or floors. Then, we perform a detailed analysis of the recognized surfaces to locate openings, such as windows and doorways. This process uses visibility reasoning to fuse measurements from different scan locations and to identify occluded regions and holes in the surface. Next, a learning algorithm estimates the shape of window and doorway openings even when they are partially occluded. Finally, occluded surface regions are filled in using a 3D inpainting algorithm. We evaluated the method on a large, highly cluttered data set of a building with forty separate rooms.
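
To make the first stage of such a pipeline concrete, the sketch below illustrates one simple way to voxelize a point cloud, fit a local plane to each occupied voxel via PCA, and assign a coarse floor/ceiling/wall label from the estimated normal and height. This is an illustrative sketch only, not the authors' implementation; the function names, thresholds, and the synthetic test data are assumptions made for this example.

import numpy as np

# Minimal illustrative sketch (not the authors' implementation): voxelize a
# point cloud, fit a plane to each occupied voxel via PCA, and assign a coarse
# semantic label (floor / ceiling / wall candidate) from the estimated surface
# normal and height. Names and thresholds are assumptions for illustration.

def voxelize(points, voxel_size=0.1):
    """Group 3D points into a dict keyed by integer voxel index."""
    keys = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for key, p in zip(map(tuple, keys), points):
        voxels.setdefault(key, []).append(p)
    return {k: np.asarray(v) for k, v in voxels.items()}

def fit_plane_normal(pts):
    """Estimate a local surface normal with PCA: the direction of least
    variance of the centered points."""
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def label_voxel(normal, centroid, floor_z, ceiling_z, z_tol=0.3):
    """Coarse context rule: horizontal surfaces near the extremes of the room
    height are floor/ceiling; near-vertical surfaces are wall candidates."""
    vertical = abs(normal[2])            # |cos| of the angle with the z axis
    if vertical > 0.9:
        if centroid[2] < floor_z + z_tol:
            return "floor"
        if centroid[2] > ceiling_z - z_tol:
            return "ceiling"
        return "horizontal-other"        # e.g., table tops (clutter)
    if vertical < 0.1:
        return "wall-candidate"
    return "clutter"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic room: a floor patch at z = 0 and a wall patch at x = 0.
    floor = np.c_[rng.uniform(0, 4, 2000), rng.uniform(0, 4, 2000), np.zeros(2000)]
    wall = np.c_[np.zeros(2000), rng.uniform(0, 4, 2000), rng.uniform(0, 3, 2000)]
    cloud = np.vstack([floor, wall])

    voxels = voxelize(cloud, voxel_size=0.25)
    labels = {}
    for key, pts in voxels.items():
        if len(pts) < 5:                 # skip sparsely populated voxels
            continue
        normal = fit_plane_normal(pts)
        labels[key] = label_voxel(normal, pts.mean(axis=0), floor_z=0.0, ceiling_z=3.0)
    print({lab: sum(v == lab for v in labels.values()) for lab in set(labels.values())})

In the paper, the per-patch labeling is learned from features and contextual relationships rather than hand-set thresholds, and openings are recovered with visibility reasoning and inpainting; the sketch only shows the flavor of the voxel-and-plane front end.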

Notes
DOI: 10.1016/j.autcon.2012.10.006

BibTeX

@article{Xiong-2013-7692,
author = {Xuehan Xiong and Antonio Adan Oliver and Burcu Akinci and Daniel Huber},
title = {Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data},
journal = {Automation in Construction},
year = {2013},
month = {May},
volume = {31},
pages = {325 - 337},
keywords = {interior modeling, 3D modeling, scan to BIM, lidar, object recognition, wall analysis, opening detection},
}