1:30 pm to 2:30 pm
NSH 1305
Autonomous exploration in rich 3D environments requires the construction and maintenance of a representation derived from accumulated 3D observations. Volumetric models, which are commonly employed to enable joint reasoning about occupied and free space, scale poorly with the size of the environment. Techniques employed to mitigate this scaling include hierarchical discretization, learning local data summarizations, and occupancy model function approximation. However, these approaches either impose an a priori environment discretization, which limits resolution adaptation, or else require input data sparsification or restrictive modeling assumptions to account for the computational expense of directly learning the volumetric occupancy model. This thesis proposes to overcome these limitations through the use of Gaussian Mixture Models (GMMs) to learn generative models of point cloud data. By learning the density function describing the distribution of points in a measurement, it becomes possible to reconstruct occupancy representations at arbitrary resolutions. Furthermore, the scaling issues associated with large environments can be circumvented through efficient local occupancy reconstruction. Specifically, numerical and approximate analytic techniques are presented for computing occupancy from GMMs, enabling point-wise occupancy reconstruction with low computational complexity. The proposed approach is evaluated in two complementary scenarios: first, as a method for generating arbitrary-resolution occupancy maps, compared against state-of-the-art continuous occupancy techniques, and second, as a compact back-end for multi-robot information-theoretic exploration in large-scale, communication-constrained environments. Results demonstrate the efficacy of the approach as a high-fidelity, low-bandwidth 3D point cloud representational framework.
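For intuition, the sketch below illustrates the core idea under stated assumptions: a GMM is fit to a synthetic point cloud with scikit-learn's GaussianMixture, and the learned density is queried on a grid whose resolution is chosen at reconstruction time. The component count, threshold, and synthetic data are illustrative assumptions; the thesis's actual occupancy computation uses numerical and approximate analytic integration rather than this density thresholding.

```python
# Illustrative sketch only (not the thesis's implementation): learn a
# generative GMM of a 3D point cloud and reconstruct a density field at an
# arbitrary, query-time resolution.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "point cloud": samples concentrated near two surfaces.
points = np.vstack([
    np.column_stack([rng.uniform(0, 4, 500),
                     rng.uniform(0, 4, 500),
                     rng.normal(0.0, 0.02, 500)]),
    np.column_stack([rng.normal(2.0, 0.02, 500),
                     rng.uniform(0, 4, 500),
                     rng.uniform(0, 2, 500)]),
])

# Fit a GMM to the measurement (component count is a hypothetical choice).
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(points)

# Reconstruct at an arbitrary resolution chosen at query time.
res = 0.1  # meters per cell; not fixed a priori
xs, ys, zs = np.arange(0, 4, res), np.arange(0, 4, res), np.arange(0, 2, res)
grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1).reshape(-1, 3)
log_density = gmm.score_samples(grid)  # log p(x) under the learned GMM

# Crude occupancy proxy: threshold the density (the thesis instead derives
# occupancy from the GMM via numerical/approximate analytic techniques).
occupied = (log_density > np.quantile(log_density, 0.9)) \
    .reshape(len(xs), len(ys), len(zs))

# The GMM parameters are far more compact than the raw cloud, which is the
# basis of the low-bandwidth, multi-robot use case.
n_params = gmm.weights_.size + gmm.means_.size + gmm.covariances_.size
print(f"{n_params} GMM parameters vs. {points.size} raw point coordinates")
```

The compactness printed at the end is what motivates the multi-robot, communication-constrained evaluation: only the mixture parameters need to be shared, and each robot can reconstruct occupancy locally at whatever resolution it requires.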
Committee:
Nathan Michael (Chair)
Artur W. Dubrawski
Michael Kaess
Wenhao Luo