Title: Incorporating Semantic Structure in SLAM
Abstract:
For robots to understand the environments they interact with, a combination of geometric and semantic information is essential. In this talk, I propose a fast and scalable Simultaneous Localization and Mapping (SLAM) system that represents indoor scenes as a graph of semantic objects. Leveraging the observation that man-made environments are structured and populated with recognizable objects, we show that combining compositional rendering with a sparse volumetric object graph as the map representation yields a SLAM system suitable for drift-free, large-scale indoor reconstruction. While object-based SLAM has been proposed in the past, we remove the need for prior 3D object models and improve online performance. We also propose a semantically assisted data association method that produces unambiguous and persistent object landmarks. We deliver an online implementation that runs at roughly 4-5 Hz on a single commodity graphics card, and we provide a comprehensive evaluation against state-of-the-art baselines.
Committee:
Prof. Michael Kaess (advisor)
Prof. Katerina Fragkiadaki
Chaoyang Wang
Location (Zoom): https://cmu.zoom.us/j/98181191759?pwd=azQ0eFNCQ2xjcEw2Z1Yzc3FFNTJPZz09 Meeting ID: 981 8119 1759 Passcode: 935287