Spatial Reasoning and Semantic Representations for Intelligent Multi-Robot Exploration and Navigation

PhD Thesis Proposal
Seungchan Kim, PhD Student, Robotics Institute, Carnegie Mellon University
Monday, December 2
3:30 pm to 5:00 pm
NSH 4305
Spatial Reasoning and Semantic Representations for Intelligent Multi-Robot Exploration and Navigation

Abstract:
Autonomous robot exploration is widely applied in areas such as search and rescue, environmental monitoring, and structural inspection. Multi-robot exploration has garnered significant attention in the robotics research community, as it enables faster task completion and greater coverage than a single robot can achieve. However, it presents unique challenges: behavior coordination is complex, communication constraints must be managed, and task allocation is non-trivial. To address these challenges, this thesis investigates multi-robot exploration through the lens of integrated perception and planning, focusing on how advanced perceptual reasoning and representations can help overcome these hurdles. Specifically, it examines how enhancing environmental understanding through spatial reasoning and semantic representations, integrating this understanding into planning, and sharing it across robots improves coordination, adaptability, and efficiency in exploration tasks.

The first half of this thesis focuses on spatial understanding, employing geometric analysis and pattern extraction to improve multi-robot exploration, particularly in indoor environments. The first completed work applies geometric cue extraction and space decomposition to multi-robot room-based exploration, enabling more sophisticated coordination among robots. The second completed work utilizes map prediction to guide exploration strategies in single-robot settings. Building on this, the first proposed work extends map prediction to multi-robot scenarios, introducing approaches that address communication constraints through predictive mapping.

The second half shifts to semantic representations, drawing on recent advances in foundation models to build vision-language-aligned representations for richer contextual understanding and more complex tasks, particularly in outdoor environments. The second proposed work explores heterogeneous multi-robot task allocation for zero-shot semantic navigation using predefined task prompts. Finally, this thesis proposes the online construction of general semantic scene representations, enabling robots to perform complex language-based reasoning and planning.

Thesis Committee Members:
Sebastian Scherer, Chair
Yonatan Bisk
Wennie Tabib
Graeme Best, University of Technology Sydney
Micah Corah, Colorado School of Mines
