Learning for Perception and Strategy: Adaptive Omnidirectional Stereo Vision and Tactical Reinforcement Learning

MSR Thesis Defense

Connor Pulling
MSR Student, Robotics Institute, Carnegie Mellon University
Monday, August 5
12:15 pm to 1:15 pm
Newell-Simon Hall 4305
Abstract:
Multi-view stereo omnidirectional distance estimation usually requires building a cost volume over many hypothetical distance candidates. Building this cost volume is computationally heavy given the limited resources of a mobile robot. We propose a new geometry-informed method for selecting distance candidates that enables the use of a very small number of candidates and reduces the computational cost. We demonstrate the use of geometry-informed candidates in a set of model variants. We also find that, by adjusting the candidates at deployment time, geometry-informed candidates improve a pre-trained model's accuracy when the camera extrinsics or the number of cameras change. Without any re-training or fine-tuning, our models outperform models trained with evenly distributed distance candidates. We additionally release hardware-accelerated versions of the models together with a new dedicated large-scale dataset. The project page, code, and dataset can be found at https://theairlab.org/gicandidates/.
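
As background, the following is a minimal NumPy sketch of the baseline setup the abstract refers to: a cost volume built over a list of hypothetical distance candidates, where compute scales linearly with the number of candidates. The function names, the soft-argmax readout, and the warp_fn argument are illustrative assumptions for this sketch, not the released implementation.

import numpy as np

def uniform_inverse_distance_candidates(d_min, d_max, n):
    # Baseline: n candidates evenly spaced in inverse distance.
    return 1.0 / np.linspace(1.0 / d_max, 1.0 / d_min, n)

def build_cost_volume(ref_feat, src_feats, candidates, warp_fn):
    # Cost volume of shape (n_candidates, H, W): for each hypothetical
    # distance, warp every source view onto the reference view and
    # accumulate a correlation cost. Runtime and memory grow linearly
    # with the number of candidates, which is why a small, well-chosen
    # candidate set saves compute on a mobile robot.
    h, w, _ = ref_feat.shape
    volume = np.zeros((len(candidates), h, w), dtype=ref_feat.dtype)
    for i, d in enumerate(candidates):
        for src in src_feats:
            warped = warp_fn(src, d)                         # (H, W, C)
            volume[i] += np.sum(ref_feat * warped, axis=-1)  # dot-product cost
    return volume

def estimate_distance(volume, candidates):
    # Soft-argmax over candidates, a common readout in learned stereo.
    logits = volume - volume.max(axis=0, keepdims=True)
    weights = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    return np.tensordot(np.asarray(candidates), weights, axes=1)  # (H, W)

In this framing, replacing the uniform candidate generator with a geometry-aware selection rule, and recomputing the small candidate list whenever the rig's extrinsics or camera count changes, is the part of the pipeline the thesis addresses.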
Additionally, the field of reinforcement learning (RL) has transformed strategic gameplay, enabling AI agents to achieve superhuman performance in games such as chess, Go, and StarCraft. These advances underscore the potential of RL for complex, long-horizon planning tasks against intelligent adversaries with a large search space of potential winning strategies. This project introduces a new competitive multi-phasic strategy game with partial observability and specialized units, and demonstrates the use of RL to achieve winning performance in it. The project also explores the dynamics of this game, how certain mechanics lead to different dominant strategies, and how to properly incentivize RL agents to learn winning strategies in this environment.

Committee:
Sebastian Scherer (advisor)
Jeff Schneider
Cherie Ho