Carnegie Mellon University
2:30 pm to 3:30 pm
Gates Hillman Center 4405
Abstract:
Motion planning has achieved great success in many robotic applications, but it still struggles in the real world, where uncertainty abounds. For example, manipulation involves interaction with unstructured and stochastic environments, which results in motion uncertainty. Perception, which provides the robot's understanding of the environment, is also imperfect, which in turn leads to sensing uncertainty. Planning robust motion under such uncertainty, or belief space planning for short, is a crucial capability for a robot to function properly in the real world. This problem can be formulated in a principled form as a Partially Observable Markov Decision Process (POMDP). However, solving a POMDP is often intractable due to the curse of dimensionality and the curse of history, i.e., complexity that grows exponentially with the number of states and with the planning horizon.
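For readers unfamiliar with the formulation, the following is a minimal sketch of a discrete POMDP belief update in Python, assuming small finite state, action, and observation sets; the dictionary-based model representation and the function name belief_update are illustrative only and are not part of the thesis work.

    # Minimal sketch of a discrete POMDP belief update (illustration only).
    # T[s][a][s'] = P(s' | s, a), O[s'][a][o] = P(o | a, s'),
    # belief[s] = b(s); all represented as plain dictionaries for clarity.

    def belief_update(belief, action, observation, T, O):
        """Bayes filter: b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
        new_belief = {}
        for s_next in T:  # iterate over all states as candidate successors
            pred = sum(T[s][action].get(s_next, 0.0) * belief[s] for s in belief)
            new_belief[s_next] = O[s_next][action].get(observation, 0.0) * pred
        norm = sum(new_belief.values())
        if norm == 0.0:
            raise ValueError("observation has zero probability under this belief")
        return {s: p / norm for s, p in new_belief.items()}

Because a belief is a distribution over all states and every action-observation pair spawns a new belief, the tree of reachable beliefs grows exponentially with the planning horizon, which is exactly the curse of history mentioned above.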
In this work, we propose a search-based robust motion planning framework under motion and sensing uncertainty. The key contributions are as follows. First, we introduce a novel belief space planning algorithm that leverages multiple heuristics to guide the forward search. The planner uses multiple heuristics simultaneously to explore the belief space efficiently, which allows us to employ any informative domain-specific knowledge without depending entirely on any single heuristic. It is particularly effective when solving Goal POMDPs with infinite horizons in complex environments. Second, we exploit a belief graph construction technique that utilizes local controllers. In continuous belief space, two independently evolved belief states rarely coincide exactly, so they rarely form a single node on the belief graph. However, a local feedback controller can drive them to a common belief and merge them into one node regardless of their evolution histories, which effectively alleviates the curse of history. In addition, we suggest interesting extensions of this framework: Belief Experience-Graphs for bootstrapping with previous experiences or demonstrations, and an online-offline POMDP solver combination for enhanced scalability.
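As a simplified illustration of the multi-heuristic forward search idea (a sketch, not the thesis implementation), the Python snippet below keeps one priority queue per heuristic and expands them in round-robin fashion over a shared search graph; the expand() successor function, the goal_test predicate, the heuristic list h_list, and the weight w are hypothetical placeholders.

    # Sketch of multi-heuristic best-first search with round-robin queues.
    import heapq
    import itertools

    def multi_heuristic_search(start, goal_test, expand, h_list, w=2.0):
        g = {start: 0.0}                       # best cost-to-come found so far
        parent = {start: None}
        counter = itertools.count()            # tie-breaker for heap entries
        queues = [[(w * h(start), next(counter), start)] for h in h_list]
        closed = set()
        while any(queues):
            for q in queues:                   # expand from each queue in turn
                if not q:
                    continue
                _, _, node = heapq.heappop(q)
                if node in closed:
                    continue
                closed.add(node)
                if goal_test(node):
                    return reconstruct_path(parent, node)
                for succ, cost in expand(node):            # successors in belief space
                    if g[node] + cost < g.get(succ, float("inf")):
                        g[succ] = g[node] + cost
                        parent[succ] = node
                        for q2, h2 in zip(queues, h_list): # share g-values across queues
                            heapq.heappush(q2, (g[succ] + w * h2(succ), next(counter), succ))
        return None

    def reconstruct_path(parent, node):
        path = []
        while node is not None:
            path.append(node)
            node = parent[node]
        return list(reversed(path))

In the actual framework, the search operates over belief nodes produced by the local-controller-based graph construction; the snippet only conveys how several heuristics can guide one forward search simultaneously.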
We validate the proposed framework through simulation and robot experiments. One of the experiments is parts assembly with a PR2. Imperfect perception in the real world introduces uncertainty in the poses of the objects, but our approach produces a robust motion plan that eliminates the relative pose uncertainty so that assembly succeeds. As an example of a larger POMDP problem, we apply the online replanning algorithm to a rover navigation problem in continuous space and show promising results thanks to the online-offline combination approach. We also propose to extend this framework to a highly complex mobile manipulation problem in the real world.
Thesis Committee Members:
Maxim Likhachev, Chair
Matthew Mason
Oliver Kroemer
Ali-akbar Agha-mohammadi, JPL/Caltech