Generalizing informed sampling for asymptotically-optimal sampling-based kinodynamic planning via Markov chain Monte Carlo
Abstract
Asymptotically-optimal motion planners such as RRT* have been shown to incrementally approximate the shortest path between start and goal states. Once an initial solution is found, their performance can be dramatically improved by restricting subsequent samples to regions of the state space that can potentially improve the current solution. When the motion-planning problem lies in a Euclidean space, this region X_inf, called the informed set, can be sampled directly. However, when planning with differential constraints in non-Euclidean state spaces, no analytic solution exists for sampling X_inf directly. State-of-the-art approaches to sampling X_inf in such domains, such as Hierarchical Rejection Sampling (HRS), may still be slow in high-dimensional state spaces. This may cause the planning algorithm to spend most of its time trying to produce samples in X_inf rather than exploring it. In this paper, we suggest an alternative approach to producing samples in the informed set X_inf for a wide range of settings. Our main insight is to recast this problem as one of sampling uniformly within the sub-level set of an implicit non-convex function. This recasting enables us to apply Monte Carlo sampling methods, used very effectively in the machine-learning and optimization communities, to solve our problem. We show, for a wide range of scenarios, that using our sampler can accelerate the convergence rate to high-quality solutions in high-dimensional problems.
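The core idea of the abstract can be illustrated with a minimal sketch: treating the informed set X_inf as the sub-level set {x : f(x) <= c_best} of a cost function f, and drawing approximately uniform samples from it with random-walk Metropolis MCMC. The cost function below (the Euclidean "cost-to-come plus cost-to-go" heuristic) and all parameter names are illustrative assumptions, not the paper's actual implementation, which handles differential constraints and non-Euclidean state spaces.

```python
import math
import random


def informed_cost(x, start, goal):
    # Admissible cost heuristic: distance from start plus distance to goal.
    # Euclidean here only for illustration (an assumption of this sketch).
    d = lambda a, b: math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return d(start, x) + d(x, goal)


def mcmc_informed_sample(start, goal, c_best,
                         n_samples=100, step=0.2, burn_in=50, seed=0):
    """Random-walk Metropolis targeting the uniform density on the
    sub-level set {x : informed_cost(x) <= c_best}, i.e. X_inf."""
    rng = random.Random(seed)
    # Start the chain at the midpoint, which always lies inside X_inf.
    x = [(s + g) / 2.0 for s, g in zip(start, goal)]
    samples, steps_taken = [], 0
    while len(samples) < n_samples:
        # Gaussian proposal around the current state.
        y = [xi + rng.gauss(0.0, step) for xi in x]
        # For a uniform target, the Metropolis acceptance ratio reduces to
        # the indicator of the sub-level set: accept iff y stays in X_inf.
        if informed_cost(y, start, goal) <= c_best:
            x = y
        steps_taken += 1
        if steps_taken > burn_in:
            samples.append(list(x))
    return samples
```

Because the target density is constant on X_inf and zero outside it, every accepted state is a valid informed sample by construction; unlike global rejection sampling, the proposal is local, so the acceptance rate does not collapse as the dimension grows.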
BibTeX
@conference{Yi-2018-122661,
  author    = {Daqing Yi and Rohan Thakker and Cole Gulino and Oren Salzman and Siddhartha Srinivasa},
  title     = {Generalizing informed sampling for asymptotically-optimal sampling-based kinodynamic planning via {M}arkov chain {M}onte {C}arlo},
  booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
  year      = {2018},
  month     = {May},
  pages     = {7063--7070},
}