Fast Human Detection for Indoor Mobile Robots Using Depth Images
Abstract
A human detection algorithm running on an indoor mobile robot must address challenges including occlusions due to cluttered environments, changing backgrounds due to the robot's motion, and limited on-board computational resources. We introduce a fast human detection algorithm for mobile robots equipped with depth cameras. First, we segment the raw depth image using a graph-based segmentation algorithm. Next, we apply a set of parameterized heuristics to filter and merge the segmented regions to obtain a set of candidates. Finally, we compute a Histogram of Oriented Depth (HOD) descriptor for each candidate, and test for human presence with a linear SVM. We experimentally evaluate our approach on a publicly available dataset of humans in an open area as well as our own dataset of humans in a cluttered cafe environment. Running on a single CPU core, our algorithm performs comparably to another HOD-based algorithm that runs on a GPU, even when the number of training examples is halved. We discuss the impact of the number of training examples on performance, and demonstrate that our approach is able to detect humans in different postures (e.g., standing, walking, sitting) and with occlusions.
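To make the three-stage pipeline concrete, the sketch below approximates it with off-the-shelf components: Felzenszwalb's graph-based segmentation, a simple region-size heuristic for candidate filtering, a HOG descriptor computed on depth crops as a stand-in for the HOD descriptor, and a linear SVM. The heuristics, descriptor parameters, and thresholds (min_area, max_area, window size) are illustrative assumptions, not the values used in the paper.

# Minimal sketch of the pipeline described in the abstract.
# Thresholds and parameters are illustrative assumptions only.
import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def candidate_regions(depth, min_area=2000, max_area=60000):
    """Segment the depth image and keep plausibly human-sized regions."""
    labels = felzenszwalb(depth, scale=100, sigma=0.8, min_size=200)
    candidates = []
    for lab in np.unique(labels):
        mask = labels == lab
        if not (min_area <= mask.sum() <= max_area):  # crude size heuristic
            continue
        ys, xs = np.nonzero(mask)
        candidates.append((ys.min(), xs.min(), ys.max(), xs.max()))
    return candidates

def hod_descriptor(depth, box, size=(128, 64)):
    """HOG computed on a depth crop, standing in for the HOD descriptor."""
    y0, x0, y1, x1 = box
    crop = resize(depth[y0:y1 + 1, x0:x1 + 1], size, anti_aliasing=True)
    return hog(crop, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def detect_humans(depth, clf):
    """Return bounding boxes of candidates the linear SVM labels as human."""
    return [box for box in candidate_regions(depth)
            if clf.predict([hod_descriptor(depth, box)])[0] == 1]

# Training: X is a matrix of descriptors from labeled depth crops, y in {0, 1}.
# clf = LinearSVC().fit(X, y)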
BibTeX
@conference{Choi-2013-7762,
author = {Benjamin Choi and Cetin Mericli and Joydeep Biswas and Manuela Veloso},
title = {Fast Human Detection for Indoor Mobile Robots Using Depth Images},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {2013},
month = {May},
pages = {1108--1113},
keywords = {human detection},
}