Deep visual perception for dynamic walking on discrete terrain - Robotics Institute Carnegie Mellon University

Avinash Siravuru, Allan Wang, Quan Nguyen, and Koushil Sreenath
Conference Paper, Proceedings of IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids '17), pp. 418-424, November 2017

Abstract

Dynamic bipedal walking on discrete terrain, such as stepping stones, is a challenging problem that requires feedback controllers to enforce safety-critical constraints. Enforcing such constraints in real-world experiments demands fast and accurate perception for foothold detection and estimation. In this work, a deep visual perception model is designed to accurately estimate the length of the next step, which serves as input to the feedback controller and enables vision-in-the-loop dynamic walking on discrete terrain. In particular, a custom convolutional neural network architecture is designed and trained to predict the step length to the next foothold from a sampled image preview of the upcoming terrain taken at foot impact. The visual input is provided only at the beginning of each step and is shown to be sufficient for dynamically stepping onto discrete footholds. Through extensive numerical studies, we show that the robot autonomously walks for over 100 steps without failure on discrete terrain with footholds randomly positioned within a step-length range of [45, 85] centimeters.
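The pipeline the abstract describes can be sketched in a few lines: once per step, at foot impact, a terrain image preview is sampled, a convolutional network regresses the step length to the next foothold, and that scalar is handed to the walking controller. The sketch below is a hypothetical stand-in, not the paper's architecture: it uses a single randomly initialized convolution layer, a global-average pool, and a linear head squashed into the paper's [0.45, 0.85] m foothold range, purely to illustrate the per-step image-to-step-length interface.

```python
import numpy as np

# Foothold step-length range reported in the paper, in meters.
STEP_MIN, STEP_MAX = 0.45, 0.85

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (no padding): a minimal CNN building block."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_step_length(image, kernel, w_out):
    """Toy perception model: conv + ReLU, global average pool, linear head.
    A sigmoid squashes the raw output into the feasible range [STEP_MIN, STEP_MAX].
    The real paper trains a custom CNN; this stand-in uses random weights."""
    feat = np.maximum(conv2d(image, kernel), 0.0)   # conv + ReLU
    pooled = feat.mean()                            # global average pool
    raw = w_out * pooled                            # linear regression head
    return STEP_MIN + (STEP_MAX - STEP_MIN) / (1.0 + np.exp(-raw))

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))
w_out = rng.standard_normal()

# Simulate 100 steps: one terrain image preview per step, sampled at foot
# impact, and one step-length estimate per preview for the controller.
predictions = [predict_step_length(rng.random((16, 16)), kernel, w_out)
               for _ in range(100)]
assert all(STEP_MIN <= p <= STEP_MAX for p in predictions)
```

The key design point mirrored here is that perception runs only once per step, at foot impact, rather than continuously; the controller then tracks the single predicted step length for the remainder of the step.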

BibTeX

@conference{Siravuru-2017-126792,
author = {Avinash Siravuru and Allan Wang and Quan Nguyen and Koushil Sreenath},
title = {Deep visual perception for dynamic walking on discrete terrain},
booktitle = {Proceedings of IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids '17)},
year = {2017},
month = {November},
pages = {418--424},
}