Event Time: 3:00 pm to 12:00 am
Event Location: GHC 4405
Abstract: For robots to be deployable, they need to operate reliably for long periods. Reliable operation requires robots to reason about events that impede task progression. Robots that consider failure recovery strategies are more dependable and robust to unforeseen events. Recovery strategies allow the robot to continue operating consistently even under slight variations in the task.
This thesis focuses on robot failure recovery. We address failures that are resolved through planning, rather than mechanical or electrical failures; in particular, failures caused by misinformation. We hypothesize that there is not enough information available to plan in the highest-dimensional space of the real world for the entire task. Therefore, it is necessary to use approximations. These approximations may lead to more failures. For example, a robot may begin planning its current task in a flat two-dimensional space. A failure occurs when the robot hits an overhang in three-dimensional space that it was not initially considering. Considering a model that includes the overhang height could have prevented the failure, yet the robot may not have known about the overhang in advance.
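To make the overhang example concrete, here is a minimal sketch in Python; the robot height, grid cells, and ceiling map are our own illustrative assumptions, not details from the thesis. A cell that the flat 2D model marks traversable is ruled out once a height-aware model is consulted.

```python
# A minimal sketch of the overhang example. All numbers are assumed.
ROBOT_HEIGHT = 1.5  # meters (assumed)

free_2d = {(0, 0): True, (1, 0): True, (2, 0): True}  # flat 2D model: every cell looks traversable
ceiling = {(0, 0): 3.0, (1, 0): 1.2, (2, 0): 3.0}     # overhead clearance the 2D model ignores

def traversable_2d(cell):
    # The low-fidelity model happily plans through (1, 0)...
    return free_2d[cell]

def traversable_3d(cell):
    # ...while a height-aware model rejects it: 1.2 m clearance < 1.5 m robot.
    return free_2d[cell] and ceiling[cell] >= ROBOT_HEIGHT

assert traversable_2d((1, 0)) and not traversable_3d((1, 0))  # the failure the 2D plan hits
```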
We present a failure recovery approach in which the robot reasons over model spaces of varying fidelity. Instead of precomputing recovery strategies for common failures, which are difficult to anticipate, we reason about a model hierarchy. In our approach, the continuous planning space is discretized into layers of model fidelity, where higher-fidelity models more closely approximate the real world. These models are assumed to be given and are organized into a directed model hierarchy. The robot begins planning in the low-fidelity model space most applicable to its situation. The model selection stage then leverages the hierarchy to decide which model is most applicable at a particular point in the task in order to circumvent failure.
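As a rough illustration of this structure, the sketch below builds a small directed hierarchy of planning models and a naive selection rule. The names (`Model`, `fidelity`, `select_model`) and the escalate-on-failure rule are our own assumptions for illustration, not the thesis's actual algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """One planning model in the hierarchy (e.g. flat 2D, 2D plus height, full 3D)."""
    name: str
    fidelity: int  # higher = closer to the real world, costlier to plan in
    children: list["Model"] = field(default_factory=list)  # directed edges toward higher fidelity

# A toy hierarchy: start flat, escalate toward 3D only when needed.
full_3d = Model("full_3d", fidelity=3)
two_d_height = Model("2d_plus_height", fidelity=2, children=[full_3d])
flat_2d = Model("flat_2d", fidelity=1, children=[two_d_height])

def select_model(current: Model, failure_observed: bool) -> Model:
    """Naive selection rule: keep the cheap model unless a failure suggests
    the current abstraction is missing relevant structure."""
    if failure_observed and current.children:
        # Escalate to the cheapest higher-fidelity refinement available.
        return min(current.children, key=lambda m: m.fidelity)
    return current
```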
Switching delays reasoning about higher-fidelity models until the robot reaches the areas where it has the most certainty. Which model to switch to is decided as more information becomes available during execution. In this paradigm, model complexity increases only when necessary, around failure locations. Additionally, the ability to switch achieves success rates similar to planning in the full, intractable space, with the computational savings of more abstract models.
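Continuing the sketch above, one way this execution-time switching could look is the loop below; `planner` and `executor` are injected placeholders for whatever planner and execution monitor the real system uses, and the loop structure is our assumption, not the thesis's method.

```python
def plan_and_execute(start, goal, model, planner, executor, max_switches=5):
    """Plan in the current model; on an execution failure, switch models and
    replan from where execution stopped. `planner(start, goal, model)` and
    `executor(path)` are caller-supplied stand-ins."""
    for _ in range(max_switches):
        path = planner(start, goal, model)
        state, failed = executor(path)  # runs until success or a failure event
        if not failed:
            return path
        # Model complexity increases only here, around the failure location.
        model = select_model(model, failure_observed=True)
        start = state  # resume planning from the failure point
    raise RuntimeError("exhausted model switches without reaching the goal")
```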
Previous work presents a simple strategy that uses a model hierarchy to recover from failures in a navigation problem. Our proposed work expands on model representation within the hierarchy and on model selection strategies. We investigate how uncertainty can be captured in the models, in addition to how the models are organized. We also plan to explore more informed approaches to searching the model graph during selection. Lastly, we propose to reason about how the choice of model affects how much risk the robot takes on when planning under task constraints.
Committee:
Reid Simmons, Chair
Siddhartha Srinivasa
Maxim Likhachev
Kanna Rajan, University of Porto, Portugal