Title: Explainability in navigation policies
Abstract:
Today’s autonomous agents have improved performance thanks to learning and planning algorithms, but their applicability in the human-inhabited world remains limited. Humans find it hard to interpret such models’ decision-making and thus may not trust them as teammates. When working with a machine learning model that makes predictions or decides navigation actions, it is imperative to estimate when the model is likely to succeed or fail and what factors influence a particular decision.
We take inspiration from how humans explain. Often a few salient observations dominate the decision-making process. The listener and the explainer tend to share pre-existing background knowledge that lets them understand each other efficiently. Explainability can also come by design, through modular, hierarchical components in the decision-making process. In this work, we approach explainability in AI agents’ policies along these three directions. First, we visualize the input feature importance of trained policies for control and navigation tasks. Second, we utilize language priors as a source of common background knowledge to develop navigation algorithms for urban household and search-and-rescue scenarios. Third, we investigate a modular architecture for learning exploration policies and compare its performance with heuristic path-planning approaches.
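To give a flavor of the first direction, below is a minimal sketch of gradient-based input-feature importance (saliency) for a trained policy. The network, observation size, and variable names are illustrative assumptions, not the actual models discussed in the talk.

```python
# Minimal sketch: gradient-based feature importance for a trained policy.
# Assumptions: a small feed-forward policy over an 8-dimensional observation
# with 4 discrete actions (all hypothetical, for illustration only).
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))

obs = torch.randn(1, 8, requires_grad=True)   # one observation, 8 features
logits = policy(obs)
action = logits.argmax(dim=-1)                # action the policy would take

# Saliency: gradient of the chosen action's score w.r.t. each input feature.
logits[0, action].backward()
importance = obs.grad.abs().squeeze(0)
print(importance)  # larger values = features that most influenced this decision
```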
Committee:
Katia Sycara (advisor)
Yonatan Bisk
David Held
Wenhao Luo
Location / Zoom Link: https://cmu.zoom.us/j/97689106973?pwd=WWNyL1VlcFhOS1lRVi9mLzBRNXVVdz09
Meeting ID: 976 8910 6973
Passcode: 715203