
Abstract:
Robots operating in safety-critical environments must reason under uncertainty and in novel situations. However, recent advances in data-driven perception have made it challenging to provide formal safety guarantees, particularly when systems encounter out-of-distribution or previously unseen inputs. For such systems to be deployed safely in the real world, safety considerations must be incorporated alongside performance objectives throughout the entire development pipeline. This thesis explores how safety can be systematically integrated into learning-enabled perception systems across three key stages: data modeling and training, post-training validation, and deployment. During training, transformation-based data augmentation strategies can help the system adapt to domain shift. Post-training validation of large systems is difficult because of their high-dimensional input spaces; to address this, we present AutoODD, a framework that leverages foundation models to audit smaller, more specialized learning models. In the proposed work, we will tackle the remaining pieces of the perception pipeline: data modeling, hazard response, and system security.
Thesis Committee Members:
Sebastian Scherer (Chair)
Jean Oh
Andrea Bajcsy
Rachel Luo (NVIDIA)