Getting Optimization layers to play well with Deep Networks: Numerical methods and Architectures - Robotics Institute Carnegie Mellon University

PhD Thesis Proposal

Swaminathan Gurumurthy
PhD Student, Robotics Institute,
Carnegie Mellon University
Tuesday, November 12
3:00 pm to 4:30 pm
NSH 4305
Getting Optimization layers to play well with Deep Networks: Numerical methods and Architectures

Abstract:
Many real-world challenges, from robotic control to resource management, can be effectively formulated as optimization problems. Recent advancements have focused on incorporating these optimization problems as layers within deep learning pipelines, enabling the explicit inclusion of auxiliary constraints or cost functions, which is crucial for applications such as enforcing physical laws, satisfying safety constraints, and optimizing complex objectives. However, these layers introduce several challenges, including inference and representational inefficiencies, unstable or slow training dynamics, and modeling inaccuracies, which must be addressed to fully harness their potential.
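To give a sense of what an optimization layer involves, the following is a minimal illustrative sketch (not the proposal's method): a layer whose forward pass solves a small ridge-regression problem y*(x) = argmin_y ||Ay − x||² + λ||y||², with its input Jacobian obtained via the implicit function theorem rather than by unrolling the solver. The function names and problem choice are assumptions made for illustration.

```python
import numpy as np

def opt_layer(A, lam, x):
    # Forward pass: y*(x) = argmin_y ||A y - x||^2 + lam * ||y||^2
    # (closed-form solve of the normal equations).
    H = A.T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(H, A.T @ x)

def opt_layer_jacobian(A, lam):
    # Implicit function theorem: at the optimum, grad_y f(y*, x) = 0.
    # Differentiating that condition w.r.t. x gives
    #   (d2f/dy2) dy*/dx + d2f/dydx = 0  =>  dy*/dx = -(d2f/dy2)^{-1} d2f/dydx
    H = 2.0 * (A.T @ A + lam * np.eye(A.shape[1]))  # d2f/dy2
    C = -2.0 * A.T                                  # d2f/dydx
    return -np.linalg.solve(H, C)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
x = rng.standard_normal(5)
lam = 0.1

J = opt_layer_jacobian(A, lam)

# Sanity check: compare against central finite differences of the forward pass.
eps = 1e-6
J_fd = np.zeros_like(J)
for i in range(len(x)):
    e = np.zeros_like(x)
    e[i] = eps
    J_fd[:, i] = (opt_layer(A, lam, x + e) - opt_layer(A, lam, x - e)) / (2 * eps)

print(np.max(np.abs(J - J_fd)))  # tiny: implicit and numerical Jacobians agree
```

Because the backward pass needs only the optimality conditions at the solution, memory and compute do not grow with solver iterations; the numerical conditioning of the linear solve in the backward pass is one source of the gradient bias and variance issues the abstract refers to.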

We systematically investigate these challenges and propose novel numerical methods and architectural solutions that mitigate them, making optimization layers more efficient and effective within deep learning pipelines. Our contributions include methods for enhancing computational efficiency, tackling issues of gradient bias and variance, and improving sample efficiency in reinforcement learning using approximate simulators. We demonstrate these contributions across different applications, ranging from input-optimization problems to SLAM and visual odometry (VO) to reinforcement learning. We also present a new approach for visual-inertial navigation in nanosatellites, highlighting the practical benefits of integrating optimization layers in challenging real-world scenarios. Finally, we propose future work extending our analysis and methods for improved representational fidelity and training stability to constrained optimization layers, using differentiable model predictive control layers as an example use case to illustrate their effectiveness.

Together, these contributions advance our understanding of the complexities and opportunities in integrating optimization layers within deep learning models, offering new frameworks and insights that improve efficiency, stability, and generalizability across a wide range of complex tasks.

Thesis Committee Members:
Zico Kolter, Co-chair
Zac Manchester, Co-chair
Geoffrey Gordon
Max Simchowitz
Vladlen Koltun, Apple
