Abstract:
Aerial robots are now widely employed in diverse applications, such as delivery, environmental monitoring, and especially aerial manipulation—the focus of this thesis. Aerial manipulation involves integrating robotic arms with drones to perform physical tasks remotely. This capability is particularly crucial for operations that are
either too dangerous for humans or inaccessible to them, such as tasks at high altitude or in hazardous environments. To fully exploit the potential of aerial manipulation, several technical challenges must be addressed: the vehicle's resilience to disturbances, simultaneous pose and force control, precise contact modeling, learning complex behavioral policies, and torque provision at the end-effector for complex tasks (e.g., peg-in-hole insertion, bolt screwing). This thesis proposes a series of integrated technical solutions to improve the capabilities of aerial manipulators.
To enhance the vehicle's resilience to disturbances and enable simultaneous pose and force control, we develop a unified controller that transitions seamlessly between free-flight and in-contact phases without switching control strategies, using a model predictive path integral (MPPI) controller whose dynamics model accounts for contact. We show that this approach improves control robustness when the aerial manipulator makes contact with objects and increases the vehicle's resilience to disturbances.
To refine the contact dynamics model of the manipulator in complex contact scenarios (e.g., bolt screwing, peg-in-hole insertion), we propose a data-driven approach that learns how different contact forces interact in such scenarios. We present preliminary results showing the feasibility of using learned models to capture and predict contact dynamics effectively. To overcome issues found in end-to-end contact model learning (e.g., impulse-like and non-smooth force functions), we propose a new method that learns implicit signed distance functions (SDFs) for contact and derives contact forces from these contact-optimized SDFs.
To further increase adaptability in aerial manipulation and enable adaptive behaviors that go beyond straightforward trajectory following (e.g., backing up and re-attempting), we propose a novel application of reinforcement learning (RL): a high-level trajectory control policy that learns complex behaviors to complete the task and adapt to dynamic environments.
Lastly, we propose using reaction wheels to provide sufficient torque at the end-effector for manipulation tasks while maintaining system stability. We present the integrated dynamics of the aerial manipulator with the reaction wheel and show that the reaction wheel increases the rotational stability of the aerial vehicle while providing sufficient torque at the end-effector. We also propose and analyze several control strategies for the integrated system, including LQR and MPPI control of the combined UAV and reaction wheel system, and outline steps for testing and evaluating the reaction wheel in simulation and in real flight.
Thesis Committee Members:
Sebastian Scherer, Chair
Zachary Manchester
Wennie Tabib
Junyi Geng, Penn State University