Real-Time Implementation of Neural Network Learning Control in a Flexible Space Manipulator
Abstract
A neural network approach to online learning control and real-time implementation for a flexible space robot manipulator is presented. An overview of the motivation and system development of the self-mobile space manipulator (SM²) is given. The neural network learns control by updating the feedforward dynamics based on the feedback control input. Implementation issues associated with online training strategies are addressed and a simple stochastic training scheme is presented. A recurrent neural network architecture with improved performance is proposed. Using the proposed learning scheme, the manipulator tracking error is reduced by 85% compared to that of conventional proportional-integral-derivative (PID) control. The approach possesses a high degree of generality and adaptability to various applications, and should be a valuable learning control method for robots working in unstructured environments.
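As a rough illustration of the learning scheme described in the abstract, the sketch below implements a feedback-error-learning loop in which a small feedforward network is trained online using the feedback controller's output as the error signal; as the network captures the plant's inverse dynamics, the feedback correction shrinks. The single-joint toy plant, network size, gains, and all names are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the paper's code): a feedforward network learns the
# inverse dynamics online, trained on the feedback (PD) control signal.
# All gains, sizes, and the one-joint toy plant are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (8, 3))   # hidden weights; inputs (q_des, qd_des, qdd_des)
W2 = rng.normal(0.0, 0.1, (1, 8))   # output weights; predicted feedforward torque
lr, kp, kd = 1e-3, 50.0, 5.0        # learning rate and PD feedback gains (assumed)

def net(x):
    h = np.tanh(W1 @ x)
    return (W2 @ h)[0], h

q, qd, dt = 0.0, 0.0, 0.001         # toy single-joint state and time step
for t in np.arange(0.0, 5.0, dt):
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    x = np.array([q_des, qd_des, qdd_des])

    tau_ff, h = net(x)                              # learned feedforward torque
    tau_fb = kp * (q_des - q) + kd * (qd_des - qd)  # feedback correction torque
    tau = tau_ff + tau_fb

    # Train on the feedback signal: if the feedforward model were perfect,
    # the feedback contribution would vanish.
    e = tau_fb
    grad_W2 = e * h[None, :]
    grad_W1 = e * (W2.T * (1.0 - h[:, None] ** 2)) @ x[None, :]
    W2 += lr * grad_W2
    W1 += lr * grad_W1

    # Toy rigid-link plant (unit inertia, viscous friction), illustration only.
    qdd = tau - 0.5 * qd
    qd += qdd * dt
    q += qd * dt

Over the simulated trajectory, the torque supplied by the network grows while the feedback term shrinks, which is the qualitative behavior the learning-control scheme relies on.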
BibTeX
@conference{Newton-1993-13492,
author = {R. T. Newton and Yangsheng Xu},
title = {Real-Time Implementation of Neural Network Learning Control in a Flexible Space Manipulator},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {1993},
month = {May},
volume = {1},
pages = {135--141},
}