Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering

Michael Nechyba and Yangsheng Xu
Conference Paper, Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '97), pp. 214–219, July 1997

Abstract

Most neural network training used today relies on rigid, fixed architectures and/or slow, gradient-descent-based training algorithms (e.g., backpropagation). In this paper, we propose a new neural network learning architecture to counter these problems. Namely, we combine (1) flexible cascade neural networks, which dynamically adjust the size of the network as part of the learning process, and (2) node-decoupled extended Kalman filtering (NDEKF), a fast-converging alternative to backpropagation. We first summarize how learning proceeds in cascade neural networks. We then show how NDEKF fits seamlessly into the cascade learning framework, and how cascade learning addresses NDEKF's susceptibility to poor local minima. We analyze the computational complexity of our approach and compare it to fixed-architecture training paradigms. Finally, we report learning results for continuous function approximation and dynamic system identification; these results show substantial improvement in learning speed and error convergence over other neural network training methods.
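To make the NDEKF idea concrete, the sketch below shows one node-decoupled EKF weight update in Python, assuming a scalar network output: each node keeps its own small error-covariance block P, so the update cost grows with the size of each block rather than with the full weight count. This is a minimal illustration, not the authors' code; the function name ndekf_update, the process-noise constant q, and the toy sine-fitting loop are assumptions for demonstration only.

```python
import numpy as np

def ndekf_update(w, P, psi, e, q=1e-4):
    """One node-decoupled EKF step for a single node's weight block.

    w   : (n,) weight vector of this node
    P   : (n, n) approximate error covariance for this node only
    psi : (n,) derivative of the network output w.r.t. w
    e   : scalar output error (target - prediction)
    q   : artificial process noise; keeps P from collapsing to zero
    """
    Ppsi = P @ psi                       # P_i * psi_i
    a = 1.0 + psi @ Ppsi                 # scalar innovation variance
    K = Ppsi / a                         # Kalman gain for this block
    w_new = w + K * e                    # weight update
    P_new = P - np.outer(K, Ppsi) + q * np.eye(len(w))
    return w_new, P_new

# Toy usage: fit y = sin(x) with a single linear node (weights for x and
# a bias), purely to exercise the update rule.
rng = np.random.default_rng(0)
w = 0.1 * rng.normal(size=2)
P = 100.0 * np.eye(2)
for _ in range(5):
    for x in np.linspace(-1.0, 1.0, 50):
        psi = np.array([x, 1.0])         # d(output)/d(w) for a linear node
        e = np.sin(x) - w @ psi
        w, P = ndekf_update(w, P, psi, e)
```

In the cascade setting, each newly installed hidden unit (fed by all inputs and all previously frozen units) and the output layer would each carry their own (w, P) pair and be updated independently by a step like this, which is what makes the filtering "node-decoupled."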

BibTeX

@conference{Nechyba-1997-14423,
author = {Michael Nechyba and Yangsheng Xu},
title = {Cascade Neural Networks with Node-Decoupled Extended Kalman Filtering},
booktitle = {Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '97)},
year = {1997},
month = {July},
pages = {214--219},
}