Integrating Inductive Neural Network Learning and Explanation-Based Learning
Abstract
Many researchers have noted the importance of combining inductive and analytical learning, yet we still lack combined learning methods that are effective in practice. We present here a learning method that combines explanation-based learning from a previously learned approximate domain theory with inductive learning from observations. This method, called explanation-based neural network learning (EBNN), is based on a neural network representation of domain knowledge. Explanations are constructed by chaining together inferences from multiple neural networks. In contrast with symbolic approaches to explanation-based learning, which extract weakest preconditions from the explanation, EBNN extracts the derivatives of the target concept with respect to the training example features. These derivatives summarize the dependencies within the explanation and are used to bias the inductive learning of the target concept. Experimental results on a simulated robot control task show that EBNN requires significantly fewer training examples than standard inductive learning. Furthermore, the method is shown to be robust to errors in the domain theory, operating effectively over a broad spectrum from very strong to very weak domain theories.
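The core mechanism described in the abstract — using a previously learned domain-theory network to extract derivatives of the target concept with respect to the input features, and then fitting the target network to both observed values and these slopes — can be illustrated with a minimal JAX sketch. This is a simplified, hypothetical reading of the idea, not the paper's exact algorithm: it uses a single theory network rather than a chained explanation, arbitrary network shapes, and a fixed slope-weighting constant `mu`.

```python
# Minimal sketch of the EBNN idea: the approximate domain theory supplies,
# for each training example, the slope of its prediction with respect to the
# input features; the target network is fit to observed values and to these
# theory-derived slopes (a TangentProp-style combined loss).
# All names (mlp, explain, ebnn_loss, mu) are illustrative assumptions.

import jax
import jax.numpy as jnp

def mlp(params, x):
    """Small two-layer perceptron; params = ((W1, b1), (W2, b2))."""
    (W1, b1), (W2, b2) = params
    h = jnp.tanh(W1 @ x + b1)
    return jnp.tanh(W2 @ h + b2)[0]            # scalar output

def init_mlp(key, n_in, n_hidden):
    k1, k2 = jax.random.split(key)
    return ((jax.random.normal(k1, (n_hidden, n_in)) * 0.1, jnp.zeros(n_hidden)),
            (jax.random.normal(k2, (1, n_hidden)) * 0.1, jnp.zeros(1)))

# Approximate domain theory: stands in for a previously learned network.
key = jax.random.PRNGKey(0)
theory_params = init_mlp(key, n_in=3, n_hidden=8)
theory_fn = lambda x: mlp(theory_params, x)

def explain(x):
    """'Explain' one example: theory prediction plus its derivative w.r.t. features."""
    value = theory_fn(x)
    slope = jax.grad(theory_fn)(x)             # d(prediction) / d(features)
    return value, slope

def ebnn_loss(target_params, xs, ys, slopes, mu=0.5):
    """Fit observed values and theory-derived slopes; mu trades induction vs. analysis."""
    f = lambda x: mlp(target_params, x)
    value_err = jnp.mean((jax.vmap(f)(xs) - ys) ** 2)
    slope_err = jnp.mean((jax.vmap(jax.grad(f))(xs) - slopes) ** 2)
    return value_err + mu * slope_err

# One gradient evaluation on a toy batch.
xs = jax.random.normal(key, (4, 3))
ys = jax.vmap(theory_fn)(xs)                   # stand-in for observed outcomes
slopes = jax.vmap(lambda x: explain(x)[1])(xs)
target_params = init_mlp(jax.random.PRNGKey(1), n_in=3, n_hidden=8)
grads = jax.grad(ebnn_loss)(target_params, xs, ys, slopes)
```

In the paper, the weighting between the inductive (value) and analytical (slope) terms is tied to the estimated accuracy of the domain theory; the fixed `mu` above is a placeholder for that mechanism.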
BibTeX
@conference{Thrun-1993-15904,
  author    = {Sebastian Thrun and Tom Mitchell},
  title     = {Integrating Inductive Neural Network Learning and Explanation-Based Learning},
  booktitle = {Proceedings of 13th International Joint Conference on Artificial Intelligence (IJCAI '93)},
  year      = {1993},
  month     = {August},
  editor    = {R. Bajcsy},
  volume    = {2},
  pages     = {930--936},
  publisher = {Morgan Kaufmann},
}