Lifelong Robot Learning - Robotics Institute Carnegie Mellon University

Lifelong Robot Learning

Sebastian Thrun and Tom Mitchell
Journal Article, Robotics and Autonomous Systems, Vol. 15, No. 1, pp. 25-46, July 1995

Abstract

Learning provides a useful tool for the automatic design of autonomous robots. Recent research on learning robot control has predominantly focused on single tasks studied in isolation. But if robots encounter a multitude of control learning tasks over their lifetime, there is an opportunity to transfer knowledge between them. To do so, robots may learn the invariants and regularities of the individual tasks and environments. This task-independent knowledge can then be employed to bias generalization when learning control, reducing the need for real-world experimentation. We argue that knowledge transfer is essential if robots are to learn control with moderate learning times in complex scenarios. Two approaches to lifelong robot learning, both of which capture invariant knowledge about the robot and its environments, are presented. Both approaches have been evaluated using a HERO-2000 mobile robot. Learning tasks included navigation in unknown indoor environments and a simple find-and-fetch task.
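The core idea of the abstract, using knowledge shared across tasks to bias learning on a new task, can be illustrated with a generic toy sketch. This is not the paper's actual method (the paper's two approaches are neural-network based); here, purely for illustration, each "task" is a noisy linear mapping whose weights share a common component (the invariant), and the invariant learned from earlier tasks serves as a prior that biases a new task's fit when data is scarce. All names and the setup are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each task's true weights are a shared invariant
# plus a small task-specific perturbation.
d = 5
invariant = rng.normal(size=d)

def make_task(n=50):
    w = invariant + 0.1 * rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w + 0.01 * rng.normal(size=n)
    return X, y, w

# Phase 1 (lifelong learning): estimate the invariant by averaging the
# least-squares solutions of many previously encountered tasks.
prior = np.mean(
    [np.linalg.lstsq(X, y, rcond=None)[0]
     for X, y, _ in (make_task() for _ in range(20))],
    axis=0,
)

# Phase 2 (new task, little real-world experience): only 3 samples for
# 5 parameters, so the problem is underdetermined without a bias.
Xn, yn, w_true = make_task()
few, lam = 3, 1.0

# Ridge regression shrunk toward the learned prior instead of toward zero.
A = Xn[:few].T @ Xn[:few] + lam * np.eye(d)
b = Xn[:few].T @ yn[:few] + lam * prior
w_transfer = np.linalg.solve(A, b)

# Baseline: learning the new task from scratch (minimum-norm fit).
w_scratch = np.linalg.lstsq(Xn[:few], yn[:few], rcond=None)[0]

print("error with transfer:", np.linalg.norm(w_transfer - w_true))
print("error from scratch: ", np.linalg.norm(w_scratch - w_true))
```

With few samples, the scratch fit is unconstrained in most directions, while the prior-biased fit starts near the true weights; this mirrors the abstract's claim that task-independent knowledge reduces the need for real-world experimentation.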

BibTeX

@article{Thrun-1995-16134,
author = {Sebastian Thrun and Tom Mitchell},
title = {Lifelong Robot Learning},
journal = {Robotics and Autonomous Systems},
year = {1995},
month = {July},
volume = {15},
number = {1},
pages = {25--46},
}