Behavioral considerations suggest an average reward TD model of the dopamine system
Journal Article, Neurocomputing, Vol. 32, pp. 679-684, June 2000
Abstract
Recently there has been much interest in modeling the activity of primate midbrain dopamine neurons as signalling reward prediction error. But since the models are based on temporal-difference (TD) learning, they assume an exponential decline with time in the value of delayed reinforcers, an assumption long known to conflict with animal behavior. We show that a variant of TD learning that tracks variations in the average reward per timestep rather than cumulative discounted reward preserves the models’ success at explaining neurophysiological data while significantly increasing their applicability to behavioral data.
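The distinction the abstract draws can be made concrete with a small sketch. The Python fragment below is not from the paper; the two-state toy task, learning rates, and discount factor are illustrative assumptions. It contrasts the standard exponentially discounted TD(0) prediction error, delta = r + gamma*V(s') - V(s), with an average-reward variant, delta = r - rho + V(s') - V(s), in which a running estimate rho of the reward obtained per timestep replaces the discount factor, so that delayed rewards are not devalued exponentially.

# Minimal sketch (not from the paper): tabular TD(0) updates contrasting the
# standard discounted form with an average-reward variant. The toy task and
# all constants below are illustrative assumptions.

def discounted_td_update(V, s, r, s_next, alpha=0.1, gamma=0.98):
    # Standard TD(0): values estimate exponentially discounted future reward.
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

def average_reward_td_update(V, rho, s, r, s_next, alpha=0.1, beta=0.01):
    # Average-reward TD(0): values are measured relative to rho, a running
    # estimate of the reward per timestep, rather than being discounted.
    delta = r - rho + V[s_next] - V[s]
    V[s] += alpha * delta
    rho += beta * delta          # track the average reward per timestep
    return delta, rho

# Toy usage: a deterministic two-state chain that alternates 0 -> 1 -> 0 -> ...
# and pays reward 1 only in state 1, so the true reward per timestep is 0.5.
V = [0.0, 0.0]
rho = 0.0
for t in range(2000):
    s, s_next = t % 2, (t + 1) % 2
    r = 1.0 if s == 1 else 0.0
    _, rho = average_reward_td_update(V, rho, s, r, s_next)
print(rho)   # settles near 0.5; V encodes only the relative value of each state

In this sketch the average-reward learner converges even though rewards recur indefinitely, whereas the discounted form would assign delayed rewards exponentially shrinking value.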
BibTeX
@article{Daw-2000-16746,
  author  = {N. D. Daw and David S. Touretzky},
  title   = {Behavioral considerations suggest an average reward TD model of the dopamine system},
  journal = {Neurocomputing},
  year    = {2000},
  month   = {June},
  volume  = {32},
  pages   = {679--684},
}