Effect of Human Biases on Human Agent Teams
Abstract
As human-agent teams are increasingly deployed in the real world, agent designers need to take into account that humans and agents have different abilities to specify preferences. In this paper, we focus on how human biases in specifying preferences for resources impact the performance of large, heterogeneous teams. In particular, we model the inclination of humans to simplify their preference functions and to exaggerate their utility for desired resources, and we show the effect of these biases on team performance. We demonstrate this on two problems that are representative of many resource allocation problems addressed in the literature. In both problems, the agents and humans optimize their constraints in a distributed manner. This paper makes two key contributions: (a) it proves theoretical properties of the Distributed Stochastic Algorithm (DSA) used for solving distributed constraint optimization problems, properties that ensure robustness against human biases; and (b) it empirically illustrates that the effect of human biases on team performance is not significant across different problem settings and team sizes. Both our theoretical and empirical studies support the conclusion that the solutions DSA provides for mid- to large-sized teams are highly robust to common types of human biases.
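The paper does not include an implementation, but the abstract's setup can be made concrete with a minimal sketch of a DSA-A-style local search on a generic pairwise-constraint DCOP, where a biased human preference function enters only through the utility. All names here (dsa, local_gain, exaggerated, the utility signature) are our own illustrative assumptions, not the authors' code.

    import random

    def dsa(domains, neighbors, utility, rounds=100, p=0.6):
        # domains:   {agent: list of candidate values (e.g., resources)}
        # neighbors: {agent: list of constrained neighboring agents}
        # utility:   utility(agent, value, nbr, nbr_value) -> float; the
        #            (possibly human-biased) pairwise preference function
        # p:         per-round activation probability of each agent
        assignment = {a: random.choice(vals) for a, vals in domains.items()}

        def local_gain(agent, value, current):
            # Local utility of `value` against neighbors' current values.
            return sum(utility(agent, value, n, current[n])
                       for n in neighbors[agent])

        for _ in range(rounds):
            snapshot = dict(assignment)  # agents act on last round's values
            for agent, vals in domains.items():
                if random.random() > p:  # only activated agents reconsider
                    continue
                best = max(vals, key=lambda v: local_gain(agent, v, snapshot))
                # DSA-A-style rule: move only on strict local improvement.
                if (local_gain(agent, best, snapshot)
                        > local_gain(agent, snapshot[agent], snapshot)):
                    assignment[agent] = best
        return assignment

    def exaggerated(true_utility, preferred, factor=2.0):
        # Illustrative bias model: a human who inflates the utility of
        # preferred resources, leaving the team algorithm itself unchanged.
        return lambda a, v, n, nv: (true_utility(a, v, n, nv)
                                    * (factor if v in preferred else 1.0))

In this sketch the bias distorts only the reported preferences, which matches the abstract's framing: DSA runs unmodified, and robustness is a property of how little the distorted utilities shift the converged assignment.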
BibTeX
@conference{Paruchuri-2010-10520,
  author    = {Praveen Paruchuri and Pradeep R. Varakantham and Katia Sycara and Paul Scerri},
  title     = {Effect of Human Biases on Human Agent Teams},
  booktitle = {Proceedings of IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '10)},
  year      = {2010},
  month     = {August},
  volume    = {2},
  pages     = {327--334},
}