Social decisions and fairness change when people’s interests are represented by autonomous agents

de Melo, C. M., Marsella, S. and Gratch, J. (2018) Social decisions and fairness change when people’s interests are represented by autonomous agents. Autonomous Agents and Multi-Agent Systems, 32(1), pp. 163-187. (doi:10.1007/s10458-017-9376-6)


Abstract

There has been growing interest in agents that represent people's interests or act on their behalf, such as automated negotiators, self-driving cars, or drones. Even though people will often interact with others via these agent representatives, little is known about whether people's behavior changes when acting through these agents, compared with direct interaction. Here we show that people's decisions change in important ways because of these agents; specifically, interacting via agents is likely to lead people to behave more fairly than interacting directly with others. We argue this occurs because programming an agent leads people to adopt a broader perspective, consider the other side's position, and rely on social norms, such as fairness, to guide their decision making. To support this argument, we present four experiments: in Experiment 1, we show that people made fairer offers in the ultimatum and impunity games when interacting via agent representatives than when interacting directly; in Experiment 2, participants were less likely to accept unfair offers in these games when agent representatives were involved; in Experiment 3, we show that the act of thinking about the decisions ahead of time, i.e., under the so-called "strategy method", can also lead to increased fairness, even when no agents are involved; and, finally, in Experiment 4, we show that participants were less likely to reach an agreement with unfair counterparts in a negotiation setting. We discuss theoretical implications for our understanding of the nature of people's social behavior with agent representatives, as well as practical implications for the design of agents that have the potential to increase fairness in society.
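The abstract contrasts the ultimatum and impunity games, whose payoff rules drive the fairness results. As a reader aid, here is a minimal sketch of the standard payoff rules for both games (illustrative only; function names and the pie size are assumptions, and this is not the authors' experimental code):

```python
# Illustrative payoff rules for the two games discussed in the abstract.
# Names and conventions are assumptions for exposition, not the paper's code.

def ultimatum_payoffs(pie: int, offer: int, accepted: bool) -> tuple[int, int]:
    """Ultimatum game: the proposer offers `offer` out of `pie`.
    If the responder rejects, BOTH players get nothing, so the
    responder can punish unfair offers at a cost to themselves."""
    if accepted:
        return pie - offer, offer  # (proposer share, responder share)
    return 0, 0  # rejection destroys the whole pie

def impunity_payoffs(pie: int, offer: int, accepted: bool) -> tuple[int, int]:
    """Impunity game: rejection zeroes only the responder's share;
    the proposer keeps pie - offer regardless, so there is no
    credible threat of punishment for unfair offers."""
    responder = offer if accepted else 0
    return pie - offer, responder
```

For example, with a pie of 10 and an offer of 1, rejection yields (0, 0) in the ultimatum game but (9, 0) in the impunity game, which is why rejection behavior in the two games measures different things.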

Item Type: Articles
Additional Information: This work is supported by the National Science Foundation, under Grant BCS-1419621, and the Air Force Office of Scientific Research, under Grant FA9550-14-1-0364.
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Marsella, Professor Stacy
Authors: de Melo, C. M., Marsella, S., and Gratch, J.
College/School: College of Medical Veterinary and Life Sciences > Institute of Neuroscience and Psychology
Journal Name: Autonomous Agents and Multi-Agent Systems
Publisher: Springer
ISSN: 1387-2532
ISSN (Online): 1573-7454
Published Online: 19 July 2017
Copyright Holders: Copyright © 2017 The Authors
First Published: First published in Autonomous Agents and Multi-Agent Systems 32(1): 163-187
Publisher Policy: Reproduced in accordance with the publisher copyright policy
