“Unfair Offers Seem Less Unfair When It is From a Non-human Agent” by Aaron Garvey, TaeWoo Kim and Adam Duhachek

From an economic standpoint, the rational decision in an ultimatum game is to accept any positive offer, because a small monetary reward is better than nothing. However, numerous studies have shown that a concern for fairness is deeply embedded in the human mind and leads people to reject unequal offers that seem unfair (e.g., $10 offered to the self and $90 kept by the other). The current research proposes a new framework that introduces a novel agent, namely a non-human, artificial agent. We hypothesize that individuals' willingness to accept an unfair offer will increase when the offer is made by an artificial (vs. human) agent, as fundamental differences in the perceived motives of human and non-human agents attenuate fairness concerns. In support of this hypothesis, we show that, across varying contexts and levels of unfairness, individuals are more willing to accept unfair offers from artificial agents (Studies 1 and 2). To explain our findings, we examine several potential underlying mechanisms, including decreased perception of exploitative intent, decreased perception of the intentionality of an action (i.e., in the context of an algorithm-based offer), and cognitive (vs. emotional) reactions to the offer. We also examine whether the extent of an artificial agent's anthropomorphism moderates these findings.
