ABSTRACT
Ingroup favoritism and intergroup discrimination can be mutually reinforcing during social interaction, threatening intergroup cooperation and the sustainability of societies. In two studies (N = 880), we investigated whether promoting prosocial outgroup altruism would weaken the ingroup favoritism cycle of influence. Using novel methods of human-agent interaction via a computer-mediated experimental platform, we introduced outgroup altruism by (i) nonadaptive artificial agents with preprogrammed outgroup altruistic behavior (Study 1; N = 400) and (ii) adaptive artificial agents whose altruistic behavior was informed by the predictions of a machine learning algorithm (Study 2; N = 480). A rating task ensured that the observed behavior did not result from participants' awareness of the artificial agents. In Study 1, nonadaptive agents prompted ingroup members to withhold cooperation from ingroup agents and thereby reinforced ingroup favoritism among humans. In Study 2, adaptive agents weakened ingroup favoritism over time by maintaining a good reputation with both ingroup and outgroup members, who perceived the agents as fairer than humans and rated them as more human than humans. We conclude that a good reputation for the individual exhibiting outgroup altruism is necessary to weaken ingroup favoritism and improve intergroup cooperation. Reputation is thus a key consideration in the design of nudge agents.