ABSTRACT
As artificial intelligence (AI) systems are increasingly embedded in our lives, their presence leads to interactions that shape our behaviour, decision-making and social interactions. Existing theoretical research on the emergence and stability of cooperation, particularly in the context of social dilemmas, has primarily focused on human-to-human interactions, overlooking the unique dynamics triggered by the presence of AI. Resorting to methods from evolutionary game theory, we study how different forms of AI can influence cooperation in a population of human-like agents playing the one-shot Prisoner's Dilemma game. We find that Samaritan AI agents, which help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AI agents, which help only those considered worthy or cooperative, especially in slow-moving societies where change based on payoff differences is moderate (small intensities of selection). Only in fast-moving societies (high intensities of selection) do Discriminatory AIs promote higher levels of cooperation than Samaritan AIs. Furthermore, when it is possible to identify whether a co-player is a human or an AI, we find that cooperation is enhanced when human-like agents disregard AI performance. Our findings provide novel insights into the design and implementation of context-dependent AI systems for addressing social dilemmas.
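The contrast between "slow-moving" and "fast-moving" societies refers to the intensity of selection in the imitation dynamics. As a minimal sketch of how this parameter typically enters such models, the following assumes a standard pairwise-comparison (Fermi) update rule in a well-mixed population; the payoff values, function names, and the omission of the AI agents are illustrative simplifications, not details taken from the paper.

```python
import math
import random

# Illustrative one-shot Prisoner's Dilemma payoffs (T > R > P > S);
# hypothetical values, not those used in the paper.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def imitation_probability(payoff_self, payoff_other, beta):
    """Fermi rule: probability of copying a co-player's strategy.

    beta is the intensity of selection: beta -> 0 gives a 'slow-moving'
    society (imitation is nearly random), while a large beta gives a
    'fast-moving' one (any payoff advantage is copied almost surely).
    """
    return 1.0 / (1.0 + math.exp(-beta * (payoff_other - payoff_self)))

def cooperation_level(n=100, beta=0.1, steps=50_000, seed=0):
    """Well-mixed population playing the one-shot PD with pairwise updating.

    The paper's Samaritan/Discriminatory AI agents would enter as extra
    players with fixed behaviour; they are omitted from this sketch.
    """
    rng = random.Random(seed)
    strategies = ["C" if k < n // 2 else "D" for k in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)       # focal agent i, model agent j
        pi_i = PAYOFF[(strategies[i], strategies[j])]
        pi_j = PAYOFF[(strategies[j], strategies[i])]
        if rng.random() < imitation_probability(pi_i, pi_j, beta):
            strategies[i] = strategies[j]    # i imitates j
    return strategies.count("C") / n
```

Sweeping beta in such a sketch makes the two regimes concrete: for small beta, payoff differences barely matter and the dynamics are dominated by random drift, whereas for large beta the population rapidly locks onto whatever behaviour currently earns more.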
Subjects
Artificial Intelligence, Cooperative Behavior, Prisoner Dilemma, Humans, Game Theory

ABSTRACT
Two of the main factors shaping an individual's opinion are social coordination and personal preferences, or biases. To understand the role of these factors and that of the topology of the network of interactions, we study an extension of the voter model proposed by Masuda and Redner (2011), in which the agents are divided into two populations with opposite preferences. We consider a modular graph with two communities that reflect the bias assignment, modeling the phenomenon of epistemic bubbles. We analyze the model by approximate analytical methods and by simulations. Depending on the network and on the strengths of the biases, the system can either reach a consensus or a polarized state, in which the two populations stabilize at different average opinions. The modular structure generally increases both the degree of polarization and its range in the space of parameters. When the difference in bias strength between the populations is large, the success of the more committed group in imposing its preferred opinion on the other depends largely on the level of segregation of the latter population, while the dependence on the topological structure of the former is negligible. We compare the simple mean-field approach with the pair approximation and test the accuracy of the mean-field predictions on a real network.
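As a minimal sketch of this kind of biased voter dynamics on a modular graph, the following assumes a simplified update rule (with probability q an agent adopts its personal preference, otherwise it copies a random neighbour); the graph parameters, bias strengths, and the update rule itself are illustrative and may differ from the exact model of Masuda and Redner (2011).

```python
import random
import networkx as nx

def biased_voter_step(G, opinion, bias, q, rng):
    """One asynchronous update of a biased voter model.

    With probability q[i] agent i adopts its preferred opinion bias[i]
    (personal preference); otherwise it copies a random neighbour
    (social coordination).
    """
    i = rng.choice(list(G.nodes))
    if rng.random() < q[i]:
        opinion[i] = bias[i]
    else:
        neighbours = list(G.neighbors(i))
        if neighbours:
            opinion[i] = opinion[rng.choice(neighbours)]

rng = random.Random(1)

# Two communities of 500 agents, denser inside than across (an
# 'epistemic bubble'); p_in and p_out are illustrative values.
G = nx.random_partition_graph([500, 500], p_in=0.02, p_out=0.002, seed=1)
bias = {i: (+1 if i < 500 else -1) for i in G.nodes}   # preference by community
q = {i: (0.10 if i < 500 else 0.05) for i in G.nodes}  # unequal bias strengths
opinion = dict(bias)

for _ in range(200_000):
    biased_voter_step(G, opinion, bias, q, rng)

# Equal signs of the community averages indicate consensus;
# opposite signs indicate the polarized state.
m1 = sum(opinion[i] for i in range(500)) / 500
m2 = sum(opinion[i] for i in range(500, 1000)) / 500
print(f"community averages: {m1:+.2f}, {m2:+.2f}")
```

In line with the finding above, increasing p_out for the weakly committed community (i.e., reducing its segregation) should make it easier for the more committed community to impose its preferred opinion on the other.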