2.
Dyn Games Appl ; : 1-20, 2023 Apr 04.
Article in English | MEDLINE | ID: mdl-37361929

ABSTRACT

Humans have developed considerable machinery used at scale to create policies and to distribute incentives, yet we are forever seeking ways in which to improve upon these, our institutions. Especially when funding is limited, it is imperative to optimise spending without sacrificing positive outcomes, a challenge which has often been approached within several areas of the social, life and engineering sciences. These studies often neglect the availability of information, cost constraints or the underlying complex network structures that define real-world populations. Here, we extend these models to include the aforementioned concerns and also test the robustness of their findings under stochastic social learning paradigms. Akin to real-world decisions on how best to distribute endowments, we study several incentive schemes, which consider information about the overall population, local neighbourhoods or the level of influence a cooperative node has in the network, selectively rewarding cooperative behaviour if certain criteria are met. Following a transition towards a more realistic network setting and a stochastic behavioural update rule, we find that carelessly promoting cooperators can often lead to their downfall in socially diverse settings. These emergent cyclic patterns not only damage cooperation, but also decimate the budgets of external investors. Our findings highlight the complexity of designing effective and cogent investment policies in socially diverse populations.
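A minimal simulation sketch of one such incentive scheme, written for illustration only: it is not the authors' code, the network generator, the Fermi update and all parameter names and values are assumptions, and it requires the networkx package.

```python
# Sketch: an external investor rewards cooperating nodes on a scale-free network
# whenever the fraction of cooperators in their neighbourhood falls below a
# threshold, while strategies spread by stochastic (Fermi) imitation.
import math
import random
import networkx as nx

N, M = 100, 4                      # population size, Barabasi-Albert attachment parameter
b, c = 2.0, 1.0                    # benefit and cost of cooperation (donation game)
beta = 1.0                         # intensity of selection in the Fermi rule
reward_per_node = 1.0              # per-round endowment given to a rewarded cooperator
neighbourhood_threshold = 0.5      # reward only where local cooperation is scarce

G = nx.barabasi_albert_graph(N, M)
strategy = {i: random.choice([0, 1]) for i in G}   # 1 = cooperator, 0 = defector

def payoff(i):
    """Accumulated donation-game payoff of node i against all neighbours."""
    return sum(b * strategy[j] - c * strategy[i] for j in G.neighbors(i))

def local_coop_fraction(i):
    nbrs = list(G.neighbors(i))
    return sum(strategy[j] for j in nbrs) / len(nbrs) if nbrs else 0.0

total_spent = 0.0
for _ in range(2000):                               # asynchronous update rounds
    # 1) the institution selectively rewards cooperators in defector-rich neighbourhoods
    fitness = {i: payoff(i) for i in G}
    for i in G:
        if strategy[i] == 1 and local_coop_fraction(i) < neighbourhood_threshold:
            fitness[i] += reward_per_node
            total_spent += reward_per_node
    # 2) stochastic social learning: node i imitates a random neighbour j with Fermi probability
    i = random.choice(list(G))
    j = random.choice(list(G.neighbors(i)))
    p_imitate = 1.0 / (1.0 + math.exp(-beta * (fitness[j] - fitness[i])))
    if random.random() < p_imitate:
        strategy[i] = strategy[j]

print("final cooperation:", sum(strategy.values()) / N, "| budget spent:", total_spent)
```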

3.
R Soc Open Sci ; 9(5): 212000, 2022 May.
Article in English | MEDLINE | ID: mdl-35582657

ABSTRACT

We present an evolutionary game model that integrates the concepts of tags, trust and migration to study how trust in social and physical groups influences cooperation and migration decisions. All agents have a tag, and they gain or lose trust in other tags as they interact with other agents. This trust in different tags determines their trust in other players and groups. In contrast to other models in the literature, our model does not use tags to determine the cooperation/defection decisions of the agents, but rather their migration decisions. Agents decide whether to cooperate or defect based purely on social learning (i.e. imitation of others). Agents use information about tags and their trust in tags to determine how much they trust a particular group of agents and whether they want to migrate to that group. Comprehensive experiments show that the model can promote high levels of cooperation and trust under different game scenarios, and that curbing the migration decisions of agents can negatively impact both cooperation and trust in the system. We also observe that trust becomes scarce in the system as the diversity of tags increases. This work is one of the first to study the impact of tags on trust in the system and on the migration behaviour of agents using evolutionary game theory.
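A hedged reconstruction, for illustration only, of the trust-and-migration mechanism the abstract describes; the class layout and parameter names (trust_delta, migrate_threshold) are invented.

```python
# Sketch: agents keep a trust score for each tag, update it after interactions,
# and use aggregate trust in a group's tags to decide whether to migrate there.
from collections import defaultdict

class Agent:
    def __init__(self, tag, group):
        self.tag = tag
        self.group = group
        self.trust = defaultdict(lambda: 0.5)   # trust in each tag, starts neutral

    def update_trust(self, partner_tag, partner_cooperated, trust_delta=0.1):
        """Gain trust in a tag after cooperation, lose it after defection."""
        change = trust_delta if partner_cooperated else -trust_delta
        self.trust[partner_tag] = min(1.0, max(0.0, self.trust[partner_tag] + change))

    def trust_in_group(self, members):
        """Trust in a group = average trust in the tags of its members."""
        return sum(self.trust[m.tag] for m in members) / len(members) if members else 0.0

    def maybe_migrate(self, groups, migrate_threshold=0.6):
        """Move to the most-trusted group if it beats both the threshold and the current group."""
        best = max(groups, key=lambda g: self.trust_in_group(groups[g]))
        here = self.trust_in_group(groups[self.group])
        if best != self.group and self.trust_in_group(groups[best]) > max(here, migrate_threshold):
            self.group = best

# tiny usage example
alice = Agent(tag="red", group="g1")
alice.update_trust("green", partner_cooperated=True)
alice.update_trust("green", partner_cooperated=True)
groups = {"g1": [alice], "g2": [Agent("green", "g2"), Agent("green", "g2")]}
alice.maybe_migrate(groups)
print(alice.group, dict(alice.trust))
```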

4.
J R Soc Interface ; 19(188): 20220036, 2022 03.
Article in English | MEDLINE | ID: mdl-35317650

ABSTRACT

Both conventional wisdom and empirical evidence suggest that arranging a prior commitment or agreement before an interaction takes place enhances the chance of reaching mutual cooperation. Yet it is not clear what mechanisms might underlie the participation in and compliance with such a commitment, especially when participation is costly and non-compliance can be profitable. Here, we develop a theory of participation and compliance with respect to an explicit commitment formation process and to institutional incentives where individuals, at first, decide whether or not to join a cooperative agreement to play a one-shot social dilemma game. Using a mathematical model, we determine whether and when participating in a costly commitment, and complying with it, is an evolutionarily stable strategy, resulting in high levels of cooperation. We show that, given a sufficient budget for providing incentives, rewarding commitment-compliant behaviours promotes cooperation better than punishing non-compliant ones. Moreover, by sparing part of this budget to reward those willing to participate in a commitment, the overall level of cooperation can be significantly enhanced for both reward and punishment. Finally, the presence of mistakes in deciding whether to participate favours the evolutionary stability of commitment compliance and cooperation.
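A toy numerical sketch of how an incentive budget might be split between rewarding participation in a commitment and rewarding compliance with it, on top of a one-shot Prisoner's Dilemma; the payoff values and the split are illustrative assumptions, not the paper's parameterisation.

```python
# Sketch: payoff of a focal player when a commitment stage precedes a one-shot PD
# and an institution divides its budget between joiners and compliant players.
R, S, T, P = 3.0, 0.0, 4.0, 1.0      # standard PD payoffs (T > R > P > S)
epsilon = 0.5                        # cost of setting up / joining the commitment
budget = 2.0                         # institutional budget per pair of players
share_participation = 0.25           # fraction of the budget spared for joiners

reward_join = share_participation * budget
reward_comply = (1.0 - share_participation) * budget

def payoff(joins, complies, partner_joins, partner_complies):
    """Payoff of a focal player; the commitment only forms if both join."""
    pi = -epsilon if joins else 0.0
    if joins:
        pi += reward_join
    formed = joins and partner_joins
    my_coop = complies if formed else False          # outside a deal everyone defects
    their_coop = partner_complies if formed else False
    pi += {(True, True): R, (True, False): S, (False, True): T, (False, False): P}[(my_coop, their_coop)]
    if formed and complies:
        pi += reward_comply
    return pi

# with this budget split, complying against a compliant partner beats faking compliance
print(payoff(True, True, True, True), payoff(True, False, True, True))   # 4.5 vs 4.0
```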


Subjects
Game Theory, Motivation, Cooperative Behavior, Humans, Punishment, Reward
5.
Sci Rep ; 12(1): 1723, 2022 02 02.
Article in English | MEDLINE | ID: mdl-35110627

ABSTRACT

Regulation of advanced technologies such as Artificial Intelligence (AI) has become increasingly important, given the associated risks and apparent ethical issues. With the great benefits promised from being able to supply such technologies first, safety precautions and societal consequences might be ignored or shortchanged in exchange for speeding up development, thereby engendering a racing narrative among the developers. Starting from a game-theoretical model describing an idealised technology race in a fully connected world of players, here we investigate how different interaction structures among race participants can alter collective choices and requirements for regulatory actions. Our findings indicate that, when participants display a strong diversity in terms of connections and peer influence (e.g., when scale-free networks shape interactions among parties), the conflicts that exist in homogeneous settings are significantly reduced, thereby lessening the need for regulatory actions. Furthermore, our results suggest that technology governance and regulation may profit from the world's patent heterogeneity and inequality among firms and nations, so as to enable the design and implementation of meticulous interventions on a minority of participants capable of influencing an entire population towards an ethical and sustainable use of advanced technologies.

6.
Sci Rep ; 11(1): 23581, 2021 12 08.
Article in English | MEDLINE | ID: mdl-34880264

ABSTRACT

Moral rules allow humans to cooperate by indirect reciprocity. Yet, it is not clear which moral rules best implement indirect reciprocity and are favoured by natural selection. Previous studies either considered only public assessment, where individuals are deemed good or bad by all others, or compared only a subset of possible strategies. Here we fill this gap by identifying which rules are evolutionarily stable strategies (ESS) among all possible moral rules while considering private assessment. We develop an analytical model describing the frequency of long-term cooperation, determining when a strategy can be invaded by another. We show that there are numerous ESSs in the absence of errors, which however cease to exist when errors are present. We identify the underlying properties of cooperative ESSs. Overall, this paper provides a first exhaustive evolutionary invasion analysis of moral rules considering private assessment. Moreover, this model is extendable to incorporate higher-order rules and other processes.
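A minimal sketch of how such a moral rule can be encoded; the representation below is a common one in the indirect-reciprocity literature (the example norm is the one known as Stern Judging) and is not taken from the paper itself.

```python
# Sketch: an assessment rule maps (donor's action, observer's view of the recipient)
# to a new opinion of the donor, and an action rule maps the donor's view of the
# recipient to an action. Under private assessment each observer applies the rule
# to their own, possibly divergent, opinions.
import random

GOOD, BAD = 1, 0
C, D = "C", "D"

assessment = {  # Stern Judging: (action taken, recipient's image) -> donor's new image
    (C, GOOD): GOOD, (D, GOOD): BAD,
    (C, BAD):  BAD,  (D, BAD):  GOOD,
}
action_rule = {GOOD: C, BAD: D}      # discriminate: cooperate only with the "good"

def observe(observer_images, donor, recipient, action, error_rate=0.0):
    """Privately update the observer's image of the donor (with optional assessment error)."""
    new_image = assessment[(action, observer_images[recipient])]
    if random.random() < error_rate:
        new_image = 1 - new_image     # a perception error flips the judgement
    observer_images[donor] = new_image

# each individual keeps a private image vector of everyone else
images_of_alice = {"bob": GOOD, "carol": BAD}
observe(images_of_alice, donor="bob", recipient="carol", action=D)
print(images_of_alice["bob"])         # under Stern Judging, defecting against a "bad" player looks good
```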


Subjects
Cooperative Behavior, Moral Principles, Biological Evolution, Humans, Interpersonal Relations, Genetic Selection/physiology
7.
PLoS One ; 16(1): e0244592, 2021.
Article in English | MEDLINE | ID: mdl-33497424

ABSTRACT

The field of Artificial Intelligence (AI) is going through a period of great expectations, introducing a certain level of anxiety in research, business and also policy. This anxiety is further energised by an AI race narrative that makes people believe they might be missing out. Whether real or not, a belief in this narrative may be detrimental, as some stakeholders will feel obliged to cut corners on safety precautions, or to ignore societal consequences, just to "win". Starting from a baseline model that describes a broad class of technology races where winners draw a significant benefit compared to others (such as AI advances, patent races or pharmaceutical technologies), we investigate here how positive (rewards) and negative (punishments) incentives may beneficially influence the outcomes. We uncover conditions in which punishment is either capable of reducing the development speed of unsafe participants or has the capacity to reduce innovation through over-regulation. Alternatively, we show that, in several scenarios, rewarding those that follow safety measures may increase the development speed while ensuring safe choices. Moreover, in the latter regimes, rewards do not suffer from the issue of over-regulation, as is the case for punishment. Overall, our findings provide valuable insights into the nature and kinds of regulatory actions most suitable for improving safety compliance in the contexts of both smooth and sudden technological shifts.


Subjects
Artificial Intelligence, Creativity, Humans, Motivation, Punishment, Reward, Technology
8.
Proc Math Phys Eng Sci ; 477(2254): 20210568, 2021 Oct.
Article in English | MEDLINE | ID: mdl-35153590

ABSTRACT

Institutions can provide incentives to enhance cooperation in a population where this behaviour is infrequent. This process is costly, and it is thus important to optimize the overall spending. This problem can be mathematically formulated as a multi-objective optimization problem where one wishes to minimize the cost of providing incentives while ensuring a minimum level of cooperation, sustained over time. Prior works that consider this question usually omit the stochastic effects that drive population dynamics. In this paper, we provide a rigorous analysis of this optimization problem, in a finite-population, stochastic setting, studying both pairwise and multi-player cooperation dilemmas. We prove the regularity of the cost functions for providing incentives over time, characterize their asymptotic limits (infinite population size, weak selection and large selection) and show exactly when reward or punishment is more cost-efficient. We show that these cost functions exhibit a phase transition phenomenon when the intensity of selection varies. By determining the critical threshold of this phase transition, we provide exact calculations for the optimal cost of the incentive, for any given intensity of selection. Numerical simulations are also provided to demonstrate the analytical observations. Overall, our analysis provides for the first time a selection-dependent calculation of the optimal cost of institutional incentives (for both reward and punishment) that guarantees a minimum level of cooperation over time. This is of crucial importance for real-world applications of institutional incentives, since the intensity of selection is often found to be non-extreme and specific to a given population.
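A small numerical sketch, under assumptions of mine rather than the paper's derivation, of why the intensity of selection matters here: the fixation probability of a single cooperator under a pairwise-comparison (Fermi) process when an institution adds a reward theta to every cooperator. Payoff values and theta are illustrative.

```python
# Sketch: sweep the selection intensity beta and watch how strongly it changes
# the chance that institutional reward carries cooperation to fixation.
import math

Z = 50                               # finite population size
R, S, T, P = 3.0, 0.0, 4.0, 1.0      # one-shot PD payoffs
theta = 1.5                          # institutional reward per cooperator per round

def payoff_gap(k):
    """pi_C(k) - pi_D(k) with the reward included, k = number of cooperators."""
    pi_C = (R * (k - 1) + S * (Z - k)) / (Z - 1) + theta
    pi_D = (T * k + P * (Z - k - 1)) / (Z - 1)
    return pi_C - pi_D

def fixation_probability(beta):
    """Probability that one cooperator takes over the population
    (standard birth-death formula; T^-(k)/T^+(k) = exp(-beta * payoff_gap(k)))."""
    total, prod = 1.0, 1.0
    for i in range(1, Z):
        prod *= math.exp(-beta * payoff_gap(i))
        total += prod
    return 1.0 / total

for beta in (0.01, 0.1, 1.0, 10.0):   # weak to strong selection
    print(f"beta={beta:5}: fixation probability of cooperation = {fixation_probability(beta):.4f}")
```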

9.
Entropy (Basel) ; 24(1)2021 Dec 21.
Article in English | MEDLINE | ID: mdl-35052036

ABSTRACT

We present a summary of research that we have conducted employing AI to better understand human morality. This summary adumbrates theoretical fundamentals and considers how to regulate the development of powerful new AI technologies. The latter research aim is benevolent AI, with fair distribution of the benefits associated with the development of these and related technologies, avoiding disparities of power and wealth due to unregulated competition. Our approach avoids the statistical models employed in other approaches to solving moral dilemmas, because these are "blind" to natural constraints on moral agents and risk perpetuating mistakes. Instead, our approach employs, for instance, psychologically realistic counterfactual reasoning in group dynamics. The present paper reviews studies involving factors fundamental to human moral motivation, including egoism vs. altruism, commitment vs. defaulting, guilt vs. non-guilt, apology plus forgiveness, and counterfactual collaboration, among other factors fundamental to the motivation of moral action. These being basic elements in most moral systems, our studies deliver generalizable conclusions that inform efforts to achieve greater sustainability and global benefit, regardless of the cultural specificities of their constituents.

10.
J Math Biol ; 78(1-2): 331-371, 2019 01.
Article in English | MEDLINE | ID: mdl-30069646

ABSTRACT

The analysis of equilibrium points is of great importance in evolutionary game theory with numerous practical ramifications in ecology, population genetics, social sciences, economics and computer science. In contrast to previous analytical approaches which primarily focus on computing the expected number of internal equilibria, in this paper we study the distribution of the number of internal equilibria in a multi-player two-strategy random evolutionary game. We derive for the first time a closed formula for the probability that the game has a certain number of internal equilibria, for both normal and uniform distributions of the game payoff entries. In addition, using Descartes' rule of signs and combinatorial methods, we provide several universal upper and lower bound estimates for this probability, which are independent of the underlying payoff distribution. We also compare our analytical results with those obtained from extensive numerical simulations. Many results of this paper are applicable to a wider class of random polynomials that are not necessarily from evolutionary games.
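A Monte Carlo sketch of the setting, using the standard polynomial formulation for d-player two-strategy games (internal equilibria correspond to positive roots of a polynomial in y = x/(1-x) with coefficients given by the payoff differences); this illustrates the quantity being studied and is not the paper's closed formula.

```python
# Sketch: sample the payoff differences beta_k = a_k - b_k from a normal
# distribution, count the positive real roots of sum_k binom(d-1,k) beta_k y^k,
# and build an empirical distribution of the number of internal equilibria.
import numpy as np
from math import comb
from collections import Counter

def count_internal_equilibria(d, rng):
    """Number of distinct positive real roots for one random d-player game."""
    beta = rng.standard_normal(d)                      # a_k - b_k for k = 0..d-1
    coeffs = [beta[k] * comb(d - 1, k) for k in range(d)]
    roots = np.roots(coeffs[::-1])                     # numpy expects highest degree first
    positive_real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 1e-9]
    return len(set(np.round(positive_real, 9)))

rng = np.random.default_rng(0)
d, samples = 5, 20000
dist = Counter(count_internal_equilibria(d, rng) for _ in range(samples))
for m in sorted(dist):
    print(f"P(#internal equilibria = {m}) ~ {dist[m] / samples:.3f}")
```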


Subjects
Biological Evolution, Game Theory, Biological Models, Algorithms, Computational Biology, Computer Simulation, Humans, Mathematical Concepts, Probability
11.
Sci Rep ; 8(1): 15997, 2018 10 30.
Article in English | MEDLINE | ID: mdl-30375463

ABSTRACT

The problem of promoting the evolution of cooperative behaviour within populations of self-regarding individuals has been intensively investigated across diverse fields of behavioural, social and computational sciences. In most studies, cooperation is assumed to emerge from the combined actions of participating individuals within the populations, without taking into account the possibility of external interference and how it can be performed in a cost-efficient way. Here, we bridge this gap by studying a cost-efficient interference model based on evolutionary game theory, where an exogenous decision-maker aims to ensure high levels of cooperation from a population of individuals playing the one-shot Prisoner's Dilemma, at a minimal cost. We derive analytical conditions for which an interference scheme or strategy can guarantee a given level of cooperation while at the same time minimising the total cost of investment (for rewarding cooperative behaviours), and show that the results are highly sensitive to the intensity of selection by interference. Interestingly, we show that a simple class of interference that makes investment decisions based on the population composition can lead to significantly more cost-efficient outcomes than standard institutional incentive strategies, especially in the case of weak selection.
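An illustrative simulation sketch, with invented parameters rather than the paper's analytical model, of composition-based interference: the external decision-maker rewards cooperators only when their number falls to or below a trigger, and the cumulative cost is tracked alongside cooperation.

```python
# Sketch: well-mixed population, one-shot PD, Fermi imitation plus rare exploration;
# the institution pays `theta` per cooperator only in rounds where cooperators are scarce.
import math, random

Z = 50                                  # population size (well-mixed)
R, S, T, P = 3.0, 0.0, 4.0, 1.0         # one-shot PD payoffs
beta = 0.1                              # intensity of selection
theta, trigger = 2.0, 25                # per-cooperator reward, composition trigger
mu = 0.01                               # exploration (mutation) rate

def avg_payoffs(k, invest):
    add = theta if invest else 0.0
    pi_C = (R * (k - 1) + S * (Z - k)) / (Z - 1) + add
    pi_D = (T * k + P * (Z - k - 1)) / (Z - 1)
    return pi_C, pi_D

k, cost, coop_time = Z // 2, 0.0, 0
rounds = 200000
for _ in range(rounds):
    invest = 0 < k <= trigger            # interfere only when cooperators are scarce
    if invest:
        cost += theta * k
    pi_C, pi_D = avg_payoffs(k, invest) if 0 < k < Z else (0.0, 0.0)
    focal_is_C = random.random() < k / Z
    if random.random() < mu:             # random exploration keeps the dynamics ergodic
        k += -1 if focal_is_C else 1
    elif 0 < k < Z:                      # otherwise imitate a role model with Fermi probability
        model_is_C = random.random() < k / Z
        if focal_is_C != model_is_C:
            gain = (pi_C - pi_D) if model_is_C else (pi_D - pi_C)
            if random.random() < 1.0 / (1.0 + math.exp(-beta * gain)):
                k += 1 if model_is_C else -1
    coop_time += k

print("average cooperation:", coop_time / (rounds * Z), "| total interference cost:", cost)
```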


Subjects
Cooperative Behavior, Interpersonal Relations, Theoretical Models, Algorithms, Cost-Benefit Analysis, Humans, Social Behavior
12.
Sci Rep ; 7(1): 2478, 2017 05 30.
Article in English | MEDLINE | ID: mdl-28559538

ABSTRACT

Agreements and commitments have provided a novel mechanism to promote cooperation in social dilemmas in both one-shot and repeated games. Individuals requesting others to commit to cooperate (proposers) incur a cost, while their co-players (acceptors) are not necessarily required to pay anything, allowing them to free-ride on the proposal investment cost. Although there is a clear complementarity in these behaviours, no dynamic evidence is currently available that proves that they coexist in different forms of commitment creation. Using a stochastic evolutionary model allowing for mixed population states, we identify non-trivial roles of acceptors as well as the importance of intention recognition in commitments. In the one-shot prisoner's dilemma, alliances between proposers and acceptors are necessary to isolate defectors when proposers do not know the acceptance intentions of the others. However, when the intentions are clear beforehand, the proposers can emerge by themselves. In repeated games with noise, the incapacity of proposers and acceptors to set up alliances makes the emergence of the former harder whenever the latter are present. As a result, acceptors will exploit proposers and take over the population when an apology-forgiveness mechanism with too low an apology cost is introduced, hence reducing the overall cooperation level.


Subjects
Biological Evolution, Cooperative Behavior, Forgiveness/physiology, Interpersonal Relations, Game Theory, Humans, Investments, Knowledge, Theoretical Models, Prisoner's Dilemma
13.
J Math Biol ; 73(6-7): 1727-1760, 2016 12.
Article in English | MEDLINE | ID: mdl-27107868

ABSTRACT

In this paper, we study the distribution and behaviour of internal equilibria in a d-player n-strategy random evolutionary game where the game payoff matrix is generated from normal distributions. The study of this paper reveals and exploits interesting connections between evolutionary game theory and random polynomial theory. The main contributions of the paper are some qualitative and quantitative results on the expected density, [Formula: see text], and the expected number, E(n, d), of (stable) internal equilibria. Firstly, we show that in multi-player two-strategy games, these quantities behave asymptotically as [Formula: see text] when d is sufficiently large. Secondly, we prove that they are monotone functions of d. We also make a conjecture for games with more than two strategies. Thirdly, we provide numerical simulations to illustrate our analytical results and to support the conjecture. As consequences of our analysis, some qualitative and quantitative results on the distribution of zeros of a random Bernstein polynomial are also obtained.
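A complementary Monte Carlo sketch (my own illustration, not the paper's analysis) of the Bernstein-polynomial connection mentioned above: estimating the expected number of zeros in (0, 1) of a random Bernstein polynomial for a few values of d.

```python
# Sketch: with payoff differences beta_k, internal equilibria are the zeros in (0, 1) of
#   B(x) = sum_k beta_k * binom(d-1, k) * x^k * (1-x)^(d-1-k),
# so the expected number of such zeros can be estimated by sampling beta_k.
import numpy as np
from math import comb

P = np.polynomial.polynomial   # coefficient arrays in increasing-degree order

def zeros_in_unit_interval(beta):
    d = len(beta)
    coeffs = np.zeros(d)
    for k in range(d):
        basis = P.polymul(P.polypow([0.0, 1.0], k),            # x^k
                          P.polypow([1.0, -1.0], d - 1 - k))    # (1-x)^(d-1-k)
        coeffs[: len(basis)] += beta[k] * comb(d - 1, k) * basis
    roots = P.polyroots(coeffs)
    return sum(1 for r in roots if abs(r.imag) < 1e-9 and 1e-9 < r.real < 1 - 1e-9)

rng = np.random.default_rng(1)
samples = 5000
for d in (3, 5, 8, 13):
    estimate = np.mean([zeros_in_unit_interval(rng.standard_normal(d)) for _ in range(samples)])
    print(f"d = {d:2d}: estimated expected number of internal equilibria ~ {estimate:.3f}")
```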


Subjects
Biological Evolution, Game Theory, Statistical Models, Algorithms, Computer Simulation, Cooperative Behavior
15.
Sci Rep ; 5: 10639, 2015 Jun 09.
Article in English | MEDLINE | ID: mdl-26057819

ABSTRACT

Making agreements on how to behave has been shown to be an evolutionarily viable strategy in one-shot social dilemmas. However, in many situations agreements aim to establish long-term mutually beneficial interactions. Our analytical and numerical results reveal for the first time under which conditions revenge, apology and forgiveness can evolve and deal with mistakes within ongoing agreements in the context of the Iterated Prisoner's Dilemma. We show that, when the agreement fails, participants prefer to take revenge by defecting in the remaining encounters. Incorporating costly apology and forgiveness reveals that, even when mistakes are frequent, there exists a sincerity threshold above which mistakes will not lead to the destruction of the agreement, inducing even higher levels of cooperation. In short, even when to err is human, revenge, apology and forgiveness are evolutionarily viable strategies which play an important role in inducing cooperation in repeated dilemmas.
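A minimal decision-logic sketch of the revenge/apology/forgiveness mechanism described above; the function name, costs and threshold are illustrative assumptions, not the paper's strategy definitions.

```python
# Sketch: after a defection inside an agreement in an iterated Prisoner's Dilemma,
# the harmed partner forgives only if the apology is costly enough to look sincere;
# otherwise it takes revenge by defecting for the rest of the interaction.
def respond_to_mistake(apology_offered, apology_cost, sincerity_threshold=1.0):
    if apology_offered and apology_cost >= sincerity_threshold:
        return "FORGIVE_AND_COOPERATE"
    return "REVENGE_DEFECT"

print(respond_to_mistake(apology_offered=True, apology_cost=1.5))   # sincere enough: forgiven
print(respond_to_mistake(apology_offered=True, apology_cost=0.2))   # too cheap: revenge
print(respond_to_mistake(apology_offered=False, apology_cost=0.0))  # no apology: revenge
```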


Subjects
Cooperative Behavior, Forgiveness, Humans, Interpersonal Relations
16.
Sci Rep ; 5: 9312, 2015 Mar 20.
Article in English | MEDLINE | ID: mdl-25791431

ABSTRACT

Commitments have been shown to promote cooperation if, on the one hand, they can be sufficiently enforced, and on the other hand, the cost of arranging them is justified with respect to the benefits of cooperation. When either of these constraints is not met, commitment free-riders prevail, such as those who commit only when someone else pays to arrange the commitment. Here, we show how intention recognition may circumvent this weakness of costly commitments. We describe an evolutionary model, in the context of the one-shot Prisoner's Dilemma, showing that if players first predict the intentions of their co-player and propose a commitment only when they are not confident enough about their prediction, the chances of reaching mutual cooperation are largely enhanced. We find that an advantageous synergy between intention recognition and costly commitments depends strongly on the confidence and accuracy of intention recognition. In general, we observe that an intermediate confidence threshold leads to the highest evolutionary advantage, showing that neither unconditional use of commitments nor intention recognition alone can perform optimally. Rather, our results show that arranging commitments is not always desirable, but that it may also be unavoidable depending on the strength of the dilemma.
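A small decision-logic sketch of the confidence-threshold mechanism described above; the names and threshold value are assumptions made for illustration only.

```python
# Sketch: act on a prediction of the co-player's intention only when its confidence
# clears a threshold; otherwise fall back on proposing a costly commitment.
from dataclasses import dataclass

@dataclass
class Prediction:
    cooperative: bool     # predicted intention of the co-player
    confidence: float     # in [0, 1]; the recogniser's accuracy is a separate matter

def choose_move(prediction: Prediction, confidence_threshold: float = 0.7) -> str:
    """Return 'C', 'D', or 'PROPOSE_COMMITMENT' for a one-shot Prisoner's Dilemma."""
    if prediction.confidence >= confidence_threshold:
        # confident enough: save the arrangement cost and mirror the predicted intention
        return "C" if prediction.cooperative else "D"
    # not confident: pay the arrangement cost and let the commitment do the work
    return "PROPOSE_COMMITMENT"

print(choose_move(Prediction(cooperative=True, confidence=0.9)))    # -> C
print(choose_move(Prediction(cooperative=False, confidence=0.4)))   # -> PROPOSE_COMMITMENT
```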


Subjects
Cooperative Behavior, Humans, Reward
17.
J R Soc Interface ; 12(103)2015 Feb 06.
Article in English | MEDLINE | ID: mdl-25540240

ABSTRACT

When creating a public good, strategies or mechanisms are required to handle defectors. We first show mathematically and numerically that prior agreements with posterior compensations provide a strategic solution that leads to substantial levels of cooperation in the context of public goods games, results that are corroborated by available experimental data. Notwithstanding this success, one cannot, as with other approaches, fully exclude the presence of defectors, raising the question of how they can be dealt with to avoid the demise of the common good. We show that both avoiding creation of the common good whenever full agreement is not reached, and limiting the benefit that disagreeing defectors can acquire by using costly restriction mechanisms, are relevant choices. Nonetheless, restriction mechanisms are found to be the more favourable option, especially in larger group interactions. Given decreasing restriction costs, introducing restraining measures to cope with free-riding on public goods is ultimately the advantageous solution for all participants, rather than avoiding creation of the good altogether.
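A toy payoff sketch, under assumptions of mine rather than the paper's exact model, contrasting the two options discussed above for an N-player public goods game with prior agreements.

```python
# Sketch: (i) abandon the good when not everyone signs the agreement, or
# (ii) create it anyway but pay a restriction cost `delta` that caps the share
# non-signing defectors can take from the pot.
def public_goods_payoffs(n_signers, n_free_riders, r=3.0, c=1.0,
                         policy="restrict", delta=0.2, restricted_share=0.1):
    """Return (payoff per signer, payoff per free rider) for one round."""
    group = n_signers + n_free_riders
    if policy == "abandon" and n_free_riders > 0:
        return 0.0, 0.0                              # no unanimous agreement, no good
    pot = r * c * n_signers                          # only signers contribute
    if policy == "restrict" and n_free_riders > 0:
        free_rider_share = restricted_share * pot / group
        signer_share = (pot - n_free_riders * free_rider_share) / n_signers - delta
    else:
        free_rider_share = pot / group
        signer_share = pot / group
    return signer_share - c, free_rider_share

print(public_goods_payoffs(4, 1, policy="abandon"))    # good never created
print(public_goods_payoffs(4, 1, policy="restrict"))   # created, free rider's share capped
```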


Subjects
Experimental Games, Theoretical Models, Humans
18.
Sci Rep ; 3: 2695, 2013.
Article in English | MEDLINE | ID: mdl-24045873

ABSTRACT

When starting a new collaborative endeavor, it pays to establish upfront how strongly your partner commits to the common goal and what compensation can be expected if the collaboration is violated. Diverse examples in biological and social contexts have demonstrated the pervasiveness of making prior agreements on posterior compensations, suggesting that this behavior could have been shaped by natural selection. Here, we analyze the evolutionary relevance of such a commitment strategy and relate it to the costly punishment strategy, where no prior agreements are made. We show that when the cost of arranging a commitment deal lies within certain limits, substantial levels of cooperation can be achieved. Moreover, these levels are higher than those achieved by simple costly punishment, especially when one insists on sharing the arrangement cost. Not only do we show that good agreements make good friends; we also show that agreements based on shared costs result in even better outcomes.


Subjects
Cooperative Behavior, Interpersonal Relations, Algorithms, Humans, Theoretical Models, Social Behavior
19.
Artif Life ; 18(4): 365-83, 2012.
Article in English | MEDLINE | ID: mdl-22938562

ABSTRACT

Intention recognition is ubiquitous in most social interactions among humans and other primates. Despite this, the role of intention recognition in the emergence of cooperative actions remains elusive. Resorting to the tools of evolutionary game theory, herein we describe a computational model showing how intention recognition coevolves with cooperation in populations of self-regarding individuals. By equipping some individuals with the capacity to assess the intentions of others in the course of a prototypical dilemma of cooperation, the repeated prisoner's dilemma, we show how intention recognition is favored by natural selection, opening a window of opportunity for cooperation to thrive. We introduce a new strategy (IR) that is able to assign an intention to the actions of opponents, on the basis of an acquired corpus consisting of possible plans achieving that intention, and then to make decisions on the basis of such recognized intentions. The success of IR is grounded in the free exploitation of unconditional cooperators while remaining robust against unconditional defectors. In addition, we show how intention recognizers do indeed prevail against the best-known successful strategies of iterated dilemmas of cooperation, even in the presence of errors and a reduction in fitness associated with a small cognitive cost for performing intention recognition.
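A schematic sketch of plan-corpus-based intention assignment in the spirit of the description above; the toy corpus, the matching rule and the confidence measure are assumptions of mine, not the paper's recognition machinery.

```python
# Sketch: score each candidate intention by the fraction of its plans whose prefix
# matches the opponent's observed action history, then pick the best-scoring one.
TOY_CORPUS = {
    "cooperate": [["C", "C", "C"], ["C", "C", "D"]],   # plans consistent with a cooperative intention
    "defect":    [["D", "D"], ["D", "C", "D"]],
}

def recognise_intention(observed, corpus=TOY_CORPUS):
    """Return (best intention, confidence) given the observed action history."""
    scores = {}
    for intention, plans in corpus.items():
        matches = sum(1 for plan in plans if plan[: len(observed)] == observed)
        scores[intention] = matches / len(plans)
    best = max(scores, key=scores.get)
    total = sum(scores.values())
    return best, (scores[best] / total if total else 0.0)

print(recognise_intention(["C", "C"]))   # -> ('cooperate', 1.0)
print(recognise_intention(["D"]))        # -> ('defect', 1.0)
```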


Subjects
Biological Evolution, Computer Simulation, Cooperative Behavior, Game Theory, Intention, Learning, Genetic Selection
20.
Theor Popul Biol ; 81(4): 264-72, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22406614

ABSTRACT

The analysis of equilibrium points in biological dynamical systems has been of great interest in a variety of mathematical approaches to biology, such as population genetics, theoretical ecology or evolutionary game theory. The maximal number of equilibria and their classification based on stability have been the primary subjects of these studies, for example in the context of two-player games with multiple strategies. Herein, we address a different question using evolutionary game theory as a tool. If the payoff matrices are drawn randomly from an arbitrary distribution, what are the probabilities of observing a certain number of (stable) equilibria? We extend the domain of previous results for the two-player framework, which corresponds to a single diploid locus in population genetics, by addressing the full complexity of multi-player games with multiple strategies. In closing, we discuss an application and illustrate how previous results on the number of equilibria, such as the famous Feldman-Karlin conjecture on the maximal number of isolated fixed points in a viability selection model, can be obtained as special cases of our results based on multi-player evolutionary games. We also show how the probability of realizing a certain number of equilibria changes as we increase the number of players and number of strategies.


Subjects
Game Theory, Theoretical Models, Probability