1.
iScience; 24(12): 103505, 2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34934924

ABSTRACT

Competition for social influence is a major force shaping societies, from baboons guiding their troop in different directions, to politicians competing for voters, to influencers competing for attention on social media. Social influence is invariably contested, with multiple influencers vying for it. We study which strategy maximizes social influence under competition. Applying game theory to a scenario where two advisers compete for the attention of a client, we find that the rational solution for advisers is to communicate truthfully when favored by the client, but to lie when ignored. Across seven pre-registered studies with 802 participants, such a strategic adviser consistently outcompeted an honest adviser. Strategic dishonesty outperformed truth-telling in swaying individual voters, the majority vote in anonymously voting groups, and the consensus vote in communicating groups. Our findings help explain the success of political movements that thrive on disinformation and of vocal underdog politicians with no credible program.
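
The strategy described in the abstract, align with the truth while favored and gamble on being contrarian while ignored, can be made concrete with a toy simulation. The sketch below is not the paper's model: the attention-switching rule, the event probability, and the adviser behaviors are illustrative assumptions.

import random

def simulate(rounds=10_000, p_event=0.7, seed=0):
    """One client, two advisers predicting a binary event that comes up
    True with probability p_event (> 0.5, so True is the likely outcome).
    The honest adviser always predicts the likelier outcome; the strategic
    adviser does the same while favored, but predicts the unlikely outcome
    while ignored, gambling that a lucky hit wins the client's attention."""
    rng = random.Random(seed)
    favored = "honest"                    # whom the client currently follows
    time_favored = {"honest": 0, "strategic": 0}
    for _ in range(rounds):
        time_favored[favored] += 1
        outcome = rng.random() < p_event  # True = the likely outcome occurred
        honest_says = True
        strategic_says = favored == "strategic"  # truthful only when favored
        # Simple attention rule (assumed): the client switches only when the
        # favored adviser was wrong and the ignored adviser was right.
        if favored == "honest" and honest_says != outcome and strategic_says == outcome:
            favored = "strategic"
        elif favored == "strategic" and strategic_says != outcome and honest_says == outcome:
            favored = "honest"
    return {k: v / rounds for k, v in time_favored.items()}

print(simulate())  # e.g. {'honest': ~0.0003, 'strategic': ~0.9997}

In this toy setup the strategic adviser ends up holding the client's attention almost permanently: while favored it gives the same advice as the honest adviser and so can never look worse, and while ignored its contrarian gamble pays off the first time the unlikely outcome occurs.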

2.
iScience; 24(6): 102679, 2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34189440

ABSTRACT

We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not return the AI's benevolence to the same degree and exploited the AI more than they exploited fellow humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans reciprocating their cooperativeness, run the risk of being exploited. This vulnerability calls not just for smarter machines but also for better human-centered policies.
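
For concreteness, the sketch below shows the incentive structure of one classic social dilemma, a one-shot Prisoner's Dilemma; the payoff values and the trust parameter are illustrative assumptions, not values from the paper. It shows why equal trust need not produce equal reciprocity: defection maximizes one's own expected payoff no matter how cooperative the partner is expected to be, so cooperating anyway is a matter of social disposition rather than calculation.

# Payoffs as (my_points, partner_points), with the standard Prisoner's
# Dilemma ordering T > R > P > S (values assumed for illustration).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # R: reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # S for me, T for the partner
    ("defect",    "cooperate"): (5, 0),  # T: temptation to exploit
    ("defect",    "defect"):    (1, 1),  # P: punishment for mutual defection
}

def expected_points(my_move, p_partner_cooperates):
    """My expected payoff for my_move, given the trust I place in my partner."""
    if_coop = PAYOFFS[(my_move, "cooperate")][0]
    if_defect = PAYOFFS[(my_move, "defect")][0]
    return p_partner_cooperates * if_coop + (1 - p_partner_cooperates) * if_defect

# Equal trust in a human and an AI partner (p = 0.8, assumed here)...
trust = 0.8
print("cooperate:", expected_points("cooperate", trust))  # 2.4
print("defect:   ", expected_points("defect", trust))     # 4.2
# ...still leaves defection tempting; the paper's participants resisted that
# temptation less when the partner was an AI.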

3.
Med Health Care Philos; 24(3): 329-340, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33733389

ABSTRACT

An effective method to increase the number of potential cadaveric organ donors is to make people donors by default, with the option to opt out. This non-coercive public-policy tool for influencing people's choices is often justified by the as-judged-by-themselves principle: people are nudged into choosing what they themselves truly want. We review three commonly hypothesized reasons why defaults work and argue that the as-judged-by-themselves principle may hold in only two of these cases. We specify further conditions under which the principle can hold in those cases and show that whether these conditions are met is often unclear. We recommend ways to expand nationwide surveys to identify the actual reasons why defaults work, and discuss mandated-choice policy as a viable solution to many of the conundrums that arise.


Subject(s)
Tissue and Organ Procurement; Humans; Public Policy; Surveys and Questionnaires; Tissue Donors