Results 1 - 3 of 3
1.
Proc Natl Acad Sci U S A ; 121(24): e2403116121, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38848300

ABSTRACT

Recent advancements in large language models (LLMs) have raised the prospect of scalable, automated, and fine-grained political microtargeting on a scale previously unseen; however, the persuasive influence of microtargeting with LLMs remains unclear. Here, we build a custom web application capable of integrating self-reported demographic and political data into GPT-4 prompts in real time, facilitating the live creation of unique messages tailored to persuade individual users on four political issues. We then deploy this application in a preregistered randomized controlled experiment (n = 8,587) to investigate the extent to which access to individual-level data increases the persuasive influence of GPT-4. Our approach yields two key findings. First, messages generated by GPT-4 were broadly persuasive, in some cases increasing support for an issue stance by up to 12 percentage points. Second, in aggregate, the persuasive impact of microtargeted messages was not statistically different from that of non-microtargeted messages (4.83 vs. 6.20 percentage points, respectively, P = 0.226). These trends hold even when manipulating the type and number of attributes used to tailor the message. These findings suggest, contrary to widespread speculation, that the influence of current LLMs may reside not in their ability to tailor messages to individuals but rather in the persuasiveness of their generic, nontargeted messages. We release our experimental dataset, GPTarget2024, as an empirical baseline for future research.
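To make the described setup concrete, here is a minimal sketch of how self-reported attributes might be folded into a GPT-4 prompt at request time using the OpenAI Python client. The prompt wording, attribute fields, and function name are illustrative assumptions, not the authors' application or prompts.

```python
# Hypothetical sketch of real-time microtargeted message generation.
# Prompt template, attribute fields, and model choice are assumptions;
# the paper's actual web application and prompts are not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tailored_message(issue: str, profile: dict) -> str:
    """Generate a persuasive message tailored to one user's self-reported data."""
    attributes = ", ".join(f"{k}: {v}" for k, v in profile.items())
    prompt = (
        f"Write a short, persuasive argument in favor of this stance: {issue}. "
        f"Tailor the argument to a reader with these attributes: {attributes}."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: one participant's self-reported demographic and political data.
print(tailored_message(
    issue="raising the federal minimum wage",
    profile={"age": 34, "party": "Independent", "education": "college degree"},
))
```

A non-microtargeted arm, as compared in the experiment, would presumably omit the attribute line from the prompt, isolating the effect of individual-level tailoring.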


Subject(s)
Persuasive Communication , Politics , Humans , Language
2.
Philos Trans R Soc Lond B Biol Sci ; 379(1897): 20230040, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38244594

ABSTRACT

Interventions to counter misinformation are often less effective for polarizing content on social media platforms. We sought to overcome this limitation by testing an identity-based intervention, which aims to promote accuracy by incorporating normative cues directly into the social media user interface. Across three pre-registered experiments in the US (N = 1709) and UK (N = 804), we found that crowdsourcing accuracy judgements by adding a Misleading count (next to the Like count) reduced participants' reported likelihood to share inaccurate information about partisan issues by 25% (compared with a control condition). The Misleading count was also more effective when it reflected in-group norms (from fellow Democrats/Republicans) compared with the norms of general users, though this effect was absent in a less politically polarized context (UK). Moreover, the normative intervention was roughly five times as effective as another popular misinformation intervention (i.e. the accuracy nudge, which reduced misinformation sharing by 5%). Extreme partisanship did not undermine the effectiveness of the intervention. Our results suggest that identity-based interventions based on the science of social norms can be more effective than identity-neutral alternatives at countering partisan misinformation in politically polarized contexts (e.g. the US). This article is part of the theme issue 'Social norm change: drivers and consequences'.
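The "roughly five times" comparison follows directly from the two reported relative reductions (25% vs. 5%). A small sketch of that arithmetic, with an assumed baseline sharing rate used purely for illustration:

```python
# Hypothetical sketch: comparing intervention effect sizes as relative
# reductions in sharing likelihood. The 0.40 control rate is an assumption
# made only to ground the arithmetic; the ratio does not depend on it.
def relative_reduction(control_rate: float, treated_rate: float) -> float:
    """Percent reduction in sharing likelihood relative to control."""
    return 100 * (control_rate - treated_rate) / control_rate

baseline = 0.40                                        # assumed control sharing rate
misleading_cue = relative_reduction(baseline, baseline * (1 - 0.25))  # 25%
accuracy_nudge = relative_reduction(baseline, baseline * (1 - 0.05))  # 5%
print(misleading_cue / accuracy_nudge)                 # 5.0, i.e. roughly five times
```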


Subject(s)
Cues , Judgment , Humans , Probability , Social Norms , Communication
3.
PNAS Nexus ; 2(6): pgad189, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37333765

ABSTRACT

During political campaigns, candidates use rhetoric to advance competing visions and assessments of their country. Research reveals that the moral language used in this rhetoric can significantly influence citizens' political attitudes and behaviors; however, the moral language actually used in the rhetoric of elites during political campaigns remains understudied. Using a data set of every tweet (N = 139,412) published by 39 US presidential candidates during the 2016 and 2020 primary elections, we extracted moral language and constructed network models illustrating how candidates' rhetoric is semantically connected. These network models yielded two key discoveries. First, we find that party affiliation clusters can be reconstructed solely based on the moral words used in candidates' rhetoric. Within each party, popular moral values are expressed in highly similar ways, with Democrats emphasizing careful and just treatment of individuals and Republicans emphasizing in-group loyalty and respect for social hierarchies. Second, we illustrate the ways in which outsider candidates like Donald Trump can separate themselves during primaries by using moral rhetoric that differs from their parties' common language. Our findings demonstrate the functional use of strategic moral rhetoric in a campaign context and show that unique methods of text network analysis are broadly applicable to the study of campaigns and social movements.
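As one illustration of the kind of text network analysis described, the sketch below builds a candidate-level network weighted by overlap in moral-word usage; party clusters could then be recovered via community detection. The toy word list and cosine-similarity weighting are assumptions, not the paper's exact lexicon or network construction.

```python
# Hypothetical sketch of a moral-language text network: nodes are candidates,
# edges weight the similarity of their moral vocabularies. The word list is a
# toy stand-in for a moral-foundations-style lexicon.
from collections import Counter
import networkx as nx

MORAL_WORDS = {"care", "harm", "fair", "cheat", "loyal",
               "betray", "authority", "respect", "purity"}

def moral_profile(tweets: list[str]) -> Counter:
    """Count moral-word usage across one candidate's tweets."""
    counts = Counter()
    for tweet in tweets:
        counts.update(w for w in tweet.lower().split() if w in MORAL_WORDS)
    return counts

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two moral-word profiles."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (sum(v * v for v in a.values()) * sum(v * v for v in b.values())) ** 0.5
    return dot / norm if norm else 0.0

def build_network(candidates: dict[str, list[str]]) -> nx.Graph:
    """Connect candidates whose moral rhetoric is semantically similar."""
    profiles = {name: moral_profile(tweets) for name, tweets in candidates.items()}
    g = nx.Graph()
    for u in profiles:
        for v in profiles:
            if u < v and (w := similarity(profiles[u], profiles[v])) > 0:
                g.add_edge(u, v, weight=w)
    return g
```

On such a graph, a standard community-detection routine (e.g. networkx's greedy_modularity_communities) could be used to test whether the recovered clusters align with party affiliation, in the spirit of the paper's first finding.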
