Results 1 - 6 of 6
1.
Proc Natl Acad Sci U S A ; 121(34): e2308950121, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39133853

ABSTRACT

The social and behavioral sciences have been increasingly using automated text analysis to measure psychological constructs in text. We explore whether GPT, the large-language model (LLM) underlying the AI chatbot ChatGPT, can be used as a tool for automated psychological text analysis in several languages. Across 15 datasets (n = 47,925 manually annotated tweets and news headlines), we tested whether different versions of GPT (3.5 Turbo, 4, and 4 Turbo) can accurately detect psychological constructs (sentiment, discrete emotions, offensiveness, and moral foundations) across 12 languages. We found that GPT (r = 0.59 to 0.77) performed much better than English-language dictionary analysis (r = 0.20 to 0.30) at detecting psychological constructs as judged by manual annotators. GPT performed nearly as well as, and sometimes better than, several top-performing fine-tuned machine learning models. Moreover, GPT's performance improved across successive versions of the model, particularly for lesser-spoken languages, and became less expensive. Overall, GPT may be superior to many existing methods of automated text analysis, since it achieves relatively high accuracy across many languages, requires no training data, and is easy to use with simple prompts (e.g., "is this text negative?") and little coding experience. We provide sample code and a video tutorial for analyzing text with the GPT application programming interface. We argue that GPT and other LLMs help democratize automated text analysis by making advanced natural language processing capabilities more accessible, and may help facilitate more cross-linguistic research with understudied languages.
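The prompt-based workflow described in this abstract can be illustrated with a few lines of Python. The following is a minimal sketch, assuming the openai Python client (version 1.x) and an OPENAI_API_KEY environment variable; the prompt wording, model name, and the rate_negativity helper are illustrative assumptions, not the authors' published sample code.

```python
# Minimal sketch: annotating short texts for negativity with the OpenAI chat API.
# Assumes openai>=1.0 and OPENAI_API_KEY set in the environment; prompt and model
# choice are illustrative, not the authors' exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rate_negativity(text: str, model: str = "gpt-4") -> int:
    """Ask the model whether a text is negative; return 1 for yes, 0 for no."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You annotate short texts."},
            {"role": "user",
             "content": f'Is this text negative? Answer "yes" or "no".\n\nText: "{text}"'},
        ],
        temperature=0,  # deterministic annotations for reproducibility
    )
    answer = response.choices[0].message.content.strip().lower()
    return 1 if answer.startswith("yes") else 0

# Example usage on two toy texts
ratings = [rate_negativity(t) for t in ["Great news today!", "This is awful."]]
print(ratings)
```

Model-generated ratings produced this way could then be correlated with manual annotations, analogous to the accuracy comparisons reported above.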


Subjects
Multilingualism, Humans, Language, Machine Learning, Natural Language Processing, Emotions, Social Media
2.
Nat Hum Behav ; 8(6): 1044-1052, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38740990

ABSTRACT

The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels.


Subjects
Communication, Humans, Social Media, Deception, Social Norms
3.
Behav Brain Sci ; 47: e81, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38738361

ABSTRACT

Social media takes advantage of people's predisposition to attend to threatening stimuli by promoting content in algorithms that capture attention. However, this content is often not what people expressly state they would like to see. We propose that social media companies should weigh users' expressed preferences more heavily in algorithms. We propose modest changes to user interfaces that could reduce the abundance of threatening content in the online environment.


Subjects
Social Media, Humans, Motivation, Algorithms, Attention/physiology, Internet
4.
Psychol Sci ; 35(4): 435-450, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38506937

ABSTRACT

The spread of misinformation is a pressing societal challenge. Prior work shows that shifting attention to accuracy increases the quality of people's news-sharing decisions. However, researchers disagree on whether accuracy-prompt interventions work for U.S. Republicans/conservatives and whether partisanship moderates the effect. In this preregistered adversarial collaboration, we tested this question using a multiverse meta-analysis (k = 21; N = 27,828). In all 70 models, accuracy prompts improved sharing discernment among Republicans/conservatives. We observed significant partisan moderation for single-headline "evaluation" treatments (a critical test for one research team) such that the effect was stronger among Democrats than Republicans. However, this moderation was not consistently robust across different operationalizations of ideology/partisanship, exclusion criteria, or treatment type. Overall, we observed significant partisan moderation in 50% of specifications (all of which were considered critical for the other team). We discuss the conditions under which moderation is observed and offer interpretations.
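As a rough illustration of the multiverse approach described in this abstract, the sketch below pools a moderation effect under every combination of a few analytic choices and counts how many specifications reach significance. The choice labels, simulated effect sizes, and random_effects_pool helper are assumptions made for illustration; they are not the authors' data, specifications, or code.

```python
# Hedged sketch of a multiverse meta-analysis: pool a (simulated) moderation effect
# under each combination of analytic choices and tally significant specifications.
import itertools
import numpy as np
from scipy import stats

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and its standard error."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1.0 / (variances + tau2)
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se

# Hypothetical analytic choices defining the multiverse
ideology_measures = ["party_id", "ideology_scale"]
exclusions = ["none", "attention_check"]
treatments = ["evaluation", "importance"]

specs = list(itertools.product(ideology_measures, exclusions, treatments))
significant = 0
for measure, exclusion, treatment in specs:
    # A real analysis would recompute per-study moderation effects for each
    # specification; simulated values keep the sketch self-contained.
    rng = np.random.default_rng(abs(hash((measure, exclusion, treatment))) % 2**32)
    effects = rng.normal(0.05, 0.03, size=21)       # k = 21 studies
    variances = rng.uniform(0.0005, 0.002, size=21)
    pooled, se = random_effects_pool(effects, variances)
    p_value = 2 * (1 - stats.norm.cdf(abs(pooled / se)))
    significant += p_value < 0.05

print(f"Moderation significant in {significant}/{len(specs)} specifications")
```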


Subjects
Politics, Humans
5.
Sci Adv ; 10(6): eadj5778, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38324680

ABSTRACT

Effectively reducing climate change requires marked, global behavior change. However, it is unclear which strategies are most likely to motivate people to change their climate beliefs and behaviors. Here, we tested 11 expert-crowdsourced interventions on four climate mitigation outcomes: beliefs, policy support, information sharing intention, and an effortful tree-planting behavioral task. Across 59,440 participants from 63 countries, the interventions' effectiveness was small, largely limited to nonclimate skeptics, and differed across outcomes: Beliefs were strengthened mostly by decreasing psychological distance (by 2.3%), policy support by writing a letter to a future-generation member (2.6%), information sharing by negative emotion induction (12.1%), and no intervention increased the more effortful behavior-several interventions even reduced tree planting. Last, the effects of each intervention differed depending on people's initial climate beliefs. These findings suggest that the impact of behavioral climate interventions varies across audiences and target behaviors.


Subjects
Behavioral Sciences, Climate Change, Humans, Intention, Policies
6.
Curr Opin Psychol ; 56: 101787, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38295623

ABSTRACT

The spread of misinformation threatens democratic societies, hampering informed decision-making. Partisan identity biases perceptions of reality, promoting false beliefs. The Identity-based Model of Political Belief explains how social identity shapes information processing and contributes to misinformation. According to this model, social identity goals can override accuracy goals, leading to belief alignment with party members rather than facts. We propose an extended version of this model that incorporates the role of informational context in misinformation belief and sharing. Partisanship involves cognitive and motivational aspects that shape party members' beliefs and actions. This includes whether they seek further evidence, where they seek that evidence, and which sources they trust. Understanding the interplay between social identity and accuracy is crucial in addressing misinformation.


Subjects
Cognition, Motivation, Humans, Social Identification, Trust