Results 1 - 9 of 9
1.
Nat Hum Behav ; 8(6): 1044-1052, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38740990

ABSTRACT

The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels.


Subject(s)
Communication , Humans , Social Media , Deception , Social Norms
2.
Curr Opin Psychol ; 55: 101739, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38091666

ABSTRACT

Research on online misinformation has evolved rapidly, but organizing its results and identifying open research questions is difficult without a systematic approach. We present the Online Misinformation Engagement Framework, which classifies people's engagement with online misinformation into four stages: selecting information sources, choosing what information to consume or ignore, evaluating the accuracy of the information and/or the credibility of the source, and judging whether and how to react to the information (e.g., liking or sharing). We outline entry points for interventions at each stage and pinpoint the two early stages-source and information selection-as relatively neglected processes that should be addressed to further improve people's ability to contend with misinformation.


Subject(s)
Communication , Internet , Humans , Disinformation , Social Media
3.
Eur Psychol ; 28(3): a000493, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37994309

ABSTRACT

The spread of false and misleading information in online social networks is a global problem in need of urgent solutions. It is also a policy problem because misinformation can harm both the public and democracies. To address the spread of misinformation, policymakers require a successful interface between science and policy, as well as a range of evidence-based solutions that respect fundamental rights while efficiently mitigating the harms of misinformation online. In this article, we discuss how regulatory and nonregulatory instruments can be informed by scientific research and used to reach EU policy objectives. First, we consider what it means to approach misinformation as a policy problem. We then outline four building blocks for cooperation between scientists and policymakers who wish to address the problem of misinformation: understanding the misinformation problem, understanding the psychological drivers and public perceptions of misinformation, finding evidence-based solutions, and co-developing appropriate policy measures. Finally, through the lens of psychological science, we examine policy instruments that have been proposed in the EU, focusing on the strengthened Code of Practice on Disinformation 2022.

4.
Curr Dir Psychol Sci ; 32(1): 81-88, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37994317

ABSTRACT

Low-quality and misleading information online can hijack people's attention, often by evoking curiosity, outrage, or anger. Resisting certain types of information and actors online requires people to adopt new mental habits that help them avoid being tempted by attention-grabbing and potentially harmful content. We argue that digital information literacy must include the competence of critical ignoring-choosing what to ignore and where to invest one's limited attentional capacities. We review three types of cognitive strategies for implementing critical ignoring: self-nudging, in which one ignores temptations by removing them from one's digital environments; lateral reading, in which one vets information by leaving the source and verifying its credibility elsewhere online; and the do-not-feed-the-trolls heuristic, which advises one to not reward malicious actors with attention. We argue that these strategies implementing critical ignoring should be part of school curricula on digital information literacy. Teaching the competence of critical ignoring requires a paradigm shift in educators' thinking, from a sole focus on the power and promise of paying close attention to an additional emphasis on the power of ignoring. Encouraging students and other online users to embrace critical ignoring can empower them to shield themselves from the excesses, traps, and information disorders of today's attention economy.

5.
Perspect Psychol Sci ; : 17456916231188052, 2023 Sep 05.
Article in English | MEDLINE | ID: mdl-37669014

ABSTRACT

Inequalities and injustices are thorny issues in liberal societies, manifesting in forms such as the gender-pay gap; sentencing discrepancies among Black, Hispanic, and White defendants; and unequal medical-resource distribution across ethnicities. One cause of these inequalities is implicit social bias-unconsciously formed associations between social groups and attributions such as "nurturing," "lazy," or "uneducated." One strategy to counteract implicit and explicit human biases is delegating crucial decisions, such as how to allocate benefits, resources, or opportunities, to algorithms. Algorithms, however, are not necessarily impartial and objective. Although they can detect and mitigate human biases, they can also perpetuate and even amplify existing inequalities and injustices. We explore how a philosophical thought experiment, Rawls's "veil of ignorance," and a psychological phenomenon, deliberate ignorance, can help shield individuals, institutions, and algorithms from biases. We discuss the benefits and drawbacks of methods for shielding human and artificial decision makers from potentially biasing information. We then broaden our discussion beyond the issues of bias and fairness and turn to a research agenda aimed at improving human judgment accuracy with the assistance of algorithms that conceal information that has the potential to undermine performance. Finally, we propose interdisciplinary research questions.
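The shielding strategy described above has a simple algorithmic form, sometimes called "fairness through unawareness": protected attributes are concealed from the decision procedure before it ever sees them. The sketch below is illustrative only; the attribute names and the record are hypothetical, not drawn from the paper.

```python
# A minimal sketch of shielding a decision procedure from potentially
# biasing information, an algorithmic analogue of Rawls's veil of
# ignorance: protected attributes (hypothetical field names) are stripped
# from each record before any decision function sees them.
PROTECTED = {"gender", "ethnicity", "age"}  # assumed attribute names

def behind_the_veil(record: dict) -> dict:
    """Return a copy of the record with protected attributes concealed."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

applicant = {"ethnicity": "X", "gender": "Y",
             "years_experience": 7, "test_score": 88}
print(behind_the_veil(applicant))  # → {'years_experience': 7, 'test_score': 88}
```

Note that, as the abstract cautions, this kind of concealment is not a complete fix: proxies for the concealed attributes can remain among the visible features.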

6.
Proc Natl Acad Sci U S A ; 120(7): e2210666120, 2023 02 14.
Article in English | MEDLINE | ID: mdl-36749721

ABSTRACT

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people's judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents' decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.


Subject(s)
Social Media , Speech , Humans , Communication , Morals , Emotions , Politics
7.
JMIR Public Health Surveill ; 8(7): e32969, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35377317

ABSTRACT

BACKGROUND: In response to the COVID-19 pandemic, countries are introducing digital passports that allow citizens to return to normal activities if they were previously infected with (immunity passport) or vaccinated against (vaccination passport) SARS-CoV-2. To be effective, policy decision-makers must know whether these passports will be widely accepted by the public and under what conditions. This study focuses on immunity passports, as these may prove useful in countries both with and without an existing COVID-19 vaccination program; however, our general findings also extend to vaccination passports. OBJECTIVE: We aimed to assess attitudes toward the introduction of immunity passports in six countries and to determine what social, personal, and contextual factors predicted their support. METHODS: We recruited 13,678 participants through online representative sampling across six countries-Australia, Japan, Taiwan, Germany, Spain, and the United Kingdom-during April and May 2020, and assessed attitudes toward and support for the introduction of immunity passports. RESULTS: Support for immunity passports was moderate to low: it was highest in Germany (775/1507 participants, 51.43%) and the United Kingdom (759/1484, 51.15%); followed by Taiwan (2841/5989, 47.44%), Spain (693/1491, 46.48%), and Australia (963/2086, 46.16%); and lowest in Japan (241/1081, 22.94%). Bayesian generalized linear mixed-effects modeling was used to assess predictive factors for immunity passport support across countries. International results showed that neoliberal worldviews (odds ratio [OR] 1.17, 95% CI 1.13-1.22), personal concern (OR 1.07, 95% CI 1.00-1.16), perceived virus severity (OR 1.07, 95% CI 1.01-1.14), the perceived fairness of immunity passports (OR 2.51, 95% CI 2.36-2.66), liking immunity passports (OR 2.77, 95% CI 2.61-2.94), and a willingness to become infected to gain an immunity passport (OR 1.60, 95% CI 1.51-1.68) were all predictive of immunity passport support. By contrast, gender (woman; OR 0.90, 95% CI 0.82-0.98), concern about immunity passports (OR 0.61, 95% CI 0.57-0.65), and perceived risk of harm to society (OR 0.71, 95% CI 0.67-0.76) predicted a decrease in support for immunity passports. Minor differences in predictive factors were found between countries, and results were modeled separately to provide national accounts of these data. CONCLUSIONS: Our research suggests that support for immunity passports is predicted by the personal benefits and societal risks they confer. These findings generalized across six countries and may also prove informative for the introduction of vaccination passports, helping policymakers to introduce effective COVID-19 passport policies in these six countries and around the world.
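The odds ratios reported in this abstract are exponentiated log-odds coefficients from the logistic mixed-effects model. As a minimal sketch of that mapping: the coefficient and standard error below are hypothetical values chosen so the result reproduces the reported OR for neoliberal worldviews; they are not figures taken from the paper.

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logistic-regression coefficient (log-odds scale) and its
    standard error into an odds ratio with an approximate 95% interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical log-odds coefficient and SE chosen to reproduce the
# reported OR for neoliberal worldviews (OR 1.17, 95% CI 1.13-1.22).
or_, lo, hi = odds_ratio_ci(beta=0.157, se=0.0196)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 1.17 1.13 1.22
```

An OR above 1 (e.g., perceived fairness, OR 2.51) means the factor predicts greater support; an OR below 1 (e.g., societal-harm risk, OR 0.71) predicts lower support.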


Subject(s)
COVID-19 , Pandemics , Attitude , Bayes Theorem , COVID-19/epidemiology , COVID-19/prevention & control , COVID-19 Vaccines , Female , Humans , Pandemics/prevention & control , SARS-CoV-2 , Vaccination
8.
Sci Rep ; 11(1): 18716, 2021 09 21.
Article in English | MEDLINE | ID: mdl-34548550

ABSTRACT

The COVID-19 pandemic has seen one of the first large-scale uses of digital contact tracing to track a chain of infection and contain the spread of a virus. The new technology has posed challenges both for governments aiming at high and effective uptake and for citizens weighing its benefits (e.g., protecting others' health) against the potential risks (e.g., loss of data privacy). Our cross-sectional survey with repeated measures across four samples in Germany ([Formula: see text]) focused on psychological factors contributing to the public adoption of digital contact tracing. We found that public acceptance of privacy-encroaching measures (e.g., granting the government emergency access to people's medical records or location tracking data) decreased over the course of the pandemic. Intentions to use contact tracing apps-hypothetical ones or the Corona-Warn-App launched in Germany in June 2020-were high. Users and non-users of the Corona-Warn-App differed in their assessment of its risks and benefits, in their knowledge of the underlying technology, and in their reasons to download or not to download the app. Trust in the app's perceived security and belief in its effectiveness emerged as psychological factors playing a key role in its adoption. We incorporate our findings into a behavioral framework for digital contact tracing and provide policy recommendations.


Subject(s)
COVID-19/epidemiology , Contact Tracing , Perception , Adult , Aged , COVID-19/pathology , COVID-19/virology , Cross-Sectional Studies , Female , Germany/epidemiology , Humans , Logistic Models , Male , Middle Aged , Mobile Applications , Pandemics , Privacy , Public Health , SARS-CoV-2/isolation & purification , Severity of Illness Index , Trust
9.
Psychol Sci Public Interest ; 21(3): 103-156, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33325331

ABSTRACT

The Internet has evolved into a ubiquitous and indispensable digital environment in which people communicate, seek information, and make decisions. Despite offering various benefits, online environments are also replete with smart, highly adaptive choice architectures designed primarily to maximize commercial interests, capture and sustain users' attention, monetize user data, and predict and influence future behavior. This online landscape holds multiple negative consequences for society, such as a decline in human autonomy, rising incivility in online conversation, the facilitation of political extremism, and the spread of disinformation. Benevolent choice architects working with regulators may curb the worst excesses of manipulative choice architectures, yet the strategic advantages, resources, and data remain with commercial players. One way to address some of this imbalance is with interventions that empower Internet users to gain some control over their digital environments, in part by boosting their information literacy and their cognitive resistance to manipulation. Our goal is to present a conceptual map of interventions that are based on insights from psychological science. We begin by systematically outlining how online and offline environments differ despite being increasingly inextricable. We then identify four major types of challenges that users encounter in online environments: persuasive and manipulative choice architectures, AI-assisted information architectures, false and misleading information, and distracting environments. Next, we turn to how psychological science can inform interventions to counteract these challenges of the digital world. After distinguishing among three types of behavioral and cognitive interventions-nudges, technocognition, and boosts-we focus on boosts, of which we identify two main groups: (a) those aimed at enhancing people's agency in their digital environments (e.g., self-nudging, deliberate ignorance) and (b) those aimed at boosting competencies of reasoning and resilience to manipulation (e.g., simple decision aids, inoculation). These cognitive tools are designed to foster the civility of online discourse and protect reason and human autonomy against manipulative choice architectures, attention-grabbing techniques, and the spread of false information.


Subject(s)
Choice Behavior , Cognition , Decision Support Techniques , Information Dissemination , Internet , Attention , Decision Making , Humans