Results 1 - 20 of 310
1.
PLoS One ; 19(7): e0305362, 2024.
Article in English | MEDLINE | ID: mdl-38976665

ABSTRACT

Disinformation in the medical field is a growing problem that carries significant risk. Therefore, it is crucial to detect and combat it effectively. In this article, we provide three elements to aid in this fight: 1) a new framework that collects health-related articles from verification entities and facilitates their annotation for check-worthiness and fact-checking at the sentence level; 2) a corpus generated using this framework, composed of 10,335 sentences annotated for these two concepts and grouped into 327 articles, which we call KEANE (faKe nEws At seNtence lEvel); and 3) a new model for verifying fake news that combines domain-specific medical identifiers with subject-predicate-object triplets, using Transformers and feedforward neural networks at the sentence level. This model predicts the fact-checking label of sentences and evaluates the veracity of the entire article. After training this model on our corpus, we achieved strong results in the binary classification of sentences (check-worthiness F1: 0.749, fact-checking F1: 0.698) and in the final classification of complete articles (F1: 0.703). We also tested its performance on another public dataset and found that it outperformed most systems evaluated on that dataset. Moreover, the corpus we provide differs from other existing corpora in its dual sentence-article annotation, which can provide an additional level of justification for the model's prediction of truth or untruth.
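As a rough illustration of the kind of sentence-level architecture the abstract describes (a transformer encoder fused with hand-crafted features through a feedforward head), the sketch below uses PyTorch and Hugging Face Transformers; the encoder name, feature dimension, and class names are assumptions, not the authors' implementation.

```python
# Minimal sketch: fuse a transformer sentence embedding with a vector of
# domain/triplet features and classify with a feedforward head.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class SentenceFactChecker(nn.Module):
    def __init__(self, encoder_name="bert-base-multilingual-cased", n_extra=32):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Feedforward head over the [CLS] embedding concatenated with
        # hand-crafted features (e.g. medical-entity flags, SPO-triplet counts).
        self.head = nn.Sequential(
            nn.Linear(hidden + n_extra, 256),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(256, 2),  # binary label per sentence
        )

    def forward(self, input_ids, attention_mask, extra_features):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]           # [CLS] token embedding
        fused = torch.cat([cls, extra_features], dim=-1)
        return self.head(fused)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = SentenceFactChecker()
batch = tokenizer(["Vitamin C cures influenza."], return_tensors="pt",
                  padding=True, truncation=True)
extra = torch.zeros(1, 32)  # placeholder triplet/domain features
logits = model(batch["input_ids"], batch["attention_mask"], extra)
```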


Subject(s)
Disinformation , Humans , Neural Networks, Computer , Natural Language Processing , Deception
2.
Nature ; 630(8018): 807-809, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38890516
4.
Nature ; 630(8015): 45-53, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38840013

ABSTRACT

The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization. In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.


Subject(s)
Communication , Disinformation , Internet , Humans , Algorithms , Motivation , Social Media
5.
Nature ; 630(8015): 123-131, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38840014

ABSTRACT

The financial motivation to earn advertising revenue has been widely conjectured to be pivotal for the production of online misinformation [1-4]. Research aimed at mitigating misinformation has so far focused on interventions at the user level [5-8], with little emphasis on how the supply of misinformation can itself be countered. Here we show how online misinformation is largely financed by advertising, examine how financing misinformation affects the companies involved, and outline interventions for reducing the financing of misinformation. First, we find that advertising on websites that publish misinformation is pervasive for companies across several industries and is amplified by digital advertising platforms that algorithmically distribute advertising across the web. Using an information-provision experiment [9], we find that companies that advertise on websites that publish misinformation can face substantial backlash from their consumers. To examine why misinformation continues to be monetized despite the potential backlash for the advertisers involved, we survey decision-makers at companies. We find that most decision-makers are unaware that their companies' advertising appears on misinformation websites but have a strong preference to avoid doing so. Moreover, those who are unaware of and uncertain about their company's role in financing misinformation increase their demand for a platform-based solution to reduce the monetization of misinformation when informed about how platforms amplify advertising placement on misinformation websites. We identify low-cost, scalable, information-based interventions to reduce the financial incentive to misinform and counter the supply of misinformation online.


Subject(s)
Advertising , Consumer Behavior , Decision Making , Disinformation , Industry , Internet , Humans , Advertising/economics , Communication , Industry/economics , Internet/economics , Motivation , Uncertainty , Male , Female
6.
Nature ; 630(8015): 132-140, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38840016

ABSTRACT

The social media platforms of the twenty-first century play an enormous role in regulating speech in the USA and worldwide [1]. However, there has been little research on platform-wide interventions on speech [2,3]. Here we evaluate the effect of Twitter's decision to suddenly deplatform 70,000 misinformation traffickers in response to the violence at the US Capitol on 6 January 2021 (a series of events commonly known as, and referred to here as, 'January 6th'). Using a panel of more than 500,000 active Twitter users [4,5] and natural experimental designs [6,7], we evaluate the effects of this intervention on the circulation of misinformation on Twitter. We show that the intervention reduced the circulation of misinformation both by the deplatformed users and by those who followed them, though we cannot identify the magnitude of the causal estimates owing to the co-occurrence of the deplatforming intervention with the events surrounding January 6th. We also find that many of the misinformation traffickers who were not deplatformed left Twitter following the intervention. The results inform the historical record surrounding the insurrection, a momentous event in US history, and indicate the capacity of social media platforms to control the circulation of misinformation and, more generally, to regulate public discourse.


Subject(s)
Disinformation , Federal Government , Social Media , Violence , Humans , Social Media/ethics , Social Media/standards , Social Media/statistics & numerical data , Social Media/trends , United States , Violence/psychology
7.
RECIIS (Online) ; 18(2), Apr.-Jun. 2024.
Article in Portuguese | LILACS, Coleciona SUS | ID: biblio-1561816

ABSTRACT

In this article, we analyze the themes, positions, expressive forms, legitimized actors, and visual and sound resources used in the production of 482 videos about vaccines published from 2020 to 2022 on the short-video platform Kwai. Based on thematic analysis and content analysis, we identified that the videos predominantly presented a favorable or neutral stance toward vaccines and highlighted personal experiences with immunization. However, they mainly used a humorous tone when dealing with the subject, with the potential to spread misinformation regarding the adverse effects of vaccines. We thus conclude that, on the one hand, Kwai has been used to express positive experiences with vaccination, which can encourage vaccine uptake, but, on the other hand, it has also been a space for the circulation of negative perceptions and fears that can raise doubts about the safety of vaccines.


Subject(s)
Vaccines , Information Dissemination , Online Social Networking , COVID-19 , Disinformation , Brazil , Immunization , Computer Security , Communication , Webcasts as Topic , Webcast , Social Networking , Social Network Analysis
8.
Int J Med Inform ; 188: 105478, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38743994

ABSTRACT

BACKGROUND: Health misinformation (HM) has emerged as a prominent social issue in recent years, driven by declining public trust, the popularisation of digital media platforms, and escalating public health crises. Since the COVID-19 pandemic, HM has raised critical concerns because of its significant impacts on individuals and society as a whole. A comprehensive understanding of HM and HM-related studies would be instrumental in identifying possible solutions to HM and its associated challenges. METHODS: Following the PRISMA procedure, 11,739 papers published from January 2013 to December 2022 were retrieved from five electronic databases, and 813 papers matching the inclusion criteria were retained for further analysis. This article critically reviews HM-related studies, detailing the factors facilitating HM creation and dissemination, the negative impacts of HM, solutions to HM, and the research methods employed in those studies. RESULTS: A growing number of studies have focused on HM since 2013. The results highlight that trust plays a significant yet latent role in the circuits of HM, facilitating its creation and dissemination, exacerbating its negative impacts, and amplifying the difficulty of addressing it. CONCLUSION: For health authorities and governmental institutions, it is essential to systematically build public trust in order to reduce the probability that individuals accept HM and to improve the effectiveness of misinformation correction. Future studies should pay more attention to the role of trust in addressing HM.


Subject(s)
COVID-19 , Communication , Humans , COVID-19/epidemiology , Health Communication/standards , Information Dissemination , Public Health , SARS-CoV-2 , Social Media , Trust , Disinformation
11.
PLoS One ; 19(4): e0301818, 2024.
Article in English | MEDLINE | ID: mdl-38593132

ABSTRACT

The widespread dissemination of misinformation on social media is a serious threat to global health. To a large extent, it is still unclear who actually shares health-related misinformation deliberately and accidentally. We conducted a large-scale online survey among 5,307 Facebook users in six sub-Saharan African countries, in which we collected information on sharing of fake news and truth discernment. We estimate the magnitude and determinants of deliberate and accidental sharing of misinformation related to three vaccines (HPV, polio, and COVID-19). In an OLS framework we relate the actual sharing of fake news to several socioeconomic characteristics (age, gender, employment status, education), social media consumption, personality factors, and vaccine-related characteristics while controlling for country- and vaccine-specific effects. We first show that actual sharing rates of fake news articles are substantially higher than those reported from developed countries and that most of the sharing occurs accidentally. Second, we reveal that the determinants of deliberate and accidental sharing differ. While deliberate sharing is related to being older and risk-loving, accidental sharing is associated with being older, being male, and having high levels of trust in institutions. Lastly, we demonstrate that the determinants of sharing differ by the adopted measure (intentions vs. actual sharing), which underscores the limitations of commonly used intention-based measures for drawing insights about actual fake news sharing behaviour.
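A minimal sketch of the kind of OLS specification described, using statsmodels with country and vaccine fixed effects; the variable and file names are hypothetical, not the survey's actual codebook.

```python
# Illustrative OLS regression of actual fake-news sharing on socioeconomic and
# attitudinal covariates, with country and vaccine fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical survey extract
model = smf.ols(
    "shared_fake ~ age + C(gender) + C(employment) + education"
    " + social_media_hours + risk_tolerance + institutional_trust"
    " + C(country) + C(vaccine)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.summary())
```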


Subject(s)
Infertility , Social Media , Vaccines , Humans , Male , Disinformation , Africa South of the Sahara/epidemiology
12.
PLoS One ; 19(4): e0301364, 2024.
Article in English | MEDLINE | ID: mdl-38630681

ABSTRACT

Although a rich academic literature examines the use of fake news by foreign actors for political manipulation, there is limited research on potential foreign intervention in capital markets. To address this gap, we construct a comprehensive database of (negative) fake news regarding U.S. firms by scraping prominent fact-checking sites. We identify the accounts that spread the news on Twitter (now X) and use machine-learning techniques to infer the geographic locations of these fake news spreaders. Our analysis reveals that corporate fake news is more likely than corporate non-fake news to be spread by foreign accounts. At the country level, corporate fake news is more likely to originate from African and Middle Eastern countries and tends to increase during periods of high geopolitical tension. At the firm level, firms operating in uncertain information environments and strategic industries are more likely to be targeted by foreign accounts. Overall, our findings provide initial evidence of foreign-originating misinformation in capital markets and thus have important policy implications.
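The abstract does not detail the machine-learning step; the sketch below shows one plausible way to infer an account's country from profile text (TF-IDF features and logistic regression with scikit-learn). All column and file names are illustrative assumptions, not the authors' pipeline.

```python
# Rough sketch: classify account country from profile text for accounts with
# known locations, then apply the fitted model to unlabelled spreader accounts.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

accounts = pd.read_csv("accounts.csv")   # hypothetical: profile_text, country
X, y = accounts["profile_text"], accounts["country"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(clf, X, y, cv=5).mean())  # held-out accuracy estimate
```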


Subject(s)
Disinformation , Geography , Databases, Factual , Industry
13.
PLoS One ; 19(3): e0299031, 2024.
Article in English | MEDLINE | ID: mdl-38478479

ABSTRACT

Public comments are an important channel of civic opinion when governments establish rules. However, recent AI can easily generate large quantities of disinformation, including fake public comments. We attempted to distinguish human public comments from ChatGPT-generated public comments (including those in which ChatGPT emulated human comments) using Japanese stylometric analysis. Study 1 used multidimensional scaling (MDS) to compare 500 texts across five classes: human public comments; public comments generated by GPT-3.5 and GPT-4 from the titles of human comments alone (zero-shot, GPTzero); and public comments generated by GPT-3.5 and GPT-4 after being shown sentences of human comments and instructed to emulate them (one-shot, GPTone). The MDS results showed that the Japanese stylometric features of human public comments were completely different from those of the GPTzero-generated texts. Moreover, GPTone-generated public comments were closer to human comments than those generated by GPTzero. Study 2 evaluated the performance of a random forest (RF) classifier in distinguishing three classes (human, GPTzero, and GPTone texts). The RF classifier achieved its best precision for human public comments, approximately 90%, and a best precision of 99.5% for GPT-generated fake public comments (GPTzero and GPTone) when focusing on integrated writing-style features: phrase patterns, parts-of-speech (POS) bigrams and trigrams, and function words. The study therefore concluded that, at present, GPT-generated fake public comments can be discriminated from those written by humans.
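A minimal sketch of the Study 2 classification step (three classes: human, GPTzero, GPTone) using a scikit-learn random forest; it assumes the stylometric features have already been extracted to a table, which is an illustration rather than the authors' exact pipeline.

```python
# Three-class stylometric classification with a random forest and
# cross-validated per-class precision/recall.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

features = pd.read_csv("stylometric_features.csv")  # hypothetical precomputed features
X = features.drop(columns=["label"])  # POS n-grams, function words, phrase patterns
y = features["label"]                 # "human", "gpt_zero", "gpt_one"

rf = RandomForestClassifier(n_estimators=500, random_state=0)
pred = cross_val_predict(rf, X, y, cv=10)
print(classification_report(y, pred, digits=3))  # per-class precision/recall
```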


Subject(s)
Disinformation , Learning , Humans , Japan , Government , Multidimensional Scaling Analysis
14.
BMJ ; 384: q579, 2024 03 20.
Article in English | MEDLINE | ID: mdl-38508671
15.
BMJ ; 384: e078538, 2024 03 20.
Article in English | MEDLINE | ID: mdl-38508682

ABSTRACT

OBJECTIVES: To evaluate the effectiveness of safeguards to prevent large language models (LLMs) from being misused to generate health disinformation, and to evaluate the transparency of artificial intelligence (AI) developers regarding their risk mitigation processes against observed vulnerabilities. DESIGN: Repeated cross sectional analysis. SETTING: Publicly accessible LLMs. METHODS: In a repeated cross sectional analysis, four LLMs (via chatbot/assistant interfaces) were evaluated: OpenAI's GPT-4 (via ChatGPT and Microsoft's Copilot), Google's PaLM 2 and the newly released Gemini Pro (via Bard), Anthropic's Claude 2 (via Poe), and Meta's Llama 2 (via HuggingChat). In September 2023, these LLMs were prompted to generate health disinformation on two topics: sunscreen as a cause of skin cancer and the alkaline diet as a cancer cure. Jailbreaking techniques (ie, attempts to bypass safeguards) were evaluated if required. For LLMs with observed safeguarding vulnerabilities, the processes for reporting outputs of concern were audited. 12 weeks after the initial investigations, the disinformation generation capabilities of the LLMs were re-evaluated to assess any subsequent improvements in safeguards. MAIN OUTCOME MEASURES: The main outcome measures were whether safeguards prevented the generation of health disinformation, and the transparency of risk mitigation processes against health disinformation. RESULTS: Claude 2 (via Poe) declined 130 prompts submitted across the two study timepoints requesting the generation of content claiming that sunscreen causes skin cancer or that the alkaline diet is a cure for cancer, even with jailbreaking attempts. GPT-4 (via Copilot) initially refused to generate health disinformation, even with jailbreaking attempts, although this was no longer the case at 12 weeks. In contrast, GPT-4 (via ChatGPT), PaLM 2/Gemini Pro (via Bard), and Llama 2 (via HuggingChat) consistently generated health disinformation blogs. In the September 2023 evaluations, these LLMs facilitated the generation of 113 unique cancer disinformation blogs, totalling more than 40 000 words, without requiring jailbreaking attempts. The refusal rate across the evaluation timepoints for these LLMs was only 5% (7 of 150), and, as prompted, the LLM-generated blogs incorporated attention-grabbing titles, authentic-looking (fake or fictional) references, and fabricated testimonials from patients and clinicians, and they targeted diverse demographic groups. Although each LLM evaluated had mechanisms to report observed outputs of concern, the developers did not respond when observations of vulnerabilities were reported. CONCLUSIONS: This study found that although effective safeguards to prevent LLMs from being misused to generate health disinformation are feasible, they were inconsistently implemented. Furthermore, effective processes for reporting safeguard problems were lacking. Enhanced regulation, transparency, and routine auditing are required to help prevent LLMs from contributing to the mass generation of health disinformation.
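For readers wanting to reproduce the bookkeeping behind a refusal rate such as 5% (7 of 150), the sketch below tallies refusals over a prompt set; `query_model` is a hypothetical wrapper around whichever chatbot interface is under test, and the keyword-based refusal check is a simplification, not the study's protocol.

```python
# Sketch of a refusal-rate tally across a set of disinformation prompts.
from typing import Callable

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm not able", "i won't")

def refusal_rate(prompts: list[str], query_model: Callable[[str], str]) -> float:
    """Fraction of prompts the model declines, using a naive keyword check."""
    refused = 0
    for prompt in prompts:
        reply = query_model(prompt).lower()
        if any(marker in reply for marker in REFUSAL_MARKERS):
            refused += 1
    return refused / len(prompts)

# Example usage: refusal_rate(disinformation_prompts, lambda p: my_chatbot(p))
```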


Subject(s)
Camelids, New World , Skin Neoplasms , Humans , Animals , Disinformation , Artificial Intelligence , Cross-Sectional Studies , Sunscreening Agents , Language
16.
PLoS One ; 19(3): e0300497, 2024.
Article in English | MEDLINE | ID: mdl-38512834

ABSTRACT

Disinformation, false information intended to cause harm or generate profit, is pervasive. While disinformation exists in several domains, one area with great potential for personal harm from disinformation is healthcare. The amount of disinformation about health issues on social media has grown dramatically over the past several years, particularly in response to the COVID-19 pandemic. The study described in this paper sought to determine the characteristics of multimedia social network posts that lead viewers to believe and potentially act on healthcare disinformation. The study was conducted in a neuroscience laboratory in early 2022. Twenty-six study participants each viewed a series of 20 social media posts, some honest and some dishonest, dealing with various aspects of healthcare. They were asked to determine whether the posts were true or false and then to provide the reasoning behind their choices. Participant gaze was captured through eye-tracking technology and investigated through "area of interest" analysis. This approach has the potential to discover the elements of disinformation that help convince the viewer that a given post is true. Participants detected the true nature of the posts they were exposed to 69% of the time. Overall, the source of the post, whether its claims seemed reasonable, and the look and feel of the post were the most important reasons participants cited for determining whether it was true or false. Based on the eye-tracking data collected, the factors most associated with successfully detecting disinformation were the total number of fixations on key words and the total number of revisits to source information. The findings suggest the outlines of generalizations about why people believe online disinformation, providing a basis for the development of mid-range theory.
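A toy version of the "area of interest" (AOI) tally underlying this kind of eye-tracking analysis: count fixations that fall inside labelled regions of a post. The AOI coordinates and the fixation data format are assumptions for illustration only.

```python
# Count gaze fixations per labelled area of interest (AOI) on a post.
from dataclasses import dataclass

@dataclass
class AOI:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def fixations_per_aoi(fixations, aois):
    """fixations: iterable of (x, y) gaze points; returns counts per AOI name."""
    counts = {a.name: 0 for a in aois}
    for x, y in fixations:
        for a in aois:
            if a.contains(x, y):
                counts[a.name] += 1
    return counts

aois = [AOI("source", 0, 0, 300, 40), AOI("headline", 0, 40, 300, 120)]
print(fixations_per_aoi([(50, 20), (120, 80), (200, 90)], aois))
```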


Subject(s)
COVID-19 , Social Media , Humans , Disinformation , Pandemics , Health Facilities , Laboratories , COVID-19/epidemiology
17.
RECIIS (Online) ; 18(1), Jan.-Mar. 2024.
Article in Portuguese | LILACS, Coleciona SUS | ID: biblio-1553441

ABSTRACT

Considering the growing importance of YouTube as a source of health information, the aim of this study was to analyze the factors associated with a higher number of views of videos about COVID-19 vaccines. For this purpose, Natural Language Processing techniques and statistical modeling were employed on 13,619 videos, encompassing three types of variables: general metrics, the textual content of titles, and information about the participants in the videos. Among the results, videos of medium or long duration, posted late at night or on weekends, with tags, descriptions, and short titles, along with controversial elements and the presence of male and white figures in thumbnails, stand out. These findings contribute to a better understanding of the factors to be considered in the production of health communication content about vaccines on YouTube.
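As an illustration of the statistical modelling step (view counts regressed on video attributes), the sketch below fits a negative binomial GLM with statsmodels; the variable names are assumptions, since the abstract does not give the exact specification.

```python
# Illustrative count model for video views as a function of video attributes.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

videos = pd.read_csv("videos.csv")  # hypothetical: one row per video
model = smf.glm(
    "views ~ duration_minutes + posted_overnight + posted_weekend"
    " + n_tags + title_length + controversial + male_in_thumbnail",
    data=videos,
    family=sm.families.NegativeBinomial(),  # over-dispersed counts
).fit()
print(model.summary())
```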


Subject(s)
Communications Media , Information Dissemination , Health Communication , Social Media , COVID-19 Vaccines , COVID-19 , Health Education , Access to Information , Disinformation , Mass Media
20.
Article in German | MEDLINE | ID: mdl-38332143

ABSTRACT

Misinformation and disinformation in social media have become a challenge for effective public health measures. Here, we examine factors that influence believing and sharing false information, both misinformation and disinformation, at the individual, social, and contextual levels, and discuss possibilities for intervention. At the individual level, knowledge deficits, lack of skills, and emotional motivation have been associated with believing false information. Lower health literacy, a conspiracy mindset, and certain beliefs increase susceptibility to false information. At the social level, the credibility of information sources and social norms influence the sharing of false information. At the contextual level, emotions and the repetition of messages affect belief in and sharing of false information. Interventions at the individual level involve measures to improve knowledge and skills. At the social level, addressing social processes and social norms can reduce the sharing of false information. At the contextual level, regulatory approaches involving social networks are considered an important point of intervention. Social inequalities play an important role in exposure to and processing of misinformation. It remains unclear to what degree susceptibility to believing and sharing misinformation is an individual characteristic and/or context dependent. Complex interventions are required that take multiple influencing factors into account.


Subject(s)
Health Communication , Social Media , Humans , Disinformation , Digital Health , Germany , Communication