Results 1 - 20 of 422
1.
CA Cancer J Clin ; 74(5): 453-464, 2024.
Article in English | MEDLINE | ID: mdl-38896503

ABSTRACT

Social media is widely used globally by patients, families of patients, health professionals, scientists, and other stakeholders who seek and share information related to cancer. Despite many benefits of social media for cancer care and research, there is also a substantial risk of exposure to misinformation, or inaccurate information about cancer. Types of misinformation vary from inaccurate information about cancer risk factors or unproven treatment options to conspiracy theories and public relations articles or advertisements appearing as reliable medical content. Many characteristics of social media networks, such as their extensive use and the relative ease with which they allow information to be shared quickly, facilitate the spread of misinformation. Research shows that inaccurate and misleading health-related posts on social media often get more views and engagement (e.g., likes, shares) from users compared with accurate information. Exposure to misinformation can have downstream implications for health-related attitudes and behaviors. However, combating misinformation is a complex process that requires engagement from media platforms, scientific and health experts, governmental organizations, and the general public. Cancer experts, for example, should actively combat misinformation in real time and should disseminate evidence-based content on social media. Health professionals should give information prescriptions to patients and families and support health literacy. Patients and families should vet the quality of cancer information before acting upon it (e.g., by using publicly available checklists) and seek recommended resources from health care providers and trusted organizations. Future multidisciplinary research is needed to identify optimal ways of building resilience and combating misinformation across social media.


Subject(s)
Communication , Neoplasms , Social Media , Humans , Neoplasms/psychology , Neoplasms/therapy , Information Dissemination/methods
2.
Health Econ ; 33(1): 82-106, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37792290

ABSTRACT

In the context of the COVID-19 pandemic, we develop and test experimentally three phone-based interventions to increase vaccine acceptance in Mozambique. The first endorses the vaccine with a simple positive message. The second adds the activation of social memory on the country's success in eradicating wild polio with vaccination campaigns. The third further adds a structured interaction with the participant to develop a critical view toward misleading information and minimize the sharing of fake news. We find that combining the endorsement with the stimulation of social memory and the structured interaction increases vaccine acceptance and trust in institutions.


Subject(s)
COVID-19 , Pandemics , Humans , Pandemics/prevention & control , COVID-19/prevention & control , Mozambique , Trust , Vaccination
3.
Proc Natl Acad Sci U S A ; 118(15)2021 04 13.
Article in English | MEDLINE | ID: mdl-33837144

ABSTRACT

Previous research indicated that corrective information can sometimes provoke a so-called "backfire effect" in which respondents more strongly endorsed a misperception about a controversial political or scientific issue when their beliefs or predispositions were challenged. I show how subsequent research and media coverage seized on this finding, distorting its generality and exaggerating its role relative to other factors in explaining the durability of political misperceptions. To the contrary, an emerging research consensus finds that corrective information is typically at least somewhat effective at increasing belief accuracy when received by respondents. However, the research that I review suggests that the accuracy-increasing effects of corrective information like fact checks often do not last or accumulate; instead, they frequently seem to decay or be overwhelmed by cues from elites and the media promoting more congenial but less accurate claims. As a result, misperceptions typically persist in public opinion for years after they have been debunked. Given these realities, the primary challenge for scientific communication is not to prevent backfire effects but instead, to understand how to target corrective information better and to make it more effective. Ultimately, however, the best approach is to disrupt the formation of linkages between group identities and false claims and to reduce the flow of cues reinforcing those claims from elites and the media. Doing so will require a shift from a strategy focused on providing information to the public to one that considers the roles of intermediaries in forming and maintaining belief systems.


Subject(s)
Communication , Communications Media/trends , Politics , Communications Media/standards , Deception , Humans
4.
Proc Natl Acad Sci U S A ; 118(15)2021 04 13.
Article in English | MEDLINE | ID: mdl-33837146

ABSTRACT

Humans learn about the world by collectively acquiring information, filtering it, and sharing what we know. Misinformation undermines this process. The repercussions are extensive. Without reliable and accurate sources of information, we cannot hope to halt climate change, make reasoned democratic decisions, or control a global pandemic. Most analyses of misinformation focus on popular and social media, but the scientific enterprise faces a parallel set of problems-from hype and hyperbole to publication bias and citation misdirection, predatory publishing, and filter bubbles. In this perspective, we highlight these parallels and discuss future research directions and interventions.


Subject(s)
Biomedical Research/ethics , Health Communication/ethics , Periodicals as Topic/trends , Health Communication/trends , Humans , Mass Media/ethics , Mass Media/trends , Periodicals as Topic/ethics
5.
Proc Natl Acad Sci U S A ; 118(5)2021 02 02.
Article in English | MEDLINE | ID: mdl-33495336

ABSTRACT

Countering misinformation can reduce belief in the moment, but corrective messages quickly fade from memory. We tested whether the longer-term impact of fact-checks depends on when people receive them. In two experiments (total N = 2,683), participants read true and false headlines taken from social media. In the treatment conditions, "true" and "false" tags appeared before, during, or after participants read each headline. Participants in a control condition received no information about veracity. One week later, participants in all conditions rated the same headlines' accuracy. Providing fact-checks after headlines (debunking) improved subsequent truth discernment more than providing the same information during (labeling) or before (prebunking) exposure. This finding informs the cognitive science of belief revision and has practical implications for social media platform designers.


Subject(s)
Newspapers as Topic , Humans , Time Factors
6.
J Med Internet Res ; 26: e48130, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38551638

ABSTRACT

BACKGROUND: Although researchers have extensively studied the rapid generation and spread of misinformation about the novel coronavirus during the pandemic, misinformation about numerous other health-related topics continues to contaminate the internet and has received far less attention. OBJECTIVE: This study aims to gauge the reach of the most popular medical content on the World Wide Web, extending beyond the confines of the pandemic. We conducted evaluations of subject matter and credibility for the years 2021 and 2022, following the principles of evidence-based medicine, with assessments performed by experienced clinicians. METHODS: We used 274 keywords to conduct web page searches through the BuzzSumo Enterprise Application. These keywords were chosen based on medical topics derived from surveys administered to medical practitioners. The search parameters were confined to 2 distinct date ranges: (1) January 1, 2021, to December 31, 2021; and (2) January 1, 2022, to December 31, 2022. Our searches were limited to web pages in the Polish language and filtered by the specified date ranges. The analysis encompassed 161 web pages retrieved in 2021 and 105 retrieved in 2022. Each web page was scrutinized by an experienced doctor to assess its credibility against evidence-based medicine standards. Furthermore, we gathered data on social media engagements associated with the web pages, considering platforms such as Facebook, Pinterest, Reddit, and Twitter. RESULTS: In 2022, the prevalence of unreliable information related to COVID-19 declined noticeably compared with 2021. Specifically, the percentage of noncredible web pages discussing COVID-19 and general vaccinations decreased from 57% (43/76) to 24% (6/25) and from 42% (10/25) to 30% (3/10), respectively. However, during the same period, there was a considerable uptick in the dissemination of untrustworthy social media content on other medical topics. The percentage of noncredible web pages covering cholesterol, statins, and cardiology rose from 11% (3/28) to 26% (9/35) and from 18% (5/28) to 26% (6/23), respectively. CONCLUSIONS: Efforts undertaken during the COVID-19 pandemic to curb the dissemination of misinformation appear to have yielded positive results. Nevertheless, our analysis suggests that these interventions need to be consistently applied across both established and emerging medical subjects. It appears that as interest in the pandemic waned, other topics gained prominence, essentially "filling the vacuum" and necessitating ongoing measures to address misinformation across a broader spectrum of health-related subjects.


Subject(s)
COVID-19 , Social Media , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Pandemics , Poland/epidemiology , Infodemiology , Communication , Language
7.
Sensors (Basel) ; 24(11)2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38894381

ABSTRACT

This article explores the possibilities of federated learning with a deep learning method as the basic approach for training detection models for fake news recognition. Federated learning is central to this research because it makes machine learning more secure by training models on decentralized data in decentralized places, for example, at different IoT edges. The data are not transferred between the decentralized places, which means that personally identifiable data are not shared. This could increase the security of data from sensors in intelligent houses and medical devices, or of data from various sources in online spaces. Each edge station could train a model separately on data obtained from its sensors and on data extracted from different sources. The models trained on local data at local clients are then aggregated at a central endpoint. We designed three different deep learning architectures as a basis for use within federated learning. The detection models were based on embeddings, CNNs (convolutional neural networks), and LSTM (long short-term memory). The best results were achieved using more LSTM layers (F1 = 0.92), although all three architectures achieved similar results. We also compared results obtained with and without federated learning. The analysis found that using federated learning, in which data are decomposed and divided into smaller local datasets, does not significantly reduce the accuracy of the models.
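The training pattern the abstract describes (models trained separately on local data at each edge, then aggregated at a central endpoint) matches the standard federated-averaging scheme. The following plain-Python sketch illustrates that scheme under stated assumptions: a logistic-regression stand-in replaces the paper's embedding/CNN/LSTM detectors, and the client data, feature size, and number of communication rounds are invented for illustration.

    import numpy as np

    def local_train(w, X, y, lr=0.1, epochs=20):
        """Train a logistic-regression 'fake news' detector on one client's local data."""
        w = w.copy()
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probability of 'fake'
            w -= lr * X.T @ (p - y) / len(y)          # gradient step on the log-loss
        return w

    def federated_average(client_weights, client_sizes):
        """Aggregate local models at the central endpoint, weighted by local dataset size."""
        sizes = np.asarray(client_sizes, dtype=float)
        return np.average(np.stack(client_weights), axis=0, weights=sizes)

    rng = np.random.default_rng(0)
    n_features, n_clients = 50, 3
    clients = [(rng.normal(size=(200, n_features)), rng.integers(0, 2, 200))
               for _ in range(n_clients)]

    global_w = np.zeros(n_features)
    for _ in range(5):                                # a few communication rounds
        local_ws = [local_train(global_w, X, y) for X, y in clients]
        global_w = federated_average(local_ws, [len(y) for _, y in clients])
    # Raw local data never leave the clients; only model weights are exchanged.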

8.
Behav Res Methods ; 56(3): 1863-1899, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37382812

ABSTRACT

Interest in the psychology of misinformation has exploded in recent years. Despite ample research, to date there is no validated framework to measure misinformation susceptibility. Therefore, we introduce Verification done, a nuanced interpretation schema and assessment tool that simultaneously considers Veracity discernment and its distinct, measurable abilities (real/fake news detection) and biases (distrust/naïvité, i.e., negative/positive judgment bias). We then conduct three studies with seven independent samples (Ntotal = 8504) to show how to develop, validate, and apply the Misinformation Susceptibility Test (MIST). In Study 1 (N = 409) we use a neural network language model to generate items, and use three psychometric methods (factor analysis, item response theory, and exploratory graph analysis) to create the MIST-20 (20 items; completion time < 2 minutes), the MIST-16 (16 items; < 2 minutes), and the MIST-8 (8 items; < 1 minute). In Study 2 (N = 7674) we confirm the internal and predictive validity of the MIST in five national quota samples (US, UK), across 2 years, from three different sampling platforms: Respondi, CloudResearch, and Prolific. We also explore the MIST's nomological net and generate age-, region-, and country-specific norm tables. In Study 3 (N = 421) we demonstrate how the MIST, in conjunction with Verification done, can provide novel insights on existing psychological interventions, thereby advancing theory development. Finally, we outline the versatile implementations of the MIST as a screening tool, covariate, and intervention evaluation framework. As all methods are transparently reported and detailed, this work will allow other researchers to create similar scales or adapt them for any population of interest.
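As a purely illustrative companion to the abilities and biases named above, the sketch below scores a MIST-style instrument by counting correct judgments of real and fake items; the function names, the simple averaging of the two detection rates, and the difference-based bias index are assumptions for illustration, not the authors' published scoring procedure or norm tables.

    from dataclasses import dataclass

    @dataclass
    class MistStyleScores:
        veracity_discernment: float   # overall accuracy across real and fake items
        real_news_detection: float    # proportion of real items correctly accepted
        fake_news_detection: float    # proportion of fake items correctly rejected
        judgment_bias: float          # >0 leans naive (accepts too much), <0 leans distrustful

    def score_mist_style(responses, answer_key):
        """responses/answer_key: equal-length lists of 'real'/'fake' labels."""
        real_idx = [i for i, truth in enumerate(answer_key) if truth == "real"]
        fake_idx = [i for i, truth in enumerate(answer_key) if truth == "fake"]
        real_hits = sum(responses[i] == "real" for i in real_idx) / len(real_idx)
        fake_hits = sum(responses[i] == "fake" for i in fake_idx) / len(fake_idx)
        return MistStyleScores(
            veracity_discernment=(real_hits + fake_hits) / 2,
            real_news_detection=real_hits,
            fake_news_detection=fake_hits,
            judgment_bias=real_hits - fake_hits,
        )

    print(score_mist_style(["real", "real", "fake", "real"],
                           ["real", "fake", "fake", "real"]))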


Subject(s)
Communication , Judgment , Humans , Psychometrics/methods , Language , Factor Analysis, Statistical
9.
BMC Public Health ; 23(1): 2213, 2023 11 09.
Article in English | MEDLINE | ID: mdl-37946134

ABSTRACT

BACKGROUND: Post-traumatic stress disorder (PTSD) sufferers show problematic patterns of Internet use such as fear of missing out (FOMO) and sharing misinformation and fake news. This study aimed to investigate these associations in survivors of the 2008 earthquake in Wenchuan, China. METHODS: A self-reported survey was completed by 356 survivors of the Wenchuan earthquake. A mediated structural equation model was constructed to test a proposed pattern of associations with FOMO as a mediator of the relationship between PTSD symptoms and belief in fake news, as well as moderators of this pathway. RESULTS: PTSD was directly associated with believing fake news (β = 0.444, p < .001) and with FOMO (β = 0.347, p < .001). FOMO mediated the association between PTSD and fake news belief (β = 0.373, p < .001). Age moderated the direct (β = 0.148, t = 3.097, p = .002) and indirect (β = 0.145, t = 3.122, p = .002) pathways, with effects more pronounced with increasing age. Gender was also a moderator, with the indirect effect present in females but not in males (β = 0.281, t = 6.737, p < .001). CONCLUSION: Those with higher PTSD symptoms are more likely to believe fake news, and this is partly explained by FOMO. This effect is present in females but not males and is stronger in older people. Findings extend knowledge of the role of psychological variables in problematic Internet use among those with PTSD.
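For readers unfamiliar with the mediation logic reported above (PTSD → FOMO → belief in fake news), the sketch below estimates an indirect effect with ordinary least squares and a bootstrap confidence interval on simulated placeholder data; the variable names and coefficients are assumptions, and the moderated structural equation model actually fitted in the study is not reproduced here.

    import numpy as np

    def ols_coefs(y, X):
        """Return coefficients of y ~ X (X already includes an intercept column)."""
        return np.linalg.lstsq(X, y, rcond=None)[0]

    def bootstrap_indirect(ptsd, fomo, belief, n_boot=2000, seed=0):
        """Bootstrap the indirect effect a*b of PTSD on belief through FOMO."""
        rng = np.random.default_rng(seed)
        n, effects = len(ptsd), []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            p, f, b = ptsd[idx], fomo[idx], belief[idx]
            ones = np.ones(n)
            a = ols_coefs(f, np.column_stack([ones, p]))[1]          # PTSD -> FOMO
            b_path = ols_coefs(b, np.column_stack([ones, p, f]))[2]  # FOMO -> belief, controlling PTSD
            effects.append(a * b_path)
        lo, hi = np.percentile(effects, [2.5, 97.5])
        return np.mean(effects), (lo, hi)

    # Simulated placeholder data with a built-in indirect pathway
    rng = np.random.default_rng(1)
    ptsd = rng.normal(size=356)
    fomo = 0.35 * ptsd + rng.normal(size=356)
    belief = 0.40 * fomo + 0.20 * ptsd + rng.normal(size=356)
    print(bootstrap_indirect(ptsd, fomo, belief))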


Subject(s)
Earthquakes , Stress Disorders, Post-Traumatic , Male , Female , Humans , Aged , Stress Disorders, Post-Traumatic/epidemiology , Stress Disorders, Post-Traumatic/psychology , Cross-Sectional Studies , Disinformation , Survivors/psychology , China/epidemiology , Risk Factors
10.
Memory ; 31(1): 137-146, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36170037

ABSTRACT

Memory for events can be biased. For example, people tend to recall more events that support than oppose their current worldview. The present study examined partisan bias in memory for events related to the January 6, 2021, Capitol riot in the United States. Participants rated their memory for true and false events that were either favourable to their political party or to the other major political party in the United States. For both true and false events, participants remembered more events that favoured their political party. Regression analyses showed that the number of false memories that participants reported was positively associated with their tendency to support conspiracy beliefs and with their self-reported engagement with the Capitol riot. These results suggest that Democrats and Republicans remember the Capitol riot differently and that certain individual difference factors can predict the formation of false memories in this context. Misinformation played an influential role in the Capitol riot, and understanding differences in memory for this event is beneficial to avoiding similar tragedies in the future.


Subject(s)
Politics , Riots , Humans , United States , Memory , Communication , Individuality
11.
J Med Internet Res ; 25: e45731, 2023 08 09.
Article in English | MEDLINE | ID: mdl-37556184

ABSTRACT

BACKGROUND: Misinformation poses a serious challenge to clinical and policy decision-making in the health field. The COVID-19 pandemic amplified interest in misinformation and related terms and witnessed a proliferation of definitions. OBJECTIVE: We aim to assess the definitions of misinformation and related terms used in health-related literature. METHODS: We conducted a scoping review of systematic reviews by searching the Ovid MEDLINE, Embase, Cochrane, and Epistemonikos databases for articles published within the last 5 years, up to March 2023. Eligible studies were systematic reviews that stated misinformation or related terms as part of their objectives, conducted a systematic search of at least one database, and reported at least 1 definition for misinformation or related terms. We extracted definitions for the terms misinformation, disinformation, fake news, infodemic, and malinformation. Within each definition, we identified concepts and mapped them across misinformation-related terms. RESULTS: We included 41 eligible systematic reviews, of which 32 (78%) addressed the topic of public health emergencies (including the COVID-19 pandemic) and contained 75 definitions for misinformation and related terms. The definitions consisted of 20 for misinformation, 19 for disinformation, 10 for fake news, 24 for infodemic, and 2 for malinformation. "False/inaccurate/incorrect" was mentioned in 15 of 20 definitions of misinformation, 13 of 19 definitions of disinformation, 5 of 10 definitions of fake news, 6 of 24 definitions of infodemic, and 0 of 2 definitions of malinformation. For infodemic, 19 of 24 definitions addressed "information overload"; for malinformation, 2 of 2 definitions mentioned "accurate" and 1 mentioned "used in the wrong context." Of all the definitions, 56 (75%) were referenced from other sources. CONCLUSIONS: While the definitions of misinformation and related terms in the health field showed some inconsistencies and variability, they were largely consistent. The inconsistencies related to intentionality in misinformation definitions (7 definitions mention "unintentional," while 5 mention "intentional") and to the content of infodemic (9 definitions mention "valid and invalid info," while 6 mention "false/inaccurate/incorrect"). The inclusion of concepts such as "intentional" may be difficult to operationalize because it is difficult to ascertain one's intentions. This scoping review has the strength of using a systematic method for retrieving articles but does not cover all definitions in the extant literature outside the field of health. This scoping review of the health literature identified several definitions for misinformation and related terms, which showed variability and included concepts that are difficult to operationalize. Health practitioners need to exert caution before labeling a piece of information as misinformation or any other related term, and should only do so after ascertaining accurateness and sometimes intentionality. Additional efforts are needed to allow future consensus around clear and operational definitions.


Subject(s)
COVID-19 , Humans , Pandemics , Systematic Reviews as Topic , Consensus , Communication
12.
J Med Internet Res ; 25: e45583, 2023 08 24.
Article in English | MEDLINE | ID: mdl-37616030

ABSTRACT

BACKGROUND: Health-related misinformation on social media is a key challenge to effective and timely public health responses. Existing mitigation measures include flagging misinformation or providing links to correct information, but they have not yet targeted social processes. Current approaches focus on increasing scrutiny, providing corrections to misinformation (debunking), or alerting users prospectively about future misinformation (prebunking and inoculation). Here, we provide a test of a complementary strategy that focuses on the social processes inherent in social media use, in particular, social reinforcement, social identity, and injunctive norms. OBJECTIVE: This study aimed to examine whether providing balanced social reference cues (ie, cues that provide information on users sharing and, more importantly, not sharing specific content) in addition to flagging COVID-19-related misinformation leads to reductions in sharing behavior and improvement in overall sharing quality. METHODS: A total of 3 field experiments were conducted on Twitter's native social media feed (via a newly developed browser extension). Participants' feeds were augmented to include misleading and control information, resulting in 4 groups: no-information control, Twitter's own misinformation warning (misinformation flag), social cue only, and combined misinformation flag and social cue. We tracked the content shared or liked by participants. Participants were provided with social information referencing either their personal network on Twitter or all Twitter users. RESULTS: A total of 1424 Twitter users participated in the 3 studies (n=824, n=322, and n=278). Across all 3 studies, we found that social cues referencing users' personal networks, combined with a misinformation flag, reduced the sharing of misleading but not control information and improved overall sharing quality. We show that this improvement could be driven by a change in injunctive social norms (study 2) but not social identity (study 3). CONCLUSIONS: Social reference cues combined with misinformation flags can significantly and meaningfully reduce the amount of COVID-19-related misinformation shared and improve overall sharing quality. They are a feasible and scalable way to effectively curb the sharing of COVID-19-related misinformation on social media.


Subject(s)
COVID-19 , Social Media , Humans , Cues , Emotions , Communication
13.
Sensors (Basel) ; 23(4)2023 Feb 04.
Article in English | MEDLINE | ID: mdl-36850346

ABSTRACT

Nowadays, social media has become the main source of news around the world. The spread of fake news on social networks has become a serious global issue, damaging political, economic, and social life and negatively affecting citizens. Fake news often carries negative sentiments, and the public's response to it carries the emotions of surprise, fear, and disgust. In this article, we extracted features based on sentiment analysis of news articles and emotion analysis of users' comments regarding this news. These features were fed, along with the content feature of the news, to the proposed bidirectional long short-term memory model to detect fake news. We used the standard Fakeddit dataset, which contains news titles and the comments posted regarding them, to train and test the proposed model. Using the extracted features, the suggested model achieved a high detection performance of 96.77% on the area under the ROC curve (AUC) measure, which is higher than what other state-of-the-art studies report. The results show that the features extracted from sentiment analysis of the news, which represents the publisher's stance, and from emotion analysis of the comments, which represents the crowd's stance, contribute to raising the efficiency of the detection model.
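A minimal sketch of the kind of architecture described above, assuming a bidirectional LSTM over the news text whose final states are concatenated with a small vector of sentiment/emotion features before classification; the class name, dimensions, and random inputs are illustrative assumptions, not the authors' implementation or the Fakeddit preprocessing.

    import torch
    import torch.nn as nn

    class BiLstmWithStanceFeatures(nn.Module):
        """Bi-LSTM over the news text, concatenated with sentiment/emotion feature vectors."""
        def __init__(self, vocab_size, embed_dim=128, hidden=64, aux_dim=8):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Sequential(
                nn.Linear(2 * hidden + aux_dim, 64), nn.ReLU(), nn.Linear(64, 1)
            )

        def forward(self, token_ids, aux_features):
            # token_ids: (batch, seq_len); aux_features: (batch, aux_dim)
            _, (h_n, _) = self.bilstm(self.embed(token_ids))
            text_repr = torch.cat([h_n[-2], h_n[-1]], dim=-1)   # final forward + backward states
            return self.head(torch.cat([text_repr, aux_features], dim=-1))  # fake-news logit

    model = BiLstmWithStanceFeatures(vocab_size=20000)
    logits = model(torch.randint(1, 20000, (4, 50)), torch.rand(4, 8))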


Subject(s)
Social Media , Humans , Disinformation , Sentiment Analysis , Emotions , Fear
14.
Sensors (Basel) ; 23(24)2023 Dec 07.
Article in English | MEDLINE | ID: mdl-38139513

ABSTRACT

Currently, one can observe the evolution of social media networks. In particular, readers are often confronted with the fact that the opinion of a non-expert is treated as being as important and significant as the opinion of an expert. Changes in traditional media are reducing the role of the conventional 'editorial office', placing gradual emphasis on the remote work of journalists and forcing increasingly frequent use of online sources rather than actual reporting work. As a result, social media has become an element of state security, as disinformation and fake news produced by malicious actors can manipulate readers, creating unnecessary debate on topics organically irrelevant to society. This causes a cascading effect, fear among citizens, and eventually threats to the state's security. Advanced data sensors and deep machine learning methods have great potential to enable the creation of effective tools for combating the fake news problem. However, these solutions often lack sufficient model generalization in the real world due to data deficits. In this paper, we propose an innovative solution involving a committee of classifiers to tackle the fake news detection challenge. We introduce a diverse set of base models, each independently trained on sub-corpora with unique characteristics. In particular, we use multi-label text category classification, which helps formulate the ensemble. The experiments were conducted on six different benchmark datasets. The results are promising and open the field for further research.
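The committee idea described above can be sketched as a set of text classifiers, each fitted on its own sub-corpus and combined by majority vote; the scikit-learn pipeline, toy sub-corpora, and voting rule below are assumptions for illustration rather than the paper's multi-label ensemble.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_committee(sub_corpora):
        """Train one text classifier per sub-corpus; each member sees only its own data."""
        committee = []
        for texts, labels in sub_corpora:
            member = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
            committee.append(member.fit(texts, labels))
        return committee

    def committee_predict(committee, texts):
        """Majority vote across the independently trained members (1 = fake, 0 = real)."""
        votes = np.stack([member.predict(texts) for member in committee])
        return (votes.mean(axis=0) >= 0.5).astype(int)

    # Toy placeholder sub-corpora with different characteristics (e.g., topic or source)
    sub_corpora = [
        (["shocking miracle cure", "official health advisory"], [1, 0]),
        (["celebrity secretly cloned", "quarterly report released"], [1, 0]),
    ]
    committee = train_committee(sub_corpora)
    print(committee_predict(committee, ["miracle cure revealed"]))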

15.
Pers Individ Dif ; 200: 111893, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36089997

ABSTRACT

Awareness of the potential psychological significance of false news increased during the coronavirus pandemic; however, its impact on psychopathology and individual differences remains unclear. Acknowledging this, the authors investigated the psychological and psychopathological profiles that characterize fake news consumption. A total of 1452 volunteers from the general population with no previous psychiatric history participated and responded to clinical psychopathology assessment tests. Respondents completed a fake news screening test, which allowed them to be allocated to a quasi-experimental condition: group 1 (non-fake news consumers) or group 2 (fake news consumers). Mean comparison, Bayesian inference, and multiple regression analyses were applied. Participants with schizotypal, paranoid, and histrionic personality traits were ineffective at detecting fake news. They were also more vulnerable to its negative effects. Specifically, they displayed higher levels of anxiety and exhibited more cognitive biases based on suggestibility and the Barnum Effect. No significant effects on psychotic symptomatology or affective mood states were observed. Corresponding to these outcomes, two clinical and therapeutic recommendations are made, related to reducing the Barnum Effect and reinterpreting digital media sensationalism. The impact of fake news and possible ways of preventing it are discussed.

16.
Environ Manage ; 71(6): 1188-1198, 2023 06.
Article in English | MEDLINE | ID: mdl-36443526

ABSTRACT

The weakening of environmental laws, supported by disinformation, is currently of concern in Brazil. An example of such disinformation is the case of the "firefighter cattle". Supporters of this idea believe that, by consuming organic mass, cattle decrease the risk of fire in natural ecosystems. This claim was cited by a member of the Bolsonaro government in response to the unprecedented 2020 fires in the Pantanal, as well as in support of a new law that enables extensive livestock farming in protected areas of this biome. By suggesting that grazing benefits the ecosystem, the "firefighter cattle" argument supports the interests of agribusiness while ignoring the real costs of livestock production for biodiversity. We analysed the social repercussions of the "firefighter cattle" claim by examining public reactions to YouTube, Facebook, and Google News posts. These videos and articles, and the responses to them, either agreed or disagreed with the "firefighter cattle" idea. Supportive posts were shared more on social media and triggered more interactions than critical posts. Even though many netizens disagreed with the idea of the "firefighter cattle", it went viral and was used as a tool to strengthen anti-environmental policies. We advocate that government institutions should use resources and guidelines provided by the scientific community to raise awareness, including international reports produced by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) and the Intergovernmental Panel on Climate Change (IPCC). We need to curb pseudoscience and misinformation in political discourse, avoiding misconceptions that threaten natural resources and confuse global society.


Subject(s)
Ecosystem , Social Media , Animals , Cattle , Brazil , Conservation of Natural Resources , Environmental Policy
17.
Sci Eng Ethics ; 29(4): 30, 2023 08 09.
Article in English | MEDLINE | ID: mdl-37555995

ABSTRACT

This article suggests several design principles intended to assist in the development of ethical algorithms, exemplified by the task of fighting fake news. Although numerous algorithmic solutions have been proposed, fake news remains a wicked socio-technical problem that demands not only engineering but also ethical considerations. We suggest employing insights from the ethics of care, while maintaining its speculative stance, to ask how algorithms and design processes would differ if they generated care and fought fake news. After reviewing the major characteristics of the ethics of care and the phases of care, we offer four algorithmic design principles. The first principle highlights the need for software designers to develop a strategy for dealing with fake news. The second principle calls for the involvement of various stakeholders in the design process in order to increase the chances of successfully fighting fake news. The third principle suggests allowing end users to report fake news. Finally, the last principle proposes keeping the end user updated on the treatment of the suspected news items. Implementing these principles as care practices can make the development process more ethically oriented and improve the ability to fight fake news.


Subject(s)
Algorithms , Disinformation , Software , Engineering , Artificial Intelligence
18.
Appl Soft Comput ; 139: 110235, 2023 May.
Article in English | MEDLINE | ID: mdl-36999094

ABSTRACT

The emergence of various social networks has generated vast volumes of data. Efficient methods for capturing, distinguishing, and filtering real and fake news are becoming increasingly important, especially after the outbreak of the COVID-19 pandemic. This study conducts a multi-aspect, systematic review of the current state and challenges of graph neural networks (GNNs) for fake news detection systems and outlines a comprehensive approach to implementing fake news detection systems using GNNs. Furthermore, advanced GNN-based techniques for implementing pragmatic fake news detection systems are discussed from multiple perspectives. First, we introduce the background of and provide an overview of fake news, fake news detection, and GNNs. Second, we provide a taxonomy of GNN-based fake news detection models and review and highlight representative models in each category. Subsequently, we compare the critical ideas, advantages, and disadvantages of the methods in each category. Next, we discuss the possible challenges of fake news detection and GNNs. Finally, we present several open issues in this area and discuss potential directions for future research. We believe that this review can help practitioners and newcomers surmount current impediments and navigate future situations when deploying a fake news detection system using GNNs.
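As a hedged illustration of the propagation-based modelling such a review surveys (not a reimplementation of any specific system discussed in it), the sketch below builds a graph-convolution layer from scratch in PyTorch and classifies a toy news-propagation graph; the adjacency matrix, feature sizes, and two-class readout are assumptions.

    import torch
    import torch.nn as nn

    class SimpleGraphConv(nn.Module):
        """One graph-convolution layer: normalized neighbourhood averaging plus a linear map."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):
            a_hat = adj + torch.eye(adj.size(0))        # add self-loops
            deg = a_hat.sum(dim=1, keepdim=True)
            return self.linear((a_hat / deg) @ x)       # average over neighbours, then project

    class PropagationClassifier(nn.Module):
        """Two graph-conv layers over a news-propagation graph, pooled into a real/fake prediction."""
        def __init__(self, in_dim=16, hidden=32):
            super().__init__()
            self.gc1 = SimpleGraphConv(in_dim, hidden)
            self.gc2 = SimpleGraphConv(hidden, hidden)
            self.readout = nn.Linear(hidden, 2)         # logits for {real, fake}

        def forward(self, x, adj):
            h = torch.relu(self.gc1(x, adj))
            h = torch.relu(self.gc2(h, adj))
            return self.readout(h.mean(dim=0))          # mean-pool article and spreader nodes

    # Toy propagation graph: node 0 is the article, nodes 1-3 are sharing accounts
    adj = torch.tensor([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]], dtype=torch.float)
    features = torch.rand(4, 16)
    print(PropagationClassifier()(features, adj))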

19.
Inf Process Manag ; 60(2): None, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36874352

ABSTRACT

A news article's online audience provides useful insights about the article's identity. However, fake news classifiers using such information risk relying on profiling. In response to the rising demand for ethical AI, we present a profiling-avoiding algorithm that leverages Twitter users during model optimisation while excluding them when an article's veracity is evaluated. For this, we take inspiration from the social sciences and introduce two objective functions that maximise correlation between the article and its spreaders, and among those spreaders. We applied our profiling-avoiding algorithm to three popular neural classifiers and obtained results on fake news data discussing a variety of news topics. The positive impact on prediction performance demonstrates the soundness of the proposed objective functions to integrate social context in text-based classifiers. Moreover, statistical visualisation and dimension reduction techniques show that the user-inspired classifiers better discriminate between unseen fake and true news in their latent spaces. Our study serves as a stepping stone to resolve the underexplored issue of profiling-dependent decision-making in user-informed fake news detection.
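One way to read the training-only use of spreaders described above is as auxiliary correlation terms added to the loss during optimisation and dropped at inference, so that veracity predictions depend on the article alone; the cosine-similarity formulation, weighting, and tensor shapes below are illustrative assumptions, not the authors' published objective functions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TextOnlyClassifier(nn.Module):
        """Veracity is predicted from the article embedding alone; users never enter inference."""
        def __init__(self, dim=64):
            super().__init__()
            self.head = nn.Linear(dim, 1)

        def forward(self, article_emb):
            return self.head(article_emb)

    def training_loss(model, article_emb, spreader_embs, label, aux_weight=0.1):
        """Classification loss plus two user-informed, training-only correlation terms."""
        bce = F.binary_cross_entropy_with_logits(model(article_emb), label)
        mean_spreader = spreader_embs.mean(dim=0)
        art_vs_spreaders = 1 - F.cosine_similarity(article_emb, mean_spreader, dim=0)
        among_spreaders = 1 - F.cosine_similarity(spreader_embs.unsqueeze(1),
                                                  spreader_embs.unsqueeze(0), dim=-1).mean()
        return bce + aux_weight * (art_vs_spreaders + among_spreaders)

    model = TextOnlyClassifier()
    loss = training_loss(model, torch.rand(64), torch.rand(5, 64), torch.tensor([1.0]))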

20.
Entropy (Basel) ; 25(4)2023 Apr 04.
Article in English | MEDLINE | ID: mdl-37190402

ABSTRACT

Multi-modal fake news detection aims to identify fake information through text and corresponding images. Current methods simply combine image and text representations through a vanilla attention module, but a semantic gap remains between the two modalities. To address this issue, we introduce an image caption-based method to enhance the model's ability to capture semantic information from images. Formally, we integrate image description information into the text to bridge the semantic gap between text and images. Moreover, to optimize image utilization and enhance the semantic interaction between images and text, we combine global and object features from the images for the final representation. Finally, we leverage a transformer to fuse the above multi-modal content. We carried out extensive experiments on two publicly available datasets, and the results show that our proposed method significantly improves performance compared with existing methods.
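A minimal sketch of the fusion step described above, assuming the news text, the generated image caption, the global image feature, and the object features have already been projected to a shared embedding size and are fused by a standard transformer encoder; the class name, dimensions, and pooling choice are assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class CaptionBridgedFusion(nn.Module):
        """Fuse text (news + caption) tokens with global/object image features via a transformer."""
        def __init__(self, dim=256, heads=4, layers=2):
            super().__init__()
            encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=layers)
            self.classifier = nn.Linear(dim, 2)              # logits for {real, fake}

        def forward(self, text_tokens, caption_tokens, global_img, object_feats):
            # All inputs are pre-projected to the shared dimension `dim`:
            # text_tokens (B, Lt, dim), caption_tokens (B, Lc, dim),
            # global_img (B, 1, dim), object_feats (B, K, dim)
            fused = self.fusion(torch.cat([text_tokens, caption_tokens,
                                           global_img, object_feats], dim=1))
            return self.classifier(fused.mean(dim=1))        # pool fused tokens, then classify

    model = CaptionBridgedFusion()
    logits = model(torch.rand(2, 30, 256), torch.rand(2, 12, 256),
                   torch.rand(2, 1, 256), torch.rand(2, 5, 256))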
