ABSTRACT
Social media is widely used globally by patients, families of patients, health professionals, scientists, and other stakeholders who seek and share information related to cancer. Despite the many benefits of social media for cancer care and research, there is also a substantial risk of exposure to misinformation, that is, inaccurate information about cancer. Types of misinformation vary from inaccurate information about cancer risk factors or unproven treatment options to conspiracy theories and public relations articles or advertisements appearing as reliable medical content. Many characteristics of social media networks, such as their extensive use and the ease with which they allow information to be shared quickly, facilitate the spread of misinformation. Research shows that inaccurate and misleading health-related posts on social media often receive more views and engagement (e.g., likes, shares) from users than accurate information does. Exposure to misinformation can have downstream implications for health-related attitudes and behaviors. However, combating misinformation is a complex process that requires engagement from media platforms, scientific and health experts, governmental organizations, and the general public. Cancer experts, for example, should actively counter misinformation in real time and should disseminate evidence-based content on social media. Health professionals should give information prescriptions to patients and families and support health literacy. Patients and families should vet the quality of cancer information before acting upon it (e.g., by using publicly available checklists) and seek recommended resources from health care providers and trusted organizations. Future multidisciplinary research is needed to identify optimal ways of building resilience and combating misinformation across social media.
Subject(s)
Communication, Neoplasms, Social Media, Humans, Neoplasms/psychology, Neoplasms/therapy, Information Dissemination/methods
ABSTRACT
A great deal of empirical research has examined who falls for misinformation and why. Here, we introduce a formal game-theoretic model of engagement with news stories that captures the strategic interplay between (mis)information consumers and producers. A key insight from the model is that observed patterns of engagement do not necessarily reflect the preferences of consumers. This is because producers seeking to promote misinformation can use strategies that lead moderately inattentive readers to engage more with false stories than true ones, even when readers prefer more accurate over less accurate information. We then empirically test people's preferences for accuracy in the news. In three studies, we find that people strongly prefer to click and share news they perceive as more accurate, both in a general population sample and in a sample of users recruited through Twitter who had actually shared links to misinformation sites online. Despite this preference for accurate news, and consistent with the predictions of our model, we find markedly different engagement patterns for articles from misinformation versus mainstream news sites. Using 1,000 headlines from 20 misinformation and 20 mainstream news sites, we compare Facebook engagement data with 20,000 accuracy ratings collected in a survey experiment. Engagement with a headline is negatively correlated with perceived accuracy for misinformation sites, but positively correlated with perceived accuracy for mainstream sites. Taken together, these theoretical and empirical results suggest that consumer preferences cannot be straightforwardly inferred from empirical patterns of engagement.
Subject(s)
Consumer Behavior, Social Media, Humans, Communication, Surveys and Questionnaires, Cognition, Empirical Research
ABSTRACT
Metacognition, our ability to reflect on our own beliefs, manifests itself in the confidence we have in those beliefs and helps us guide our behavior in complex and uncertain environments. Here, we provide empirical tests of the importance of metacognition during the pandemic. Bayesian and frequentist analyses demonstrate that citizens with higher metacognitive sensitivity (those whose confidence differentiates correct from incorrect COVID-19 beliefs) reported higher willingness to vaccinate against COVID-19 and higher compliance with recommended public health measures. Notably, this benefit of accurate introspection held when controlling for the accuracy of COVID-19 beliefs. By demonstrating how vaccination willingness and compliance may relate to insight into the varying accuracy of beliefs, rather than only the accuracy of the beliefs themselves, this research highlights the critical role of metacognitive ability in times of crisis. However, we do not find sufficient evidence to conclude that citizens with higher metacognitive sensitivity were more likely to comply with recommended public health measures when controlling for the absolute level of confidence citizens had in their COVID-19 beliefs.
Subject(s)
COVID-19, Metacognition, Humans, Bayes Theorem, Public Health, COVID-19/epidemiology, COVID-19/prevention & control, Uncertainty
ABSTRACT
Why do people share misinformation on social media? In this research (N = 2,476), we show that the structure of online sharing built into social platforms is more important than individual deficits in critical reasoning and partisan bias, the commonly cited drivers of misinformation. Owing to the reward-based learning systems on social media, users form habits of sharing information that attracts others' attention. Once habits form, information sharing is automatically activated by cues on the platform without users considering response outcomes such as spreading misinformation. As a result of user habits, 30 to 40% of the false news shared in our research was due to the 15% most habitual news sharers. Suggesting that sharing of false news is part of a broader response pattern established by social media platforms, habitual users also shared information that challenged their own political beliefs. Finally, we show that sharing of false news is not an inevitable consequence of user habits: social media sites could be restructured to build habits of sharing accurate information.
Subject(s)
Communication, Social Media, Humans, Information Dissemination, Problem Solving
ABSTRACT
Understanding the mechanisms by which information and misinformation spread through groups of individual actors is essential to the prediction of phenomena ranging from coordinated group behaviors to misinformation epidemics. Transmission of information through groups depends on the rules that individuals use to transform the perceived actions of others into their own behaviors. Because it is often not possible to directly infer decision-making strategies in situ, most studies of behavioral spread assume that individuals make decisions by pooling or averaging the actions or behavioral states of neighbors. However, whether individuals may instead adopt more sophisticated strategies that exploit socially transmitted information, while remaining robust to misinformation, is unknown. Here, we study the relationship between individual decision-making and misinformation spread in groups of wild coral reef fish, where misinformation occurs in the form of false alarms that can spread contagiously through groups. Using automated visual field reconstruction of wild animals, we infer the precise sequences of socially transmitted visual stimuli perceived by individuals during decision-making. Our analysis reveals a feature of decision-making essential for controlling misinformation spread: dynamic adjustments in sensitivity to socially transmitted cues. This form of dynamic gain control can be achieved by a simple and biologically widespread decision-making circuit, and it renders individual behavior robust to natural fluctuations in misinformation exposure.
Subject(s)
Animals, Wild, Epidemics, Animals, Communication, Fishes, Visual Fields
ABSTRACT
Following the 2020 general election, Republican elected officials, including then-President Donald Trump, promoted conspiracy theories claiming that Joe Biden's close victory in Georgia was fraudulent. Such conspiratorial claims could implicate participation in the Georgia Senate runoff election in different ways: by signaling that voting doesn't matter, distracting from ongoing campaigns, stoking political anger at out-partisans, or providing rationalizations for (lack of) enthusiasm for voting during a transfer of power. Here, we evaluate the possibility of any on-average relationship with turnout by combining behavioral measures of engagement with election conspiracies online and administrative data on voter turnout for 40,000 Twitter users registered to vote in Georgia. We find small, limited associations. Liking or sharing messages opposed to conspiracy theories was associated with higher turnout than expected in the runoff election, and those who liked or shared tweets promoting fraud-related conspiracy theories were slightly less likely to vote.
Subject(s)
Communication, Fraud, Politics, Georgia, Humans, Longitudinal Studies
ABSTRACT
We study how communication platforms can improve social learning without censoring or fact-checking messages, when they have members who deliberately and/or inadvertently distort information. Message fidelity depends on social network depth (how many times information can be relayed) and breadth (the number of others with whom a typical user shares information). We characterize how the expected number of true minus false messages depends on breadth and depth of the network and the noise structure. Message fidelity can be improved by capping depth or, if that is not possible, limiting breadth, e.g., by capping the number of people to whom someone can forward a given message. Although caps reduce total communication, they increase the fraction of received messages that have traveled shorter distances and have had less opportunity to be altered, thereby increasing the signal-to-noise ratio.
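The depth/breadth mechanism summarized in this abstract can be illustrated with a small back-of-the-envelope simulation. This is a hypothetical sketch, not the authors' model: it assumes a binary message that starts out true, a uniform forwarding tree of a given breadth and depth, and an independent probability eps that each relay flips the message.

```python
# Hypothetical sketch of the depth/breadth trade-off described above.
# Assumptions (not from the paper): a binary message starts true, each
# relay independently flips it with probability eps, and every user
# forwards it to `breadth` others until `depth` hops are reached.

def message_stats(breadth, depth, eps):
    """Return (expected true-minus-false messages, total messages) over all hops."""
    net, total = 0.0, 0
    for k in range(1, depth + 1):
        n_k = breadth ** k                  # messages that traveled k hops
        # P(correct after k noisy relays) = (1 + (1 - 2*eps)**k) / 2,
        # so each such message contributes (1 - 2*eps)**k to net fidelity.
        net += n_k * (1 - 2 * eps) ** k
        total += n_k
    return net, total

deep = message_stats(breadth=3, depth=6, eps=0.2)
shallow = message_stats(breadth=3, depth=2, eps=0.2)

# Capping depth raises the signal-to-noise ratio (net / total), even
# though it reduces total communication.
assert shallow[0] / shallow[1] > deep[0] / deep[1]
```

Under these assumptions, messages that traveled fewer hops dominate the shallow network, which is exactly the abstract's point: caps increase the fraction of received messages that have had less opportunity to be altered.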
Subject(s)
Information Dissemination, Social Media, Social Networking, Humans, Information Dissemination/ethics, Learning/ethics, Social Media/ethics, Social Media/organization & administration, Social Media/statistics & numerical data
ABSTRACT
Retracted papers often circulate widely on social media, digital news, and other websites before their official retraction. The spread of potentially inaccurate or misleading results from retracted papers can harm the scientific community and the public. Here, we quantify the amount and type of attention that 3,851 retracted papers received over time on different online platforms. Compared with a set of nonretracted control papers from the same journals with similar publication year, number of coauthors, and author impact, retracted papers receive more attention after publication not only on social media but also on heavily curated platforms such as news outlets and knowledge repositories, amplifying the negative impact on the public. At the same time, we find that posts on Twitter tend to express more criticism about retracted than about control papers, suggesting that criticism-expressing tweets could contain factual information about problematic papers. Most importantly, around the time they are retracted, papers generate discussions that are primarily about the retraction incident rather than about research findings, showing that by this point papers have exhausted attention to their results and highlighting the limited effect of retractions. Our findings reveal the extent to which retracted papers are discussed on different online platforms and identify, at scale, audience criticism toward them. In this context, we show that retraction is not an effective tool for reducing online attention to problematic papers.
ABSTRACT
The recent emergence of machine-manipulated media raises an important societal question: how can we know whether a video we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with that of the leading computer vision deepfake detection model and find that they are similarly accurate while making different kinds of mistakes. Together, participants with access to the model's prediction are more accurate than either alone, but inaccurate model predictions often decrease participants' accuracy. To probe the relative strengths and weaknesses of humans and machines as detectors of deepfakes, we examine human and machine performance across video-level features, and we evaluate the impact of preregistered randomized interventions on deepfake detection. We find that manipulations designed to disrupt visual processing of faces hinder human participants' performance while mostly not affecting the model's performance, suggesting a role for specialized cognitive capacities in explaining human deepfake detection performance.
Subject(s)
Artificial Intelligence, Communication, Deception, Facial Recognition, Forensic Sciences, Humans, Social Media, Video Recording
ABSTRACT
Exposure to misleading information after witnessing an event can impair future memory reports about the event. This pervasive form of memory distortion, termed the misinformation effect, can be significantly reduced if individuals are warned about the reliability of post-event information before exposure to misleading information. The present fMRI study investigated whether such prewarnings improve subsequent memory accuracy by influencing encoding-related neural activity during exposure to misinformation. We employed a repeated retrieval misinformation paradigm in which participants watched a crime video (Witnessed Event), completed an initial test of memory, listened to a post-event auditory narrative that contained consistent, neutral, and misleading details (Post-Event Information), and then completed a final test of memory. At the behavioral level, participants who were given a prewarning before the Post-Event Information were less susceptible to misinformation on the final memory test compared with participants who were not given a warning (Karanian et al., Proceedings of the National Academy of Sciences of the United States of America, 117, 22771-22779, 2020). This protection from misinformation was accompanied by greater activity in frontal regions associated with source encoding (lateral PFC) and conflict detection (ACC) during misleading trials as well as a more global reduction in activity in auditory cortex and semantic processing regions (left inferior frontal gyrus) across all trials (consistent, neutral, misleading) of the Post-Event Information narrative. Importantly, the strength of these warning-related activity modulations was associated with better protection from misinformation on the final memory test (improved memory accuracy on misleading trials). Together, these results suggest that warnings modulate encoding-related neural activity during exposure to misinformation to improve memory accuracy.
Subject(s)
Magnetic Resonance Imaging, Mental Recall, Humans, Female, Male, Young Adult, Adult, Mental Recall/physiology, Communication, Deception, Prefrontal Cortex/physiology, Prefrontal Cortex/diagnostic imaging, Adolescent, Brain Mapping, Memory/physiology
ABSTRACT
OBJECTIVE: To understand whether cancer fatalism among adult social media users in the United States is linked to social media informational awareness and whether the relationship varies by education level. METHODS: Cross-sectional data from the 2022 Health Information National Trends Survey (n = 3,948) were analyzed using multivariable linear probability models. The study population was defined as social media users active within the past year. The outcome variable was cancer fatalism, and the predictor variables were social media informational awareness and education level. RESULTS: Participants with low social media informational awareness were 9 (95% CI = 3, 15), 6 (95% CI = 1, 11), and 21 (95% CI = 14, 27) percentage points more likely to agree that it seems like everything causes cancer, that you cannot lower your chances of getting cancer, and that there are too many cancer prevention recommendations to follow, respectively. Participants with a college degree or higher level of education who reported high social media informational awareness were the least likely to agree that everything causes cancer (60%; 95% CI = 54, 66), that you cannot lower your chances of getting cancer (14%; 95% CI = 10, 19), and that there are too many cancer prevention recommendations to follow (52%; 95% CI = 46, 59). CONCLUSION: Social media informational awareness was associated with lower levels of cancer fatalism among adult social media users. College graduates with high social media informational awareness were the least likely to report cancer fatalism.
Asunto(s)
Conocimientos, Actitudes y Práctica en Salud , Neoplasias , Medios de Comunicación Sociales , Humanos , Medios de Comunicación Sociales/estadística & datos numéricos , Masculino , Femenino , Estudios Transversales , Neoplasias/epidemiología , Neoplasias/prevención & control , Neoplasias/mortalidad , Adulto , Persona de Mediana Edad , Estados Unidos/epidemiología , Adulto Joven , Escolaridad , Concienciación , Anciano , Adolescente , Encuestas y CuestionariosRESUMEN
With the rapid spread of information via social media, individuals are prone to exposure to misinformation that they may draw on when forming beliefs. Across five experiments (total N = 815 adults, recruited through Amazon Mechanical Turk in the United States), we investigated whether people could ignore quantitative information when they judged for themselves that it was misreported. Participants recruited online viewed sets of values sampled from Gaussian distributions and estimated the underlying means. They attempted to ignore invalid information, namely outlier values inserted into the value sequences. Results indicated that participants were able to detect the outliers. Nevertheless, participants' estimates were still biased in the direction of the outlier, even when they were most certain that they had detected invalid information. The addition of visual warning cues and different task scenarios did not fully eliminate systematic over- and underestimation. These findings suggest that individuals may incorporate invalid information they meant to ignore when forming beliefs.
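As a toy illustration of the estimation task described above (the specific mean, spread, and outlier value are assumptions for illustration, not the study's stimuli), one can compute exactly how far a single unignored outlier pulls a sample mean:

```python
# Hypothetical sketch of the paradigm: values drawn from a Gaussian,
# with one invalid outlier inserted into the sequence. Parameters are
# illustrative assumptions, not the study's actual stimuli.
import random

random.seed(1)
true_mean, sd, n = 50.0, 5.0, 10
values = [random.gauss(true_mean, sd) for _ in range(n)]
outlier = 80.0                      # invalid value a participant should ignore

ideal = sum(values) / len(values)   # estimate if the outlier is fully ignored
contaminated = sum(values + [outlier]) / (n + 1)

# Averaging in a single unignored outlier pulls the estimate toward it;
# algebraically the shift equals (outlier - ideal) / (n + 1).
bias = contaminated - ideal
assert abs(bias - (outlier - ideal) / (n + 1)) < 1e-9
assert bias > 0                     # biased in the direction of the outlier
```

The point of the sketch is that even partial failure to discount a detected outlier produces a predictable, directional bias, which is the pattern the experiments report.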
Subject(s)
Communication, Cues, Adult, Humans, United States
ABSTRACT
The spread of misinformation is a pressing societal challenge. Prior work shows that shifting attention to accuracy increases the quality of people's news-sharing decisions. However, researchers disagree on whether accuracy-prompt interventions work for U.S. Republicans/conservatives and whether partisanship moderates the effect. In this preregistered adversarial collaboration, we tested this question using a multiverse meta-analysis (k = 21; N = 27,828). In all 70 models, accuracy prompts improved sharing discernment among Republicans/conservatives. We observed significant partisan moderation for single-headline "evaluation" treatments (a critical test for one research team) such that the effect was stronger among Democrats than Republicans. However, this moderation was not consistently robust across different operationalizations of ideology/partisanship, exclusion criteria, or treatment type. Overall, we observed significant partisan moderation in 50% of specifications (all of which were considered critical for the other team). We discuss the conditions under which moderation is observed and offer interpretations.
Subject(s)
Politics, Humans
ABSTRACT
BACKGROUND: Personal characteristics may be associated with believing misinformation and not believing in best practices to protect oneself from COVID-19. OBJECTIVE: To examine the associations of a person's age, race/ethnicity, education, residence, health literacy, medical mistrust level, and sources of health-related information with their COVID-19 health and conspiracy myth beliefs. DESIGN: We surveyed adults with hypertension in Maryland and Pennsylvania between August 2020 and March 2021. Incorrect responses were summed for eight health (mean = 0.68; range 0-5) and two conspiracy (mean = 0.92; range 0-2) COVID-19 questions. Higher scores indicated more incorrect responses. Statistical analyses included two-sample t-tests, Spearman's correlation, and log binomial regression. PARTICIPANTS: In total, 561 primary care patients (mean age = 62.3 years, 60.2% female, 46.0% Black, 10.2% Hispanic, 28.2% with a bachelor's degree or higher, 42.8% with annual household income less than $60,000) with a diagnosis of hypertension and at least one of five commonly associated conditions. MAIN MEASURES: Sociodemographic characteristics, health literacy, medical mistrust level, source of health-related information, and COVID-19 conspiracy and health myth beliefs. KEY RESULTS: In multivariable analyses, participants who did not get information from medical professional sources (prevalence ratio (PR) = 1.28; 95% CI = 1.06-1.55), had less than a bachelor's degree (PR = 1.49; 95% CI = 1.12-1.99), were less confident filling out medical forms (PR = 1.24; 95% CI = 1.02-1.50), and had higher medical mistrust (PR = 1.34; 95% CI = 1.05-1.69) were more likely to believe any health myths. Participants who had less than a bachelor's degree (PR = 1.22; 95% CI = 1.02-1.45), were less confident filling out medical forms (PR = 1.21; 95% CI = 1.09-1.34), and had higher medical mistrust (PR = 1.72; 95% CI = 1.43-2.06) were more likely to believe any conspiracy myths.
CONCLUSIONS: Lower educational attainment and health literacy, greater medical mistrust, and certain sources of health information are associated with misinformed COVID-19 beliefs. Programs addressing misinformation should focus on groups affected by these social determinants of health by encouraging reliance on scientific sources.
ABSTRACT
With recent advances in artificial intelligence (AI), patients are increasingly exposed to misleading medical information. Generative AI models, including large language models such as ChatGPT, create and modify text, image, audio and video content based on training data. Commercial use of generative AI is expanding rapidly, and the public will routinely receive messages created by generative AI. However, generative AI models may be unreliable, routinely making errors and spreading misinformation widely. Misinformation created by generative AI about mental illness may include factual errors, nonsense, fabricated sources and dangerous advice. Psychiatrists need to recognise that patients may receive misinformation online, including about medicine and psychiatry.
Subject(s)
Mental Disorders, Psychiatry, Humans, Artificial Intelligence, Psychiatrists, Communication
ABSTRACT
OBJECTIVE: This study explored the relationship between perceptions of health mis/disinformation on social media and the belief that progress has been made in curing cancer. METHODS: We analyzed cross-sectional, retrospective data collected from 4246 adult social media users in the 2022 Health Information National Trends Survey (HINTS 6). The outcome variable was belief in whether progress has been made in curing cancer. The primary predictor variable was the perception of health mis/disinformation on social media, categorized as 'Substantial' and '< Substantial'. We also examined whether the relationship varied by trust in the health care system, frequency of social media use, and education. The analysis controlled for demographic, socioeconomic, and health-related factors. RESULTS: Perception of substantial social media health mis- and disinformation was associated with a lower likelihood of believing progress has been made in curing cancer (odds ratio = 0.74, 95% CI = 0.59-0.94). Persons who perceived substantial social media health mis- and disinformation and had low trust in the health care system were less likely to believe progress has been made in curing cancer: 36% (95% CI: 28-45%). Persons who perceived substantial social media health mis- and disinformation and used social media less than daily were less likely to believe progress has been made in curing cancer: 44% (95% CI: 36-52%). Persons without a college degree who perceived substantial social media health mis- and disinformation were less likely to agree that progress has been made in curing cancer: 44% (95% CI: 39-50%). CONCLUSION: Exposure to misinformation on social media may be associated with negative attitudes about advances in curing cancer, particularly among social media users with low trust in the health care system, less frequent social media users, and those without a college degree.
Subject(s)
Neoplasms, Social Media, Trust, Humans, Social Media/statistics & numerical data, Cross-Sectional Studies, Male, Female, Trust/psychology, Neoplasms/psychology, Adult, Middle Aged, Retrospective Studies, Delivery of Health Care, Health Knowledge, Attitudes, Practice, Young Adult, Surveys and Questionnaires, Aged
ABSTRACT
There are many misconceptions about Prolonged Grief Disorder (PGD). We show with data that PGD is a diagnosis that applies to the rare few mourners who are at risk of significant distress and dysfunction. Mourners who meet criteria for PGD have been shown to benefit from specialized, targeted treatment. The case against PGD is empirically unsubstantiated, and continued scientific examination of effective treatments is warranted.
Subject(s)
Bereavement, Humans, Grief, Prolonged Grief Disorder
ABSTRACT
BACKGROUND: Inaccurate cancer news can have adverse effects on patients and families. One potential way to minimize this is through media literacy training-ideally, training tailored specifically to the evaluation of health-related media coverage. PURPOSE: We test whether an abbreviated health-focused media literacy intervention improves accuracy discernment or sharing discernment for cancer news headlines and also examine how these outcomes compare to the effects of a generic media literacy intervention. METHODS: We employ a survey experiment conducted using a nationally representative sample of Americans (N = 1,200). Respondents were assigned to either a health-focused media literacy intervention, a previously tested generic media literacy intervention, or the control. They were also randomly assigned to rate either perceived accuracy of headlines or sharing intentions. Intervention effects on accurate and inaccurate headline ratings were tested using OLS regressions at the item-response level, with standard errors clustered on the respondent and with headline fixed effects. RESULTS: We find that the health-focused media literacy intervention increased skepticism of both inaccurate (a 5.6% decrease in endorsement, 95% CI [0.1%, 10.7%]) and accurate (a 7.6% decrease, 95% CI [2.4%, 12.8%]) news headlines, and accordingly did not improve discernment between the two. The health-focused media literacy intervention also did not significantly improve sharing discernment. Meanwhile, the generic media literacy intervention had little effect on perceived accuracy outcomes, but did significantly improve sharing discernment. CONCLUSIONS: These results suggest further intervention development and refinement are needed before scaling up similarly targeted health information literacy tools, particularly focusing on building trust in legitimate sources and accurate content.
This study investigated how media literacy training affects people's ability to accurately judge cancer-related news. Specifically, we tested whether health-specific media literacy guidelines could help people better identify accurate versus inaccurate cancer news headlines compared to a set of general media literacy guidelines. Using a survey with 1,200 Americans, participants were divided into three groups: one received health-focused media literacy training, another received general media literacy training, and a third group had no training. Participants were then asked to evaluate or consider sharing a series of accurate and inaccurate news headlines. The study found that the health-focused media literacy training made people more skeptical of both accurate and inaccurate headlines. Meanwhile, the general media literacy guidelines had little effect on perceived accuracy of headlines but did significantly improve the quality of news people said they would share, on average. The findings suggest that more work is needed to improve media literacy programs, especially those focused on health news, to help people trust and recognize accurate information.
ABSTRACT
BACKGROUND: Social media is a popular source of information about food and nutrition. There is a high degree of inaccurate and poor-quality nutrition-related information present online. The aim of this study was to evaluate the quality and accuracy of nutrition-related information posted by popular Australian Instagram accounts and examine trends in quality and accuracy based on author, topic, post engagement, account verification and number of followers. METHODS: A sample of posts by Australian Instagram accounts with ≥ 100,000 followers who primarily posted about nutrition was collected between September 2020 and September 2021. Posts containing nutrition-related information were evaluated to determine the quality and accuracy of the information. Quality was assessed using the Principles for Health-Related Information on Social Media tool and accuracy was assessed against information contained in the Australian Dietary Guidelines, Practice-based Evidence in Nutrition database, Nutrient Reference Values and Metafact. RESULTS: A total of 676 posts were evaluated for quality and 510 posts for accuracy, originating from 47 Instagram accounts. Overall, 34.8% of posts were classified as being of poor quality, 59.2% mediocre, 6.1% good and no posts were of excellent quality. A total of 44.7% of posts contained inaccuracies. Posts authored by nutritionists or dietitians were associated with higher quality scores (β = 17.8, CI 13.94-21.65; P < 0.001) and higher accuracy scores (OR = 4.69, CI 1.81-12.14, P = 0.001) compared to brands and other accounts. Information about supplements was of lower accuracy (OR = 0.23, CI 0.10-0.51, P < 0.001) compared to information about weight loss and other nutrition topics. Engagement tended to be higher for posts of lower quality (β = -0.59, P = 0.012), as did engagement rate (β = -0.57, P = 0.016).
There was no relationship between followers or account verification and information quality or accuracy, and no relationship between engagement and accuracy. CONCLUSIONS: Nutrition-related information published by influential Australian Instagram accounts is often inaccurate and of suboptimal quality. Information about supplements and posts by brand accounts were of the lowest quality and accuracy, whereas information posted by nutritionists and dietitians was of a higher standard. Instagram users are at risk of being misinformed when engaging with Australian Instagram content for information about nutrition.
Subject(s)
Nutritional Status, Social Media, Humans, Australia, Nutrients, Databases, Factual, Dietary Supplements
ABSTRACT
Though scientific consensus that HIV causes AIDS was reached decades ago, denial of this conclusion remains. The popularity of such denial has waxed and waned over the years, ebbing as evidence supporting HIV causation mounted, building again as the internet facilitated connection between denial groups and the general public, and waning following media attention to the death of a prominent denier and her child and data showing the cost in human lives in South Africa. Decades removed from these phenomena, HIV denial is experiencing another resurgence, coupled to mounting distrust of public health, pharmaceutical companies, and mainstream medicine. This paper examines the history and current state of HIV denial in the context of the COVID pandemic and its consequences. An understanding of the effect of this phenomenon, and evidence-based ways to counter it, are lacking. Community-based interventions and motivational interviewing may serve to contain such misinformation in high-risk communities.