Results 1 - 20 of 28
1.
PLoS One ; 19(5): e0302201, 2024.
Article in English | MEDLINE | ID: mdl-38776260

ABSTRACT

The world's digital information ecosystem continues to struggle with the spread of misinformation. Prior work has suggested that users who consistently disseminate a disproportionate amount of low-credibility content (so-called superspreaders) are at the center of this problem. We quantitatively confirm this hypothesis and introduce simple metrics to predict the top superspreaders several months into the future. We then conduct a qualitative review to characterize the most prolific superspreaders and analyze their sharing behaviors. Superspreaders include pundits with large followings, low-credibility media outlets, personal accounts affiliated with those media outlets, and a range of influencers. They are primarily political in nature and use more toxic language than the typical user sharing misinformation. We also find concerning evidence that suggests Twitter may be overlooking prominent superspreaders. We hope this work will further public understanding of bad actors and promote steps to mitigate their negative impacts on healthy digital discourse.


Subjects
Information Dissemination, Social Media, Humans, Information Dissemination/methods, Communication
2.
Sci Rep ; 13(1): 20707, 2023 11 24.
Article in English | MEDLINE | ID: mdl-38001150

ABSTRACT

Automated accounts on social media that impersonate real users, often called "social bots," have received a great deal of attention from academia and the public. Here we present experiments designed to investigate public perceptions and policy preferences about social bots, in particular how they are affected by exposure to bots. We find that before exposure, participants have some biases: they tend to overestimate the prevalence of bots and see others as more vulnerable to bot influence than themselves. These biases are amplified after bot exposure. Furthermore, exposure tends to impair judgment of bot-recognition self-efficacy and increase propensity toward stricter bot-regulation policies among participants. Decreased self-efficacy and increased perceptions of bot influence on others are significantly associated with these policy preference changes. We discuss the relationship between perceptions about social bots and growing dissatisfaction with the polluted social media environment.


Subjects
Social Media, Software, Humans, Policies, Bias, Prevalence
3.
JMIR Infodemiology ; 3: e44207, 2023.
Article in English | MEDLINE | ID: mdl-37012998

ABSTRACT

Background: An infodemic is excess information, including false or misleading information, that spreads in digital and physical environments during a public health emergency. The COVID-19 pandemic has been accompanied by an unprecedented global infodemic that has led to confusion about the benefits of medical and public health interventions, with substantial impact on risk-taking and health-seeking behaviors, eroding trust in health authorities and compromising the effectiveness of public health responses and policies. Standardized measures are needed to quantify the harmful impacts of the infodemic in a systematic and methodologically robust manner, and to harmonize the highly divergent approaches currently explored for this purpose. This can serve as a foundation for a systematic, evidence-based approach to monitoring, identifying, and mitigating future infodemic harms in emergency preparedness and prevention. Objective: In this paper, we summarize the Fifth World Health Organization (WHO) Infodemic Management Conference structure, proceedings, outcomes, and proposed actions seeking to identify the interdisciplinary approaches and frameworks needed to enable the measurement of the burden of infodemics. Methods: An iterative human-centered design (HCD) approach and concept mapping were used to facilitate focused discussions and allow for the generation of actionable outcomes and recommendations. The discussions included 86 participants representing diverse scientific disciplines and health authorities from 28 countries across all WHO regions, along with observers from civil society and global public health-implementing partners. A thematic map capturing the concepts matching the key contributing factors to the public health burden of infodemics was used throughout the conference to frame and contextualize discussions. Five key areas for immediate action were identified.
Results: The 5 key areas for the development of metrics to assess the burden of infodemics and associated interventions included (1) developing standardized definitions and ensuring the adoption thereof; (2) improving the map of concepts influencing the burden of infodemics; (3) conducting a review of evidence, tools, and data sources; (4) setting up a technical working group; and (5) addressing immediate priorities for postpandemic recovery and resilience building. The summary report consolidated group input toward a common vocabulary with standardized terms, concepts, study designs, measures, and tools to estimate the burden of infodemics and the effectiveness of infodemic management interventions. Conclusions: Standardizing measurement is the basis for documenting the burden of infodemics on health systems and population health during emergencies. Investment is needed in the development of practical, affordable, evidence-based, and systematic methods that are legally and ethically balanced for monitoring infodemics; generating diagnostics, infodemic insights, and recommendations; and developing interventions, action-oriented guidance, policies, support options, mechanisms, and tools for infodemic managers and emergency program managers.

4.
J Med Internet Res ; 25: e42227, 2023 02 24.
Article in English | MEDLINE | ID: mdl-36735835

ABSTRACT

BACKGROUND: Vaccinations play a critical role in mitigating the impact of COVID-19 and other diseases. Past research has linked misinformation to increased hesitancy and lower vaccination rates. Gaps remain in our knowledge about the main drivers of vaccine misinformation on social media and effective ways to intervene. OBJECTIVE: Our longitudinal study had two primary objectives: (1) to investigate the patterns of prevalence and contagion of COVID-19 vaccine misinformation on Twitter in 2021, and (2) to identify the main spreaders of vaccine misinformation. Given our initial results, we further considered the likely drivers of misinformation and its spread, providing insights for potential interventions. METHODS: We collected almost 300 million English-language tweets related to COVID-19 vaccines using a list of over 80 relevant keywords over a period of 12 months. We then extracted and labeled news articles at the source level based on third-party lists of low-credibility and mainstream news sources, and measured the prevalence of different kinds of information. We also considered suspicious YouTube videos shared on Twitter. We focused our analysis of vaccine misinformation spreaders on verified and automated Twitter accounts. RESULTS: Our findings showed a relatively low prevalence of low-credibility information compared to the entirety of mainstream news. However, the most popular low-credibility sources had reshare volumes comparable to those of many mainstream sources, and had larger volumes than those of authoritative sources such as the US Centers for Disease Control and Prevention and the World Health Organization. Throughout the year, we observed an increasing trend in the prevalence of low-credibility news about vaccines. We also observed a considerable amount of suspicious YouTube videos shared on Twitter. 
Tweets by a small group of approximately 800 Twitter-verified "superspreaders" accounted for approximately 35% of all reshares of misinformation on an average day, with the top superspreader (@RobertKennedyJr) responsible for over 13% of retweets. Finally, low-credibility news and suspicious YouTube videos were more likely to be shared by automated accounts. CONCLUSIONS: The wide spread of misinformation around COVID-19 vaccines on Twitter during 2021 shows that there was an audience for this type of content. Our findings are also consistent with the hypothesis that superspreaders are driven by financial incentives that allow them to profit from health misinformation. Despite high-profile cases of deplatformed misinformation superspreaders, our results show that in 2021, a few individuals still played an outsized role in the spread of low-credibility vaccine content. As a result, social media moderation efforts would be better served by focusing on reducing the online visibility of repeat spreaders of harmful content, especially during public health crises.


Subjects
COVID-19, Social Media, Vaccines, Humans, COVID-19 Vaccines, Longitudinal Studies, Communication
5.
J Comput Soc Sci ; 5(2): 1511-1528, 2022.
Article in English | MEDLINE | ID: mdl-36035522

ABSTRACT

Social bots have become an important component of online social media. Deceptive bots, in particular, can manipulate online discussions of important issues ranging from elections to public health, threatening the constructive exchange of information. Their ubiquity makes them an interesting research subject and requires researchers to properly handle them when conducting studies using social media data. Therefore, it is important for researchers to gain access to bot detection tools that are reliable and easy to use. This paper aims to provide an introductory tutorial of Botometer, a public tool for bot detection on Twitter, for readers who are new to this topic and may not be familiar with programming and machine learning. We introduce how Botometer works, the different ways users can access it, and present a case study as a demonstration. Readers can use the case study code as a template for their own research. We also discuss recommended practice for using Botometer.

6.
PeerJ Comput Sci ; 8: e1025, 2022.
Article in English | MEDLINE | ID: mdl-35875635

ABSTRACT

Online social media are key platforms for the public to discuss political issues. As a result, researchers have used data from these platforms to analyze public opinions and forecast election results. The literature has shown that due to inauthentic actors such as malicious social bots and trolls, not every message is a genuine expression from a legitimate user. However, the prevalence of inauthentic activities in social data streams is still unclear, making it difficult to gauge biases of analyses based on such data. In this article, we aim to close this gap using Twitter data from the 2018 U.S. midterm elections. We propose an efficient and low-cost method to identify voters on Twitter and systematically compare their behaviors with different random samples of accounts. We find that some accounts flood the public data stream with political content, drowning the voice of the majority of voters. As a result, these hyperactive accounts are over-represented in volume samples. Hyperactive accounts are more likely to exhibit various suspicious behaviors and to share low-credibility information compared to likely voters. Our work provides insights into biased voter characterizations when using social media data to analyze political issues.

7.
Sci Rep ; 12(1): 5966, 2022 04 26.
Article in English | MEDLINE | ID: mdl-35474313

ABSTRACT

Widespread uptake of vaccines is necessary to achieve herd immunity. However, uptake rates have varied across U.S. states during the first six months of the COVID-19 vaccination program. Misbeliefs may play an important role in vaccine hesitancy, and there is a need to understand relationships between misinformation, beliefs, behaviors, and health outcomes. Here we investigate the extent to which COVID-19 vaccination rates and vaccine hesitancy are associated with levels of online misinformation about vaccines. We also look for evidence of directionality from online misinformation to vaccine hesitancy. We find a negative relationship between misinformation and vaccination uptake rates. Online misinformation is also correlated with vaccine hesitancy rates taken from survey data. Associations between vaccine outcomes and misinformation remain significant when accounting for political as well as demographic and socioeconomic factors. While vaccine hesitancy is strongly associated with Republican vote share, we observe that the effect of online misinformation on hesitancy is strongest across Democratic rather than Republican counties. Granger causality analysis shows evidence for a directional relationship from online misinformation to vaccine hesitancy. Our results support a need for interventions that address misbeliefs, allowing individuals to make better-informed health decisions.
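The Granger-causality analysis mentioned above can be sketched in its simplest lag-1 form. This is a minimal illustration on synthetic data, not the paper's pipeline: the series, coefficients, and single lag are assumptions, and a real analysis would test multiple lags against an F-distribution threshold.

```python
import random

def ols_rss(X, y):
    """Residual sum of squares from an OLS fit via the normal equations,
    solved with Gaussian elimination (fine for a handful of predictors)."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    rhs = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (rhs[i] - sum(A[i][c] * beta[c] for c in range(i + 1, k))) / A[i][i]
    return sum((yi - sum(b * xi for b, xi in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

def granger_f(x, y):
    """F-statistic: does lagged x improve a lag-1 autoregressive model of y?"""
    obs = len(y) - 1
    restricted = [[1.0, y[t - 1]] for t in range(1, len(y))]
    unrestricted = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    target = y[1:]
    rss_r = ols_rss(restricted, target)
    rss_u = ols_rss(unrestricted, target)
    return (rss_r - rss_u) / (rss_u / (obs - 3))

# Synthetic example: x (a misinformation-like signal) drives y with a one-step delay.
random.seed(42)
x = [random.gauss(0, 1) for _ in range(300)]
y = [0.0]
for t in range(1, 300):
    y.append(0.4 * y[t - 1] + 0.7 * x[t - 1] + random.gauss(0, 0.3))

print(granger_f(x, y) > 10)  # True: lagged x clearly improves the fit
```

In practice one would use an established implementation (e.g., a statistics library's Granger test) rather than hand-rolled OLS; the sketch only shows the restricted-versus-unrestricted comparison at the heart of the test.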


Subjects
COVID-19, Vaccines, COVID-19/prevention & control, COVID-19 Vaccines, Communication, Humans, Patient Acceptance of Health Care, Vaccination, Vaccination Hesitancy
8.
Nat Hum Behav ; 6(4): 495-505, 2022 04.
Article in English | MEDLINE | ID: mdl-35115677

ABSTRACT

Newsfeed algorithms frequently amplify misinformation and other low-quality content. How can social media platforms more effectively promote reliable information? Existing approaches are difficult to scale and vulnerable to manipulation. In this paper, we propose using the political diversity of a website's audience as a quality signal. Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 US residents, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards. We then incorporate audience diversity into a standard collaborative filtering framework and show that our improved algorithm increases the trustworthiness of websites suggested to users (especially those who most frequently consume misinformation) while keeping recommendations relevant. These findings suggest that partisan audience diversity is a valuable signal of higher journalistic standards that should be incorporated into algorithmic ranking decisions.


Subjects
Social Media, Communication, Humans, Reproducibility of Results
10.
Nat Commun ; 12(1): 5580, 2021 09 22.
Article in English | MEDLINE | ID: mdl-34552073

ABSTRACT

Social media platforms attempting to curb abuse and misinformation have been accused of political bias. We deploy neutral social bots that start following different news sources on Twitter, and track them to probe distinct biases emerging from platform mechanisms versus user interactions. We find no strong or consistent evidence of political bias in the news feed. Despite this, the news and information to which U.S. Twitter users are exposed depend strongly on the political leaning of their early connections. The interactions of conservative accounts are skewed toward the right, whereas liberal accounts are exposed to moderate content shifting their experience toward the political center. Partisan accounts, especially conservative ones, tend to receive more followers and follow more automated accounts. Conservative accounts also find themselves in denser communities and are exposed to more low-credibility content.


Subjects
Politics, Social Media, Bias, Communication, Humans, Robotics, Social Networking, United States
11.
Nat Commun ; 9(1): 4787, 2018 11 20.
Article in English | MEDLINE | ID: mdl-30459415

ABSTRACT

The massive spread of digital misinformation has been identified as a major threat to democracies. Communication, cognitive, social, and computer scientists are studying the complex causes of the viral diffusion of misinformation, while online platforms are beginning to deploy countermeasures. Little systematic, data-based evidence has been published to guide these efforts. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during ten months in 2016 and 2017. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, resharing content posted by bots. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.


Subjects
Communication, Social Media/statistics & numerical data, Social Media/standards, Social Networking, Data Collection/methods, Data Collection/statistics & numerical data, Humans, Information Dissemination/methods
12.
Sci Rep ; 8(1): 15951, 2018 10 29.
Article in English | MEDLINE | ID: mdl-30374134

ABSTRACT

Algorithms that favor popular items are used to help us select among many choices, from top-ranked search engine results to highly-cited scientific papers. The goal of these algorithms is to identify high-quality items such as reliable news, credible information sources, and important discoveries; in short, high-quality content should rank at the top. Prior work has shown that choosing what is popular may amplify random fluctuations and lead to sub-optimal rankings. Nonetheless, it is often assumed that recommending what is popular will help high-quality content "bubble up" in practice. Here we identify the conditions in which popularity may be a viable proxy for quality content by studying a simple model of a cultural market endowed with an intrinsic notion of quality. A parameter representing the cognitive cost of exploration controls the trade-off between quality and popularity. Below and above a critical exploration cost, popularity bias is more likely to hinder quality. But we find a narrow intermediate regime of user attention where an optimal balance exists: choosing what is popular can help promote high-quality items to the top. These findings clarify the effects of algorithmic popularity bias on quality outcomes, and may inform the design of more principled mechanisms for techno-social cultural markets.


Subjects
Algorithms, Quality Control, Social Media
13.
PLoS One ; 13(4): e0196087, 2018.
Article in English | MEDLINE | ID: mdl-29702657

ABSTRACT

Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How can the overall amount of misinformation be reduced? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy captures public tweets that include links to articles from low-credibility and fact-checking sources. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network.
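The k-core decomposition used in this study can be sketched with a small, self-contained peeling routine; the toy retweet network below is made up for illustration, not data from the paper.

```python
from collections import defaultdict

def k_core(edges, k):
    """Return the node set of the k-core of an undirected graph: the maximal
    subgraph in which every node has degree >= k. Nodes with degree < k are
    peeled off repeatedly until no more can be removed."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if len(adj[node]) < k:
                for nbr in adj.pop(node):   # remove node and its incident edges
                    adj[nbr].discard(node)
                changed = True
    return set(adj)

# Toy retweet network: a dense core {a, b, c, d} with a sparse periphery {e, f}.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),
         ("b", "d"), ("c", "d"), ("d", "e"), ("e", "f")]
print(sorted(k_core(edges, 3)))  # ['a', 'b', 'c', 'd']
```

Moving "from the periphery to the core" corresponds to increasing k: peripheral accounts such as e and f drop out first, leaving the densely connected core.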


Subjects
Communication, Social Media, Artificial Intelligence, Humans, Information Dissemination, Politics, United States
16.
PLoS One ; 10(6): e0128193, 2015.
Article in English | MEDLINE | ID: mdl-26083336

ABSTRACT

Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
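The shortest-path idea can be sketched as follows. The tiny knowledge graph and the logarithmic degree penalty below are illustrative assumptions, not the paper's exact proximity metric: the key point is that paths through very generic, high-degree concepts carry less evidential weight.

```python
import heapq
import math
from collections import defaultdict

def proximity(graph, source, target):
    """Semantic proximity between two concept nodes: cheapest path by Dijkstra,
    where crossing a high-degree (very generic) node costs more. Returns a
    score in (0, 1]; higher means more support for the statement."""
    cost = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        c, node = heapq.heappop(heap)
        if node == target:
            return 1.0 / (1.0 + c)
        if c > cost.get(node, float("inf")):
            continue
        for nbr in graph[node]:
            # illustrative penalty: grows with the generality (degree) of nbr
            nc = c + math.log(1 + len(graph[nbr]))
            if nc < cost.get(nbr, float("inf")):
                cost[nbr] = nc
                heapq.heappush(heap, (nc, nbr))
    return 0.0  # no connecting path: no support

# Toy concept graph (invented triples, in the spirit of a Wikipedia-derived graph).
graph = defaultdict(set)
for a, b in [("Rome", "Italy"), ("Italy", "Europe"),
             ("Paris", "France"), ("France", "Europe"), ("Rome", "Tiber")]:
    graph[a].add(b)
    graph[b].add(a)

# A true statement ("Rome is in Italy") links nearby concepts; a false one
# ("Rome is in France") must detour through the generic "Europe" node.
print(proximity(graph, "Rome", "Italy") > proximity(graph, "Rome", "France"))  # True
```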


Subjects
Algorithms, Knowledge, Area Under Curve, Humans, ROC Curve
17.
Sci Rep ; 5: 9452, 2015 May 19.
Article in English | MEDLINE | ID: mdl-25989177

ABSTRACT

Online traces of human activity offer novel opportunities to study the dynamics of complex knowledge exchange networks, in particular how emergent patterns of collective attention determine what new information is generated and consumed. Can we measure the relationship between demand and supply for new information about a topic? We propose a normalization method to compare attention bursts statistics across topics with heterogeneous distribution of attention. Through analysis of a massive dataset on traffic to Wikipedia, we find that the production of new knowledge is associated with significant shifts of collective attention, which we take as proxy for its demand. This is consistent with a scenario in which allocation of attention toward a topic stimulates the demand for information about it, and in turn the supply of further novel information. However, attention spikes only for a limited time span, during which new content has higher chances of receiving traffic, compared to content created later or earlier on. Our attempt to quantify demand and supply of information, and our finding about their temporal ordering, may lead to the development of the fundamental laws of the attention economy, and to a better understanding of the social exchange of knowledge in information networks.

18.
PLoS One ; 10(2): e0118410, 2015.
Article in English | MEDLINE | ID: mdl-25710685

ABSTRACT

We have a limited understanding of the factors that make people influential and topics popular in social media. Are users who comment on a variety of matters more likely to achieve high influence than those who stay focused? Do general subjects tend to be more popular than specific ones? Questions like these demand a way to detect the topics hidden behind messages associated with an individual or a keyword, and a gauge of similarity among these topics. Here we develop such an approach to identify clusters of similar hashtags in Twitter by detecting communities in the hashtag co-occurrence network. Then the topical diversity of a user's interests is quantified by the entropy of her hashtags across different topic clusters. A similar measure is applied to hashtags, based on co-occurring tags. We find that high topical diversity of early adopters or co-occurring tags implies high future popularity of hashtags. In contrast, low diversity helps an individual accumulate social influence. In short, diverse messages and focused messengers are more likely to gain impact.
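The entropy-based diversity measure can be sketched directly; the hashtag-to-cluster assignments below are invented for illustration, standing in for communities detected in a real co-occurrence network.

```python
import math
from collections import Counter

def topical_diversity(hashtag_clusters):
    """Shannon entropy (bits) of a user's hashtags across topic clusters.
    Input maps each hashtag to its cluster; 0 means perfectly focused."""
    counts = Counter(hashtag_clusters.values())
    total = sum(counts.values())
    return sum(-(n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical cluster assignments for two users' hashtags.
focused = {"#nba": "sports", "#nfl": "sports", "#mlb": "sports"}
diverse = {"#nba": "sports", "#oscars": "film", "#election": "politics"}
print(topical_diversity(focused))  # 0.0 (all tags in one cluster)
print(topical_diversity(diverse))  # log2(3) ~ 1.585 (spread over three clusters)
```

Under the abstract's finding, the "focused" user (entropy 0) would be the better candidate for accumulating influence, while a hashtag whose early adopters look like the "diverse" user would be predicted to become more popular.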


Subjects
Social Media, Humans, Internet, Social Change, Social Environment
19.
Sci Rep ; 3: 2522, 2013.
Article in English | MEDLINE | ID: mdl-23982106

ABSTRACT

How does network structure affect diffusion? Recent studies suggest that the answer depends on the type of contagion. Complex contagions, unlike infectious diseases (simple contagions), are affected by social reinforcement and homophily. Hence, the spread within highly clustered communities is enhanced, while diffusion across communities is hampered. A common hypothesis is that memes and behaviors are complex contagions. We show that, while most memes indeed spread like complex contagions, a few viral memes spread across many communities, like diseases. We demonstrate that the future popularity of a meme can be predicted by quantifying its early spreading pattern in terms of community concentration. The more communities a meme permeates, the more viral it is. We present a practical method to translate data about community structure into predictive knowledge about what information will spread widely. This connection contributes to our understanding in computational social science, social media analytics, and marketing applications.


Subjects
Information Dissemination, Interpersonal Relations, Models, Theoretical, Social Behavior, Social Networking, Social Support, Computer Simulation, Humans
20.
PLoS One ; 8(5): e64679, 2013.
Article in English | MEDLINE | ID: mdl-23734215

ABSTRACT

We examine the temporal evolution of digital communication activity relating to the American anti-capitalist movement Occupy Wall Street. Using a high-volume sample from the microblogging site Twitter, we investigate changes in Occupy participant engagement, interests, and social connectivity over a fifteen-month period starting three months prior to the movement's first protest action. The results of this analysis indicate that, on Twitter, the Occupy movement tended to elicit participation from a set of highly interconnected users with pre-existing interests in domestic politics and foreign social movements. These users, while highly vocal in the months immediately following the birth of the movement, appear to have lost interest in Occupy-related communication over the remainder of the study period.


Subjects
Communication, Dissent and Disputes, Internet, Social Class, Biological Evolution, Economics/statistics & numerical data, Humans, Politics, Social Environment, Time Factors