Results 1 - 20 of 45
1.
Nature ; 630(8015): 45-53, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38840013

ABSTRACT

The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization. In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.


Subject(s)
Communication , Disinformation , Internet , Humans , Algorithms , Motivation , Social Media
2.
Nature ; 595(7866): 181-188, 2021 07.
Article in English | MEDLINE | ID: mdl-34194044

ABSTRACT

Computational social science is more than just large repositories of digital data and the computational methods needed to construct and analyse them. It also represents a convergence of different fields with different ways of thinking about and doing science. The goal of this Perspective is to provide some clarity around how these approaches differ from one another and to propose how they might be productively integrated. Towards this end, we make two contributions. The first is a schema for thinking about research activities along two dimensions (the extent to which work is explanatory, focusing on identifying and estimating causal effects, and the degree of consideration given to testing predictions of outcomes) and how these two priorities can complement, rather than compete with, one another. Our second contribution is to advocate that computational social scientists devote more attention to combining prediction and explanation, which we call integrative modelling, and to outline some practical suggestions for realizing this goal.


Subject(s)
Computer Simulation , Data Science/methods , Forecasting/methods , Models, Theoretical , Social Sciences/methods , Goals , Humans
3.
Proc Natl Acad Sci U S A ; 121(4): e2309535121, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38227650

ABSTRACT

The notion of common sense is invoked so frequently in contexts as diverse as everyday conversation, political debates, and evaluations of artificial intelligence that its meaning might be surmised to be unproblematic. Surprisingly, however, neither the intrinsic properties of common sense knowledge (what makes a claim commonsensical) nor the degree to which it is shared by people (its "commonness") have been characterized empirically. In this paper, we introduce an analytical framework for quantifying both these elements of common sense. First, we define the commonsensicality of individual claims and people in terms of the latter's propensity to agree on the former and their awareness of one another's agreement. Second, we formalize the commonness of common sense as a clique detection problem on a bipartite belief graph of people and claims, defining pq common sense as the fraction q of claims shared by a fraction p of people. Evaluating our framework on a dataset of raters evaluating a diverse set of claims, we find that commonsensicality aligns most closely with plainly worded, fact-like statements about everyday physical reality. Psychometric attributes such as social perceptiveness influence individual common sense, but surprisingly demographic factors such as age or gender do not. Finally, we find that collective common sense is rare: at most, a small fraction p of people agree on more than a small fraction q of claims. Together, these results undercut universalistic beliefs about common sense and raise questions about its variability that are relevant both to human and artificial intelligence.
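
As a rough illustration of the pq formalism in this abstract, the sketch below estimates (p, q) pairs from a binary person-by-claim belief matrix by sampling subsets of raters. It is a Monte Carlo stand-in rather than the paper's exact biclique-detection procedure, and the function name, toy data, and parameters are invented for this example.

```python
import numpy as np

def pq_frontier(beliefs: np.ndarray, n_samples: int = 2000, seed: int = 0) -> list[tuple[float, float]]:
    """Estimate (p, q) pairs from a binary person-by-claim belief matrix.

    beliefs[i, j] = 1 if person i endorses claim j. For each randomly sampled
    subset containing a fraction p of people, q is the fraction of claims on
    which the whole subset agrees (all 1s or all 0s). This Monte Carlo pass is
    only a crude approximation of the exact biclique formulation.
    """
    rng = np.random.default_rng(seed)
    n_people = beliefs.shape[0]
    pairs = []
    for _ in range(n_samples):
        k = int(rng.integers(1, n_people + 1))                  # subset size, so p = k / n_people
        idx = rng.choice(n_people, size=k, replace=False)
        sub = beliefs[idx]
        unanimous = sub.min(axis=0) == sub.max(axis=0)          # same answer on a claim
        pairs.append((k / n_people, float(unanimous.mean())))   # (p, q)
    return pairs

# Toy data: 50 simulated raters answering 40 binary claims.
demo = (np.random.default_rng(1).random((50, 40)) > 0.3).astype(int)
print(sorted(pq_frontier(demo, n_samples=5)))
```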


Subject(s)
Artificial Intelligence , Knowledge , Humans , Psychometrics
4.
Proc Natl Acad Sci U S A ; 121(8): e2313377121, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38349876

ABSTRACT

In recent years, critics of online platforms have raised concerns about the ability of recommendation algorithms to amplify problematic content, with potentially radicalizing consequences. However, attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals (what a user would have viewed in the absence of algorithmic recommendations) and hence cannot disentangle the effects of the algorithm from a user's intentions. Here we propose a method that we call "counterfactual bots" to causally estimate the role of algorithmic recommendations in the consumption of highly partisan content on YouTube. By comparing bots that replicate real users' consumption patterns with "counterfactual" bots that follow rule-based trajectories, we show that, on average, relying exclusively on the YouTube recommender results in less partisan consumption, with the effect most pronounced for heavy partisan consumers. Following a similar method, we also show that if partisan consumers switch to moderate content, YouTube's sidebar recommender "forgets" their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content. Overall, our findings indicate that, at least since the algorithm changes that YouTube implemented in 2019, individual consumption patterns mostly reflect individual preferences, with algorithmic recommendations playing, if anything, a moderating role.
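
To make the counterfactual-bot logic concrete, here is a deliberately toy simulation: the partisan share in a user's observed viewing history (the "replica" bot) is compared with the share a bot would accumulate by always following the recommender. The recommender model, transition probabilities, and horizon are assumptions made only for illustration and are not drawn from the paper.

```python
import random

def toy_recommender(current_is_partisan: bool) -> bool:
    """Return whether the next recommended video is partisan (invented model)."""
    p = 0.3 if current_is_partisan else 0.05   # assumed transition probabilities
    return random.random() < p

def simulate_user(history: list[bool], horizon: int = 30) -> tuple[float, float]:
    """Partisan share for the replica bot vs. a recommendation-only bot."""
    replica_share = sum(history) / len(history)   # replays the observed choices
    state = history[0]                            # both bots start from the same first video
    cf_views = []
    for _ in range(horizon):
        state = toy_recommender(state)            # always follow the recommendation
        cf_views.append(state)
    return replica_share, sum(cf_views) / len(cf_views)

random.seed(0)
heavy_user = [True] * 25 + [False] * 5            # a heavy partisan consumer's history
print(simulate_user(heavy_user))                  # replica share vs. counterfactual share
```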

5.
Proc Natl Acad Sci U S A ; 118(15)2021 04 13.
Article in English | MEDLINE | ID: mdl-33837145

ABSTRACT

Since the 2016 US presidential election, the deliberate spread of misinformation online, and on social media in particular, has generated extraordinary concern, in large part because of its potential effects on public opinion, political polarization, and ultimately democratic decision making. Recently, however, a handful of papers have argued that both the prevalence and consumption of "fake news" per se are extremely low compared with other types of news and news-relevant content. Although neither prevalence nor consumption is a direct measure of influence, this work suggests that proper understanding of misinformation and its effects requires a much broader view of the problem, encompassing biased and misleading (but not necessarily factually incorrect) information that is routinely produced or amplified by mainstream news organizations. In this paper, we propose an ambitious collective research agenda to measure the origins, nature, and prevalence of misinformation, broadly construed, as well as its impact on democracy. We also sketch out some illustrative examples of completed, ongoing, or planned research projects that contribute to this agenda.


Subject(s)
Communication , Democracy , Mass Media/trends , Data Interpretation, Statistical , Deception , Humans , Mass Media/ethics
6.
Proc Natl Acad Sci U S A ; 118(36)2021 09 07.
Article in English | MEDLINE | ID: mdl-34479999

ABSTRACT

Complexity, defined in terms of the number of components and the nature of the interdependencies between them, is clearly a relevant feature of all tasks that groups perform. Yet the role that task complexity plays in determining group performance remains poorly understood, in part because no clear language exists to express complexity in a way that allows for straightforward comparisons across tasks. Here we avoid this analytical difficulty by identifying a class of tasks for which complexity can be varied systematically while keeping all other elements of the task unchanged. We then test the effects of task complexity in a preregistered two-phase experiment in which 1,200 individuals were evaluated on a series of tasks of varying complexity (phase 1) and then randomly assigned to solve similar tasks either in interacting groups or as independent individuals (phase 2). We find that interacting groups are as fast as the fastest individual and more efficient than the most efficient individual for complex tasks but not for simpler ones. Leveraging our highly granular digital data, we define and precisely measure group process losses and synergistic gains and show that the balance between the two switches signs at intermediate values of task complexity. Finally, we find that interacting groups generate more solutions more rapidly and explore the solution space more broadly than independent problem solvers, finding higher-quality solutions than all but the highest-scoring individuals.
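
One common way to operationalize the process-loss/synergy comparison described above is to benchmark the interacting group against its best member. The sketch below uses that convention, which is an assumption on our part rather than the paper's exact definition, with made-up scores.

```python
from statistics import mean

def group_gain_and_loss(group_score: float, member_scores: list[float]) -> dict:
    """Benchmark an interacting group against its members (assumed convention).

    - synergistic_gain: how much the group beats its best member
    - process_loss: how far the group falls short of its best member
    At most one of the two is positive for any given task.
    """
    best = max(member_scores)
    return {
        "best_member": best,
        "avg_member": mean(member_scores),
        "synergistic_gain": max(0.0, group_score - best),
        "process_loss": max(0.0, best - group_score),
    }

# Made-up scores: a simpler task (group trails its best member) versus a more
# complex one (group outperforms everyone), mirroring the reported sign switch.
print(group_gain_and_loss(0.72, [0.70, 0.68, 0.75]))
print(group_gain_and_loss(0.91, [0.70, 0.68, 0.85]))
```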


Subject(s)
Group Processes , Problem Solving/physiology , Adult , Female , Humans , Individuality , Male , Mass Gatherings , Task Performance and Analysis
7.
Proc Natl Acad Sci U S A ; 118(32)2021 08 10.
Article in English | MEDLINE | ID: mdl-34341121

ABSTRACT

Although it is under-studied relative to other social media platforms, YouTube is arguably the largest and most engaging online media consumption platform in the world. Recently, YouTube's scale has fueled concerns that YouTube users are being radicalized via a combination of biased recommendations and ostensibly apolitical "anti-woke" channels, both of which have been claimed to direct attention to radical political content. Here we test this hypothesis using a representative panel of more than 300,000 Americans and their individual-level browsing behavior, on and off YouTube, from January 2016 through December 2019. Using a labeled set of political news channels, we find that news consumption on YouTube is dominated by mainstream and largely centrist sources. Consumers of far-right content, while more engaged than average, represent a small and stable percentage of news consumers. However, consumption of "anti-woke" content, defined in terms of its opposition to progressive intellectual and political agendas, grew steadily in popularity and is correlated with consumption of far-right content off-platform. We find no evidence that engagement with far-right content is systematically caused by YouTube recommendations, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right. Rather, consumption of political content on YouTube appears to reflect individual preferences that extend across the web as a whole.


Subject(s)
Politics , Social Media , Humans , Social Media/statistics & numerical data , Video Recording
8.
Proc Natl Acad Sci U S A ; 118(52)2021 12 28.
Article in English | MEDLINE | ID: mdl-34937747

ABSTRACT

In a large-scale, preregistered experiment on informal political communication, we algorithmically matched participants, varying two dimensions: 1) the degree of incidental similarity on nonpolitical features; and 2) their stance agreement on a contentious political topic. Matched participants were first shown a computer-generated social media profile of their match highlighting all the shared nonpolitical features; then, they read a short, personal, but argumentative essay written by their match about the reduction of inequality via redistribution of wealth by the government. We show that support for redistribution increased and polarization decreased for participants with both mild and strong views, regardless of their political leaning. We further show that feeling close to the match is associated with an 86% increase in the probability of assimilation of political views. Our analysis also uncovers an asymmetry: interacting with someone with opposite views greatly reduced feelings of closeness, whereas interacting with someone with consistent views only moderately increased them. Our results extend previous work about the effects of incidental similarity and shared identity on affect into the domain of political opinion change, and they bear real-world implications for the (re)design of social media platforms. Because many people prefer to keep politics outside of their social networks, encouraging cross-cutting political communication based on nonpolitical commonalities is a potential solution for fostering consensus on potentially divisive and partisan topics.


Subject(s)
Attitude , Communication , Politics , Social Media , Humans , Social Environment , Surveys and Questionnaires
9.
Behav Brain Sci ; 47: e65, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38311457

ABSTRACT

Commentaries on the target article offer diverse perspectives on integrative experiment design. Our responses engage three themes: (1) disputes over our characterization of the problem, (2) skepticism toward our proposed solution, and (3) endorsement of the solution, with accompanying discussions of its implementation in existing work and its potential for other domains. Collectively, the commentaries enhance our confidence in the promise and viability of integrative experiment design, while highlighting important considerations about how it is used.


Subject(s)
Dissent and Disputes
10.
Proc Natl Acad Sci U S A ; 117(32): 18948-18950, 2020 08 11.
Article in English | MEDLINE | ID: mdl-32719133

ABSTRACT

We resolve a controversy over two competing hypotheses about why people object to randomized experiments: 1) People unsurprisingly object to experiments only when they object to a policy or treatment the experiment contains, or 2) people can paradoxically object to experiments even when they approve of implementing either condition for everyone. Using multiple measures of preference and test criteria in five preregistered within-subjects studies with 1,955 participants, we find that people often disapprove of experiments involving randomization despite approving of the policies or treatments to be tested.


Subject(s)
Randomized Controlled Trials as Topic/standards , Research/standards , Ethics, Research , Humans , Random Allocation , Randomized Controlled Trials as Topic/ethics
12.
Proc Natl Acad Sci U S A ; 116(22): 10723-10728, 2019 05 28.
Article in English | MEDLINE | ID: mdl-31072934

ABSTRACT

Randomized experiments have enormous potential to improve human welfare in many domains, including healthcare, education, finance, and public policy. However, such "A/B tests" are often criticized on ethical grounds even as similar, untested interventions are implemented without objection. We find robust evidence across 16 studies of 5,873 participants from three diverse populations spanning nine domains, from healthcare to autonomous vehicle design to poverty reduction, that people frequently rate A/B tests designed to establish the comparative effectiveness of two policies or treatments as inappropriate even when universally implementing either A or B, untested, is seen as appropriate. This "A/B effect" is as strong among those with higher educational attainment and science literacy and among relevant professionals. It persists even when there is no reason to prefer A to B and even when recipients are treated unequally and randomly in all conditions (A, B, and A/B). Several remaining explanations appear to contribute to the effect, but none dominates or fully accounts for it: a belief that consent is required to impose a policy on half of a population but not on the entire population; an aversion to controlled but not to uncontrolled experiments; and a proxy form of the illusion of knowledge, according to which randomized evaluations are unnecessary because experts already do or should know "what works". We conclude that rigorously evaluating policies or treatments via pragmatic randomized trials may provoke greater objection than simply implementing those same policies or treatments untested.


Subject(s)
Ethics, Research , Pragmatic Clinical Trials as Topic , Randomized Controlled Trials as Topic , Humans , Pragmatic Clinical Trials as Topic/ethics , Pragmatic Clinical Trials as Topic/legislation & jurisprudence , Randomized Controlled Trials as Topic/ethics , Randomized Controlled Trials as Topic/legislation & jurisprudence , Treatment Outcome
13.
Behav Brain Sci ; : 1-55, 2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36539303

ABSTRACT

The dominant paradigm of experiments in the social and behavioral sciences views an experiment as a test of a theory, where the theory is assumed to generalize beyond the experiment's specific conditions. According to this view, which Alan Newell once characterized as "playing twenty questions with nature," theory is advanced one experiment at a time, and the integration of disparate findings is assumed to happen via the scientific publishing process. In this article, we argue that the process of integration is at best inefficient, and at worst it does not, in fact, occur. We further show that the challenge of integration cannot be adequately addressed by recently proposed reforms that focus on the reliability and replicability of individual findings, nor simply by conducting more or larger experiments. Rather, the problem arises from the imprecise nature of social and behavioral theories and, consequently, a lack of commensurability across experiments conducted under different conditions. Therefore, researchers must fundamentally rethink how they design experiments and how the experiments relate to theory. We specifically describe an alternative framework, integrative experiment design, which intrinsically promotes commensurability and continuous integration of knowledge. In this paradigm, researchers explicitly map the design space of possible experiments associated with a given research question, embracing many potentially relevant theories rather than focusing on just one. The researchers then iteratively generate theories and test them with experiments explicitly sampled from the design space, allowing results to be integrated across experiments. Given recent methodological and technological developments, we conclude that this approach is feasible and would generate more reliable, more cumulative empirical and theoretical knowledge than the current paradigm, and with far greater efficiency.
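
As a minimal illustration of what mapping and sampling a design space can look like in practice, the snippet below enumerates a hypothetical space of experimental factors and draws a batch of conditions to run. The factors and levels are invented for this sketch and are not those of any particular study.

```python
import itertools
import random

# Hypothetical factors and levels; a real design space would be built from the
# theories relevant to the research question at hand.
design_space = list(itertools.product(
    ["low", "medium", "high"],      # task complexity
    [2, 4, 8, 16],                  # group size
    ["none", "partial", "full"],    # communication structure
    [0.0, 0.5, 1.0],                # incentive strength
))

random.seed(42)
batch = random.sample(design_space, k=12)   # conditions sampled for the next wave
for condition in batch:
    print(condition)
```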

14.
Behav Res Methods ; 53(5): 2158-2171, 2021 10.
Article in English | MEDLINE | ID: mdl-33782900

ABSTRACT

Virtual labs allow researchers to design high-throughput and macro-level experiments that are not feasible in traditional in-person physical lab settings. Despite the increasing popularity of online research, researchers still face many technical and logistical barriers when designing and deploying virtual lab experiments. While several platforms exist to facilitate the development of virtual lab experiments, they typically present researchers with a stark trade-off between usability and functionality. We introduce Empirica: a modular virtual lab that offers a solution to the usability-functionality trade-off by employing a "flexible defaults" design strategy. This strategy enables us to maintain complete "build anything" flexibility while offering a development platform that is accessible to novice programmers. Empirica's architecture is designed to allow for parameterizable experimental designs, reusable protocols, and rapid development. These features will increase the accessibility of virtual lab experiments, remove barriers to innovation in experiment design, and enable rapid progress in the understanding of human behavior.


Subject(s)
Research Design , Research Personnel , Humans
16.
Proc Natl Acad Sci U S A ; 109(3): 764-9, 2012 Jan 17.
Article in English | MEDLINE | ID: mdl-22184216

ABSTRACT

Complex problems in science, business, and engineering typically require some tradeoff between exploitation of known solutions and exploration for novel ones, where, in many cases, information about known solutions can also disseminate among individual problem solvers through formal or informal networks. Prior research on complex problem solving by collectives has found the counterintuitive result that inefficient networks, meaning networks that disseminate information relatively slowly, can perform better than efficient networks for problems that require extended exploration. In this paper, we report on a series of 256 Web-based experiments in which groups of 16 individuals collectively solved a complex problem and shared information through different communication networks. As expected, we found that collective exploration improved average success over independent exploration because good solutions could diffuse through the network. In contrast to prior work, however, we found that efficient networks outperformed inefficient networks, even in a problem space with qualitative properties thought to favor inefficient networks. We explain this result in terms of individual-level explore-exploit decisions, which we find were influenced by the network structure as well as by strategic considerations and the relative payoff between maxima. We conclude by discussing implications for real-world problem solving and possible extensions.
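
The snippet below is a heavily simplified simulation of the explore-exploit dynamic on efficient versus inefficient communication networks: agents either imitate a better-performing neighbor or perturb their own solution on a toy rugged landscape. The task, payoff function, and parameters are assumptions made for illustration and do not reproduce the experiment.

```python
import math
import random

def payoff(x: int) -> float:
    """Toy rugged landscape on 0..99: several local peaks plus a gentle slope."""
    return math.sin(x / 4.0) + 0.5 * math.sin(x / 1.3) + x / 200.0

def neighbors(i: int, n: int, efficient: bool) -> list[int]:
    if efficient:                        # complete graph: solutions spread quickly
        return [j for j in range(n) if j != i]
    return [(i - 1) % n, (i + 1) % n]    # ring: solutions spread slowly

def run(efficient: bool, n: int = 16, rounds: int = 50, seed: int = 0) -> float:
    rng = random.Random(seed)
    xs = [rng.randrange(100) for _ in range(n)]     # each agent's current solution
    for _ in range(rounds):
        for i in range(n):
            best_nbr = max(neighbors(i, n, efficient), key=lambda j: payoff(xs[j]))
            if payoff(xs[best_nbr]) > payoff(xs[i]) and rng.random() < 0.5:
                xs[i] = xs[best_nbr]                # exploit: copy a better neighbor
            else:                                   # explore: perturb own solution
                xs[i] = max(0, min(99, xs[i] + rng.choice([-2, -1, 1, 2])))
    return max(payoff(x) for x in xs)

print("efficient network:", round(run(True), 3), "| inefficient network:", round(run(False), 3))
```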


Subject(s)
Cooperative Behavior , Learning , Social Support , Decision Making , Female , Humans , Imitative Behavior , Male , Time Factors
17.
Proc Natl Acad Sci U S A ; 109(36): 14363-8, 2012 Sep 04.
Article in English | MEDLINE | ID: mdl-22904193

ABSTRACT

The natural tendency for humans to make and break relationships is thought to facilitate the emergence of cooperation. In particular, allowing conditional cooperators to choose with whom they interact is believed to reinforce the rewards accruing to mutual cooperation while simultaneously excluding defectors. Here we report on a series of human subjects experiments in which groups of 24 participants played an iterated prisoner's dilemma game where, critically, they were also allowed to propose and delete links to players of their own choosing at some variable rate. Over a wide variety of parameter settings and initial conditions, we found that dynamic partner updating significantly increased the level of cooperation, the average payoffs to players, and the assortativity between cooperators. Even relatively slow update rates were sufficient to produce large effects, while subsequent increases to the update rate had progressively smaller, but still positive, effects. For standard prisoner's dilemma payoffs, we also found that assortativity resulted predominantly from cooperators avoiding defectors, not from cooperators severing ties with defecting partners, and that cooperation correspondingly suffered. Finally, by modifying the payoffs to satisfy two novel conditions, we found that cooperators did punish defectors by severing ties, leading to higher levels of cooperation that persisted for longer.
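
As one way to make the assortativity finding concrete, the sketch below computes a simple measure (assumed here, not necessarily the paper's): the average share of a cooperator's partners who also cooperate, relative to the overall cooperation rate under random mixing.

```python
def cooperator_assortativity(edges: list[tuple[int, int]], coop: set[int], n: int) -> float:
    """Positive values mean cooperators' partners cooperate more than chance."""
    base_rate = len(coop) / n
    shares = []
    for i in coop:
        partners = [b for a, b in edges if a == i] + [a for a, b in edges if b == i]
        if partners:
            shares.append(sum(p in coop for p in partners) / len(partners))
    return sum(shares) / len(shares) - base_rate

# Toy partner network: players 0-2 cooperate, 3-5 defect.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (0, 3)]
print(cooperator_assortativity(edges, coop={0, 1, 2}, n=6))
```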


Subject(s)
Choice Behavior , Cooperative Behavior , Game Theory , Interpersonal Relations , Models, Psychological , Humans , Reward
18.
Nat Comput Sci ; 4(6): 398-411, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38898315

ABSTRACT

Large-scale GPS location datasets hold immense potential for measuring human mobility and interpersonal contact, both of which are essential for data-driven epidemiology. However, despite their potential and widespread adoption during the COVID-19 pandemic, there are several challenges with these data that raise concerns regarding the validity and robustness of their applications. Here we outline two types of challenges (some related to accessing and processing these data, and some related to data quality) and propose several research directions to address them moving forward.


Subject(s)
COVID-19 , Geographic Information Systems , SARS-CoV-2 , Humans , COVID-19/epidemiology , Pandemics
19.
Top Cogn Sci ; 16(2): 302-321, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37925669

ABSTRACT

As organizations gravitate to group-based structures, the problem of improving performance through judicious selection of group members has preoccupied scientists and managers alike. However, which individual attributes best predict group performance remains poorly understood. Here, we describe a preregistered experiment in which we simultaneously manipulated four widely studied attributes of group composition: skill level, skill diversity, social perceptiveness, and cognitive style diversity. We find that while the average skill level of group members, skill diversity, and social perceptiveness are significant predictors of group performance, skill level dominates all other factors combined. Additionally, we explore the relationship between patterns of collaborative behavior and performance outcomes and find that any potential gains in solution quality from additional communication between group members are outweighed by the overhead time cost, leading to lower overall efficiency. However, groups exhibiting more "turn-taking" behavior are considerably faster and thus more efficient. Finally, contrary to our expectation, we find that group compositional factors (i.e., skill level and social perceptiveness) are associated neither with the amount of communication between group members nor with turn-taking dynamics.


Subject(s)
Communication , Social Perception , Humans , Thinking
20.
Science ; 384(6699): eadk3451, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38815040

ABSTRACT

Low uptake of the COVID-19 vaccine in the US has been widely attributed to social media misinformation. To evaluate this claim, we introduce a framework combining lab experiments (total N = 18,725), crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination intentions of US Facebook users (N ≈ 233 million). We estimate that the impact of unflagged content that nonetheless encouraged vaccine skepticism was 46-fold greater than that of misinformation flagged by fact-checkers. Although misinformation reduced predicted vaccination intentions significantly more than unflagged vaccine content when viewed, Facebook users' exposure to flagged content was limited. In contrast, unflagged stories highlighting rare deaths after vaccination were among Facebook's most-viewed stories. Our work emphasizes the need to scrutinize factually accurate but potentially misleading content in addition to outright falsehoods.
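
The headline comparison rests on a simple decomposition: a class of content's aggregate impact is roughly its per-view persuasive effect multiplied by how many views it receives. The sketch below illustrates that arithmetic with placeholder numbers, not the paper's estimates.

```python
def total_impact(per_view_effect: float, views: float) -> float:
    """Aggregate shift in vaccination intentions attributable to a class of content."""
    return per_view_effect * views

# Placeholder numbers: flagged misinformation is more persuasive per view but
# rarely seen; unflagged skeptical content is weaker per view but viewed orders
# of magnitude more often.
flagged = total_impact(per_view_effect=-2.0, views=1_000_000)
unflagged = total_impact(per_view_effect=-0.1, views=1_000_000_000)
print(unflagged / flagged)   # exposure dominates (50x here; the paper reports ~46-fold)
```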


Subject(s)
COVID-19 Vaccines , Communication , Social Media , Vaccination Hesitancy , Humans , COVID-19/prevention & control , COVID-19 Vaccines/immunology , Crowdsourcing , Intention , Machine Learning , United States , Vaccination/psychology , Vaccination Hesitancy/psychology