ABSTRACT
The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization. In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.
Subject(s)
Communication, Disinformation, Internet, Humans, Algorithms, Motivation, Social Media

ABSTRACT
Many critics raise concerns about the prevalence of 'echo chambers' on social media and their potential role in increasing political polarization. However, the lack of available data and the challenges of conducting large-scale field experiments have made it difficult to assess the scope of the problem [1,2]. Here we present data from 2020 for the entire population of active adult Facebook users in the USA showing that content from 'like-minded' sources constitutes the majority of what people see on the platform, although political information and news represent only a small fraction of these exposures. To evaluate a potential response to concerns about the effects of echo chambers, we conducted a multi-wave field experiment on Facebook among 23,377 users for whom we reduced exposure to content from like-minded sources during the 2020 US presidential election by about one-third. We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims. These precisely estimated results suggest that although exposure to content from like-minded sources on social media is common, reducing its prevalence during the 2020 US presidential election did not correspondingly reduce polarization in beliefs or attitudes.
Subject(s)
Attitude, Politics, Social Media, Adult, Humans, Emotions, Language, United States, Disinformation

ABSTRACT
We study the effect of Facebook and Instagram access on political beliefs, attitudes, and behavior by randomizing a subset of 19,857 Facebook users and 15,585 Instagram users to deactivate their accounts for 6 wk before the 2020 U.S. election. We report four key findings. First, both Facebook and Instagram deactivation reduced an index of political participation (driven mainly by reduced participation online). Second, Facebook deactivation had no significant effect on an index of knowledge, but secondary analyses suggest that it reduced knowledge of general news while possibly also decreasing belief in misinformation circulating online. Third, Facebook deactivation may have reduced self-reported net votes for Trump, though this effect does not meet our preregistered significance threshold. Finally, the effects of both Facebook and Instagram deactivation on affective and issue polarization, perceived legitimacy of the election, candidate favorability, and voter turnout were all precisely estimated and close to zero.
Subject(s)
Politics, Social Media, Humans, United States, Attitude, Male, Female

ABSTRACT
Does Facebook enable ideological segregation in political news consumption? We analyzed exposure to news during the 2020 US election using aggregated data for 208 million US Facebook users. We compared the inventory of all political news that users could have seen in their feeds with the information that they saw (after algorithmic curation) and the information with which they engaged. We show that (i) ideological segregation is high and increases as we shift from potential exposure to actual exposure to engagement; (ii) there is an asymmetry between conservative and liberal audiences, with a substantial corner of the news ecosystem consumed exclusively by conservatives; and (iii) most misinformation, as identified by Meta's Third-Party Fact-Checking Program, exists within this homogeneously conservative corner, which has no equivalent on the liberal side. Sources favored by conservative audiences were more prevalent in Facebook's news ecosystem than those favored by liberals.
Subject(s)
Politics, Social Media, Humans, Communication, Ecosystem

ABSTRACT
We investigated the effects of Facebook's and Instagram's feed algorithms during the 2020 US election. We assigned a sample of consenting users to reverse-chronologically-ordered feeds instead of the default algorithms. Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity. The chronological feed also affected exposure to content: The amount of political and untrustworthy content they saw increased on both platforms, the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and sources with ideologically mixed audiences they saw increased on Facebook. Despite these substantial changes in users' on-platform experience, the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.
Subject(s)
Social Media, Humans, Attitude, Politics, Friends, Algorithms

ABSTRACT
We studied the effects of exposure to reshared content on Facebook during the 2020 US election by assigning a random set of consenting, US-based users to feeds that did not contain any reshares over a 3-month period. We find that removing reshared content substantially decreases the amount of political news, including content from untrustworthy sources, to which users are exposed; decreases overall clicks and reactions; and reduces partisan news clicks. Further, we observe that removing reshared content produces clear decreases in news knowledge within the sample, although there is some uncertainty about how this would generalize to all users. Contrary to expectations, the treatment does not significantly affect political polarization or any measure of individual-level political attitudes.
Subject(s)
Politics, Social Media, Humans, Attitude, Knowledge, Uncertainty

ABSTRACT
Background
An organization's ability to identify and learn from opportunities for improvement (OFIs) is key to increasing diagnostic safety, yet many organizations lack the effective processes required to capitalize on these learning opportunities. We describe two parallel attempts at creating such a process and identify generalizable lessons learned from them.

Methods
Triggered case review programs were created independently at two organizations: Site 1 (Regions Hospital, HealthPartners, Saint Paul, MN, USA) and Site 2 (University of California, San Diego). Both used a five-step process to create the review system and provide feedback: (1) identify trigger criteria; (2) establish a review panel; (3) develop a system to conduct reviews; (4) perform reviews; and (5) provide feedback.

Results
Site 1 identified 112 OFIs in 184 case reviews (61%), comprising 66 (59%) provider OFIs and 46 (41%) system OFIs. Site 2 focused mainly on system OFIs, identifying 105 OFIs in 346 cases (30%). Opportunities at both sites were variable; common themes included test result management and communication across teams in peri-procedural care and with consultants. Among provider-initiated reviews, 67% of cases had an OFI at Site 1 and 87% at Site 2.

Conclusions
Lessons learned include the following: (1) peer review of cases provides opportunities to learn and calibrate diagnostic and management decisions at an organizational level; (2) sharing cases in review groups supports a culture of open discussion of OFIs; and (3) reviews focused on diagnostic safety identify opportunities that may complement other organization-wide review processes.