ABSTRACT
Are members of marginalized communities silenced on social media when they share personal experiences of racism? Here, we investigate the role of algorithms, humans, and platform guidelines in suppressing disclosures of racial discrimination. In a field study of actual posts from a neighborhood-based social media platform, we find that when users talk about their experiences as targets of racism, their posts are disproportionately flagged for removal as toxic by five widely used moderation algorithms from major online platforms, including the most recent large language models. We show that human users disproportionately flag these disclosures for removal as well. Next, in a follow-up experiment, we demonstrate that merely witnessing such suppression negatively influences how Black Americans view the community and their place in it. Finally, to address these challenges to equity and inclusion in online spaces, we introduce a mitigation strategy: a guideline-reframing intervention that is effective at reducing silencing behavior across the political spectrum.
Subject(s)
Racism, Social Media, Humans, Black or African American, Algorithms

ABSTRACT
Sharing experiences with racism (racial discrimination disclosure) has the power to raise awareness of discrimination and spur meaningful conversations about race. Sharing these experiences on social media may prompt a range of responses among users. While previous work investigates how disclosure impacts disclosers and listeners, we extend this research to explore the impact of observing discussions about racial discrimination online, what we call vicarious race talk. In a series of experiments using real social media posts, we show that the initial response to racial discrimination disclosure, whether it denies or validates the poster's perspective, influences observers' own perceptions and attitudes. Although observers identified denial as less supportive than validation, those who observed a denial response showed less responsive attitudes toward the poster/target (Studies 1-3) and less support for discussions about discrimination on social media in general (Studies 2-3). Exploratory findings revealed that those who viewed denial comments also judged the transgressor as less racist, and expressed less support and more denial in their own comments. This suggests that even as observers judge denial negatively, their perceptions of the poster are nonetheless negatively influenced, and this impact extends to devaluing the topic of discrimination broadly. We highlight the context of social media, where racial discrimination disclosure, and how people respond to it, may be particularly consequential.