ABSTRACT
This article details the Russian government's efforts to influence Canadians' perceptions of the war in Ukraine. Specifically, we examined Russian information campaigns tailored to Canadian audiences on X (formerly Twitter) and the supportive ecosystem of accounts that amplifies those campaigns. By 2023, this ecosystem included at least 200,000 X accounts that had shared content with millions of Canadians. We identified ninety accounts with outsized influence; the vast majority of these influential Canadian accounts were far-right or far-left in orientation. These networks were among Canada's most prolific and influential political communities online, a finding we established by comparing their potential influence with that of the online community engaging with Canada's 338 members of Parliament on X and with a sample of twenty influential Canadian X accounts. The sophistication and proliferation of Canada-tailored narratives suggest a highly organized and well-funded effort to target Canadian support for Ukraine.
ABSTRACT
The proper measurement of emotion is vital to understanding the relationship between emotional expression on social media and other factors, such as online information sharing. This work develops a standardized annotation scheme for quantifying emotions in social media using recent emotion theory and research. Human annotators assessed both social media posts and their own reactions to the posts' content on scales of 0 to 100 for each of 20 (Study 1) and 23 (Study 2) emotions. For Study 1, we analyzed English-language posts from Twitter (N = 244) and YouTube (N = 50). Associations between emotion ratings and text-based measures (LIWC, VADER, EmoLex, NRC-EIL, Emotionality) demonstrated convergent and discriminant validity. In Study 2, we tested an expanded version of the scheme in-country and in-language on Polish (N = 3648) and Lithuanian (N = 1934) multimedia Facebook posts. While the correlations were lower than for English, patterns of convergent and discriminant validity with EmoLex and NRC-EIL still held. Coder reliability was strong across samples, with intraclass correlations of .80 or higher for 10 emotions in Study 1 and 16 emotions in Study 2. This research extends the measurement of emotion in social media beyond prior schemes to cover more emotion dimensions, multimedia content, and context.
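The coder-reliability figures above rest on intraclass correlations. A minimal sketch of how such a statistic can be computed follows; note that the specific ICC variant (one-way random effects, ICC(1,1)) and the example ratings are assumptions for illustration, not details taken from the study.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_targets, k_raters) matrix.

    Illustrative only: the abstract reports intraclass correlations but
    does not specify which ICC variant was used.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    grand = ratings.mean()
    # Between-target and within-target mean squares from a one-way ANOVA.
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three posts rated 0-100 by two annotators; near-identical ratings
# across annotators yield an ICC close to 1.
print(icc_oneway([[10, 12], [50, 48], [90, 91]]))
```

With perfectly agreeing raters the within-target mean square is zero and the ICC is exactly 1; values of .80 or higher, as reported above, indicate strong agreement.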
ABSTRACT
While emotional content predicts social media post sharing, competing theories of emotion imply different predictions about how emotional content will influence the virality of social media posts. We tested and compared these theoretical frameworks. Teams of annotators assessed more than 4000 multimedia posts from Polish and Lithuanian Facebook for more than 20 emotions. Drawing on semantic space theory, we found that modeling discrete emotions independently outperformed models based on valence (positive or negative), activation/arousal (high or low), or clusters of emotions, and performed on par with a seven-basic-emotion model while offering greater explanatory power. Certain discrete emotions were associated with post sharing, spanning both positive and negative valence and lower and higher activation/arousal (e.g., amusement, cute/kama muta, anger, and sadness), even when controlling for number of followers, time up, topic, and Facebook angry reactions. These results offer key insights into what drives social media post virality.
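The discrete-emotion models described above can be sketched as a regression of sharing on per-emotion ratings plus controls. Everything in the sketch below is a hypothetical stand-in — the synthetic data, the four-emotion subset, the coefficient values, and the OLS specification are assumptions for illustration, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical 0-100 annotator ratings for four of the discrete emotions
# the abstract highlights: amusement, cute/kama muta, anger, sadness.
emotions = rng.uniform(0, 100, size=(n, 4))
log_followers = rng.normal(8, 2, size=n)   # control: audience size
hours_up = rng.uniform(1, 72, size=n)      # control: time the post was up

# Synthetic outcome standing in for log(1 + shares), generated with a
# positive weight on each emotion (illustrative values only).
y = (emotions @ np.array([0.010, 0.008, 0.009, 0.005])
     + 0.30 * log_followers + 0.005 * hours_up
     + rng.normal(0, 0.5, size=n))

# OLS with an intercept: each discrete emotion enters as its own predictor,
# mirroring the "model discrete emotions independently" approach.
X = np.column_stack([np.ones(n), emotions, log_followers, hours_up])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because each emotion gets its own coefficient, this specification can recover both positive- and negative-valence emotions as independent predictors of sharing, which a single valence or arousal score would collapse together.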
Subjects
Emotions, Social Media, Humans, Anger, Arousal

ABSTRACT
We study how easy it is to distinguish influence operations from organic social media activity by assessing the performance of a platform-agnostic machine learning approach. Our method uses public activity to detect content that is part of coordinated influence operations, based on human-interpretable features derived solely from content. We test this method on publicly available Twitter data on Chinese, Russian, and Venezuelan troll activity targeting the United States, as well as the Reddit dataset of Russian influence efforts. To assess how well content-based features distinguish these influence operations from random samples of general and political American users, we train and test classifiers on a monthly basis for each campaign across five prediction tasks. Content-based features perform well across time periods, countries, platforms, and prediction tasks. Industrialized production of influence campaign content leaves a distinctive signal in user-generated content that allows tracking of campaigns from month to month and across different accounts.
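The train-and-test setup described above can be sketched with a toy classifier on synthetic content features. The feature names, the degree of separability, and the plain gradient-descent logistic regression below are all hypothetical stand-ins, not the paper's actual feature set or pipeline.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Gradient-descent logistic regression (illustrative trainer)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

rng = np.random.default_rng(1)
n = 400

# Hypothetical human-interpretable content features per account, e.g.
# hashtag rate, link rate, near-duplicate-text rate (stand-ins only).
# Coordinated accounts are simulated as systematically shifted from
# organic users, mimicking the "industrialized production" signal.
troll = rng.normal(1.0, 0.5, size=(n // 2, 3))
organic = rng.normal(0.0, 0.5, size=(n // 2, 3))
X = np.column_stack([np.ones(n), np.vstack([troll, organic])])  # intercept
y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])

w = train_logreg(X, y)
accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
```

In the paper's setting, a loop like this would be repeated per campaign and per month, retraining on each month's labeled data to track how well the content signal holds up over time.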