Results 1 - 11 of 11
1.
Sensors (Basel); 24(11), 2024 May 28.
Article in English | MEDLINE | ID: mdl-38894276

ABSTRACT

Malicious social bots pose a serious threat to social network security by spreading false information and manipulating opinion in social networks. The limited scope and scarcity of any single organization's data, together with the high cost of labeling social bots, have motivated federated models that combine federated learning with social bot detection. In this paper, we first combine the federated learning framework with the Relational Graph Convolutional Neural Network (RGCN) model to achieve federated social bot detection. A class-level cross-entropy loss function is applied during local model training to mitigate the effects of class imbalance in local data. To address data heterogeneity across participants, we optimize the classical federated learning algorithm with knowledge distillation methods. Specifically, we adjust the client-side and server-side models separately: we train a global generator that produces pseudo-samples based on knowledge of the local data distributions to correct the optimization direction of the client-side classification models, and we integrate the client-side classification models' knowledge on the server side to guide the training of the global classification model. We conduct extensive experiments on widely used datasets, and the results demonstrate the effectiveness of our approach to social bot detection in heterogeneous data scenarios. Compared to baseline methods, our approach improves detection accuracy by roughly 3-10% when data heterogeneity is high. Additionally, our method reaches the specified accuracy in fewer communication rounds.
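
The abstract describes a class-level cross-entropy loss for imbalanced local data. As a rough illustration (not the authors' code), a class-weighted cross-entropy of this kind can be set up in PyTorch as sketched below; the inverse-frequency weighting is an assumed, common choice rather than the paper's exact formulation, and the random logits stand in for RGCN outputs.

```python
# Minimal sketch (not the authors' code): a class-weighted cross-entropy loss
# for imbalanced local bot/human data, using inverse class frequency as weights.
# The weighting scheme is an assumption; the paper's exact class-level loss may differ.
import torch
import torch.nn as nn

def make_class_weighted_loss(labels: torch.Tensor) -> nn.CrossEntropyLoss:
    """Build a cross-entropy loss weighted by inverse class frequency."""
    counts = torch.bincount(labels, minlength=2).float()   # [n_human, n_bot]
    weights = counts.sum() / (len(counts) * counts)        # rarer class -> larger weight
    return nn.CrossEntropyLoss(weight=weights)

# Example: a heavily imbalanced local dataset (many humans, few bots)
labels = torch.tensor([0] * 950 + [1] * 50)
criterion = make_class_weighted_loss(labels)
logits = torch.randn(1000, 2, requires_grad=True)          # stand-in for RGCN outputs
loss = criterion(logits, labels)
loss.backward()
```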

2.
Behav Res Methods; 56(6): 6258-6275, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38561551

ABSTRACT

The standard approach for detecting and preventing bots from doing harm online involves CAPTCHAs. However, recent AI research, including our own in this manuscript, suggests that bots can complete many common CAPTCHAs with ease. The most effective methodology for identifying potential bots involves image-processing and causal-reasoning free-response questions that are hand-coded by human analysts. However, this approach is labor intensive, slow, and inefficient; moreover, with the advent of generative AI such as GPT and Bard, it may soon be obsolete. Here, we develop and test several automated bot-screening questions, grounded in psychological research, to serve as a proactive screen against bots. Using hand-coded free-response questions in the naturalistic setting of MTurk workers recruited for a Qualtrics survey, we identify 18.9% of our sample as potential bots, whereas Google's reCAPTCHA v3 identified only 1.7% as potential bots. We then examine the performance of these potential bots on our novel bot screeners, each of which has different strengths and weaknesses but all of which outperform CAPTCHAs.


Subjects
Artificial Intelligence, Humans, Computer Security
3.
Sci Rep; 14(1): 6525, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38499853

ABSTRACT

The rise of bots that mimic human behavior represents one of the most pressing threats to healthy information environments on social media. Many bots are designed to increase the visibility of low-quality content, spread misinformation, and artificially boost the reach of brands and politicians. These bots can also disrupt the coordination of civic action, for example by flooding a hashtag with spam and undermining political mobilization. Social media platforms have recognized the risks posed by these malicious bots and implemented strict policies and protocols to block automated accounts. However, effective bot detection methods for Spanish are still in their early stages: many studies and tools applied to Spanish are based on English-language models and lack performance evaluations in Spanish. In response to this need, we have developed a method for detecting bots in Spanish called Botcheck. Botcheck was trained on a collection of Spanish-language accounts annotated in Twibot-20, a large-scale dataset featuring thousands of human-annotated accounts in various languages. We evaluated Botcheck's performance on a large set of labeled accounts and found that it outperforms other competitive methods, including deep learning-based ones. As a case study, we used Botcheck to analyze the 2021 Chilean presidential election and found evidence of bot account intervention during the electoral period. In addition, we conducted an external validation of the accounts detected by Botcheck in the case study and found our method to be highly effective. We have also observed differences in behavior among the bots that follow the official social media accounts of presidential candidates.


Subjects
Social Media, Humans, Chile, Software, Communication, Politics
4.
Front Big Data; 6: 1221744, 2023.
Article in English | MEDLINE | ID: mdl-37693848

ABSTRACT

Introduction: France has seen two key protests during President Emmanuel Macron's time in office: one in 2020 against Islamophobia, and another in 2023 against the pension reform. During these protests, there was much chatter on online social media platforms like Twitter. Methods: In this study, we analyze the differences between the online chatter of the two years from a network-centric view, with a particular focus on the synchrony of users. We begin by identifying groups of accounts that work together, using two methods: temporal synchronicity and narrative similarity. We also apply a bot detection algorithm to identify bots within these networks and analyze the extent of inorganic synchronization within the discourse around these events. Results: Overall, our findings suggest that user synchrony on Twitter was much higher in 2020 than in 2023, and that there was more bot activity in 2020 than in 2023.

5.
Front Big Data; 6: 1343108, 2023.
Article in English | MEDLINE | ID: mdl-38149222

ABSTRACT

[This corrects the article DOI: 10.3389/fdata.2023.1221744.].

6.
Math Biosci Eng; 20(7): 13113-13132, 2023 Jun 06.
Article in English | MEDLINE | ID: mdl-37501480

ABSTRACT

Disinformation refers to false rumors deliberately fabricated to serve political or economic agendas. Preventing the propagation of online disinformation remains a severe challenge. Refutation, media censorship, and social bot detection are three popular countermeasures, which aim respectively to clarify facts, intercept the spread of existing disinformation, and quarantine its sources. In this paper, we study the collaboration of these three countermeasures in defending against disinformation. Specifically, considering an online social network, we seek the most cost-effective dynamic budget allocation (DBA) strategy for the three methods, i.e., the strategy that minimizes the proportion of disinformation-supportive accounts on the network at the lowest expenditure. For convenience, we refer to the search for the optimal DBA strategy as the DBA problem. Our contributions are as follows. First, we propose a disinformation propagation model that characterizes the effects of different DBA strategies on curbing disinformation. On this basis, we establish a trade-off model for DBA strategies and reduce the DBA problem to an optimal control model. Second, we derive an optimality system for the optimal control model and develop a heuristic numerical algorithm, the DBA algorithm, to solve it and find candidate optimal DBA strategies. Third, through numerical experiments, we estimate key model parameters, examine the obtained DBA strategies, and verify that the DBA algorithm is effective.
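
To make the reduction to optimal control concrete, a generic scalarized form of such a dynamic budget allocation problem is sketched below. This is a sketch under common modeling assumptions, not the paper's exact trade-off model or propagation dynamics; the cost coefficients, bounds, and dynamics f are placeholders.

```latex
% Generic sketch of a DBA-style objective (not the paper's exact model): choose budget
% rates u_R, u_C, u_B (refutation, censorship, bot detection) to keep the fraction of
% disinformation-supportive accounts S(t) low at low total expenditure.
\min_{u_R,\,u_C,\,u_B}\; J(u) \;=\; \int_0^T \Big[\, S(t) + c_R\,u_R(t) + c_C\,u_C(t) + c_B\,u_B(t) \,\Big]\, dt
\qquad \text{s.t.}\qquad \dot{x}(t) = f\big(x(t),\, u(t)\big), \qquad 0 \le u_i(t) \le \bar{u}_i ,
```

where x(t) collects the account-state proportions (including S(t)) and f encodes the disinformation propagation model; an optimality system for such a problem is typically obtained from first-order necessary conditions and solved numerically.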

7.
MethodsX; 11: 102430, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37867912

ABSTRACT

There has been a tremendous increase in the popularity of social media such as blogs, Instagram, Twitter, and other online platforms. The growing use of these platforms enables users to share information regularly and to publicize social events. Nevertheless, most multimedia events are filled with social bots, which raises concerns about the authenticity of the information shared around them. As social bots become more advanced, detecting them and fact-checking their content becomes more complex, mainly because of the similarity between authorized users and social bots. Several researchers have introduced models for detecting social bots and fact-checking, but these models face various challenges: in most cases the bots are indistinguishable from existing users, it is difficult to extract relevant attributes of the bots, it is challenging to collect and label large-scale data for training bot detection models, and the performance of the traditional classifiers used for bot detection is not satisfactory. This paper presents:
• A machine learning-based adaptive fuzzy neuro model integrated with a histogram gradient boosting (HGB) classifier for identifying the persistent patterns of social bots for fake news detection.
• Harris hawks optimization with a Bi-LSTM for social bot prediction.
• Results validating the efficacy of the HGB classifier, which achieves an accuracy of 95.64% on the Twitter bot dataset and 98.98% on the Twitch bot dataset.
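
As an illustration of the HGB component named above, the sketch below trains scikit-learn's HistGradientBoostingClassifier on synthetic tabular account features; the feature set and data are hypothetical, and the paper's full adaptive fuzzy neuro pipeline is not reproduced here.

```python
# Minimal sketch (not the paper's pipeline): a histogram gradient boosting (HGB)
# classifier on tabular account features. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical per-account features: followers, friends, tweets/day, profile age (days)
X = rng.random((2000, 4))
y = rng.integers(0, 2, size=2000)  # 0 = human, 1 = bot (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = HistGradientBoostingClassifier(max_iter=200, learning_rate=0.1)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```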

8.
J Comput Soc Sci; 5(2): 1511-1528, 2022.
Article in English | MEDLINE | ID: mdl-36035522

ABSTRACT

Social bots have become an important component of online social media. Deceptive bots, in particular, can manipulate online discussions of important issues ranging from elections to public health, threatening the constructive exchange of information. Their ubiquity makes them an interesting research subject and requires researchers to handle them properly when conducting studies with social media data. It is therefore important for researchers to have access to bot detection tools that are reliable and easy to use. This paper provides an introductory tutorial on Botometer, a public tool for bot detection on Twitter, for readers who are new to the topic and may not be familiar with programming or machine learning. We introduce how Botometer works and the different ways users can access it, and we present a case study as a demonstration. Readers can use the case study code as a template for their own research. We also discuss recommended practices for using Botometer.
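
For readers who want a feel for the tool described above, the following is a minimal sketch of a typical Botometer query via the botometer Python package; the credentials are placeholders, and the exact response fields may vary with the API and package version available to you.

```python
# Sketch of a typical Botometer query via the botometer Python package.
# All keys below are placeholders; a RapidAPI key and Twitter app credentials are required.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"                  # placeholder
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",             # placeholder Twitter app credentials
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Score a single account; the response contains bot-likelihood scores for the account.
result = bom.check_account("@example_handle")        # hypothetical handle
print(result["cap"])                                 # complete automation probability scores
```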

9.
Soc Netw Anal Min; 12(1): 4, 2022.
Article in English | MEDLINE | ID: mdl-34804252

ABSTRACT

Nowadays, massive numbers of people are active on various social media, which enables organizations and institutions to reach their audiences across the globe more easily. Some of them use social bots as automated entities to gain intangible access to and influence over users through faster content propagation. As a result, malicious social bots are proliferating, fooling humans with their unrealistic behavior and content, and it has become necessary to distinguish these fake accounts from real ones. Multiple approaches have been investigated in the literature to address this problem. Statistical machine learning methods are one such family, relying on handcrafted features to represent the characteristics of social bots. Although they have achieved good results in some cases, they depend on fixed behavioral features and fail when bots change their behavioral patterns. More advanced methods based on deep neural networks aim to overcome this limitation. The generative adversarial network (GAN), a newer technique from this domain, is a semi-supervised method shown to extract behavioral patterns from data. In this work, we use a GAN to extract more information from bot samples for a state-of-the-art textual bot detection method (contextual LSTM). Although GANs can augment scarce labeled data, the original textual GAN, the Sequence Generative Adversarial Net (SeqGAN), has a known convergence limitation. In this paper, we investigate this limitation and adapt the GAN idea in a new framework called GANBOT, in which the generator and the classifier are connected by an LSTM layer serving as a shared channel between them. Our experimental results on a benchmark Twitter social bot dataset show that the proposed framework outperforms the existing contextual LSTM method by increasing bot detection probabilities.
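
The schematic sketch below (not the authors' implementation) illustrates the shared-channel idea: a single LSTM whose hidden states feed both a token-generator head and a bot/human classifier head. Layer sizes are arbitrary, and the adversarial training procedure is omitted.

```python
# Schematic sketch of a shared-LSTM generator/classifier, in the spirit of the
# shared-channel idea described above; sizes and heads are illustrative only.
import torch
import torch.nn as nn

class SharedLSTMGanBot(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # shared channel
        self.generator_head = nn.Linear(hidden_dim, vocab_size)       # next-token logits
        self.classifier_head = nn.Linear(hidden_dim, 2)               # bot vs. human

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))       # (batch, seq, hidden)
        gen_logits = self.generator_head(h)           # per-position generation logits
        cls_logits = self.classifier_head(h[:, -1])   # classify from the final state
        return gen_logits, cls_logits

tokens = torch.randint(0, 5000, (8, 20))              # dummy batch of tweet token ids
gen_logits, cls_logits = SharedLSTMGanBot()(tokens)
print(gen_logits.shape, cls_logits.shape)             # (8, 20, 5000) and (8, 2)
```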

10.
Soc Netw Anal Min; 12(1): 43, 2022.
Article in English | MEDLINE | ID: mdl-35309873

ABSTRACT

The use of online social networks (OSNs) undoubtedly brings the world closer. OSNs like Twitter provide a space for expressing one's opinions on a public platform. This potential is misused through the creation of bot accounts, which spread fake news and manipulate opinions, so distinguishing genuine human accounts from bot accounts has become a pressing issue for researchers. In this paper, we propose a deep learning framework to classify Twitter accounts as either 'human' or 'bot.' We use information from the user profile metadata of the Twitter account, such as the description, follower count, and tweet count. We name the framework 'DeeProBot,' which stands for Deep Profile-based Bot detection framework. The raw text from the description field of the Twitter account is also used as a feature, embedded with pre-trained Global Vectors (GloVe) for word representation. Using only profile-based features considerably reduces the feature engineering overhead compared with timeline-based features such as user tweets and retweets. DeeProBot handles mixed feature types, including numerical, binary, and text data, making the model hybrid. The network is designed with long short-term memory (LSTM) units and dense layers to accept and process the mixed input types. The proposed model is evaluated on a collection of publicly available labeled datasets and is designed to generalize across datasets. It is evaluated in two ways: testing on a hold-out set of the same dataset, and training on one dataset and testing on a different one. In these experiments, the proposed model achieved an AUC as high as 0.97 with a selected set of features.
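
As an illustration of the mixed-input design described above, the sketch below builds a small Keras model in the spirit of DeeProBot; the layer sizes, the numeric features, and the randomly initialized embedding are illustrative assumptions rather than the authors' exact architecture (a real run would initialize the embedding with pre-trained GloVe vectors).

```python
# Illustrative mixed-input profile model (not the authors' exact architecture):
# an LSTM over embedded description tokens combined with numeric profile metadata.
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, VOCAB, EMBED_DIM = 50, 20000, 100

# Text branch: tokenized profile description
text_in = layers.Input(shape=(MAX_LEN,), dtype="int32", name="description_tokens")
x = layers.Embedding(VOCAB, EMBED_DIM)(text_in)   # would be GloVe-initialized in practice
x = layers.LSTM(64)(x)

# Numeric branch: e.g. follower count, tweet count, account age (illustrative choices)
num_in = layers.Input(shape=(3,), name="profile_numeric")
y = layers.Dense(16, activation="relu")(num_in)

z = layers.concatenate([x, y])
z = layers.Dense(32, activation="relu")(z)
out = layers.Dense(1, activation="sigmoid", name="bot_probability")(z)

model = Model(inputs=[text_in, num_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```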

11.
Big Data; 5(4): 310-324, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29235918

ABSTRACT

Automated and semiautomated Twitter accounts, or bots, have recently gained significant public attention due to their potential interference in the political realm. In this study, we develop a methodology for detecting bots on Twitter using an ensemble of classifiers and apply it to study bot activity within political discussions in the Russian Twittersphere. We focus on the interval from February 2014 to December 2015, an especially consequential period in Russian politics. Among accounts actively tweeting about Russian politics, we find that on the majority of days, the proportion of tweets produced by bots exceeds 50%. We identify bot characteristics that distinguish them from humans in this corpus, and find that the software platform used for tweeting is among the best predictors of bots. Finally, we find suggestive evidence that one prominent activity of bots on Russian political Twitter was the spread of news stories and the promotion of the media outlets that produce them.
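
As a generic illustration of an ensemble-of-classifiers approach of the kind described (not the study's actual models or features), a soft-voting ensemble can be assembled in scikit-learn as sketched below, here on synthetic data.

```python
# Minimal sketch of an ensemble-of-classifiers bot detector: a soft-voting ensemble
# over three standard classifiers on tabular account features. Data is synthetic.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((1000, 6))              # hypothetical account features
y = rng.integers(0, 2, size=1000)      # 0 = human, 1 = bot (synthetic labels)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("svm", SVC(probability=True)),   # probability=True enables soft voting
    ],
    voting="soft",
)
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```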


Subjects
Politics, Social Media, Empirical Research, Humans, Russian Federation