Results 1 - 15 of 15
1.
Sci Rep ; 14(1): 21789, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39294195

ABSTRACT

The rapidly expanding scope of the Internet of Things (IoT) necessitates robust intrusion detection systems (IDS) to mitigate security risks effectively. However, existing approaches often struggle to adapt to emerging threats and fail to account for IoT-specific complexities. To address these challenges, this study proposes a novel approach that hybridizes convolutional neural network (CNN) and gated recurrent unit (GRU) architectures tailored for IoT intrusion detection. This hybrid model excels at capturing intricate features and learning the relational aspects crucial to IoT security. Moreover, we integrate the feature-weighted synthetic minority oversampling technique (FW-SMOTE) to handle the imbalanced datasets that commonly afflict intrusion detection tasks. Validation on the IoTID20 dataset, designed to emulate IoT environments, yields exceptional results with 99.60% accuracy in attack detection, surpassing existing benchmarks. Additionally, evaluation on the network-domain dataset UNSW-NB15 demonstrates robust performance with 99.16% accuracy, highlighting the model's applicability across diverse datasets. This approach not only addresses current limitations in IoT intrusion detection but also establishes new benchmarks in terms of accuracy and adaptability. The findings underscore its potential as a versatile and effective solution for safeguarding IoT ecosystems against evolving security threats.
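
A minimal sketch of the kind of CNN-GRU hybrid with minority oversampling described above (not the authors' code: layer sizes and dataset shapes are illustrative placeholders, and imbalanced-learn's plain SMOTE stands in for FW-SMOTE, which is not an off-the-shelf component):

```python
# Hedged sketch: a CNN + GRU intrusion-detection model with SMOTE balancing.
# Layer sizes and input shapes are illustrative, not taken from the paper.
import numpy as np
from imblearn.over_sampling import SMOTE          # stand-in for FW-SMOTE
from tensorflow.keras import layers, models

n_features, n_classes = 64, 2                     # placeholder dataset dimensions
X = np.random.rand(1000, n_features)              # replace with IoTID20 features
y = np.random.randint(0, n_classes, 1000)         # replace with attack labels

X_bal, y_bal = SMOTE().fit_resample(X, y)         # balance minority attack classes

model = models.Sequential([
    layers.Input(shape=(n_features, 1)),          # treat each flow as a 1-D sequence
    layers.Conv1D(32, 3, activation="relu"),      # CNN block: local feature extraction
    layers.MaxPooling1D(2),
    layers.GRU(64),                               # GRU block: relational/sequential patterns
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_bal[..., np.newaxis], y_bal, epochs=5, batch_size=64)
```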

2.
Data Brief ; 54: 110439, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38756930

ABSTRACT

In the Islamic domain, Hadiths hold significant importance, standing as crucial texts after the Holy Quran. Each Hadith contains three main parts: the ISNAD (chain of narrators), the TARAF (starting part, often from Prophet Muhammad), and the MATN (Hadith content). The ISNAD is the chain of narrators involved in transmitting a particular MATN, and Hadith scholars determine the trustworthiness of the transmitted MATN by the quality of its ISNAD. The ISNAD data is available in its original Arabic, with narrator names transliterated into English. This paper presents Multi-IsnadSet (MIS), a dataset with great potential for use by social scientists and theologians. A multi-directed graph structure is used to represent the complex interactions among the narrators of Hadith. The MIS dataset is a directed multigraph consisting of 2,092 nodes, representing individual narrators, and 77,797 edges, representing Sanad-Hadith connections. The dataset covers multiple ISNAD of the Hadiths in the Sahih Muslim Hadith book and was carefully extracted from multiple online Hadith sources using data scraping and web crawling tools, providing extensive Hadith details. Each dataset entry provides a complete view of a specific Hadith, including the original book, Hadith number, textual content (MATN), list of narrators, narrator count, sequence of narrators, and ISNAD count. Four different tools were used for modeling and analyzing the narrative network: the Python library NetworkX, the graph database Neo4j, and the network analysis tools Gephi and Cytoscape. The Neo4j graph database represents the multi-dimensional graph data for ease of extraction and for establishing new relationships among nodes. Researchers can use MIS to explore Hadith credibility, including the classification of Hadiths (Sahih = perfection in the Sanad / Dhaif = imperfection in the Sanad) and of narrators (trustworthy or not). Traditionally, scholars have focused on identifying the longest and shortest Sanad between two narrators, but in MIS the emphasis shifts to determining the optimal, authentic Sanad, considering narrator qualities. The graph representation of this authentic, manually curated dataset opens the way for computational models that can identify the significance of a chain and a narrator. The dataset provides Hadith narrators and Hadith ISNAD that can be used in a wide variety of future research related to Hadith authentication and rule extraction. Moreover, it encourages cross-disciplinary research, bridging Islamic studies, artificial intelligence (AI), social network analysis (SNA), and graph neural networks (GNN).
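
A minimal sketch of how a multi-directed narrator graph like MIS could be built and queried with NetworkX (the narrator names, edge attributes, and queries are hypothetical illustrations, not the dataset's actual schema):

```python
# Hedged sketch: building a multi-directed narrator graph and querying chains.
# Narrator names and the hadith_id attribute are illustrative placeholders.
import networkx as nx

G = nx.MultiDiGraph()                                   # parallel edges = multiple ISNAD
G.add_edge("Narrator A", "Narrator B", hadith_id=1)     # A transmitted to B in Hadith 1
G.add_edge("Narrator A", "Narrator B", hadith_id=7)     # same pair, different Hadith
G.add_edge("Narrator B", "Narrator C", hadith_id=1)

print(G.number_of_nodes(), G.number_of_edges())         # node and edge counts
# Shortest transmission chain between two narrators (one of the analyses mentioned):
print(nx.shortest_path(G, "Narrator A", "Narrator C"))
# Out-degree as a crude proxy for a narrator's prominence in the network:
print(sorted(G.out_degree(), key=lambda kv: kv[1], reverse=True))
```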

3.
PeerJ Comput Sci ; 9: e1656, 2023.
Article in English | MEDLINE | ID: mdl-38077568

ABSTRACT

Background: Software process improvement (SPI) is an indispensable phenomenon in the evolution of a software development company that adopts global software development (GSD) or in-house development. Many software development companies not only adhere to in-house development but also adopt the GSD paradigm. Both development approaches are of paramount significance because of their respective advantages. Many studies have been conducted to find the SPI success factors for companies that opt for in-house development, but less attention has been paid to SPI success factors in the GSD environment for large-scale software companies. Factors that contribute to the SPI success of small and medium-sized companies have been identified, but large-scale companies have been overlooked. This research aims to identify the SPI success factors of both development approaches (GSD and in-house) for large-scale software companies. Methods: Two systematic literature reviews were performed, and an industrial survey was conducted to detect additional SPI success factors for both development environments. In the subsequent step, a comparison was made to find SPI success factors common to both development environments. Lastly, another industrial survey was conducted to compare the common SPI success factors of GSD and in-house software development in large-scale companies, to determine which SPI success factor carries more value in which development environment. For this purpose, parametric (Pearson correlation) and non-parametric (Kendall's Tau correlation and Spearman correlation) tests were performed. Results: Seventeen common SPI success factors were identified. The pinpointed common success factors expedite and contribute to SPI in both environments in the case of large-scale companies.
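
A minimal sketch of the three correlation tests this comparison relies on, using SciPy (the two rating vectors are made-up placeholders for how a common success factor might be scored in the GSD survey versus the in-house survey):

```python
# Hedged sketch: parametric and non-parametric correlation between two survey ratings.
# The rating vectors are illustrative, not the study's data.
from scipy.stats import pearsonr, kendalltau, spearmanr

gsd_scores      = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]   # factor ratings in the GSD survey
in_house_scores = [4, 4, 3, 5, 2, 5, 3, 3, 4, 4]   # same factors, in-house survey

for name, test in [("Pearson", pearsonr), ("Kendall", kendalltau),
                   ("Spearman", spearmanr)]:
    stat, p = test(gsd_scores, in_house_scores)
    print(f"{name}: statistic={stat:.3f}, p-value={p:.3f}")
```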

4.
Diagnostics (Basel) ; 13(17)2023 Aug 26.
Article in English | MEDLINE | ID: mdl-37685310

ABSTRACT

Chest disease refers to a variety of lung disorders, including lung cancer (LC), COVID-19, pneumonia (PNEU), tuberculosis (TB), and numerous other respiratory disorders. The symptoms of these chest diseases (i.e., fever, cough, sore throat, etc.) are similar, which might mislead radiologists and health experts when classifying them. Chest X-rays (CXR), cough sounds, and computed tomography (CT) scans are utilized by researchers and doctors to identify chest diseases such as LC, COVID-19, PNEU, and TB. The objective of this work is to identify nine different types of chest diseases, including COVID-19, edema (EDE), LC, PNEU, pneumothorax (PNEUTH), normal, atelectasis (ATE), and lung consolidation (COL). Therefore, we designed a novel deep learning (DL)-based chest disease detection network (DCDD_Net) that uses CXR, CT scan, and cough sound images to identify nine different types of chest diseases. The scalogram method is used to convert the cough sounds into images. Before training the proposed DCDD_Net model, borderline (BL) SMOTE is applied to balance the CXR, CT scan, and cough sound images of the nine chest diseases. The proposed DCDD_Net model is trained and evaluated on 20 publicly available benchmark chest disease datasets of CXR, CT scan, and cough sound images. The classification performance of DCDD_Net is compared with four baseline models, i.e., InceptionResNet-V2, EfficientNet-B0, DenseNet-201, and Xception, as well as state-of-the-art (SOTA) classifiers. DCDD_Net achieved an accuracy of 96.67%, a precision of 96.82%, a recall of 95.76%, an F1-score of 95.61%, and an area under the curve (AUC) of 99.43%. The results reveal that DCDD_Net outperformed the four baseline models on many performance evaluation metrics. Thus, the proposed DCDD_Net model can provide significant assistance to radiologists and medical experts. Additionally, the proposed model was shown to be resilient by statistical evaluations of the datasets using McNemar and ANOVA tests.
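
A minimal sketch of the borderline-SMOTE balancing step mentioned above, using imbalanced-learn on a placeholder feature matrix (the synthetic features and class proportions are illustrative; the paper's pipeline operates on CXR, CT, and scalogram images rather than random vectors):

```python
# Hedged sketch: balancing an imbalanced multi-class feature set with Borderline-SMOTE.
# The synthetic feature matrix stands in for flattened CXR/CT/scalogram features.
import numpy as np
from collections import Counter
from imblearn.over_sampling import BorderlineSMOTE

rng = np.random.default_rng(0)
X = rng.random((900, 128))                       # placeholder image features
y = rng.choice(9, size=900,                      # nine chest-disease classes, skewed
               p=[0.30, 0.20, 0.15, 0.10, 0.08, 0.07, 0.05, 0.03, 0.02])

X_bal, y_bal = BorderlineSMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), Counter(y_bal))                # minority classes upsampled to the majority count
```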

5.
Sensors (Basel) ; 23(6)2023 Mar 11.
Article in English | MEDLINE | ID: mdl-36991755

ABSTRACT

The exponentially growing concern over cyber-attacks on extremely dense underwater sensor networks (UWSNs) and the evolution of the UWSN digital threat landscape have brought novel research challenges and issues. In particular, evaluating varied protocols under advanced persistent threats is now indispensable yet very challenging. This research implements an active attack in the Adaptive Mobility of Courier Nodes in Threshold-optimized Depth-based Routing (AMCTD) protocol. A variety of attacker nodes were employed in diverse scenarios to thoroughly assess the performance of the AMCTD protocol. The protocol was exhaustively evaluated both with and without active attacks using benchmark evaluation metrics such as end-to-end delay, throughput, transmission loss, number of active nodes, and energy tax. The preliminary findings show that an active attack drastically lowers the AMCTD protocol's performance (i.e., it reduces the number of active nodes by up to 10%, reduces throughput by up to 6%, increases transmission loss by 7%, raises energy tax by 25%, and increases end-to-end delay by 20%).

6.
Sensors (Basel) ; 22(19)2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236583

ABSTRACT

Automatic modulation recognition (AMR) is used in various domains, from general-purpose communication to many military applications, thanks to the growing popularity of the Internet of Things (IoT) and related communication technologies. In this research article, we propose the idea of combining the classical mathematical technique of computing linear combinations (LCs) of cumulants with a genetic algorithm (GA) to create super-cumulants. These super-cumulants are then used to classify five digital modulation schemes on fading channels using the K-nearest neighbor (KNN) classifier. Our proposed classifier significantly improves the percentage recognition accuracy at lower SNRs when using smaller sample sizes. A comparison with existing techniques demonstrates the superiority of our proposed classifier.
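
A minimal sketch of the final classification stage, i.e., KNN over cumulant-style features (the feature matrix is a random placeholder; computing the actual higher-order cumulants and their GA-optimized linear combinations is the paper's contribution and is not reproduced here):

```python
# Hedged sketch: KNN classification of modulation schemes from cumulant-based features.
# Features here are random placeholders for the GA-weighted "super-cumulants".
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.random((500, 6))                 # placeholder super-cumulant features per signal
y = rng.integers(0, 5, 500)              # five digital modulation schemes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("recognition accuracy:", knn.score(X_te, y_te))
```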


Subjects
Algorithms, Cluster Analysis, Mathematics
7.
Sensors (Basel) ; 22(10)2022 May 10.
Article in English | MEDLINE | ID: mdl-35632055

ABSTRACT

As with smartphones, recent years have seen increased usage of Internet of Things (IoT) technology. IoT devices, being resource-constrained due to their smaller size, are vulnerable to various security threats. Recently, many distributed denial-of-service (DDoS) attacks generated with the help of IoT botnets have affected the services of many websites. Such destructive botnets need to be detected at an early stage of infection, and machine-learning models can be utilized for this early detection. This paper proposes a one-class-classifier-based machine-learning solution for detecting IoT botnets in a heterogeneous environment. The proposed one-class classifier, which is based on one-class KNN, can detect IoT botnets at an early stage with high accuracy. The proposed machine-learning-based model is a lightweight solution that works by selecting the best features, leveraging well-known filter and wrapper methods for feature selection. The proposed strategy is evaluated on different datasets collected from varying network scenarios. The experimental results reveal that the proposed technique shows improved performance, consistent across the three datasets used for evaluation.
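
A minimal sketch of a one-class KNN detector of the kind described: it is trained only on benign traffic and flags samples whose distance to their k-th nearest benign neighbor exceeds a threshold (the feature matrices and the percentile threshold rule are illustrative assumptions, not the paper's exact formulation):

```python
# Hedged sketch: one-class KNN anomaly detection for botnet traffic.
# Benign/attack feature matrices and the percentile threshold are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
benign = rng.normal(0.0, 1.0, (1000, 10))              # training data: benign IoT traffic only
test   = np.vstack([rng.normal(0.0, 1.0, (50, 10)),    # unseen benign samples
                    rng.normal(4.0, 1.0, (50, 10))])   # botnet-like outliers

k = 5
nn = NearestNeighbors(n_neighbors=k).fit(benign)
train_dist = nn.kneighbors(benign)[0][:, -1]        # distance to k-th neighbor, per point
threshold = np.percentile(train_dist, 95)           # tolerate 5% of benign as borderline

test_dist = nn.kneighbors(test)[0][:, -1]
is_botnet = test_dist > threshold                   # True = flagged as botnet traffic
print("flagged:", int(is_botnet.sum()), "of", len(test))
```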


Subjects
Internet of Things, Internet, Machine Learning
8.
Sensors (Basel) ; 21(22)2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34833507

ABSTRACT

Effective communication in vehicular networks depends on the scheduling of wireless channel resources. There are two types of channel resource scheduling in 3GPP Release 14: (1) scheduling controlled by the eNodeB and (2) distributed scheduling carried out by every vehicle, known as Autonomous Resource Selection (ARS). The most suitable resource scheduling for vehicle safety applications is the ARS mechanism. ARS includes (a) counter selection (i.e., specifying the number of subsequent transmissions) and (b) resource reselection (specifying the reuse of the same resource after counter expiry). ARS is a decentralized approach to resource selection; therefore, resource collisions can occur during the initial selection, where multiple vehicles might select the same resource, resulting in packet loss. ARS is not adaptive to vehicle density and employs a uniform random selection probability for counter selection and reselection. As a result, it can prevent some vehicles from transmitting in a congested vehicular network. To this end, this paper presents Truly Autonomous Resource Selection (TARS) for vehicular networks. TARS treats resource allocation as the problem of locally detecting the resources selected by neighbor vehicles in order to avoid resource collisions. The paper also models the effect of counter selection and resource block reselection on resource collisions using a Discrete Time Markov Chain (DTMC). Observations from the model are used to propose a fair policy for counter selection and resource reselection in ARS. Simulation of the proposed TARS mechanism showed better performance in terms of resource collision probability and packet delivery ratio when compared with the LTE Mode 4 standard and with a competing approach proposed by Jianhua He et al.
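
A minimal sketch of a discrete-time Markov chain analysis of the kind used to model counter selection and reselection (the 3-state chain and its transition probabilities are made-up placeholders; the paper's actual DTMC states and probabilities are not reproduced here):

```python
# Hedged sketch: stationary distribution of a small DTMC via repeated transition steps.
# States and transition probabilities are illustrative, not the paper's model.
import numpy as np

# States: 0 = resource free, 1 = resource selected, 2 = resource collision
P = np.array([[0.70, 0.25, 0.05],
              [0.30, 0.60, 0.10],
              [0.50, 0.40, 0.10]])       # each row sums to 1

dist = np.array([1.0, 0.0, 0.0])         # start in the "free" state
for _ in range(1000):                    # iterate until the distribution stabilizes
    dist = dist @ P

print("stationary distribution:", np.round(dist, 4))
print("long-run collision probability:", round(float(dist[2]), 4))
```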


Subjects
Computer Simulation
9.
Article in English | MEDLINE | ID: mdl-34639652

ABSTRACT

Traditional taxi services have been transformed globally into e-hailing applications (EHAs) such as Uber, Careem, Hailo, and Grab Car due to the proliferation of smartphone technology. On the one hand, these applications provide transport facilities; on the other hand, users face multiple issues in adopting EHAs. Despite these problems, EHAs are still widely adopted globally. However, little research has been conducted on EHAs, particularly on the significant factors behind the intention to use them. There is therefore a need to identify the influencing factors that have the greatest impact on the adoption and acceptance of these applications. Hence, this research presents an empirical study of the factors influencing customers' intentions towards EHAs. The Technology Acceptance Model (TAM) was extended with four external factors: perceived mobility value, effort expectancy, perceived locational accuracy, and perceived price. A questionnaire was developed to measure these factors, and a survey of 211 EHA users was conducted to collect data. Structural equation modeling (SEM) was used to analyze the collected data. The results of this study revealed that perceived usefulness, perceived price, and perceived ease of use affect the behavioral intention to use EHAs. Furthermore, perceived ease of use was influenced by effort expectancy, perceived locational accuracy, and perceived mobility. The findings provide a foundation for developing new guidelines for such applications that will benefit their developers and designers.


Subjects
Intention, Smartphone, Latent Class Analysis, Surveys and Questionnaires, Technology
10.
PLoS One ; 15(4): e0229785, 2020.
Article in English | MEDLINE | ID: mdl-32271783

ABSTRACT

Software development outsourcing is becoming increasingly popular because of advantages such as cost reduction, process enhancement, and coping with the scarcity of needed resources. Studies confirm that, unfortunately, a large proportion of software development outsourcing projects fail to realize the anticipated benefits. Investigations into the failures of such projects reveal that in several cases software development outsourcing projects fail because of issues associated with the requirements engineering process. The objective of this study is to identify and rank the commonly occurring issues of the requirements engineering process in software development outsourcing. For this purpose, the contemporary literature was assessed rigorously, issues faced by practitioners were identified, and three questionnaire surveys were organized involving experienced software development outsourcing practitioners. The Delphi technique, the cut-off value method, and the 50% rule were also employed. The study identifies 150 issues (129 from the literature and 21 from industry) of the requirements engineering process for software development outsourcing, groups the 150 issues into 7 categories, and then extracts 43 commonly arising issues from the 150. Based on frequency of occurrence, the 43 commonly arising issues were ranked within their respective categories (category-wise ranking) and across all categories (overall ranking). The categories of the commonly arising issues were also ranked. The identification and ranking of these issues contribute to designing a proactive software project management plan for dealing with software development outsourcing failures and attaining the anticipated benefits of software development outsourcing.


Subjects
Engineering, Outsourced Services, Software, Knowledge, Stakeholder Participation, Surveys and Questionnaires
11.
PLoS One ; 13(5): e0196957, 2018.
Article in English | MEDLINE | ID: mdl-29715321

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0179703.].

12.
PLoS One ; 13(1): e0179703, 2018.
Article in English | MEDLINE | ID: mdl-29351287

ABSTRACT

Designing an efficient association rule mining (ARM) algorithm for multilevel knowledge-based transactional databases that is appropriate for real-world deployment is of paramount concern. However, dynamic decision making that needs to modify the threshold, either to minimize or maximize the output knowledge, forces the extant state-of-the-art algorithms to rescan the entire database. This process incurs heavy computation cost and is not feasible for real-time applications. This paper addresses the problem of dynamically updating the threshold for a given purpose. It contributes a novel ARM approach that creates an intermediate itemset structure and applies a threshold to extract categorical frequent itemsets under diverse threshold values, thus improving overall efficiency, as the whole database no longer needs to be rescanned. Once the intermediate itemset structure is built, the real support can be obtained without rebuilding it (e.g., itemset lists are intersected to obtain the actual support). Moreover, the algorithm supports extracting multiple frequent itemsets according to a pre-determined minimum support with an independent purpose. The experimental results demonstrate that the proposed approach can be deployed in any mining system in a fully parallel mode, consequently increasing the efficiency of the real-time association rule discovery process. The proposed approach outperforms the extant state of the art and shows promising results that reduce computation cost, increase accuracy, and produce all possible itemsets.
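
A minimal sketch of the core idea described above: keep per-item transaction-ID lists so that the support of any itemset under a new threshold can be obtained by intersecting lists instead of rescanning the database (the toy transaction database is illustrative):

```python
# Hedged sketch: vertical TID-lists so support can be recomputed without a database rescan.
# The toy transaction database is illustrative.
from itertools import combinations

transactions = {
    1: {"bread", "milk"},
    2: {"bread", "butter", "milk"},
    3: {"butter", "milk"},
    4: {"bread", "butter"},
}

# Build the intermediate structure once: item -> set of transaction IDs.
tid_lists = {}
for tid, items in transactions.items():
    for item in items:
        tid_lists.setdefault(item, set()).add(tid)

def support(itemset):
    """Support of an itemset via TID-list intersection (no rescan needed)."""
    tids = set.intersection(*(tid_lists[i] for i in itemset))
    return len(tids) / len(transactions)

# Frequent 2-itemsets for any minimum support, reusing the same structure:
for min_sup in (0.5, 0.75):
    frequent = [c for c in combinations(sorted(tid_lists), 2) if support(c) >= min_sup]
    print(f"min_sup={min_sup}: {frequent}")
```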


Subjects
Data Mining/methods, Datasets as Topic, Algorithms, Databases, Factual
13.
PLoS One ; 12(4): e0174715, 2017.
Article in English | MEDLINE | ID: mdl-28384312

ABSTRACT

Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance, and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to distributed controllers without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.


Subjects
Computer Communication Networks, Software, Algorithms, Cluster Analysis, Internet, Reproducibility of Results
14.
PLoS One ; 11(9): e0161340, 2016.
Article in English | MEDLINE | ID: mdl-27658194

ABSTRACT

A wireless sensor network (WSN) comprises small sensor nodes with limited energy capabilities. The power constraints of WSNs necessitate efficient energy utilization to extend the overall network lifetime. We propose a distance-based and low-energy adaptive clustering (DISCPLN) protocol to address the green issue of efficient energy utilization in WSNs. We also extend the proposed protocol into the multi-hop-DISCPLN protocol to increase the lifetime of the network in terms of high throughput with minimum delay and packet loss, and we propose the mobile-DISCPLN protocol to maintain the stability of the network. The modelling and comparison of these protocols with their corresponding benchmarks exhibit promising results.
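
A minimal sketch of a distance-aware cluster-head election step of the general kind such low-energy adaptive clustering protocols use (the node coordinates, energy values, and weighting rule are illustrative assumptions, not the DISCPLN specification):

```python
# Hedged sketch: electing cluster heads by combining residual energy and distance
# to the base station. Coordinates, energies, and weights are illustrative only.
import math, random

random.seed(3)
base_station = (50.0, 150.0)
nodes = [{"id": i,
          "pos": (random.uniform(0, 100), random.uniform(0, 100)),
          "energy": random.uniform(0.2, 1.0)} for i in range(20)]

def score(node):
    d = math.dist(node["pos"], base_station)       # distance to the base station
    return node["energy"] / (1.0 + d / 100.0)      # favor high energy, short distance

n_clusters = 4
cluster_heads = sorted(nodes, key=score, reverse=True)[:n_clusters]
print("elected cluster heads:", [n["id"] for n in cluster_heads])
```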

15.
ScientificWorldJournal ; 2014: 269357, 2014.
Article in English | MEDLINE | ID: mdl-25121114

ABSTRACT

Cloud computing is a significant shift of computational paradigm in which computing as a utility and storing data remotely have great potential. Enterprises and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy because data owners lack control and physical possession of the data. To address this issue, researchers have focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable to static archive data and cannot audit dynamically updated outsourced data. We propose an effective RDA technique based on algebraic signature properties for cloud storage systems and also present a new data structure capable of efficiently supporting dynamic data operations such as append, insert, modify, and delete. Moreover, this data structure enables our method to handle large-scale data with minimum computation cost. A comparative analysis with state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server.
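
A minimal sketch of the algebraic-signature idea such schemes build on: a compact signature computed as a weighted sum of block symbols over a finite field, kept by the auditor and compared against the server's response (the prime field, block layout, and audit flow are simplified illustrations, not the paper's construction):

```python
# Hedged sketch: a simplified algebraic signature over a prime field, used to
# spot-check an outsourced block without retrieving the full data set. Field
# size and the audit flow are illustrative simplifications, not the paper's scheme.
P = 2**31 - 1          # prime modulus for the toy field
ALPHA = 7              # fixed field element used as the signature base

def signature(block: bytes) -> int:
    """sig(b) = sum_i b[i] * ALPHA**i  (mod P)."""
    sig, power = 0, 1
    for byte in block:
        sig = (sig + byte * power) % P
        power = (power * ALPHA) % P
    return sig

original = b"outsourced data block"
stored_sig = signature(original)            # kept by the auditor at upload time

# Later audit: the (possibly modified) block held by the server is checked.
tampered = b"outsourced data blocK"
print(signature(original) == stored_sig)    # True  -> block intact
print(signature(tampered) == stored_sig)    # False -> modification detected
```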


Subjects
Algorithms, Computer Security, Information Management/methods, Information Storage and Retrieval/methods, Models, Theoretical, Research Design, Computer Simulation