Results 1 - 8 of 8
1.
Sensors (Basel); 24(15), 2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39124095

ABSTRACT

Wireless sensor networks (WSNs) are essential for a wide range of applications, including environmental monitoring and smart city developments, thanks to their ability to collect and transmit diverse physical and environmental data. The nature of WSNs, coupled with the variability and noise sensitivity of cost-effective sensors, presents significant challenges in achieving accurate data analysis and anomaly detection. To address these issues, this paper presents a new framework, called Online Adaptive Kalman Filtering (OAKF), specifically designed for real-time anomaly detection within WSNs. This framework stands out by dynamically adjusting the filtering parameters and the anomaly detection threshold in response to live data, ensuring accurate and reliable anomaly identification amidst sensor noise and environmental changes. With an emphasis on computational efficiency and scalability, the OAKF framework is optimized for use on resource-constrained sensor nodes. Validation on WSN datasets of different sizes confirmed its effectiveness, showing 95.4% accuracy in reducing false positives and negatives, as well as a processing time of 0.008 s per sample.
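The adaptive-filtering idea described in this abstract can be sketched in a few lines: a scalar Kalman filter tracks each sensor reading, and a threshold derived from a running residual variance flags anomalies. The noise parameters, smoothing factor, and threshold rule below are illustrative assumptions, not the OAKF algorithm itself.

```python
# Minimal sketch of online Kalman-filter-based anomaly detection for a
# 1-D sensor stream. Constants are assumed for demonstration.

class OnlineKalmanDetector:
    def __init__(self, q=1e-3, r=0.5, k_sigma=3.0):
        self.q, self.r = q, r          # process / measurement noise
        self.x, self.p = 0.0, 1.0      # state estimate and its variance
        self.k_sigma = k_sigma         # anomaly threshold in std devs
        self.res_var = 1.0             # running residual variance

    def step(self, z):
        # Predict: variance grows by process noise
        p_pred = self.p + self.q
        residual = z - self.x
        # Track residual variance with an exponential moving average,
        # giving an adaptive threshold that follows the noise level
        self.res_var = 0.95 * self.res_var + 0.05 * residual * residual
        is_anomaly = residual * residual > self.k_sigma ** 2 * self.res_var
        if is_anomaly:
            self.p = p_pred            # skip update; keep the prediction
        else:
            k = p_pred / (p_pred + self.r)   # Kalman gain
            self.x += k * residual
            self.p = (1 - k) * p_pred
        return self.x, is_anomaly

det = OnlineKalmanDetector()
flags = []
for z in [10.0] * 50 + [10.2, 9.9, 60.0, 10.1]:
    _, flag = det.step(z)
    flags.append(flag)
print(flags[52])   # the spike at 60.0 is flagged (True)
```

Rejected measurements are excluded from the state update, so a single outlier does not corrupt the filter's estimate.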

2.
Sensors (Basel); 22(14), 2022 Jul 19.
Article in English | MEDLINE | ID: mdl-35891074

ABSTRACT

According to recent developments, Unmanned Aerial Vehicles (UAVs) appear to be the most efficient way of accomplishing many aerial tasks. Researchers across the world have studied a variety of UAV formations and path planning methodologies. However, when unexpected obstacles arise during a collective flight, path planning can become complicated. Hybrid bio-inspired algorithms can address such path planning issues with greater stability and speed. In this article, two hybrid models of Ant Colony Optimization were compared with respect to convergence time: the Max-Min Ant Colony Optimization approach combined with Differential Evolution (MMACO-DE) and with the Cauchy mutation operator (MMACO-CM). Each algorithm was run on a UAV that traveled a predetermined path to evaluate its approach. In terms of the route taken and convergence time, the simulation results suggest that the MMACO-DE technique outperforms the MMACO-CM approach.
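As a concrete illustration of the Max-Min idea (only the best ant deposits pheromone, and pheromone levels are clamped to a [min, max] band), here is a toy Max-Min Ant System finding the cheapest route on a small cost grid. The grid, parameters, and deposit rule are assumptions for demonstration; the paper's hybrid MMACO-DE and MMACO-CM variants add Differential Evolution and Cauchy mutation on top of a base scheme like this.

```python
import random
random.seed(1)

# Cost grid; ants travel from top-left to bottom-right moving right/down.
# Path cost is the sum of the costs of cells entered (start excluded).
COST = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
N = 3
tau = {}                              # pheromone per edge
TAU_MIN, TAU_MAX, RHO = 0.1, 5.0, 0.3

def edges_from(r, c):
    out = []
    if c + 1 < N: out.append((r, c + 1))
    if r + 1 < N: out.append((r + 1, c))
    return out

def walk():
    pos, path, cost = (0, 0), [(0, 0)], 0
    while pos != (N - 1, N - 1):
        opts = edges_from(*pos)
        # Probability ~ pheromone * heuristic (1 / cost of next cell)
        weights = [tau.get((pos, o), TAU_MAX) / (1 + COST[o[0]][o[1]])
                   for o in opts]
        pos = random.choices(opts, weights)[0]
        path.append(pos)
        cost += COST[pos[0]][pos[1]]
    return path, cost

best_path, best_cost = None, float("inf")
for _ in range(60):
    iter_best = min((walk() for _ in range(10)), key=lambda pc: pc[1])
    if iter_best[1] < best_cost:
        best_path, best_cost = iter_best
    # Evaporate, then let only the best ant deposit (Max-Min rule),
    # keeping every pheromone value clamped to [TAU_MIN, TAU_MAX]
    for e in list(tau):
        tau[e] = max(TAU_MIN, (1 - RHO) * tau[e])
    for a, b in zip(best_path, best_path[1:]):
        tau[(a, b)] = min(TAU_MAX, tau.get((a, b), TAU_MAX) + 1.0 / best_cost)

print(best_cost)   # cheapest route cost on this grid (4)
```

The clamping to [TAU_MIN, TAU_MAX] is what distinguishes Max-Min from plain Ant System: no edge's probability ever collapses to zero, which preserves exploration.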


Subjects
Algorithms, Computer Simulation
3.
Sensors (Basel); 22(7), 2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35408423

ABSTRACT

A vehicular ad hoc network (VANET) is an emerging technology that improves road safety, traffic efficiency, and passenger comfort. VANET applications rely on co-operation among vehicles, which periodically share context information such as position, speed, and acceleration at a high rate due to high vehicle mobility. However, rogue nodes, which exploit this co-operativeness and share false messages, can disrupt the fundamental operations of any potential application and cause the loss of people's lives and property. Unfortunately, most current solutions cannot effectively detect rogue nodes due to continuous context change and their failure to consider dynamic data uncertainty during identification. Although a few context-aware solutions have been proposed for VANETs, most of them are data-centric: a vehicle is considered malicious if it shares false or inaccurate messages. Such a rule is fuzzy and not consistently accurate due to the dynamic uncertainty of the vehicular context, which leads to a poor detection rate. To this end, this study proposes a fuzzy-based context-aware detection model to improve the overall detection performance. A fuzzy inference system is constructed to evaluate vehicles based on the information they generate, and its output is used to build a dynamic context reference. Vehicles are classified as honest or rogue nodes based on how far their evaluation scores, calculated using the proposed fuzzy inference system, deviate from the context reference. Extensive experiments were carried out to evaluate the proposed model. Results show that it outperforms the state-of-the-art models, achieving a 7.88% improvement in overall performance and a 16.46% improvement in detection rate compared to the state-of-the-art model.
The proposed model can be used to evict rogue nodes, and thus improve the safety and traffic efficiency of crewed or uncrewed vehicles designed for different environments (land, naval, or air).
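A minimal sketch of the fuzzy-evaluation step: inputs describing how plausible a vehicle's reports are (here, position error and speed deviation, both assumed variables) are fuzzified with triangular membership functions, and a small Sugeno-style rule base yields a crisp trust score. The membership shapes and rules are illustrative, not the paper's actual inference system.

```python
# Illustrative fuzzy evaluation of a vehicle's reported data.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def evaluate(pos_error_m, speed_dev_ms):
    # Fuzzify the two inputs
    err_low  = tri(pos_error_m, -5, 0, 5)
    err_high = tri(pos_error_m, 3, 10, 17)
    dev_low  = tri(speed_dev_ms, -3, 0, 3)
    dev_high = tri(speed_dev_ms, 2, 8, 14)
    # Sugeno-style rules: each fires with min() strength and proposes a
    # crisp trust score (1.0 = honest, 0.0 = rogue)
    rules = [
        (min(err_low, dev_low),   1.0),
        (min(err_low, dev_high),  0.5),
        (min(err_high, dev_low),  0.4),
        (min(err_high, dev_high), 0.0),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5   # weighted-average defuzzification

honest = evaluate(pos_error_m=1.0, speed_dev_ms=0.5)
rogue  = evaluate(pos_error_m=9.0, speed_dev_ms=7.0)
print(honest, rogue)   # honest score near 1, rogue score near 0
```

In the model described above, such per-vehicle scores would then be compared against a dynamic context reference rather than a fixed cutoff.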

4.
Sensors (Basel); 22(10), 2022 May 10.
Article in English | MEDLINE | ID: mdl-35632016

ABSTRACT

The Internet of Things (IoT) is a widely used technology in automated network systems across the world, and in recent years it has had a substantial impact on many industries. Many IoT nodes collect, store, and process personal data, which makes them an ideal target for attackers. Several researchers have worked on this problem and have presented many intrusion detection systems (IDSs); however, existing systems have difficulty improving performance and identifying subcategories of cyberattacks. This paper proposes a deep-convolutional-neural-network (DCNN)-based IDS, consisting of two convolutional layers and three fully connected dense layers, that aims to improve performance while reducing computational power. Experiments were conducted using the IoTID20 dataset. The performance of the proposed model was analyzed with several metrics, such as accuracy, precision, recall, and F1-score. A number of optimization techniques were applied to the proposed model, of which Adam, AdaMax, and Nadam performed best. In addition, the proposed model was compared with various advanced deep learning (DL) and traditional machine learning (ML) techniques. All experimental analysis indicates that the accuracy of the proposed approach is higher and more robust than existing DL-based algorithms.
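The layer layout described above (two convolutional layers feeding three dense layers) can be sketched shape-wise in plain Python. Kernel sizes, filter counts, the toy fixed weights, and the nine-class output are assumptions, since the paper's hyperparameters are not given here; a real implementation would learn the weights.

```python
# Shape sketch of a DCNN forward pass: conv -> conv -> dense x3.

def conv1d(x, kernels):
    """Valid 1-D convolution + ReLU over a flat feature vector."""
    k = len(kernels[0])
    out = []
    for w in kernels:
        row = []
        for i in range(len(x) - k + 1):
            s = sum(w[j] * x[i + j] for j in range(k))
            row.append(max(0.0, s))              # ReLU activation
        out.append(row)
    return [v for row in out for v in row]       # flatten feature maps

def dense(x, n_out):
    # Toy fixed-weight dense layer with ReLU; real weights are learned
    return [max(0.0, sum(x) / len(x) + 0.01 * u) for u in range(n_out)]

features = [0.2, 0.7, 0.1, 0.9, 0.3, 0.5, 0.8, 0.4]   # one flow record
h = conv1d(features, kernels=[[0.5, -0.5, 0.2]] * 4)  # conv layer 1
h = conv1d(h, kernels=[[0.3, 0.3, 0.3]] * 2)          # conv layer 2
h = dense(h, 64)                                      # dense layer 1
h = dense(h, 32)                                      # dense layer 2
logits = dense(h, 9)   # dense layer 3: 9 assumed attack subcategories
print(len(logits))     # one score per class
```

Using convolutions before the dense layers is what keeps the parameter count (and hence the computational power needed) lower than a fully dense network of the same depth.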


Subjects
Internet of Things, Algorithms, Machine Learning, Neural Networks (Computer)
5.
Sensors (Basel); 22(1), 2021 Dec 28.
Article in English | MEDLINE | ID: mdl-35009725

ABSTRACT

Due to the wide availability and usage of connected devices in Internet of Things (IoT) networks, the number of attacks on these networks is continually increasing. A particularly serious and dangerous type of attack in the IoT environment is the botnet attack, in which attackers control IoT systems to create enormous networks of "bot" devices that generate malicious activity. To detect this type of attack, several Intrusion Detection Systems (IDSs) based on machine learning and deep learning methods have been proposed for IoT networks. Because IoT systems are characterized by limited battery power and processor capacity, maximizing the efficiency of intrusion detection for IoT networks remains a research challenge: methods must be efficient and effective, with low computational time and high detection rates. This paper proposes an aggregated mutual-information-based feature selection approach combined with machine learning methods to enhance the detection of IoT botnet attacks. In this study, the N-BaIoT benchmark dataset was used to detect botnet attack types using real traffic data gathered from nine commercial IoT devices; the dataset supports both binary and multi-class classification. The feature selection method incorporates the Mutual Information (MI) technique, Principal Component Analysis (PCA), and the ANOVA F-test at a finely granulated detection level to select the features most relevant to improving the performance of IoT botnet classifiers. In the classification step, several ensemble and individual classifiers were used, including Random Forest (RF), XGBoost (XGB), Gaussian Naïve Bayes (GNB), k-Nearest Neighbor (k-NN), Logistic Regression (LR), and Support Vector Machine (SVM). The experimental results showed the efficiency and effectiveness of the proposed approach, which outperformed other techniques on various evaluation metrics.
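The core of a mutual-information filter can be shown in a few lines: score each (discretized) feature by its MI with the class label and keep the top scorers. This sketch covers only the MI part; the aggregation with PCA and the ANOVA F-test described above is not reproduced, and the toy data are assumptions.

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """MI (in bits) between a discrete feature and class labels."""
    n = len(feature)
    pxy = Counter(zip(feature, labels))        # joint counts
    px, py = Counter(feature), Counter(labels) # marginal counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

labels      = [0, 0, 0, 0, 1, 1, 1, 1]   # benign vs botnet (toy)
informative = [0, 0, 0, 0, 1, 1, 1, 1]   # tracks the label exactly
noisy       = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label

scores = {"informative": mutual_information(informative, labels),
          "noisy": mutual_information(noisy, labels)}
print(scores)   # informative: 1.0 bit, noisy: 0.0 bits
```

A feature that fully determines the label scores 1 bit here, while an independent feature scores 0, which is why MI ranking discards irrelevant traffic features cheaply before the classifier is trained.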


Subjects
Internet of Things, Bayes Theorem, Machine Learning, Principal Component Analysis, Support Vector Machine
6.
Sensors (Basel); 21(23), 2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34884022

ABSTRACT

Wireless Sensor Networks (WSNs) have been the focus of significant research and development attention due to their applications in collecting data from fields such as smart cities, power grids, transportation systems, medical sectors, the military, and rural areas. Accurate and reliable measurements for insightful data analysis and decision-making are the ultimate goals of sensor networks in critical domains. However, the raw data collected by WSNs are often unreliable and inaccurate due to the imperfect nature of WSNs. Identifying misbehaviour or anomalies in the network is important for reliable and secure network operation; however, due to resource constraints, a lightweight detection scheme is a major design challenge in sensor networks. This paper aims at designing and developing a lightweight anomaly detection scheme that reduces computational complexity, communication overhead, and memory utilization while maintaining high accuracy. To achieve this aim, one-class learning and dimension reduction were used in the design. The One-Class Support Vector Machine (OCSVM) with hyper-ellipsoid variance was used for anomaly detection due to its advantage in classifying unlabelled, multivariate data. Various OCSVM formulations were investigated, and the Centred-Ellipsoid kernel was adopted in this study as the most effective among them. To decrease computational complexity and improve memory utilization, the dimensions of the data were reduced using the Candid Covariance-Free Incremental Principal Component Analysis (CCIPCA) algorithm. Extensive experiments were conducted to evaluate the proposed lightweight anomaly detection scheme.
Results in terms of detection accuracy, memory utilization, computational complexity, and communication overhead show that the proposed scheme is effective and efficient compared with the existing schemes evaluated: it achieved accuracy higher than 98%, with O(nd) memory utilization and no communication overhead.
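The CCIPCA step can be illustrated compactly: the first principal component is estimated one sample at a time, without ever forming a covariance matrix, which is what keeps memory at O(nd). This 2-D toy version (the data and the plain 1/n learning weights are assumptions) shows only the incremental update; the full algorithm also deflates the data to extract further components.

```python
# Minimal CCIPCA sketch: incremental estimate of the first principal
# component of a zero-mean data stream.

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return dot(a, a) ** 0.5

def ccipca_first_component(samples):
    v = None
    for n, u in enumerate(samples, start=1):
        if v is None:
            v = list(u)                 # initialize with the first sample
            continue
        w_old, w_new = (n - 1) / n, 1 / n
        proj = dot(u, v) / norm(v)      # projection of u onto v's direction
        # CCIPCA update: v <- w_old * v + w_new * (u . v/|v|) * u
        v = [w_old * vi + w_new * proj * ui for vi, ui in zip(v, u)]
    nv = norm(v)
    return [vi / nv for vi in v]

# Zero-mean data stretched along the (1, 1) direction
data = [(1.0, 1.1), (-1.0, -0.9), (2.0, 1.9), (-2.1, -2.0),
        (1.5, 1.4), (-1.4, -1.6), (0.9, 1.0), (-1.0, -1.1)] * 5
pc1 = ccipca_first_component(data)
print(pc1)   # converges toward the dominant direction, ~(0.707, 0.707)
```

Each update touches only the current sample and the running vector v, so a sensor node never needs to buffer the dataset or a d-by-d covariance matrix.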


Subjects
Computer Communication Networks, Wireless Technology, Algorithms, Principal Component Analysis, Support Vector Machine
7.
Biomimetics (Basel); 8(6), 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37887588

ABSTRACT

During the coronavirus disease (COVID-19) pandemic, statistics showed that the number of affected cases differed from one country to another and from one city to another. Therefore, in this paper, we provide an enhanced model for predicting COVID-19 samples in different regions of Saudi Arabia (high-altitude and sea-level areas). The model was developed in several stages and was successfully trained and tested using two datasets collected from Taif city (a high-altitude area) and Jeddah city (a sea-level area) in Saudi Arabia. Binary particle swarm optimization (BPSO) is used in this study for feature selection with three different machine learning models: the random forest model, the gradient boosting model, and the naive Bayes model. A number of evaluation metrics, including accuracy, training score, testing score, F-measure, recall, precision, and the receiver operating characteristic (ROC) curve, were calculated to verify the performance of the three machine learning models on these datasets. The experimental results demonstrated that the gradient boosting model gives better results than the random forest and naive Bayes models, with an accuracy of 94.6% on the Taif city dataset. On the Jeddah city dataset, the random forest model outperforms the gradient boosting and naive Bayes models, with an accuracy of 95.5%. In terms of accuracy, the Jeddah city dataset achieved better results than the Taif city dataset using the enhanced model.
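The BPSO step used for feature selection can be sketched as follows: each particle is a bit-mask over features, velocities are updated as in standard PSO, and a sigmoid maps each velocity to the probability of the corresponding bit being 1. The toy fitness function (rewarding a known-good subset) stands in for the classifier score the paper would use; all constants and the GOOD set are assumptions.

```python
import math, random
random.seed(7)

GOOD = {0, 3, 5}                 # features assumed informative (toy)
N_FEATURES, N_PARTICLES, ITERS = 8, 12, 40

def fitness(bits):
    chosen = {i for i, b in enumerate(bits) if b}
    # Reward overlap with GOOD, lightly penalize extra features
    return len(chosen & GOOD) - 0.2 * len(chosen - GOOD)

def sigmoid(v):
    return 1 / (1 + math.exp(-v))

particles = [[random.randint(0, 1) for _ in range(N_FEATURES)]
             for _ in range(N_PARTICLES)]
vel = [[0.0] * N_FEATURES for _ in range(N_PARTICLES)]
pbest = [list(p) for p in particles]
gbest = list(max(particles, key=fitness))

for _ in range(ITERS):
    for i, p in enumerate(particles):
        for d in range(N_FEATURES):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - p[d])
                         + 1.5 * r2 * (gbest[d] - p[d]))
            # Sigmoid transfer: velocity -> probability that the bit is 1
            p[d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
        if fitness(p) > fitness(pbest[i]):
            pbest[i] = list(p)
        if fitness(p) > fitness(gbest):
            gbest = list(p)

print(sorted(i for i, b in enumerate(gbest) if b))   # selected features
```

The sigmoid transfer is what makes this the binary PSO variant: continuous velocities are kept, but positions are resampled as bits each iteration.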

8.
Comput Intell Neurosci; 2021: 6089677, 2021.
Article in English | MEDLINE | ID: mdl-34934420

ABSTRACT

The rapid emergence of the novel SARS-CoV-2 poses a challenge and has attracted worldwide attention. Artificial intelligence (AI) can be used to combat this pandemic and control the spread of the virus. In particular, deep learning-based time-series techniques are used to predict worldwide COVID-19 cases for short-term and medium-term dependencies using adaptive learning. This study aimed to predict daily COVID-19 cases and investigate the critical factors that increase the transmission rate of this outbreak by examining different influential factors. Furthermore, the study analyzed the effectiveness of COVID-19 prevention measures. A fully connected deep neural network, long short-term memory (LSTM), and transformer model were used as the AI models for the prediction of new COVID-19 cases. Initially, data preprocessing and feature extraction were performed using COVID-19 datasets from Saudi Arabia. The performance metrics for all models were computed, and the results were subjected to comparative analysis to detect the most reliable model. Additionally, statistical hypothesis analysis and correlation analysis were performed on the COVID-19 datasets by including features such as daily mobility, total cases, people fully vaccinated per hundred, weekly hospital admissions per million, intensive care unit patients, and new deaths per million. The results show that the LSTM algorithm had the highest accuracy of all the algorithms and an error of less than 2%. The findings of this study contribute to our understanding of COVID-19 containment. This study also provides insights into the prevention of future outbreaks.
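For the sequence models mentioned above (the fully connected network, LSTM, and transformer), daily case counts are typically framed as sliding windows of past days predicting the next day. The window length below is an assumed hyperparameter; the paper's exact preprocessing is not specified here.

```python
# Sketch of framing a daily case series as supervised (window, target) pairs.

def make_windows(series, window=7):
    """Turn a series into (past-`window`-days, next-day) training pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y

daily_cases = [100, 120, 150, 170, 160, 180, 210, 230, 220, 250]
X, y = make_windows(daily_cases, window=7)
print(len(X), X[0], y[0])   # 3 pairs; first window predicts day 8
```

Extra covariates such as mobility or vaccination rates, as analyzed in the study, would be appended to each window as additional input features.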


Subjects
COVID-19, Algorithms, Artificial Intelligence, Humans, SARS-CoV-2, Saudi Arabia/epidemiology