Results 1 - 20 of 28

1.
Sensors (Basel); 23(4), 2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850714

ABSTRACT

Video streaming-based real-time vehicle identification and license plate recognition systems are challenging to design and deploy in terms of real-time processing at the edge, dealing with low image resolution, high noise, and reliable identification. This paper addresses these issues by introducing a novel multi-stage, real-time, deep learning-based vehicle identification and license plate recognition system. The system is based on a set of algorithms that efficiently integrate two object detectors, an image classifier, and a multi-object tracker to recognize car models and license plates. The information redundancy of the Arabic and English characters on Saudi license plates is leveraged to boost license plate recognition accuracy while satisfying real-time inference performance. The system achieves real-time performance on edge GPU devices and maximizes the models' accuracy by taking advantage of the temporally redundant information in the video stream's frames. The edge device sends a notification of the detected vehicle and its license plate to the cloud only once, after completing the processing. The system was experimentally evaluated on vehicles and license plates in real-world unconstrained environments at several parking entrance gates. It achieves 17.1 FPS on a Jetson Xavier AGX edge device with no delay. A comparison between the accuracy on the videos and on static images extracted from them shows that processing video streams with the proposed system enhances the relative accuracy of car model and license plate recognition by 13% and 40%, respectively. This research work won two awards in 2021 and 2022.
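
The abstract does not spell out how per-frame results are fused, so the following is only a minimal Python sketch of one plausible reading of the temporal-redundancy idea: majority-voting the car-model and plate readings accumulated for a single tracked vehicle before the one-time cloud notification. The function name, vote threshold, and example labels are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def fuse_track_predictions(per_frame_labels, min_votes=3):
    """Fuse per-frame recognition results for one tracked vehicle.

    per_frame_labels: list of (car_model, plate_string) tuples, one per frame
    in which the tracker followed the same vehicle. Returns the majority
    result once enough frames agree, else None (keep accumulating frames).
    """
    if len(per_frame_labels) < min_votes:
        return None
    models = Counter(m for m, _ in per_frame_labels)
    plates = Counter(p for _, p in per_frame_labels)
    best_model, model_votes = models.most_common(1)[0]
    best_plate, plate_votes = plates.most_common(1)[0]
    if model_votes >= min_votes and plate_votes >= min_votes:
        return best_model, best_plate   # send a single notification to the cloud
    return None

# Example: three frames of one tracked car; noisy OCR on the second frame
frames = [("Toyota Camry", "ABC1234"), ("Toyota Camry", "A8C1234"), ("Toyota Camry", "ABC1234")]
print(fuse_track_predictions(frames))   # ('Toyota Camry', 'ABC1234')
```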

2.
Sensors (Basel); 23(10), 2023 May 11.
Article in English | MEDLINE | ID: mdl-37430585

ABSTRACT

Having access to safe water and using it properly is crucial for human well-being, sustainable development, and environmental conservation. Nonetheless, the increasing disparity between human demands and natural freshwater resources is causing water scarcity, negatively impacting agricultural and industrial efficiency, and giving rise to numerous social and economic issues. Understanding and managing the causes of water scarcity and water quality degradation are essential steps toward more sustainable water management and use. In this context, continuous Internet of Things (IoT)-based water measurements are becoming increasingly crucial in environmental monitoring. However, these measurements are plagued by uncertainty issues that, if not handled correctly, can introduce bias and inaccuracy into our analysis, decision-making processes, and results. To cope with uncertainty issues related to sensed water data, we propose combining network representation learning with uncertainty handling methods to ensure rigorous and efficient modeling and management of water resources. The proposed approach involves accounting for uncertainties in the water information system by leveraging probabilistic techniques and network representation learning. It creates a probabilistic embedding of the network, enabling the classification of uncertain representations of water information entities, and applies evidence theory to enable decision making that is aware of uncertainties, ultimately choosing appropriate management strategies for affected water areas.
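
The paper mentions evidence theory for uncertainty-aware decision making; as a rough, self-contained illustration of that ingredient (not the authors' pipeline), the sketch below combines two mass functions over the states of a water area using Dempster's rule of combination. The state labels and mass values are invented for the example.

```python
def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sources assess whether a water area is 'safe' or 'degraded' (toy values)
safe, degraded = frozenset({"safe"}), frozenset({"degraded"})
both = safe | degraded                        # mass assigned to ignorance
m_sensor = {safe: 0.6, degraded: 0.1, both: 0.3}
m_model  = {safe: 0.5, degraded: 0.2, both: 0.3}
print(dempster_combine(m_sensor, m_model))
```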

3.
Sensors (Basel); 23(5), 2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36904630

ABSTRACT

In Internet of Things (IoT) applications, where many devices are connected for a specific purpose, data are continuously collected, communicated, processed, and stored between the nodes. However, all connected nodes operate under strict constraints, such as battery usage, communication throughput, processing power, processing load, and storage limitations. The large number of constraints and nodes makes standard methods for regulating them impractical, so using machine learning approaches to manage them better is attractive. In this study, a new framework for the data management of IoT applications is designed and implemented. The framework, called MLADCF (Machine Learning Analytics-based Data Classification Framework), is a two-stage framework that combines a regression model and a Hybrid Resource Constrained KNN (HRCKNN). It learns from the analytics of real scenarios of the IoT application. The framework's parameters, the training procedure, and its application in real scenarios are described in detail. MLADCF showed improved efficiency compared with existing approaches when tested on four different datasets. Moreover, it reduced the global energy consumption of the network, leading to an extended battery life of the connected nodes.
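
MLADCF's exact regressor and the Hybrid Resource Constrained KNN are not specified in the abstract, so the following is only a schematic two-stage sketch of the general idea: a regression model scores a node's resource capacity, and a KNN classifier then routes each data item using that score as an extra feature. The feature names, decision classes, and models are placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stage 1: regress a node "capacity" score from resource readings
# (battery level, free memory, link throughput) -- synthetic data here.
resources = rng.uniform(0, 1, size=(200, 3))
capacity = resources @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.02, 200)
capacity_model = LinearRegression().fit(resources, capacity)

# Stage 2: a KNN classifier decides where each data item goes
# (0 = process locally, 1 = forward to edge, 2 = send to cloud),
# using the item's features plus the node's predicted capacity.
item_features = rng.uniform(0, 1, size=(200, 4))
node_capacity = capacity_model.predict(resources).reshape(-1, 1)
X = np.hstack([item_features, node_capacity])
y = rng.integers(0, 3, size=200)             # placeholder labels
router = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(router.predict(X[:5]))
```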

4.
Sensors (Basel); 23(19), 2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37837162

ABSTRACT

This work presents a comparison of low-rank-based learning models for the multi-label categorization of attacks in intrusion detection datasets. In particular, we investigate the performance of one low-rank-based machine learning model (LR-SVM) and two deep learning models (LR-CNN and LR-CNN-MLP) for classifying intrusion detection data, using Low-Rank Representation (LRR) and Non-negative Low-Rank Representation (NLR). We also examine how these models' performance is affected by hyperparameter tuning using Gaussian Bayesian optimization. The tests were run on a merger of two publicly available intrusion detection datasets, BoT-IoT and UNSW-NB15, and the models' performance was assessed in terms of key evaluation criteria, including precision, recall, F1 score, and accuracy. All three models perform noticeably better after hyperparameter tuning. The selection of low-rank-based learning models and the significance of hyperparameter tuning for the multi-label classification of intrusion detection data are discussed in this work. A hybrid security dataset is used with low-rank factorization in addition to SVM, CNN, and CNN-MLP. The desired multi-label results were obtained by considering both binary and multi-class attack classification. Low-rank CNN-MLP achieved suitable results in the multi-label classification of attacks. A Gaussian-based Bayesian optimization algorithm is used with CNN-MLP for hyperparameter tuning, and the desired results were achieved using C and γ for SVM and α and β for CNN and CNN-MLP on the hybrid dataset. The results show that the UDP label is shared among the analysis, DoS, and shellcode classes; the accuracy of classifying UDP among these three classes is 98.54%.
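
As a hedged illustration of the tuning step described above (not the paper's code), the sketch below projects synthetic features onto a low-rank subspace with truncated SVD and then uses Gaussian-process Bayesian optimization, via scikit-optimize's BayesSearchCV (an assumed tool; the paper does not name its tooling), to tune the SVM's C and γ. The dataset, search ranges, and iteration budget are assumptions.

```python
from skopt import BayesSearchCV
from skopt.space import Real
from sklearn.svm import SVC
from sklearn.decomposition import TruncatedSVD
from sklearn.datasets import make_classification

# Synthetic stand-in for the merged BoT-IoT / UNSW-NB15 feature matrix.
X, y = make_classification(n_samples=500, n_features=30, n_informative=12,
                           n_classes=3, random_state=0)

# Low-rank representation of the features (rank chosen arbitrarily here).
X_lr = TruncatedSVD(n_components=10, random_state=0).fit_transform(X)

# Gaussian-process Bayesian optimization over the SVM's C and gamma.
search = BayesSearchCV(
    SVC(kernel="rbf"),
    {"C": Real(1e-2, 1e3, prior="log-uniform"),
     "gamma": Real(1e-4, 1e1, prior="log-uniform")},
    n_iter=20, cv=3, random_state=0)
search.fit(X_lr, y)
print(search.best_params_, search.best_score_)
```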

5.
Sensors (Basel); 23(21), 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37960536

ABSTRACT

Wireless Sensor Networks (WSNs) and the Internet of Things (IoT) have emerged as transformative technologies with the potential to revolutionize a wide range of industries, such as environmental monitoring, agriculture, manufacturing, smart health, home automation, wildlife monitoring, and surveillance. Population expansion, climate change, and resource constraints all pose problems for modern IoT applications, and the integration of WSNs and the IoT has come forth as a game-changing solution. For example, in agricultural environments, IoT-based WSNs have been utilized to monitor yield conditions and automate precision agriculture through different sensors. These sensors are used in agricultural environments to boost productivity through intelligent agricultural decisions and to collect data on crop health, soil moisture, temperature, and irrigation. However, sensors have finite, non-rechargeable batteries and limited memory, which can have a negative impact on network performance, and when a network is distributed over a vast area, the performance of WSN-assisted IoT suffers. As a result, building a stable and energy-efficient routing infrastructure to extend network lifetime is quite challenging. To address energy-related issues in scalable WSN-IoT environments for future IoT applications, this research proposes EEDC, an Energy Efficient Data Communication scheme that utilizes Region-based Hierarchical Clustering for Efficient Routing (RHCER), a multi-tier clustering framework for energy-aware routing decisions. The sensors deployed for IoT application data collection acquire important data and select cluster heads based on a multi-criteria decision function. Further, to ensure efficient long-distance communication along with an even load distribution across all network nodes, a subdivision technique is employed in each tier of the proposed framework. The proposed routing protocol aims to provide network load balancing and to convert long-distance communication into shortened multi-hop communications, hence enhancing network lifetime. The performance of EEDC is compared to that of some existing energy-efficient protocols for various parameters. The simulation results show that the suggested methodology reduces energy usage in sensor nodes by almost 31% and improves the packet drop ratio by almost 38%.
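
The multi-criteria decision function for cluster-head selection is not detailed in the abstract; the snippet below is a minimal, hypothetical weighted-score version that picks one head per region from residual energy, centrality, and distance to the sink. The criteria, weights, and node records are illustrative.

```python
def select_cluster_heads(nodes, weights=(0.5, 0.3, 0.2), heads_per_region=1):
    """Pick cluster heads per region using a weighted multi-criteria score.

    nodes: list of dicts with 'id', 'region', 'energy' (residual, 0-1),
    'centrality' (closeness to the region centre, 0-1) and 'distance_to_sink'
    (normalised, 0-1). Higher score wins; distance counts negatively.
    """
    w_energy, w_central, w_dist = weights
    regions = {}
    for n in nodes:
        score = (w_energy * n["energy"] + w_central * n["centrality"]
                 - w_dist * n["distance_to_sink"])
        regions.setdefault(n["region"], []).append((score, n["id"]))
    return {r: [nid for _, nid in sorted(cands, reverse=True)[:heads_per_region]]
            for r, cands in regions.items()}

nodes = [
    {"id": "n1", "region": "A", "energy": 0.9, "centrality": 0.6, "distance_to_sink": 0.4},
    {"id": "n2", "region": "A", "energy": 0.5, "centrality": 0.9, "distance_to_sink": 0.2},
    {"id": "n3", "region": "B", "energy": 0.8, "centrality": 0.7, "distance_to_sink": 0.7},
]
print(select_cluster_heads(nodes))   # e.g. {'A': ['n1'], 'B': ['n3']}
```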

6.
Sensors (Basel); 23(20), 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37896456

ABSTRACT

Intrusion detection systems (IDSs) are widely regarded as one of the most essential components of an organization's network security: they serve as the organization's first line of defense against several cyberattacks and are accountable for accurately detecting possible network intrusions. Several implementations of IDSs accomplish the detection of potential threats through flow-based network traffic analysis. However, traditional IDSs frequently struggle to provide accurate real-time intrusion detection while keeping up with the changing threat landscape, and innovative methods to improve IDSs' performance in network traffic analysis are urgently needed to overcome these drawbacks. In this study, we introduce a model called a deep neural decision forest (DNDF), which enhances classification trees with the power of deep networks to learn data representations. We primarily utilized the CICIDS 2017 dataset for network traffic analysis and extended our experiments to evaluate the DNDF model's performance on two additional datasets: CICIDS 2018 and a custom network traffic dataset. Our findings showed that DNDF, a combination of deep neural networks and decision forests, outperformed reference approaches with a remarkable precision of 99.96% on the CICIDS 2017 dataset while creating latent representations in deep layers. This success can be attributed to improved feature representation, model optimization, and resilience to noisy and unbalanced input data, emphasizing DNDF's capabilities in intrusion detection and network security solutions.
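
A deep neural decision forest couples a neural feature extractor with differentiable, soft-routing decision trees. The PyTorch sketch below shows a single soft tree on top of a small feed-forward extractor to convey the idea; the layer sizes, tree depth, and 78-feature flow representation are assumptions, and a full DNDF would train an ensemble of such trees jointly with the network.

```python
import torch
import torch.nn as nn

class SoftDecisionTree(nn.Module):
    """One differentiable tree of a deep neural decision forest (simplified sketch)."""
    def __init__(self, in_features, depth=3, n_classes=2):
        super().__init__()
        self.depth = depth
        self.n_leaves = 2 ** depth
        # one sigmoid routing unit per internal node
        self.decisions = nn.Linear(in_features, self.n_leaves - 1)
        # learnable class distribution per leaf
        self.leaf_logits = nn.Parameter(torch.zeros(self.n_leaves, n_classes))

    def forward(self, x):
        d = torch.sigmoid(self.decisions(x))             # (B, n_internal) go-right probs
        mu = torch.ones(x.size(0), 1, device=x.device)   # path probability of the root
        begin = 0
        for level in range(self.depth):
            n_nodes = 2 ** level
            probs = d[:, begin:begin + n_nodes]           # routing probs at this level
            mu = torch.stack([mu * (1 - probs), mu * probs], dim=2).flatten(1)
            begin += n_nodes
        leaf_dist = torch.softmax(self.leaf_logits, dim=1)
        return mu @ leaf_dist                              # (B, n_classes)

# Deep feature extractor feeding the tree, as in the DNDF idea
net = nn.Sequential(nn.Linear(78, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
tree = SoftDecisionTree(32, depth=3, n_classes=2)
x = torch.randn(4, 78)                                     # 4 flows, 78 CICIDS-style features
print(tree(net(x)))                                        # per-flow class probabilities
```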

7.
Sensors (Basel); 23(5), 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36904589

ABSTRACT

The Vision Transformer (ViT) architecture has been remarkably successful in image restoration. For a long time, Convolutional Neural Networks (CNNs) predominated in most computer vision tasks; now, both CNNs and ViTs are efficient approaches that demonstrate powerful capabilities to restore a better version of an image given in a low-quality format. In this study, the efficiency of ViT in image restoration is studied extensively, and ViT architectures are classified for each image restoration task. Seven image restoration tasks are considered: Image Super-Resolution, Image Denoising, General Image Enhancement, JPEG Compression Artifact Reduction, Image Deblurring, Removing Adverse Weather Conditions, and Image Dehazing. The outcomes, advantages, limitations, and possible areas for future research are detailed. Overall, it is noted that incorporating ViT in new architectures for image restoration is becoming the rule. This is due to several advantages compared to CNNs, such as better efficiency, especially when more data are fed to the network, robustness in feature extraction, and a feature learning approach that better captures the variances and characteristics of the input. Nevertheless, some drawbacks exist, such as the need for more data to show the benefits of ViT over CNNs, the increased computational cost due to the complexity of the self-attention block, a more challenging training process, and a lack of interpretability. These drawbacks represent future research directions that should be targeted to increase the efficiency of ViT in the image restoration domain.

8.
Sensors (Basel); 22(4), 2022 Feb 20.
Article in English | MEDLINE | ID: mdl-35214554

ABSTRACT

Information fusion of the various data types emanating from many sources in automated vehicles is the foundation for decision-making in intelligent autonomous cars. To facilitate data sharing, a variety of communication methods have been integrated to build a diverse V2X infrastructure. However, current information fusion security frameworks are intended for specific application instances and are insufficient to fulfill the overall requirements of Mutual Intelligent Transportation Systems (MITS). In this work, a data fusion security infrastructure with varying degrees of trust has been developed. Furthermore, this paper offers an efficient and effective information fusion security mechanism for multi-source, multi-type data sharing in heterogeneous V2X networks. An area-based PKI architecture accelerated by a Graphics Processing Unit (GPU) is presented, particularly for fast group key exchange based on artificial neural synchronization. A parametric test is performed to ensure that the proposed data fusion trust solution meets the stringent delay requirements of V2X systems. The efficiency of the suggested method is tested, and the results show that it surpasses similar strategies already in use.


Subjects
Autonomous Vehicles, Computer Security, Automobiles, Transportation
9.
Sensors (Basel); 22(3), 2022 Jan 23.
Article in English | MEDLINE | ID: mdl-35161594

ABSTRACT

For many years, mental health has been hidden behind a veil of shame and prejudice. In 2017, studies claimed that 10.7% of the global population suffered from mental health disorders. Recently, people have started seeking treatment through technology, which has enhanced and expanded mental health care, especially during the COVID-19 pandemic, when the use of mental health forums, websites, and applications increased by 95%. However, these solutions still have many limits, as existing mental health technologies are not meant for everyone. In this work, an up-to-date literature review of state-of-the-art mental health and healthcare solutions is provided. We then focus on Arabic-speaking patients and propose an intelligent tool for mental health intent recognition. The proposed system uses the concepts of intent recognition to make mental health diagnoses based on a Bidirectional Encoder Representations from Transformers (BERT) model and the Mini-International Neuropsychiatric Interview (MINI). Experiments are conducted using a dataset collected at the Military Hospital of Tunis in Tunisia. Results show excellent performance of the proposed system (accuracy over 92%; precision, recall, and F1 scores over 94%) in diagnosing mental health patients across five aspects (depression, suicidality, panic disorder, social phobia, and adjustment disorder). In addition, the tool was tested and evaluated by medical staff at the Military Hospital of Tunis, who found it very helpful for supporting decision-making and prioritizing patient appointment scheduling, especially given the high number of patients treated every day.
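
A minimal sketch of the intent-recognition step, assuming a Hugging Face BERT checkpoint with a five-way classification head; the multilingual model name, label names, and example sentence are placeholders, and in the actual system the head would first be fine-tuned on the MINI-based Arabic interview dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical label set matching the five aspects in the abstract; the model
# name is an assumption -- any Arabic-capable BERT checkpoint could be used.
labels = ["depression", "suicidality", "panic_disorder", "social_phobia", "adjustment_disorder"]
model_name = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=len(labels))

# One inference step on the (still untrained) classification head; fine-tuning
# on the interview dataset would precede this in the real system.
text = "Patient reports persistent sadness and loss of interest."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])
```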


Subjects
COVID-19, Mental Health, Humans, Pandemics, Psychiatric Status Rating Scales, SARS-CoV-2
10.
Sensors (Basel); 22(19), 2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236573

ABSTRACT

Academics and the health community are paying much attention to developing smart remote patient monitoring, sensors, and healthcare technology. For the analysis of medical scans, various studies integrate sophisticated deep learning strategies. A smart monitoring system is needed as a proactive diagnostic solution that may be employed in an epidemiological scenario such as COVID-19. Consequently, this work offers an intelligent medical care system: an IoT-empowered, deep learning-based decision support system (DSS) for the automated detection and categorization of infectious diseases (COVID-19 and pneumothorax). The proposed DSS was evaluated using three independent, standards-based chest X-ray datasets. The suggested DSS predictor has been used to identify and classify areas on whole X-ray scans with abnormalities thought to be attributable to COVID-19, reaching an identification and classification accuracy rate of 89.58% for normal images and 89.13% for COVID-19 and pneumothorax. With the suggested DSS, a judgment based on an individual chest X-ray scan can be made in approximately 0.01 s. As a result, the DSS described in this study can make predictions at a rate of 95 frames per second (FPS) for both models, which is close to real time.


Subjects
COVID-19, Pneumothorax, Aged, COVID-19/diagnostic imaging, COVID-19 Testing, Humans, Lung, Medicare, United States, X-Rays
11.
Entropy (Basel); 24(4), 2022 Apr 10.
Article in English | MEDLINE | ID: mdl-35455196

ABSTRACT

Pristine and trustworthy data are required for efficient computer modelling for medical decision-making, yet data in medical care are frequently missing. As a result, missing values may occur not just in training data but also in testing data, which might contain a single undiagnosed episode or participant. This study evaluates different imputation and regression procedures, identified based on regressor performance and computational expense, to address missing values in both training and testing datasets. In the context of healthcare, several procedures have been introduced for dealing with missing values; however, there is still debate about which imputation strategies are better in specific cases. This research proposes an ensemble imputation model that is trained to use a combination of simple mean imputation, k-nearest-neighbour imputation, and iterative imputation, and then selects the ideal imputation strategy among them based on the attribute correlations of the features containing missing values. We introduce a unique Ensemble Strategy for Missing Values to analyse healthcare data with considerable missing values and to identify unbiased and accurate prediction statistical modelling. The performance metrics have been generated using the eXtreme gradient boosting regressor, random forest regressor, and support vector regressor. The current study uses real-world healthcare data to conduct experiments and simulations of data with varying feature-wise missing frequencies, indicating that the proposed technique surpasses standard missing-value imputation approaches, as well as the approach of dropping records that hold missing values, in terms of accuracy.
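
The abstract describes choosing among mean, k-nearest-neighbour, and iterative imputation per feature according to attribute correlations. A compact scikit-learn sketch of that selection logic might look like the following; the correlation thresholds and toy data are assumptions, not the paper's values.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def ensemble_impute(df, hi=0.5, lo=0.2):
    """Pick an imputer per missing-value column from its correlation with the rest.

    Strongly correlated columns get a model-based IterativeImputer, moderately
    correlated ones a KNNImputer, weakly correlated ones a simple mean.
    The hi/lo thresholds are illustrative only.
    """
    out = df.copy()
    corr = df.corr().abs()
    for col in df.columns[df.isna().any()]:
        strength = corr[col].drop(col).max()
        if strength >= hi:
            filled = IterativeImputer(random_state=0).fit_transform(df)
        elif strength >= lo:
            filled = KNNImputer(n_neighbors=5).fit_transform(df)
        else:
            filled = SimpleImputer(strategy="mean").fit_transform(df)
        out[col] = filled[:, df.columns.get_loc(col)]
    return out

data = pd.DataFrame({"age": [25, 32, np.nan, 51], "bp": [120, np.nan, 140, 150],
                     "chol": [180, 210, 230, np.nan]})
print(ensemble_impute(data))
```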

12.
Entropy (Basel); 24(6), 2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35741521

ABSTRACT

A brain tumour is one of the major causes of death in humans, and it is the tenth most common type of tumour, affecting people of all ages. However, if detected early, it is one of the most treatable types of tumours. Brain tumours are classified using biopsy, which is not usually performed before definitive brain surgery. An image classification technique for tumour diseases is therefore important for accelerating the treatment process and avoiding surgery and errors from manual diagnosis by radiologists. Advances in technology and machine learning (ML) can assist radiologists in tumour diagnostics using magnetic resonance imaging (MRI) images without invasive procedures. This work introduces a new hybrid CNN-based architecture to classify three brain tumour types from MRI images. The method suggested in this paper uses hybrid deep learning classification based on a CNN with two methods. The first method combines a pre-trained GoogLeNet CNN model for feature extraction with an SVM for pattern classification. The second method integrates a finely tuned GoogLeNet with a softmax classifier. The proposed approach was evaluated using MRI brain images containing a total of 1426 glioma images, 708 meningioma images, 930 pituitary tumour images, and 396 normal brain images. The reported results show that an accuracy of 93.1% was achieved with the finely tuned GoogLeNet model, while the synergy of GoogLeNet as a feature extractor with an SVM classifier improved the recognition accuracy to 98.1%.
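
A minimal sketch of the first method, assuming a torchvision GoogLeNet backbone whose final layer is replaced so its 1024-dimensional penultimate features feed a scikit-learn SVM; the random tensors stand in for preprocessed MRI slices, and downloading the pretrained ImageNet weights is assumed.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Pre-trained GoogLeNet as a fixed feature extractor (final classifier removed).
backbone = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
backbone.fc = nn.Identity()          # expose the 1024-d penultimate features
backbone.eval()

def extract_features(batch):         # batch: (N, 3, 224, 224) preprocessed images
    with torch.no_grad():
        return backbone(batch).numpy()

# Illustrative stand-in for preprocessed MRI images and their tumour labels
# (0 = glioma, 1 = meningioma, 2 = pituitary, 3 = normal).
images = torch.randn(8, 3, 224, 224)
labels = [0, 1, 2, 3, 0, 1, 2, 3]

clf = SVC(kernel="rbf").fit(extract_features(images), labels)
print(clf.predict(extract_features(images[:2])))
```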

13.
Sensors (Basel); 21(22), 2021 Nov 12.
Article in English | MEDLINE | ID: mdl-34833594

ABSTRACT

The Industrial Internet of Things (IIoT) refers to the use of smart sensors, actuators, fast communication protocols, and efficient cybersecurity mechanisms to improve industrial processes and applications. In large industrial networks, smart devices generate large amounts of data, and thus IIoT frameworks require intelligent, robust techniques for big data analysis. Artificial intelligence (AI) and deep learning (DL) techniques produce promising results in IIoT networks due to their intelligent learning and processing capabilities. This survey article assesses the potential of DL in IIoT applications and presents a brief architecture of IIoT with key enabling technologies. Several well-known DL algorithms are then discussed along with their theoretical backgrounds, as are several software and hardware frameworks for DL implementations. Potential deployments of DL techniques in IIoT applications are briefly discussed. Finally, this survey highlights significant challenges and directions for future research.


Subjects
Deep Learning, Internet of Things, Artificial Intelligence, Computer Security, Industries
14.
Sensors (Basel); 21(2), 2021 Jan 18.
Article in English | MEDLINE | ID: mdl-33477526

ABSTRACT

Transcranial magnetic stimulation (TMS) excites neurons in the cortex, and neural activity can be simultaneously recorded using electroencephalography (EEG). However, TMS-evoked EEG potentials (TEPs) do not only reflect transcranial neural stimulation as they can be contaminated by artifacts. Over the last two decades, significant developments in EEG amplifiers, TMS-compatible technology, customized hardware and open source software have enabled researchers to develop approaches which can substantially reduce TMS-induced artifacts. In TMS-EEG experiments, various physiological and external occurrences have been identified and attempts have been made to minimize or remove them using online techniques. Despite these advances, technological issues and methodological constraints prevent straightforward recordings of early TEPs components. To the best of our knowledge, there is no review on both TMS-EEG artifacts and EEG technologies in the literature to-date. Our survey aims to provide an overview of research studies in this field over the last 40 years. We review TMS-EEG artifacts, their sources and their waveforms and present the state-of-the-art in EEG technologies and front-end characteristics. We also propose a synchronization toolbox for TMS-EEG laboratories. We then review subject preparation frameworks and online artifacts reduction maneuvers for improving data acquisition and conclude by outlining open challenges and future research directions in the field.


Subjects
Artifacts, Transcranial Magnetic Stimulation, Electroencephalography, Evoked Potentials, Technology
15.
Sensors (Basel); 21(6), 2021 Mar 21.
Article in English | MEDLINE | ID: mdl-33801002

ABSTRACT

Machine learning (ML)-based algorithms are playing an important role in cancer diagnosis and are increasingly being used to aid clinical decision-making. However, these commonly operate as 'black boxes' and it is unclear how decisions are derived. Recently, techniques have been applied to help us understand how specific ML models work and to explain the rationale for their outputs. This study aims to determine why a given type of cancer has a certain phenotypic characteristic. Cancer results in cellular dysregulation, and a thorough consideration of cancer regulators is required. This would increase our understanding of the nature of the disease and help discover more effective diagnostic, prognostic, and treatment methods for a variety of cancer types and stages. Our study proposes a novel explainable analysis of potential biomarkers denoting tumorigenesis in non-small cell lung cancer. A number of these biomarkers are known to appear following various treatment pathways. An enhanced analysis is enabled through a novel mathematical formulation for the regulators of mRNA, the regulators of ncRNA, and the coupled mRNA-ncRNA regulators. Temporal gene expression profiles are approximated in a two-dimensional spatial domain for the transition states before converging to the stationary state, using a system comprised of coupled-reaction partial differential equations. Simulation experiments demonstrate that the proposed mathematical gene-expression profile represents a best fit for the population abundance of these oncogenes. In future, our proposed solution can lead to the development of alternative interpretable approaches, through the application of ML models to discover unknown dynamics in gene regulatory systems.
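
The paper's coupled PDE system is not given in the abstract, so the following is only a toy explicit finite-difference sketch of a coupled two-species system on a 2-D grid, assuming a diffusion term for the spatial coupling (consistent with the two-dimensional spatial domain mentioned above); the coefficients and reaction terms are illustrative and not the paper's formulation.

```python
import numpy as np

# Toy coupled system: u (e.g. an mRNA regulator) and v (e.g. an ncRNA regulator)
# evolving on a periodic 2-D grid with explicit Euler time stepping.
n, steps, dt, dx = 64, 500, 0.1, 1.0
Du, Dv, k = 0.10, 0.05, 0.02           # illustrative diffusion and coupling rates

def laplacian(z):
    """5-point finite-difference Laplacian with periodic boundaries."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z) / dx**2

rng = np.random.default_rng(0)
u = rng.random((n, n))
v = rng.random((n, n))

for _ in range(steps):
    u += dt * (Du * laplacian(u) + k * (v - u))   # coupling: v drives u
    v += dt * (Dv * laplacian(v) - k * v)         # v relaxes toward the stationary state
print(u.mean(), v.mean())
```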


Subjects
Non-Small Cell Lung Carcinoma, Lung Neoplasms, Algorithms, Diffusion, Gene Expression Profiling, Humans, Lung Neoplasms/diagnosis, Lung Neoplasms/genetics
16.
Sensors (Basel); 22(1), 2021 Dec 25.
Article in English | MEDLINE | ID: mdl-35009675

ABSTRACT

To date, clinicians have not been able to evaluate Psychogenic Non-Epileptic Seizures (PNES) from the resting-state electroencephalography (EEG) readout, and no EEG marker can help differentiate PNES cases from healthy subjects. In this paper, we investigate the power spectral density (PSD) of resting-state EEGs to evaluate abnormalities in PNES-affected brains. Additionally, we use functional connectivity tools, such as the phase lag index (PLI), and graph-derived metrics to better observe the integration of distributed information of regular and synchronized multi-scale communication within and across inter-regional brain areas. We prove the utility of our method in a cohort study of 20 age- and gender-matched PNES and 19 healthy control (HC) subjects. In this work, three classification models, namely support vector machine (SVM), linear discriminant analysis (LDA), and multilayer perceptron (MLP), are employed to model the relationship between the functional connectivity features (rest-HC versus rest-PNES). The best performance for the discrimination of participants was obtained using the MLP classifier, with a precision of 85.73%, a recall of 86.57%, an F1-score of 78.98%, and an accuracy of 91.02%. In conclusion, our results suggest two main findings. The first is an intrinsic organization of functional brain networks that reflects a dysfunctional level of integration across brain regions, which can provide new insights into the pathophysiological mechanisms of PNES. The second is that functional connectivity features and an MLP could be a promising method to classify resting-state EEG data of PNES patients from healthy control subjects.
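
As a small illustration of one connectivity feature named above, the snippet below computes the phase lag index between two signals via the Hilbert transform; the sampling rate, frequencies, and noise level in the toy example are arbitrary, and the real pipeline would build a channel-by-channel PLI matrix, derive graph metrics, and feed them to the MLP.

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """Phase Lag Index between two equal-length signals (1 = consistent lead/lag,
    0 = no consistent phase relationship)."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(phase_diff))))

# Toy example: two 10 Hz oscillations, the second lagging by a fixed phase plus noise
fs = 256
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t - 0.8) + 0.1 * np.random.randn(t.size)
print(phase_lag_index(x, y))   # close to 1 for a consistent lag
```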


Subjects
Electroencephalography, Seizures, Cohort Studies, Humans, Machine Learning, Rest
17.
Sci Rep; 14(1): 10898, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740843

ABSTRACT

Distributed denial-of-service (DDoS) attacks persistently proliferate, impacting individuals and Internet Service Providers (ISPs). Deep learning (DL) models are paving the way to address these challenges and the dynamic nature of potential threats. Traditional detection systems, relying on signature-based techniques, are susceptible to next-generation malware. Integrating DL approaches in cloud-edge/federated servers enhances the resilience of these systems. In the Internet of Things (IoT) and autonomous networks, DL, particularly federated learning, has gained prominence for attack detection. Unlike conventional models (centralized and localized DL), federated learning does not require access to users' private data for attack detection. This approach is gaining much interest in academia and industry due to its deployment on local and global cloud-edge models. Recent advancements in DL enable training a quality cloud-edge model across various users (collaborators) without exchanging personal information. Federated learning, emphasizing privacy preservation at the cloud-edge terminal, holds significant potential for facilitating privacy-aware learning among collaborators. This paper addresses: (1) The deployment of an optimized deep neural network for network traffic classification. (2) The coordination of federated server model parameters with training across devices in IoT domains. A federated flowchart is proposed for training and aggregating local model updates. (3) The generation of a global model at the cloud-edge terminal after multiple rounds between domains and servers. (4) Experimental validation on the BoT-IoT dataset demonstrates that the federated learning model can reliably detect attacks with efficient classification, privacy, and confidentiality. Additionally, it requires minimal memory space for storing training data, resulting in minimal network delay. Consequently, the proposed framework outperforms both centralized and localized DL models, achieving superior performance.
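
The federated coordination step described above can be pictured with a FedAvg-style aggregation: each IoT domain trains locally, and only its parameters, weighted by its sample count, reach the server. The sketch below is a minimal, framework-free version with toy two-parameter models; the parameter names and sample counts are invented.

```python
import numpy as np

def federated_average(client_updates):
    """FedAvg-style aggregation: weight each client's parameters by its sample count.

    client_updates: list of (params_dict, n_samples); returns the aggregated params.
    The server never sees raw traffic data, only these parameter updates.
    """
    total = sum(n for _, n in client_updates)
    keys = client_updates[0][0].keys()
    return {k: sum(params[k] * (n / total) for params, n in client_updates)
            for k in keys}

# Three IoT domains report locally trained detector weights (toy 2-parameter model).
updates = [({"w": np.array([0.2, 1.0]), "b": np.array([0.1])}, 500),
           ({"w": np.array([0.4, 0.8]), "b": np.array([0.0])}, 300),
           ({"w": np.array([0.1, 1.2]), "b": np.array([0.2])}, 200)]
global_model = federated_average(updates)
print(global_model)   # broadcast back to the domains for the next round
```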

18.
PLoS One; 19(6): e0299666, 2024.
Article in English | MEDLINE | ID: mdl-38905163

ABSTRACT

Computer networks face vulnerability to numerous attacks, which pose significant threats to our data security and the freedom of communication. This paper introduces a novel intrusion detection technique that diverges from traditional methods by leveraging Recurrent Neural Networks (RNNs) for both data preprocessing and feature extraction. The proposed process is based on the following steps: (1) training the data using RNNs, (2) extracting features from their hidden layers, and (3) applying various classification algorithms. This methodology offers significant advantages and greatly differs from existing intrusion detection practices. The effectiveness of our method is demonstrated through trials on the Network Security Laboratory (NSL) and Canadian Institute for Cybersecurity (CIC) 2017 datasets, where the application of RNNs for intrusion detection shows substantial practical implications. Specifically, we achieved accuracy scores of 99.6% with Decision Tree, Random Forest, and CatBoost classifiers on the NSL dataset, and 99.8% and 99.9%, respectively, on the CIC 2017 dataset. By reversing the conventional sequence of training data with RNNs and then extracting features before applying classification algorithms, our approach provides a major shift in intrusion detection methodologies. This modification in the pipeline underscores the benefits of utilizing RNNs for feature extraction and data preprocessing, meeting the critical need to safeguard data security and communication freedom against ever-evolving network threats.
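
A hedged sketch of the three-step pipeline, reversing the usual order as described: an RNN (here a GRU) encodes each flow, its final hidden state is taken as the feature vector, and a conventional classifier (here a random forest) is trained on those features. The sequence shape, label scheme, and hyperparameters are assumptions, and the encoder would normally be trained on the intrusion data first.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

# Hypothetical shapes: each network flow as a sequence of 10 steps x 12 features.
class FlowEncoder(nn.Module):
    def __init__(self, n_features=12, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        _, h = self.rnn(x)
        return h[-1]                      # final hidden state as the flow embedding

encoder = FlowEncoder()                   # step 1: in the paper this RNN is trained first
flows = torch.randn(100, 10, 12)          # stand-in for NSL/CIC-style records
labels = torch.randint(0, 2, (100,))      # 0 = benign, 1 = attack (synthetic)

with torch.no_grad():
    features = encoder(flows).numpy()     # step 2: extract hidden-layer features

clf = RandomForestClassifier(n_estimators=100, random_state=0)  # step 3: classify
clf.fit(features, labels.numpy())
print(clf.score(features, labels.numpy()))
```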


Subjects
Algorithms, Computer Security, Neural Networks (Computer), Humans, Computer Communication Networks
19.
Heliyon; 10(8): e29396, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38665569

ABSTRACT

Semantic segmentation of Remote Sensing (RS) images involves the classification of each pixel in a satellite image into distinct and non-overlapping regions or segments. This task is crucial in various domains, including land cover classification, autonomous driving, and scene understanding. While deep learning has shown promising results, there is limited research that specifically addresses the challenge of processing fine details in RS images while also considering the high computational demands. To tackle this issue, we propose a novel approach that combines convolutional and transformer architectures. Our design incorporates convolutional layers with a low receptive field to generate fine-grained feature maps for small objects in very high-resolution images. On the other hand, transformer blocks are utilized to capture contextual information from the input. By leveraging convolution and self-attention in this manner, we reduce the need for extensive downsampling and enable the network to work with full-resolution features, which is particularly beneficial for handling small objects. Additionally, our approach eliminates the requirement for vast datasets, which is often necessary for purely transformer-based networks. In our experimental results, we demonstrate the effectiveness of our method in generating local and contextual features using convolutional and transformer layers, respectively. Our approach achieves a mean dice score of 80.41%, outperforming other well-known techniques such as UNet, Fully-Connected Network (FCN), Pyramid Scene Parsing Network (PSP Net), and the recent Convolutional vision Transformer (CvT) model, which achieved mean dice scores of 78.57%, 74.57%, 73.45%, and 62.97% respectively, under the same training conditions and using the same training dataset.
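
The design described above can be caricatured with a tiny hybrid module: small-kernel convolutions keep full-resolution, fine-grained features while one transformer encoder layer adds global context over the flattened tokens, and the two are fused before per-pixel classification. The channel widths, depth, and six-class output below are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvTransformerSeg(nn.Module):
    """Toy hybrid head: small-receptive-field convs keep fine detail at full
    resolution while a transformer layer adds global context (simplified sketch)."""
    def __init__(self, in_ch=3, dim=32, n_classes=6):
        super().__init__()
        self.local = nn.Sequential(                      # 3x3 convs: small receptive field
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
        self.context = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, batch_first=True)      # self-attention over all tokens
        self.classify = nn.Conv2d(dim, n_classes, 1)     # per-pixel class logits

    def forward(self, x):
        f = self.local(x)                                # (B, dim, H, W), full resolution
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)            # (B, H*W, dim)
        g = self.context(tokens).transpose(1, 2).reshape(b, c, h, w)
        return self.classify(f + g)                      # fuse local + contextual features

model = ConvTransformerSeg()
tile = torch.randn(1, 3, 32, 32)                         # small RS image tile
print(model(tile).shape)                                 # torch.Size([1, 6, 32, 32])
```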

20.
Heliyon; 9(11): e21624, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37954270

ABSTRACT

Since the release of ChatGPT, numerous studies have highlighted the remarkable performance of ChatGPT, which often rivals or even surpasses human capabilities in various tasks and domains. However, this paper presents a contrasting perspective by demonstrating an instance where human performance excels in typical tasks suited for ChatGPT, specifically in the domain of computer programming. We utilize the IEEExtreme Challenge competition as a benchmark-a prestigious, annual international programming contest encompassing a wide range of problems with different complexities. To conduct a thorough evaluation, we selected and executed a diverse set of 102 challenges, drawn from five distinct IEEExtreme editions, using three major programming languages: Python, Java, and C++. Our empirical analysis provides evidence that contrary to popular belief, human programmers maintain a competitive edge over ChatGPT in certain aspects of problem-solving within the programming context. In fact, we found that the average score obtained by ChatGPT on the set of IEEExtreme programming problems is 3.9 to 5.8 times lower than the average human score, depending on the programming language. This paper elaborates on these findings, offering critical insights into the limitations and potential areas of improvement for AI-based language models like ChatGPT.
