Results 1 - 20 of 31
1.
PLoS One ; 19(9): e0308206, 2024.
Article in English | MEDLINE | ID: mdl-39264944

ABSTRACT

In response to the rapidly evolving threat landscape in network security, this paper proposes an Evolutionary Machine Learning Algorithm designed for robust intrusion detection. We specifically address challenges such as adaptability to new threats and scalability across diverse network environments. Our approach, a GA-based hybrid DT-SVM, is validated using two distinct datasets: BoT-IoT, reflecting a range of IoT-specific attacks, and UNSW-NB15, offering a broader context of network intrusion scenarios. This selection facilitates a comprehensive evaluation of the algorithm's effectiveness across varying attack vectors. Performance metrics including accuracy, recall, and false positive rate are chosen to demonstrate the algorithm's capability to identify and adapt to both known and novel threats, substantiating its potential as a scalable and adaptable security solution. This study aims to advance intrusion detection systems that are not only reactive but also preemptively adaptive to emerging cyber threats. During the feature selection step, a genetic algorithm (GA) is used to discover and preserve the most relevant characteristics of the dataset by applying evolutionary principles. This GA-driven step optimises the feature subset, enabling the subsequent classification model to focus on the most relevant components of the network data. GA-driven feature selection is then integrated with decision tree-support vector machine (DT-SVM) classification to strike a balance between efficiency and accuracy. The system is purposefully designed to handle data streams in real time, ensuring that intrusions are promptly and precisely detected. The empirical results corroborate the study's assertion that the IDS outperforms traditional methodologies.
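The GA-driven feature-selection step can be sketched with a toy genetic algorithm over feature bitmasks. The fitness function here is a hypothetical stand-in for the wrapper score a DT-SVM classifier would provide; the informative-index set, weights, and GA hyperparameters are illustrative, not the paper's.

```python
import random

random.seed(7)

N_FEATURES = 10
# Hypothetical stand-in fitness: reward subsets containing "informative"
# features (indices 0-3 here) and penalize subset size, mimicking the
# wrapper score a DT-SVM classifier would return in the paper's pipeline.
INFORMATIVE = {0, 1, 2, 3}

def fitness(mask):
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return hits - 0.1 * sum(mask)   # accuracy proxy minus complexity penalty

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

def evolve(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]   # truncation selection, elitist
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
selected = [i for i, bit in enumerate(best) if bit]
```

Because the elite survives unchanged each generation, the best fitness is non-decreasing, so the search converges toward small subsets of the informative indices.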


Subject(s)
Algorithms, Computer Security, Machine Learning, Humans
2.
Article in English | MEDLINE | ID: mdl-39018208

ABSTRACT

In medical diagnostics, the accurate classification and analysis of biomedical signals play a crucial role, particularly in the diagnosis of neurological disorders such as epilepsy. Electroencephalogram (EEG) signals, which represent the electrical activity of the brain, are fundamental in identifying epileptic seizures. However, challenges such as data scarcity and imbalance significantly hinder the development of robust diagnostic models. Addressing these challenges, in this paper, we explore enhancing medical signal processing and diagnosis, with a focus on epilepsy classification through EEG signals, by harnessing AI-generated content techniques. We introduce a novel framework that utilizes generative adversarial networks to generate synthetic EEG signals that augment existing datasets, thereby mitigating issues of data scarcity and imbalance. Furthermore, we incorporate an attention-based temporal convolutional network model to efficiently process and classify EEG signals by emphasizing salient features crucial for accurate diagnosis. Our comprehensive evaluation, including rigorous ablation studies, is conducted on the widely recognized Bonn epilepsy dataset. The model achieves an accuracy of 98.89% and an F1 score of 98.91%. The findings demonstrate substantial improvements in epilepsy classification accuracy, showcasing the potential of AI-generated content in advancing the field of medical signal processing and diagnosis.
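The attention idea in the abstract, emphasizing salient parts of the EEG signal, can be illustrated with a minimal softmax-attention pooling over time steps. The scores below are supplied by hand for illustration; in the actual attention-based TCN they would come from learned layers.

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(signal, scores):
    """Weight each time step of a 1-D signal by softmax(scores) and pool."""
    weights = softmax(scores)
    return sum(w * x for w, x in zip(weights, signal))

# A toy "EEG" window: one spike at t=2 that the scores single out.
signal = [0.1, 0.2, 5.0, 0.1, 0.0]
scores = [0.0, 0.0, 4.0, 0.0, 0.0]
pooled = attention_pool(signal, scores)
```

The pooled value is dominated by the high-scoring time step, which is exactly the "salient feature emphasis" the model relies on.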

3.
Article in English | MEDLINE | ID: mdl-38829758

ABSTRACT

The Internet of Medical Things (IoMT) has transformed traditional healthcare systems by enabling real-time monitoring, remote diagnostics, and data-driven treatment. However, security and privacy remain significant concerns for IoMT adoption due to the sensitive nature of medical data. Therefore, we propose an integrated framework leveraging blockchain and explainable artificial intelligence (XAI) to enable secure, intelligent, and transparent management of IoMT data. First, the traceability and tamper-resistance of the blockchain are used to realize secure transactions of IoMT data, transforming the secure transaction of IoMT data into a two-stage Stackelberg game. A dual-chain architecture ensures the security and privacy protection of the transactions: the main chain manages regular IoMT data transactions, while the side chain handles data trading activities aimed at resale. Simultaneously, perceptual hash technology is used to realize data rights confirmation, which maximally protects the rights and interests of each participant in the transaction. Subsequently, medical time-series data are modeled using bidirectional simple recurrent units to detect anomalies and cyberthreats accurately while overcoming vanishing gradients. Lastly, an adversarial sample generation method based on local interpretable model-agnostic explanations is provided to evaluate, secure, and improve the anomaly detection model, as well as to make it more explainable and resilient to possible adversarial attacks. Simulation results illustrate the high performance of the integrated secure data management framework leveraging blockchain and XAI, compared with the benchmarks.
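The perceptual-hash step used for data rights confirmation can be sketched with a toy average hash: near-duplicate data map to the same (or nearby) bit string, while different content diverges. Real perceptual hashing adds resizing and DCT stages omitted here; the matrices below are fabricated examples.

```python
def average_hash(pixels):
    """Toy average hash: bit i is 1 when value i exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; small distance => likely the same content."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [220, 15]]
tweaked  = [[12, 198], [221, 14]]   # small perturbation, same structure
other    = [[200, 10], [15, 220]]   # different content, same values swapped

d_small = hamming(average_hash(original), average_hash(tweaked))
d_large = hamming(average_hash(original), average_hash(other))
```

Because the hash depends on structure rather than exact values, a lightly re-encoded copy of a dataset still matches its registered fingerprint, which is what makes rights confirmation possible.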

4.
Sci Rep ; 14(1): 10898, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740843

ABSTRACT

Distributed denial-of-service (DDoS) attacks persistently proliferate, impacting individuals and Internet Service Providers (ISPs). Deep learning (DL) models are paving the way to address these challenges and the dynamic nature of potential threats. Traditional detection systems, relying on signature-based techniques, are susceptible to next-generation malware. Integrating DL approaches in cloud-edge/federated servers enhances the resilience of these systems. In the Internet of Things (IoT) and autonomous networks, DL, particularly federated learning, has gained prominence for attack detection. Unlike conventional models (centralized and localized DL), federated learning does not require access to users' private data for attack detection. This approach is gaining much interest in academia and industry due to its deployment on local and global cloud-edge models. Recent advancements in DL enable training a quality cloud-edge model across various users (collaborators) without exchanging personal information. Federated learning, emphasizing privacy preservation at the cloud-edge terminal, holds significant potential for facilitating privacy-aware learning among collaborators. This paper addresses: (1) The deployment of an optimized deep neural network for network traffic classification. (2) The coordination of federated server model parameters with training across devices in IoT domains. A federated flowchart is proposed for training and aggregating local model updates. (3) The generation of a global model at the cloud-edge terminal after multiple rounds between domains and servers. (4) Experimental validation on the BoT-IoT dataset demonstrates that the federated learning model can reliably detect attacks with efficient classification, privacy, and confidentiality. Additionally, it requires minimal memory space for storing training data, resulting in minimal network delay. Consequently, the proposed framework outperforms both centralized and localized DL models, achieving superior performance.
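The server-side coordination of model parameters typically follows federated averaging: clients send parameter vectors (never raw data), and the server combines them weighted by sample counts. A minimal sketch with hypothetical client updates:

```python
def fed_avg(local_updates):
    """Weighted average of local model parameter vectors (FedAvg).

    `local_updates` is a list of (params, n_samples) pairs; the server
    only ever sees these parameter vectors, not the clients' traffic data.
    """
    total = sum(n for _, n in local_updates)
    dim = len(local_updates[0][0])
    return [sum(p[i] * n for p, n in local_updates) / total
            for i in range(dim)]

# Three hypothetical IoT domains with different data volumes.
updates = [([1.0, 0.0], 100), ([0.0, 1.0], 300), ([0.5, 0.5], 100)]
global_model = fed_avg(updates)
```

After each round the averaged `global_model` is broadcast back to the domains, and the cycle repeats until convergence.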

5.
J Neurosci Methods ; 408: 110159, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38723868

ABSTRACT

BACKGROUND: To push the frontiers of brain-computer interfaces (BCI) and neuroelectronics, this research presents a novel framework that combines cutting-edge technologies for improved brain-related diagnostics in smart healthcare. It offers a ground-breaking application of transparent strategies to BCI, promoting openness and confidence in brain-computer interactions, drawing on the Grad-CAM (gradient-weighted class activation mapping) based explainable artificial intelligence (XAI) methodology. The integration of these technologies is poised to redefine healthcare diagnostics, especially for illnesses related to the brain. NEW METHOD: The proposed approach comprises an Xception architecture, pretrained on the ImageNet database and adapted via transfer learning, to extract significant features from magnetic resonance imaging (MRI) datasets acquired from publicly available sources, with a linear support vector machine used to distinguish the classes. Grad-CAM is then deployed as the foundation for XAI, generating informative heatmaps that represent the spatial localization of the features on which the model's predictions focus. RESULTS: The proposed model not only provides accurate outcomes but also offers transparency for the predictions generated by the Xception network when diagnosing the presence of abnormal tissue, and it avoids overfitting. Hyperparameters and performance metrics are reported from validating the network on unseen brain MRI scans to ensure its effectiveness. COMPARISON WITH EXISTING METHODS AND CONCLUSIONS: The integration of Grad-CAM-based explainable artificial intelligence with the Xception deep neural network has a significant impact on diagnosing brain tumors while highlighting the specific regions of the input MRI images responsible for the predictions. The proposed network achieves 98.92% accuracy, 98.15% precision, 99.09% sensitivity, 98.18% specificity, and a 98.91% Dice coefficient in identifying abnormal tissue in the brain. Thus, the Xception model trained via transfer learning offers remarkable diagnostic accuracy, with the linear support vector machine acting as an efficient classifier. In addition, the deployed XAI approach reveals the reasoning behind the predictions of an otherwise black-box deep neural network, giving medical experts a clear perspective and supporting trustworthiness and transparency when diagnosing brain tumors in smart healthcare.
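The Grad-CAM mechanism the framework relies on reduces to: weight each feature-map channel by its spatially averaged gradient, sum the weighted maps, and apply a ReLU. The tiny feature maps and gradients below are fabricated purely for illustration.

```python
def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one conv layer (pure-Python sketch).

    feature_maps: K maps, each H x W; gradients: matching d(score)/d(activation).
    Channel weight = mean gradient; heatmap = ReLU(weighted sum of maps).
    """
    K = len(feature_maps)
    H, W = len(feature_maps[0]), len(feature_maps[0][0])
    weights = [sum(sum(row) for row in g) / (H * W) for g in gradients]
    return [[max(0.0, sum(weights[k] * feature_maps[k][i][j] for k in range(K)))
             for j in range(W)] for i in range(H)]

fmaps = [[[1.0, 0.0], [0.0, 0.0]],      # channel 0 activates top-left
         [[0.0, 0.0], [0.0, 1.0]]]      # channel 1 activates bottom-right
grads = [[[1.0, 1.0], [1.0, 1.0]],      # channel 0 supports the class
         [[-1.0, -1.0], [-1.0, -1.0]]]  # channel 1 opposes it
heatmap = grad_cam(fmaps, grads)
```

The ReLU keeps only regions that push the prediction toward the class, which is why the resulting heatmap highlights suspected abnormal tissue rather than everything the network saw.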


Subject(s)
Artificial Intelligence, Brain-Computer Interfaces, Brain, Magnetic Resonance Imaging, Support Vector Machine, Humans, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Brain/physiology, Neural Networks, Computer
7.
Sensors (Basel) ; 23(21)2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37960536

ABSTRACT

Wireless sensor networks (WSNs) and the Internet of Things (IoT) have emerged as transformative technologies with the potential to revolutionize a wide range of industries, such as environmental monitoring, agriculture, manufacturing, smart health, home automation, wildlife monitoring, and surveillance. Population growth, climate change, and resource constraints all pose problems for modern IoT applications, and the integration of WSNs with the IoT has come forth as a game-changing solution. In agriculture, for example, IoT-based WSNs are used to monitor yield conditions and automate precision agriculture through different sensors. These sensors collect data on crop health, soil moisture, temperature, and irrigation, boosting productivity through intelligent agricultural decisions. However, sensors have finite, non-rechargeable batteries and limited memory, which can negatively impact network performance, and when a network is distributed over a vast area, the performance of WSN-assisted IoT suffers. Building a stable and energy-efficient routing infrastructure that extends network lifetime is therefore quite challenging. To address energy-related issues in scalable WSN-IoT environments for future IoT applications, this research proposes EEDC, an energy-efficient data communication scheme built on Region-based Hierarchical Clustering for Efficient Routing (RHCER), a multi-tier clustering framework for energy-aware routing decisions. The sensors deployed for IoT data collection acquire important data and select cluster heads based on a multi-criteria decision function. Further, to ensure efficient long-distance communication with even load distribution across all network nodes, a subdivision technique is employed in each tier of the proposed framework. The proposed routing protocol provides network load balancing and converts long-distance communication into shortened multi-hop communication, enhancing network lifetime. The performance of EEDC is compared with that of existing energy-efficient protocols across various parameters. The simulation results show that the suggested methodology reduces energy usage in sensor nodes by almost 31% and improves the packet drop ratio by almost 38%.
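A multi-criteria decision function for cluster-head election can be sketched as a weighted score over residual energy, neighbor density, and distance to the base station. The weights, normalizations, and node fields below are assumptions for illustration, not RHCER's actual calibrated criteria.

```python
def ch_score(node, w_energy=0.5, w_dist=0.3, w_density=0.2):
    """Hypothetical cluster-head score: higher residual energy and neighbor
    density raise it; distance to the base station lowers it."""
    return (w_energy * node["energy"]
            + w_density * node["neighbors"] / 10       # assumed max 10 neighbors
            - w_dist * node["bs_distance"] / 100)      # assumed max 100 m

nodes = [
    {"id": 1, "energy": 0.9, "neighbors": 8, "bs_distance": 40},
    {"id": 2, "energy": 0.4, "neighbors": 9, "bs_distance": 20},
    {"id": 3, "energy": 0.8, "neighbors": 3, "bs_distance": 90},
]
cluster_head = max(nodes, key=ch_score)   # each region elects its best-scoring node
```

Electing the richest, best-connected node per region is what spreads the relaying burden and slows the death of individual batteries.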

8.
Sensors (Basel) ; 23(19)2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37837162

ABSTRACT

This work presents a comparison of low-rank-based learning models for multi-label categorization of attacks in intrusion detection datasets. In particular, we investigate the performance of three low-rank-based models, one machine learning (LR-SVM) and two deep learning (LR-CNN, LR-CNN-MLP), for classifying intrusion detection data using Low Rank Representation (LRR) and Non-negative Low Rank Representation (NLR). We also examine how the performance of these models is affected by hyperparameter tuning using Gaussian Bayesian optimization. The tests were run on a merger of two publicly available intrusion detection datasets, BoT-IoT and UNSW-NB15, and assess the models' performance on key evaluation criteria, including precision, recall, F1 score, and accuracy. All three models perform noticeably better after hyperparameter tuning. The selection of low-rank-based learning models and the significance of hyperparameter tuning for multi-label classification of intrusion detection data are discussed in this work. A hybrid security dataset is used with low-rank factorization in addition to SVM, CNN, and CNN-MLP. The desired multi-label results were obtained by considering both binary and multi-class attack classification. Low-rank CNN-MLP achieved suitable results in multi-label classification of attacks. A Gaussian-based Bayesian optimization algorithm was used with CNN-MLP for hyperparameter tuning, and the desired results were achieved using c and γ for SVM and α and β for CNN and CNN-MLP on the hybrid dataset. The results show that the label UDP is shared among the analysis, DoS, and shellcode classes; the accuracy of classifying UDP among the three classes is 98.54%.

9.
Sensors (Basel) ; 23(16)2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37631793

ABSTRACT

Predicting attacks on Android devices using machine learning for recommender-systems-based IoT can be a challenging task, but various machine-learning techniques can be applied to achieve this goal. An internet-based framework is used to predict and recommend Android malware on IoT devices. As the prevalence of Android devices grows, malware authors create new viruses on a regular basis, posing a threat to the central system's security and the privacy of users. The suggested system uses static analysis to predict malware in Android apps used on consumer devices. The trained system predicts malicious devices and recommends blocking them from transmitting data to the cloud server. Feature selection is performed across various machine-learning methods, and a K-Nearest Neighbor (KNN) model is proposed. Testing was carried out on more than 10,000 Android applications to identify malicious nodes and recommend that the cloud server block them. The developed model evaluated four machine-learning algorithms in parallel, naive Bayes, decision tree, support vector machine, and KNN, with static analysis as the feature subset selection step, and achieved the highest prediction rate of 93% for predicting malware in real-world consumer-device applications while minimizing energy utilization. The experimental results show that KNN achieves 93% accuracy, 95% precision, 90% recall, and a 92% F1 measure.
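The KNN classification at the heart of the system can be sketched directly. The two features below (permission count, suspicious API calls) are hypothetical stand-ins for whatever static-analysis features the real pipeline selects.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbour majority vote over (features, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    nearest = sorted(train, key=lambda fx: dist(fx[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy app profiles: (permission_count, suspicious_api_calls) -> label.
train = [((1, 0), "benign"), ((2, 1), "benign"), ((1, 1), "benign"),
         ((8, 9), "malware"), ((9, 7), "malware"), ((7, 8), "malware")]
label = knn_predict(train, (8, 8))
```

An app profile near the malicious cluster is voted "malware" by its three nearest neighbours, triggering the block recommendation to the cloud server.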

10.
Sci Rep ; 13(1): 12814, 2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37550355

ABSTRACT

Real-world applications face mounting challenges due to the fast growth of technologies and the inclusion of artificial intelligence (AI)-based solutions. Massive numbers of internet-of-things (IoT) devices are involved in Industry 5.0 applications such as smart healthcare, smart manufacturing, smart agriculture, and smart transportation. Advanced wireless techniques, customization of services, and different technologies are experiencing a major transformation. Increasing communication reliability without adding energy overhead is the major challenge for massive IoT-enabled networks. On top of these challenges, Industry 5.0 requirements need to be monitored remotely, which adds to the communication burden. The use of relays in 6G-based wireless networks is ruled out by their high energy requirements. Therefore, this paper studies intelligent reflecting surface (IRS)-assisted, energy-constrained 6G wireless networks. To provide a seamless connection between communicating mobile nodes, an IRS with an array of reflecting elements is configured in the system setup. A use-case scenario of an IRS-enabled network in the Internet of Underwater Things (IoUT) for smart ocean transportation is also provided. The IRS-assisted wireless network is evaluated for the target rates achieved, and a power consumption model of the IRS-supported system is proposed to optimise the energy efficiency of the system. Further, the paper evaluates the impact of the number of reflecting elements N on the IRS and the phase resolution b of each element on system performance. The energy efficiency improves by 20% for an IRS with [Formula: see text] with [Formula: see text] over an IRS with [Formula: see text].

11.
Sensors (Basel) ; 23(13)2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37447769

ABSTRACT

Most data nowadays are stored in the cloud; therefore, cloud computing and its extension, fog computing, are the most in-demand services at present. Cloud and fog computing platforms are largely used by Internet of Things (IoT) applications, where various mobile devices, end users, PCs, and smart objects are connected to each other via the internet. IoT applications are common in several areas, such as healthcare, smart cities, industry, logistics, and agriculture. This creates an increasing need for new security and privacy techniques, with attribute-based encryption (ABE) being among the most effective. ABE provides fine-grained access control, enables secure storage of data on unreliable storage, and is flexible enough to be used in different systems. In this paper, we survey ABE schemes, their features, methodologies, benefits and drawbacks, attacks on ABE, and how ABE can be used with IoT and its applications. The survey reviews ABE models suitable for IoT platforms, taking into account the desired features and characteristics. We also discuss various performance indicators used for ABE and how they affect efficiency. Furthermore, selected schemes are analyzed through simulation to compare their efficiency across different performance indicators. We find that some schemes perform well on one or two performance indicators, whereas none excels at all of them at once. This work helps researchers quickly identify the characteristics of different ABE schemes and recognize whether they are suitable for specific IoT applications. Future work that may be helpful for ABE is also discussed.


Subject(s)
Computer Security, Internet of Things, Privacy, Cloud Computing, Delivery of Health Care
12.
Sensors (Basel) ; 23(13)2023 Jul 06.
Article in English | MEDLINE | ID: mdl-37448038

ABSTRACT

By definition, aggregation leaves transmitted data visible in clear text at the aggregating units or nodes. Data transmission without encryption is vulnerable to security issues affecting confidentiality, integrity, and authentication, and to attacks by adversaries. On the other hand, encryption at each hop requires extra computation for decrypting, aggregating, and then re-encrypting the data, which increases complexity, not only in computation but also in the required sharing of keys; sharing the same key across various nodes makes security more vulnerable. An alternative solution for securing the aggregation process is an end-to-end security protocol in which intermediary nodes combine the data without decoding what they receive. As a consequence, the intermediary aggregating nodes do not have to maintain confidential key values, enabling end-to-end security between sensor devices and base stations. This research presents End-to-End Homomorphic Encryption (EEHE)-based safe and secure data gathering in IoT-based wireless sensor networks (WSNs), which protects end-to-end security and enables aggregator functions such as COUNT, SUM, and AVERAGE on encrypted messages. The approach also employs message authentication codes (MACs) to validate data integrity throughout aggregation and transmission, allowing fraudulent content to be identified as soon as feasible. Additionally, when data are communicated across a WSN, there is a higher likelihood of a wormhole attack during the aggregation process; the proposed solution also ensures early detection of wormhole attacks during data aggregation.
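An additively homomorphic scheme such as Paillier is one standard way to realize SUM aggregation over ciphertexts; whether EEHE uses Paillier specifically is not stated in the abstract, so this is a generic sketch. The textbook construction below (insecurely tiny keys, illustration only) shows an aggregator combining readings by multiplying ciphertexts without ever decrypting.

```python
import math
import random

# Tiny textbook Paillier cryptosystem. Choosing g = n + 1 simplifies
# both encryption and decryption. Key sizes here are far too small
# for real use; this only demonstrates the homomorphic property.
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:       # r must be a unit mod n
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# An intermediary aggregator multiplies ciphertexts to SUM the plaintexts;
# it never holds a decryption key and never sees the readings.
readings = [21, 30, 7]
agg = 1
for m in readings:
    agg = (agg * encrypt(m)) % n2
total = decrypt(agg)                 # only the base station can do this
```

Multiplying ciphertexts adds plaintexts, so SUM (and hence AVERAGE, given COUNT) is computable end to end while every hop sees only ciphertext.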


Subject(s)
Computer Security, Data Aggregation, Computer Communication Networks, Algorithms, Confidentiality
13.
Comput Intell Neurosci ; 2023: 4563145, 2023.
Article in English | MEDLINE | ID: mdl-36909977

ABSTRACT

Social media platforms play a key role in fostering the outreach of extremism by influencing the views, opinions, and perceptions of people. These platforms are increasingly exploited by extremist elements for spreading propaganda, radicalizing, and recruiting youth. Hence, research on extremism detection on social media platforms is essential to curb its influence and ill effects. A study of the existing literature on extremism detection reveals that it is restricted to a single ideology and binary classification, offers limited insights into extremist text, and relies on manual data validation to check data quality. Because researchers have used datasets limited to a single ideology, they face serious issues such as class imbalance, limited insight from class labels, and a lack of automated data validation methods. A major contribution of this work is a balanced extremism text dataset spanning multiple ideologies, verified by robust data validation methods, for classifying extremist text into popular types such as propaganda, radicalization, and recruitment. The presented dataset generalizes across multiple ideologies, drawing on the standard ISIS dataset, the GAB White Supremacist dataset, and recent Twitter tweets on ISIS and white-supremacist ideology. The dataset is analyzed to extract features for the three focal extremism classes using TF-IDF unigram, bigram, and trigram features; pretrained word2vec features are additionally used for semantic analysis. The extracted features are evaluated using machine-learning classification algorithms, namely multinomial naïve Bayes, support vector machine, random forest, and XGBoost. The best results were achieved by the support vector machine using the TF-IDF unigram model, with an F1 score of 0.67. The proposed multi-ideology, multiclass dataset shows performance comparable to existing datasets limited to a single ideology and binary labels.
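The TF-IDF unigram featurization can be computed as below. This uses one common smoothed-IDF variant (log(N/df) + 1), which may differ from the exact weighting the study used; the three mini-documents are fabricated examples.

```python
import math
from collections import Counter

def tfidf(docs):
    """Unigram TF-IDF vectors, one dict per document."""
    N = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    idf = {t: math.log(N / df[t]) + 1 for t in df}   # smoothed IDF variant
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (tf[t] / len(toks)) * idf[t] for t in tf})
    return vectors

docs = ["join the cause now", "join us join today", "weather is mild today"]
vecs = tfidf(docs)
```

Terms that concentrate in few documents ("weather") score higher than terms spread across the corpus ("join"), which is what lets a linear classifier such as an SVM separate the classes on these weights.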


Subject(s)
Algorithms, Social Media, Humans, Adolescent, Bayes Theorem, Machine Learning, Random Forest
14.
Sci Rep ; 13(1): 3614, 2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36869106

ABSTRACT

Vehicular content networks (VCNs) represent a key enabling solution for fully distributed content delivery in vehicular infotainment applications. In a VCN, both the on-board unit (OBU) of each vehicle and roadside units (RSUs) cache content to support timely delivery to moving vehicles on request. However, due to the limited caching capacity available at both RSUs and OBUs, only selected content can be cached. Moreover, the content demanded by vehicular infotainment applications is transient in nature. Caching transient content in vehicular content networks, using edge communication for delay-free services, is a fundamental issue that needs to be addressed (Yang et al., in ICC 2022 - IEEE International Conference on Communications, IEEE, pp 1-6, 2022). This study therefore focuses on edge communication in VCNs by first organizing a region-based classification of vehicular network components, including RSUs and OBUs. Second, a theoretical model is designed for each vehicle to decide its content-fetching location (either RSU or OBU) in the current or a neighboring region. Further, the caching of transient content in vehicular network components (RSU, OBU) is based on a content caching probability. Finally, the proposed scheme is evaluated under different network conditions in the Icarus simulator for various performance parameters. The simulation results show the outstanding performance of the proposed approach over various state-of-the-art caching strategies.

15.
ISA Trans ; 132: 52-60, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36154778

ABSTRACT

With changing times, the need for security increases in all fields, whether cloud networks or vehicular networks. It matters everywhere, but in vehicular networks, where human lives are involved, security becomes the topmost priority. This article therefore sheds light on the Misbehavior Detection Framework (MDF) used in the Cooperative Intelligent Transport Systems community. The MDF keeps an eye on malicious entities on the roads by regularly evaluating two main checks, consistency and local plausibility, performed by Intelligent Transport System stations. All messages received through Vehicle-to-Everything communication are scrutinized by this model, then evaluated by local detection mechanisms to decide each message's overall plausibility. This article focuses on the logic behind the proposed MDF, which provides stronger security, and evaluates various machine-learning-based models to select the best one based on quality and computational latency, reporting parameters such as recall, precision, F1 score, accuracy, bookmaker informedness, markedness, Matthews correlation coefficient, and kappa, on which the best results were achieved.
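A local plausibility check of the kind the MDF evaluates can be sketched as a physical bound on the speed implied by consecutive position beacons. The message fields and the speed threshold below are illustrative assumptions, not the framework's actual rule set.

```python
def plausible(prev, curr, max_speed=55.0):
    """Flag a V2X beacon whose implied speed exceeds a physical bound (m/s).

    prev/curr are consecutive position reports from the same sender.
    """
    dt = curr["t"] - prev["t"]
    if dt <= 0:                      # non-monotonic timestamps are implausible
        return False
    dx = curr["x"] - prev["x"]
    dy = curr["y"] - prev["y"]
    speed = (dx * dx + dy * dy) ** 0.5 / dt
    return speed <= max_speed

honest = ({"t": 0.0, "x": 0.0, "y": 0.0}, {"t": 1.0, "x": 30.0, "y": 0.0})
spoofed = ({"t": 0.0, "x": 0.0, "y": 0.0}, {"t": 1.0, "x": 500.0, "y": 0.0})
```

A spoofed position jump of 500 m in one second fails the check, and repeated failures feed the station's misbehavior verdict for that sender.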

16.
Sensors (Basel) ; 22(16)2022 Aug 16.
Article in English | MEDLINE | ID: mdl-36015869

ABSTRACT

Wireless sensor networks (WSNs) have recently been viewed as the basic architecture that prepared the way for the Internet of Things (IoT). Nevertheless, when WSNs are linked with the IoT, a difficult issue arises from excessive energy utilization in their nodes and short network lifetimes. As a result, energy constraints in sensor nodes, sensor data sharing, and routing protocols are fundamental topics in WSN research. This research presents an enhanced smart energy-efficient routing protocol (ESEERP) that extends the lifetime of the network and improves its connectivity to address the aforementioned deficiencies. It selects the cluster head (CH) using an efficient optimization method derived from several objectives, which helps reduce the number of sleepy sensor nodes and decreases energy utilization. A Sailfish Optimizer (SFO) is used to find an appropriate route to the sink node for data transfer following CH selection. The proposed methodology is analyzed mathematically for energy utilization, bandwidth, packet delivery ratio, and network longevity, and the results are compared with similar existing approaches such as a genetic algorithm (GA), Ant Lion Optimization (ALO), and Particle Swarm Optimization (PSO). The simulations show that the proposed approach sustains the network for 3500 rounds, energy utilization reaches a maximum of 0.5 J, data are transmitted at 0.52 Mbps, and the packet delivery ratio (PDR) is 96% for 500 nodes.


Subject(s)
Computer Communication Networks, Internet of Things, Algorithms, Animals, Conservation of Energy Resources, Wireless Technology
17.
Comput Math Methods Med ; 2022: 8680737, 2022.
Article in English | MEDLINE | ID: mdl-35983528

ABSTRACT

Developments in medical care have inspired wide interest in the current decade, especially for their role in helping individuals live longer and healthier lives. Alzheimer's disease (AD) is the most common chronic neurodegenerative, dementia-causing disorder, and the economic expense of treating AD patients is expected to grow, making the development of computer-aided techniques for early AD categorization ever more essential. Deep learning (DL) models offer numerous benefits over classical machine-learning tools. Several recent experiments exploiting brain magnetic resonance imaging (MRI) scans and convolutional neural networks (CNNs) for AD classification have shown promising results, as the CNN's receptive field aids in extracting the main recognizable features from these MRI scans. To increase classification accuracy, this research presents a new adaptive model based on CNNs and support vector machines (SVMs), combining the CNN's capability in feature extraction with the SVM's in classification. The objective is to build a hybrid CNN-SVM model for classifying AD using the MRI ADNI dataset. Experimental results reveal that the hybrid CNN-SVM model outperforms the CNN alone, with relative improvements of 3.4%, 1.09%, 0.85%, and 2.82% on the testing dataset for AD vs. cognitively normal (CN), CN vs. mild cognitive impairment (MCI), AD vs. MCI, and CN vs. MCI vs. AD, respectively. Finally, the proposed approach was further tested on the OASIS dataset, reaching an accuracy of 86.2%.


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Alzheimer Disease/diagnostic imaging, Brain/diagnostic imaging, Cognitive Dysfunction/diagnostic imaging, Humans, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Neuroimaging/methods
18.
Sensors (Basel) ; 22(13)2022 Jul 02.
Article in English | MEDLINE | ID: mdl-35808508

ABSTRACT

Cloud providers create a vendor-locked-in environment by offering proprietary, non-standard APIs, resulting in a lack of interoperability and portability among clouds. To overcome this barrier, solutions must be developed to exploit multiple clouds efficaciously. This paper proposes a middleware platform to mitigate the application portability issue among clouds, along with a literature review analyzing existing solutions for application portability. The middleware allows an application to be ported across various platform-as-a-service (PaaS) clouds and supports deploying different services of an application on disparate clouds. The efficiency of the abstraction layer is validated by experimentation on an application that uses the message queue, Binary Large Object (BLOB), email, and short message service (SMS) services of various clouds via the proposed middleware, against the same application using these services via their native code. The experimental results show that adding the middleware mildly affects latency but dramatically reduces the developer's overhead of implementing each service separately for different clouds to make the application portable.
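An abstraction layer of this kind usually follows the adapter pattern: the application codes against a cloud-neutral interface and provider-specific adapters are swapped underneath. The class and provider names below are hypothetical, and the adapters record calls instead of wrapping real SDKs so the sketch stays self-contained.

```python
from abc import ABC, abstractmethod

class QueueService(ABC):
    """Cloud-neutral interface the application codes against."""
    @abstractmethod
    def send(self, message: str) -> None: ...

class AwsQueueAdapter(QueueService):
    # A real adapter would wrap the provider SDK (e.g. queue API calls);
    # here it records sends so the example is runnable anywhere.
    def __init__(self):
        self.sent = []
    def send(self, message):
        self.sent.append(("aws", message))

class AzureQueueAdapter(QueueService):
    def __init__(self):
        self.sent = []
    def send(self, message):
        self.sent.append(("azure", message))

def make_queue(provider: str) -> QueueService:
    adapters = {"aws": AwsQueueAdapter, "azure": AzureQueueAdapter}
    return adapters[provider]()   # swap clouds without touching app code

queue = make_queue("azure")
queue.send("order-created")
```

Porting the application to another PaaS then amounts to registering one new adapter, which is where the reported reduction in developer overhead comes from.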


Subject(s)
Software
19.
Sci Rep ; 12(1): 13013, 2022 07 29.
Article in English | MEDLINE | ID: mdl-35906269

ABSTRACT

In smart-city applications, multi-node cooperative spectrum sensing (CSS) can boost spectrum-sensing efficiency in cognitive wireless networks (CWNs), although the relationship between the number of nodes and sensing efficiency is non-linear. Cooperative sensing by nodes with low computational capability does little to improve sensing reliability and diminishes the energy efficiency of spectrum sensing, which hinders the normal operation of a CWN. To improve the evaluation and selection of nodes, this work addresses the sensor-selection problem in cognitive sensor networks for energy-efficient spectrum sensing. We examined how to reduce energy usage in smart cities while substantially boosting spectrum-detection accuracy. To optimize energy efficiency in spectrum sensing while minimizing complexity, we use energy detection for spectrum sensing and formulate the sensor-selection challenge. This article proposes an algorithm for choosing the sensing nodes that reduces energy utilization while improving sensing efficiency. All node information is stored in the fusion center (FC), where a blockchain encrypts it so that a node's trust value matches its own record without ambiguity; the CWN-FC then picks high-performance nodes to engage in CSS. The performance evaluation and computational results compare several algorithms against the proposed approach, which achieves a 10% gain in sensing efficiency in finding the solution for identification and triggering possibilities with the value of [Formula: see text] and [Formula: see text] with a varying number of nodes.
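Two of the building blocks named above can be sketched briefly: classical energy detection (declare the channel busy when average sample energy exceeds a threshold) and a node-selection rule. The trust/cost ranking below is our own toy illustration of selecting high-performance, low-energy nodes, not the paper's exact algorithm, and the threshold and node values are made up for the example.

```python
import numpy as np

def energy_detect(samples, threshold):
    # Classical energy detector: the channel is declared occupied when
    # the average signal energy exceeds the threshold.
    energy = np.mean(np.abs(samples) ** 2)
    return bool(energy > threshold), energy

def select_nodes(nodes, k):
    # Toy selection rule: rank nodes by trust per unit energy cost and
    # keep the top k for cooperative sensing.
    ranked = sorted(nodes, key=lambda n: n["trust"] / n["cost"], reverse=True)
    return ranked[:k]

rng = np.random.default_rng(1)
noise = rng.normal(0, 1, 1000)                         # channel idle
signal = noise + 2.0 * np.sin(np.arange(1000) * 0.1)   # primary user active

# Unit noise power ~1.0; the active channel adds ~2.0 of signal power,
# so a threshold of 1.5 separates the two cases.
idle_busy = (energy_detect(noise, 1.5)[0], energy_detect(signal, 1.5)[0])

nodes = [
    {"id": 0, "trust": 0.90, "cost": 1.0},
    {"id": 1, "trust": 0.40, "cost": 0.5},
    {"id": 2, "trust": 0.95, "cost": 2.0},
    {"id": 3, "trust": 0.70, "cost": 0.6},
]
chosen = [n["id"] for n in select_nodes(nodes, 2)]
```

In the paper, the trust values driving selection are maintained tamper-evidently on the blockchain at the FC rather than in a plain list.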


Subject(s)
Blockchain, Computer Communication Networks, Cities, Cognition, Reproducibility of Results, Wireless Technology
20.
Sensors (Basel) ; 22(12)2022 Jun 19.
Article in English | MEDLINE | ID: mdl-35746411

ABSTRACT

New technologies and trends in industry have opened the way for the distributed establishment of Cyber-Physical Systems (CPSs) for smart industries. CPSs rely heavily on the Internet of Things (IoT), since data are stored on cloud servers, which imposes many constraints due to the heterogeneous nature of the devices involved in communication. Among these challenges, security is the most daunting and contributes, at least in part, to the slow momentum of CPS adoption. Designers often assume that CPSs are inherently protected because they cannot be accessed from external networks. However, modern CPSs combine the cyber world with the physical layer, so cyber-security problems loom large for commercial CPSs in which systems interact with one another and jointly with their physical surroundings, i.e., Complex Industrial Applications (CIAs). Therefore, this paper proposes a novel data-security algorithm, the Dynamic Hybrid Secured Encryption Technique (DHSE), based on a hybrid encryption scheme combining the Advanced Encryption Standard (AES), Identity-Based Encryption (IBE), and Attribute-Based Encryption (ABE). The proposed algorithm divides data into three categories: less sensitive, mid-sensitive, and highly sensitive. The data are distributed by forming named-data packets (NDPs) via labelling the names. The number of rounds can be chosen according to the key size; DHSE requires a minimum of 10 rounds for 128-bit keys. The average encryption times taken by AES, IBE, and ABE are 3.25 ms, 2.18 ms, and 2.39 ms, respectively, whereas the DHSE encryption algorithm averages 2.07 ms, considerably lower than the other algorithms. Similarly, the average decryption times taken by AES, IBE, and ABE are 1.77 ms, 1.09 ms, and 1.20 ms, while the DHSE decryption algorithm averages 1.07 ms, again considerably lower. The analysis shows that the framework is well designed and provides data confidentiality with minimal encryption and decryption time. Therefore, the proposed approach is well suited for CPS-IoT.
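The sensitivity-based routing at the heart of DHSE can be sketched as a dispatch layer. Everything below is illustrative: the `label` field standing in for the NDP naming scheme is a hypothetical simplification, and the SHA-256 keystream XOR is a deliberately insecure placeholder for the real AES/IBE/ABE back ends, used only so the example is self-contained.

```python
import hashlib

def keystream_cipher(key: bytes, data: bytes) -> bytes:
    # Toy symmetric stand-in (SHA-256 counter-mode keystream XOR) used in
    # place of real AES/IBE/ABE back ends. NOT secure; illustration only.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def classify_sensitivity(record: dict) -> str:
    # Hypothetical labelling rule: in DHSE the NDP name determines the
    # category; here a 'label' field stands in for that naming scheme.
    return record["label"]  # "low", "mid", or "high"

BACKENDS = {            # DHSE routes each category to a different scheme
    "low": b"AES-key",  # less sensitive   -> AES
    "mid": b"IBE-key",  # mid-sensitive    -> IBE
    "high": b"ABE-key", # highly sensitive -> ABE
}

def dhse_encrypt(record: dict) -> bytes:
    key = BACKENDS[classify_sensitivity(record)]
    return keystream_cipher(key, record["payload"])

def dhse_decrypt(record: dict, ciphertext: bytes) -> bytes:
    key = BACKENDS[classify_sensitivity(record)]
    return keystream_cipher(key, ciphertext)  # XOR cipher is its own inverse

record = {"label": "high", "payload": b"plant telemetry"}
ct = dhse_encrypt(record)
pt = dhse_decrypt(record, ct)
```

The design point is that only the dispatch table changes when a category's scheme is swapped; producers and consumers of NDPs never touch cipher-specific code.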


Subject(s)
Cloud Computing, Internet of Things, Computer Security, Confidentiality, Information Storage and Retrieval