Results 1 - 20 of 36
1.
Sci Rep ; 14(1): 20447, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227381

ABSTRACT

Renewable energy sources are playing a leading role in today's world. However, integrating these sources into the distribution network through power electronic devices can lead to power quality (PQ) challenges. This work addresses PQ issues by utilizing a shunt active power filter in combination with an Energy Storage System (ESS), a Wind Energy Generation System (WEGS), and a solar energy system. While most previous research has relied on complex methods like the synchronous reference frame (SRF) and active-reactive power (pq) approaches, this work proposes a simplified approach that uses a neural network (NN) for generating reference signals, along with the design of a five-level reduced-switch voltage source converter. The gain values of the proportional-integral controller (PIC), as well as the parameters of the shunt filter and of the boost and buck-boost converters in the WEGS and ESS, are optimally selected using the horse herd optimization algorithm; the weights and biases of the NN are also determined with this method. The proposed system aims to achieve three key objectives: (1) stabilizing the voltage across the DC bus capacitor; (2) reducing total harmonic distortion (THD) and improving the power factor; and (3) ensuring superior performance under varying demand and PV irradiation conditions. The system's effectiveness is evaluated through three different testing scenarios, with results compared against those obtained using the genetic algorithm, biogeography-based optimization (BBO), and the conventional SRF and pq methods with PIC. The results clearly demonstrate that the proposed method achieves THD values of 3.69%, 3.76%, and 4.0%, which are lower than those of the other techniques and well within IEEE limits. The method was developed using MATLAB/Simulink version 2022b.
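
The THD figures above follow the standard definition: the RMS of the harmonic components relative to the RMS of the fundamental. As a hedged illustration of how such a figure is computed from a sampled line current (the waveform, harmonic amplitudes, and sample rate below are assumptions, not the paper's data), a minimal NumPy sketch:

```python
import numpy as np

# Hypothetical sampled current: 50 Hz fundamental plus 5th and 7th harmonics.
fs = 10_000                        # sampling rate (Hz), assumed
t = np.arange(0, 0.2, 1 / fs)      # 0.2 s window = 10 fundamental cycles
i_load = (10.0 * np.sin(2 * np.pi * 50 * t)
          + 0.3 * np.sin(2 * np.pi * 250 * t)
          + 0.2 * np.sin(2 * np.pi * 350 * t))

def thd_percent(signal, fs, f1=50.0, n_harmonics=40):
    """THD (%) = 100 * sqrt(sum of harmonic magnitudes^2) / fundamental magnitude."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    def mag_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]
    fund = mag_at(f1)
    harmonics = [mag_at(k * f1) for k in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h ** 2 for h in harmonics)) / fund

print(f"THD = {thd_percent(i_load, fs):.2f} %")
```

For this toy waveform the harmonic content is sqrt(0.3^2 + 0.2^2)/10, about 3.6%, which is in the same range as the values the abstract reports.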

2.
Sci Rep ; 14(1): 21795, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39294258

ABSTRACT

In this work, a new kind of charge scheduling algorithm is proposed by utilizing the War Strategy Optimization (WSO) algorithm. Strategies used in war, such as attack, defense, and assigning soldiers to positions, are the inspiration for this algorithm. The proposed WSO algorithm is validated on a constructed geographic area consisting of six starting/destination points, sixteen nodes, and twelve charging stations. In terms of waiting time and charging cost, the experimental results show that the WSO method improves substantially over current methods. The average waiting time and average charging cost of EVs are validated in MATLAB under different settings, with the number of EVs varied from 25 to 100 and the number of charging piles varied from 1 to 4. Specifically, the WSO algorithm lowered charging costs by up to 13.67% and waiting times by up to 83.25% relative to the First Come First Serve algorithm. Compared with the Chaotic Harris Hawk Optimization and Harris Hawk Optimization algorithms, the WSO method reduced waiting time by 11.17% and 39.09%, respectively, and charging costs by 3.61% and 12.45%, respectively. These findings show that, especially in situations with limited charging infrastructure, the WSO algorithm may improve the efficiency and cost-effectiveness of EV charging management systems. Its capacity to efficiently allocate EVs among charging stations, lower waiting times, and lower charging costs makes it a promising solution for real-world EV charging management.
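
The scheduling problem described here amounts to assigning EVs to stations so that a combination of waiting time and charging cost is minimized. The sketch below is not the authors' WSO implementation; it uses a plain single-swap random search over assignments, with made-up fleet sizes, charge times, and prices, only to make the objective concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

n_evs, n_stations = 25, 12
piles_per_station = 2                        # charging piles per station (assumed)
charge_time = rng.uniform(20, 45, n_evs)     # minutes each EV needs (assumed)
price = rng.uniform(0.10, 0.30, n_stations)  # $/minute at each station (assumed)

def waiting_and_cost(assignment):
    """Average waiting time (min) and average charging cost ($) for one assignment."""
    wait = np.zeros(n_evs)
    for s in range(n_stations):
        evs = np.where(assignment == s)[0]
        pile_free = np.zeros(piles_per_station)   # serve EVs on the earliest-free pile
        for ev in evs:
            p = np.argmin(pile_free)
            wait[ev] = pile_free[p]
            pile_free[p] += charge_time[ev]
    cost = charge_time * price[assignment]
    return wait.mean(), cost.mean()

def objective(assignment, w=0.5):
    wt, c = waiting_and_cost(assignment)
    return w * wt + (1 - w) * c                   # weighted trade-off (assumed weights)

# Plain random search as a stand-in for the metaheuristic's exploration phase.
best = rng.integers(0, n_stations, n_evs)
for _ in range(2000):
    cand = best.copy()
    cand[rng.integers(n_evs)] = rng.integers(n_stations)   # move one EV
    if objective(cand) < objective(best):
        best = cand

print("avg wait (min), avg cost ($):", waiting_and_cost(best))
```

A metaheuristic such as WSO would replace the single-swap search with its population-based attack/defense update rules over the same kind of objective.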

3.
Sci Rep ; 14(1): 20622, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232053

ABSTRACT

Alzheimer's Disease (AD) causes the gradual death of brain cells due to brain shrinkage and is more prevalent in older people. In most cases, the symptoms of AD are mistaken for age-related stress. The most widely utilized method to detect AD is Magnetic Resonance Imaging (MRI), and Artificial Intelligence (AI) techniques have made it easier to identify brain-related diseases from such images. However, the near-identical phenotypes make it challenging to identify the disease from neuro-images. Hence, a deep learning method to detect AD at an early stage is proposed in this work. The newly implemented "Enhanced Residual Attention with Bi-directional Long Short-Term Memory (Bi-LSTM) network (ERABi-LNet)" is used in the detection phase to identify AD from MRI images. This model enhances Alzheimer's detection performance by 2-5%, minimizes error rates, and improves model balance so that multi-class problems are supported. First, MRI images are given to a Residual Attention Network (RAN), which is specially developed with three convolutional layers, namely atrous, dilated and Depth-Wise Separable (DWS) convolutions, to obtain the relevant attributes. The most appropriate attributes determined by these layers are subjected to target-based fusion, and the fused attributes are fed into an attention-based Bi-LSTM, from which the final outcome is obtained. By tuning the parameters of ERABi-LNet with the help of Modified Search and Rescue Operations (MCDMR-SRO), a median-based detection efficiency of 26.37% and an accuracy of 97.367% are obtained. The results are compared with ROA-ERABi-LNet, EOO-ERABi-LNet, GTBO-ERABi-LNet and SRO-ERABi-LNet, and ERABi-LNet provides enhanced accuracy and other performance metrics compared with these deep learning models. The proposed method also achieves better sensitivity, specificity, F1-Score and False Positive Rate than all the above-mentioned competing models, with values of 97.49%, 97.84%, 97.74% and 2.616, respectively. This ensures that the model has better learning capability and produces fewer false positives with balanced prediction.
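
As a loose structural sketch of the kind of pipeline described here (attention-guided convolutional features fused and passed to a bidirectional LSTM), the PyTorch module below is a simplification under stated assumptions, not the authors' ERABi-LNet; layer sizes, the fusion rule, and the attention form are all illustrative:

```python
import torch
import torch.nn as nn

class SketchRABiLSTM(nn.Module):
    """Toy residual-attention-style CNN branches + attention-based Bi-LSTM head."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.stem = nn.Conv2d(1, 8, 3, padding=1)
        self.dilated = nn.Conv2d(8, 16, 3, padding=2, dilation=2)   # atrous/dilated branch
        self.depthwise = nn.Conv2d(8, 8, 3, padding=1, groups=8)    # depth-wise separable branch
        self.pointwise = nn.Conv2d(8, 16, 1)
        self.pool = nn.AdaptiveAvgPool2d((8, 8))
        self.bilstm = nn.LSTM(input_size=32, hidden_size=32,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(64, 1)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (B, 1, H, W) MRI slice
        x = torch.relu(self.stem(x))
        a = torch.relu(self.dilated(x))
        b = torch.relu(self.pointwise(self.depthwise(x)))
        f = torch.cat([a, b], dim=1)            # simple channel "fusion" of the branches
        f = self.pool(f)                        # (B, 32, 8, 8)
        seq = f.flatten(2).transpose(1, 2)      # (B, 64, 32): 64 spatial steps
        h, _ = self.bilstm(seq)                 # (B, 64, 64)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over the steps
        ctx = (w * h).sum(dim=1)                # attended context vector
        return self.fc(ctx)

logits = SketchRABiLSTM()(torch.randn(2, 1, 64, 64))
print(logits.shape)                             # torch.Size([2, 3])
```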


Subjects
Alzheimer Disease, Magnetic Resonance Imaging, Humans, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/pathology, Magnetic Resonance Imaging/methods, Deep Learning, Memory, Short-Term/physiology, Brain/diagnostic imaging, Brain/pathology, Neural Networks, Computer, Aged
4.
Sci Rep ; 14(1): 22188, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39333598

ABSTRACT

In air-to-ground transmission, the lifespan of the network is tied to the unmanned aerial vehicle's (UAV) lifespan because of its limited battery capacity. Thus, enhancing energy efficiency and minimizing outages for ground candidates are significant factors in network functionality. UAV-aided transmission can greatly enhance spectrum efficiency and coverage. Because of their flexible deployment and high maneuverability, UAVs can be the best alternative in situations where Internet of Things (IoT) systems, when far away from the terrestrial base station, would otherwise use more energy to attain the essential information rate. It is therefore important to overcome the shortcomings of conventional UAV-aided efficiency approaches. This work aims to design an innovative energy-efficiency framework for UAV-assisted networks using a reinforcement learning mechanism. Optimizing the energy efficiency of the UAV offers better wireless coverage to static and mobile ground users. Current reinforcement learning techniques optimize the energy efficiency of the system by employing a 2D trajectory mechanism, which removes the interference arising in nearby UAV cells. The main objective of the recommended framework is to maximize the energy efficiency of the UAV network by jointly optimizing the UAV 3D trajectory, the energy utilized while accounting for interference, and the number of connected users. Hence, an efficient Adaptive Deep Reinforcement Learning with Novel Loss Function (ADRL-NLF) framework is designed to provide a better energy efficiency rate for the UAV network. Moreover, the parameters of ADRL are tuned using the Hybrid Energy Valley and Hermit Crab (HEVHC) algorithm. Various experimental observations are performed to assess the effectiveness of the recommended energy-efficiency model for UAV-based networks against classical energy-efficiency frameworks for UAV networks.
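
To make the reinforcement-learning idea concrete, the toy below trains a tabular Q-learning agent to position a single UAV over a small grid so that a crude "covered users per unit energy" reward is maximized. It is only a hedged illustration of the RL loop: the grid, user positions, energy costs, and reward are assumptions, and the paper's ADRL-NLF agent, 3D trajectories, and interference model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

GRID = 5                                    # 5x5 service area, UAV hovers over cells (assumed)
users = rng.integers(0, GRID, size=(8, 2))  # static ground users (assumed positions)
actions = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]   # E, W, S, N, hover

def reward(pos, moved):
    covered = np.sum(np.abs(users - pos).sum(axis=1) <= 1)   # users within range 1
    energy = 1.0 if moved else 0.4                           # move vs hover energy (assumed)
    return covered / energy                                  # crude "users per unit energy" proxy

Q = np.zeros((GRID, GRID, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(500):
    pos = np.array([0, 0])
    for step in range(30):
        s = tuple(pos)
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt = np.clip(pos + actions[a], 0, GRID - 1)
        r = reward(nxt, moved=not np.array_equal(nxt, pos))
        Q[s + (a,)] += alpha * (r + gamma * Q[tuple(nxt)].max() - Q[s + (a,)])
        pos = nxt

print("greedy value at start cell:", Q[0, 0].max())
```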

6.
Sci Rep ; 14(1): 21483, 2024 09 14.
Article in English | MEDLINE | ID: mdl-39277644

ABSTRACT

Maternal health risks can cause a range of complications for women during pregnancy. High blood pressure, abnormal glucose levels, depression, anxiety, and other maternal health conditions can all lead to pregnancy complications, and proper identification and monitoring of risk factors can help reduce them. The primary goal of this research is to use real-world datasets to identify and predict Maternal Health Risk (MHR) factors. To this end, we developed and implemented a Quad-Ensemble Machine Learning framework to predict Maternal Health Risk Classification (QEML-MHRC). The methodology used a variety of Machine Learning (ML) models, which were then integrated with four ensemble ML techniques to improve prediction. The dataset, collected from various maternity hospitals and clinics, was subjected to nineteen training and testing experiments. According to the exploratory data analysis, the most significant risk factors for pregnant women include high blood pressure, low blood pressure, and high blood sugar levels. The study proposes a novel approach to dealing with high-risk factors linked to maternal health. Class-specific performance is examined further to properly understand the distinction between high, medium, and low risks. All tests yielded outstanding results when predicting the level of risk during pregnancy. In terms of class performance, the high-risk ("HR") class outperformed the others, with 90% of cases predicted correctly. Gradient-boosted trees (GBT) with ensemble stacking outperformed the other models, demonstrating remarkable performance on all evaluation measures (0.86) across all classes in the dataset. A key strength of the models used in this work is the ability to measure model performance using a class-wise distribution. The proposed approach can help medical experts assess maternal health risks, saving lives and preventing complications throughout pregnancy. The prediction approach presented in this study can detect high-risk pregnancies early on, allowing for timely intervention and treatment. This study's development and findings have the potential to raise public awareness of maternal health issues.
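
The core modelling idea, several base learners combined through ensemble stacking with class-wise reporting, can be sketched with scikit-learn. The data below are synthetic and the estimators are illustrative stand-ins; this is not the paper's QEML-MHRC pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the maternal-health dataset (3 risk classes: low/mid/high).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=6,
                           n_classes=3, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

stack = StackingClassifier(
    estimators=[("gbt", GradientBoostingClassifier(random_state=42)),
                ("rf", RandomForestClassifier(random_state=42))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)

# Class-wise precision/recall/F1 mirrors the per-class (HR/MR/LR) reporting style.
print(classification_report(y_te, stack.predict(X_te)))
```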


Subjects
Machine Learning, Maternal Health, Pregnancy Complications, Humans, Pregnancy, Female, Risk Factors, Pregnancy Complications/epidemiology, Risk Assessment/methods, Adult
7.
Sci Rep ; 14(1): 21532, 2024 09 15.
Article in English | MEDLINE | ID: mdl-39278954

ABSTRACT

Advances in technology, and in the Internet of Things (IoT) in particular, are making remote medical-care observation possible, yet effective and secure retrieval of healthcare information remains complex. IoT systems have restricted resources, which makes effective and secure healthcare information acquisition difficult. The idea of smart healthcare has developed in diverse regions, where small-scale implementations of medical facilities are evaluated. For IoT-aided medical devices, the security of the IoT systems and the related information is highly essential; edge computing, in turn, is a significant framework that rectifies their processing and computational issues. Edge computing is inexpensive and is a powerful framework for offering low-latency information services by enhancing the computation and transmission speed of IoT systems in the medical sector. The main intention of this work is to design a secure framework for edge computing in IoT-enabled healthcare systems using heuristic-based authentication and Named Data Networking (NDN). There are three layers in the proposed model. In the first layer, many IoT devices are connected together and, using cluster-head formation, patients transmit their data to the edge cloud layer. The edge cloud layer is responsible for storage and computing resources for rapidly caching and providing medical data. In the patient layer, a new heuristic-based sanitization algorithm called Revised Position of Cat Swarm Optimization (RPCSO) with NDN is used for hiding sensitive data that should not be leaked to unauthorized users. This authentication procedure is formulated as a multi-objective key generation problem considering constraints such as the hiding failure rate, information preservation rate, and degree of modification. Further, the data from the edge cloud layer is transferred to the user layer, where optimal key generation with NDN-based restoration is adopted, thus achieving efficient and secure medical data retrieval. The framework is evaluated quantitatively on diverse healthcare datasets from the University of California Irvine (UCI) and Kaggle repositories, and experimental analysis shows the superior performance of the proposed model in terms of latency and cost when compared with existing solutions. The proposed model is compared against existing algorithms such as Cat Swarm Optimization (CSO), the Osprey Optimization Algorithm (OOA), Mexican Axolotl Optimization (MAO), and the Single Candidate Optimizer (SCO). Similarly, cryptography techniques such as Rivest-Shamir-Adleman (RSA), the Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), and Data Sanitization and Restoration (DSR) are applied and compared with the RPCSO in the proposed work. The results are compared on the basis of the best, worst, mean, median and standard deviation values. The proposed RPCSO outperforms all other models, with values of 0.018069361, 0.50564046, 0.112643119, 0.018069361, 0.156968355 for dataset 1 and 0.283597992, 0.467442652, 0.32920734, 0.328581887, 0.063687386 for dataset 2, respectively.
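
The key-generation objective named here (hiding failure rate, information preservation rate, degree of modification) can be written down as a small fitness function that any optimizer, RPCSO or otherwise, could score candidate keys against. The weights, formulas, and toy sanitization rule below are assumptions for illustration only:

```python
import numpy as np

def sanitization_fitness(original, sanitized, sensitive_mask,
                         w_hide=0.4, w_preserve=0.4, w_modify=0.2):
    """Toy multi-objective fitness for key-based data sanitization.

    hiding failure rate   : fraction of sensitive fields still left unchanged
    preservation rate     : fraction of non-sensitive fields left intact
    degree of modification: overall fraction of fields that were altered
    (weights and exact formulas are illustrative assumptions)
    """
    changed = original != sanitized
    hiding_failure = np.mean(~changed[sensitive_mask])        # want 0
    preservation = np.mean(~changed[~sensitive_mask])         # want 1
    modification = np.mean(changed)                           # want small
    # Lower is better: penalize failures/modification, reward preservation.
    return (w_hide * hiding_failure
            + w_modify * modification
            - w_preserve * preservation)

rng = np.random.default_rng(3)
record = rng.integers(0, 100, size=50)           # a hypothetical patient record
sensitive = np.zeros(50, dtype=bool)
sensitive[:10] = True                            # first 10 fields marked sensitive
key = rng.integers(1, 100, size=50)              # candidate sanitization key
sanitized = np.where(sensitive, (record + key) % 100, record)

print("fitness:", sanitization_fitness(record, sanitized, sensitive))
```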


Subjects
Cloud Computing, Computer Security, Internet of Things, Humans, Heuristics, Algorithms, Delivery of Health Care, Computer Communication Networks
8.
Sci Rep ; 14(1): 18437, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39117706

ABSTRACT

In many emerging nations, rapid industrialization and urbanization have led to heightened levels of air pollution. This rise in air pollution, which affects global sustainability and human health, has become a significant concern for citizens and governments. While most current methods for predicting air quality rely on shallow models and often yield unsatisfactory results, our study explores a deep architectural model for forecasting air quality. We employ a sophisticated deep learning structure to develop an advanced system for ambient air quality prediction, using three publicly available databases and real-world data to obtain accurate air quality measurements. These four datasets undergo data cleaning to yield a consolidated, cleaned dataset. Subsequently, the Fused Eurasian Oystercatcher-Pathfinder Algorithm (FEO-PFA)-a dual optimization method combining the Eurasian Oystercatcher Optimizer (EOO) and Pathfinder Algorithm (PFA)-is applied. This method aids in selecting weighted features, optimizing weights, and choosing the most relevant attributes for optimal results. These optimal features are then incorporated into the Multiscale Depth-wise Separable Adaptive Temporal Convolutional Network (MDS-ATCN) for the ambient Air Quality Prediction (AQP) process. The variables within MDS-ATCN are further refined using the proposed FEO-PFA to enhance predictive accuracy. An empirical analysis compares the efficacy of the proposed model with traditional methods, underscoring the superior effectiveness of our approach. According to the performance study conducted across all datasets, the suggested method reduces the average cost function to 5.5%, the MAE to 28%, and the RMSE to 14%.
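
A depth-wise separable, dilated temporal convolution is the building block the MDS-ATCN name points at. The PyTorch sketch below shows that block in a generic TCN-style forecaster; channel counts, dilation schedule, and the single-step head are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class DSTemporalBlock(nn.Module):
    """One depth-wise separable, dilated, causal 1-D conv block (TCN-style sketch)."""
    def __init__(self, channels, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left-pad only => causal
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   dilation=dilation, groups=channels)
        self.pointwise = nn.Conv1d(channels, channels, 1)
        self.act = nn.ReLU()

    def forward(self, x):                                # x: (batch, channels, time)
        y = nn.functional.pad(x, (self.pad, 0))          # pad the past only
        y = self.act(self.pointwise(self.depthwise(y)))
        return x + y                                     # residual connection

class TinyAQIForecaster(nn.Module):
    def __init__(self, n_features=6, hidden=16, horizon=1):
        super().__init__()
        self.inp = nn.Conv1d(n_features, hidden, 1)
        self.blocks = nn.Sequential(DSTemporalBlock(hidden, dilation=1),
                                    DSTemporalBlock(hidden, dilation=2),
                                    DSTemporalBlock(hidden, dilation=4))
        self.out = nn.Linear(hidden, horizon)

    def forward(self, x):                                # x: (batch, n_features, time)
        h = self.blocks(self.inp(x))
        return self.out(h[:, :, -1])                     # predict from the last step

pred = TinyAQIForecaster()(torch.randn(4, 6, 48))        # 48 past hours of 6 pollutants
print(pred.shape)                                        # torch.Size([4, 1])
```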

9.
Sci Rep ; 14(1): 19221, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160200

ABSTRACT

High-power converters with significant gains represent established configurations that hold appeal for applications in the industrial and commercial sectors, such as fuel cell electric vehicles (FCEV), energy backup systems, and automotive headlamps. Existing literature predominantly features topologies employing a single duty ratio. However, this singular approach may not be dependable for operation at high duty cycles, necessitating additional components to enhance voltage gain. To address this, the current study introduces the concept of time-sharing within the context of a high-gain non-isolated DC-DC converter. This approach achieves substantially higher output voltage gains, approximately 13.33 times the input voltage. The proposed converter is analyzed from various perspectives and finally examined within the MATLAB/Simulink environment, where the theoretical analysis is validated and an efficiency of 97.4% is achieved.

10.
Sci Rep ; 14(1): 17938, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095433

ABSTRACT

This article examines the operational functionality of intelligent transport systems to enhance smart cities by reducing traffic congestion. Given the increasing populations of smart cities, there is a growing demand for public transit systems that address traffic congestion. The suggested system is therefore developed using a few parametric design models, which combine a point-to-point protocol with mode control optimization. The multi-objective parametric design for the smart transportation system is conducted using min-max functions to minimize the waiting time for end users. Furthermore, customers are given the option to utilize a line-following mechanism that offers suitable connectivity, along with independent identification and revitalization functions. The proposed model effectively eliminates the delay produced by transportation devices when positioning units are involved, ensuring that individual messages are delivered without interruption. To evaluate the proposed system model, four different scenarios were examined. A comparative analysis revealed that the suggested method achieves a suitable directional flow for 96% of smart transport units. Additionally, it reduces delays and waiting periods by 2% and 6% respectively, while increasing energy consumption by 29%.

11.
Sci Rep ; 14(1): 16805, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039123

ABSTRACT

The speed-torque characteristics of the magnet-less switched reluctance motor (SRM) are ideally suited to traction motor drives, and the absence of magnets helps minimize the overall cost of on-road EVs. The main drawbacks are torque and flux ripple, which are high in low-speed operation. Conventional direct torque control (DTC), which estimates flux magnitude and torque and applies voltage vectors (VVs), gives high torque ripple due to the selection of effective switching states and the accuracy of sector partitioning. On the other hand, existing model predictive control (MPC) with multiple objectives and optimization weighting factors also produces high torque ripple due to system dynamics and constraints. Therefore, both existing DTC and MPC can result in high torque ripple. This paper proposes a finite-set (FS) MPC with a single cost-function objective and no weighting factor: the predicted torque is used to evaluate the VVs and further minimize the ripple. The selected optimal VV minimizes the SRM drive's torque and flux ripple in both steady-state and dynamic behaviour. The classical DTC and the proposed model were developed, and simulation results were verified using MATLAB/Simulink. Experimental results on an SRM drive confirm that the proposed model minimizes torque and flux ripple more effectively than the existing DTC.
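
The FS-MPC idea, evaluate every admissible voltage vector against a single cost and apply the best one each control period, can be shown with a deliberately crude first-order torque model. The voltage set, lumped gain, and sample time below are assumptions; the paper's SRM flux/torque model is not reproduced:

```python
import numpy as np

# Candidate switching states map to a finite set of per-phase voltage levels;
# here a toy single-phase set (assumed, not the paper's converter).
voltage_vectors = np.array([-300.0, 0.0, 300.0])    # V

# Very rough discrete-time torque prediction: dT ~ k * v * dt (illustrative only).
k_torque = 2.0           # Nm per V*s, assumed lumped gain
dt = 50e-6               # 20 kHz control period

def fs_mpc_step(torque_now, torque_ref):
    """Evaluate every voltage vector with the single cost |T_ref - T_pred| (no weights)."""
    t_pred = torque_now + k_torque * voltage_vectors * dt
    cost = np.abs(torque_ref - t_pred)
    best = int(np.argmin(cost))
    return voltage_vectors[best], t_pred[best]

# Track a 5 Nm reference from rest; the torque ramps up, then rides the reference
# with a ripple bounded by one control-period torque step.
torque, trace = 0.0, []
for step in range(400):
    v, torque = fs_mpc_step(torque, torque_ref=5.0)
    trace.append(torque)

print(f"torque after 400 steps: {trace[-1]:.3f} Nm")
```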

12.
Sci Rep ; 14(1): 11235, 2024 05 16.
Article in English | MEDLINE | ID: mdl-38755202

ABSTRACT

Skin cancer is one of the most life-threatening diseases, caused by abnormal growth of skin cells exposed to ultraviolet radiation. Early detection is crucial for reducing aberrant cell proliferation, because the mortality rate is rising rapidly. Although multiple studies address skin cancer detection, challenges remain in improving accuracy, reducing computational time, and so on. In this research, skin cancer detection is performed using a modified falcon finch deep convolutional neural network classifier (modified falcon finch deep CNN) that detects the disease efficiently. The modified falcon finch deep CNN classifier effectively analyzes the information relevant to skin cancer, and errors are minimized. The inclusion of falcon finch optimization in the deep CNN classifier enables efficient parameter tuning, which enhances the robustness and boosts the convergence of the classifier so that skin cancer is detected in less time. The modified falcon finch deep CNN classifier achieved accuracy, sensitivity, and specificity values of 93.59%, 92.14%, and 95.22% under k-fold evaluation and 96.52%, 96.69%, and 96.54% under training-percentage evaluation, proving more effective than existing works.
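
The k-fold accuracy/sensitivity/specificity reporting used here is classifier-agnostic and easy to reproduce. The sketch below uses synthetic features and a stock random forest purely as a stand-in for the paper's modified falcon finch deep CNN:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold

# Synthetic binary stand-in for a skin-lesion feature matrix (not the paper's data).
X, y = make_classification(n_samples=600, n_features=20, weights=[0.6, 0.4],
                           random_state=7)

sens, spec, acc = [], [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=7).split(X, y):
    clf = RandomForestClassifier(random_state=7).fit(X[train_idx], y[train_idx])
    tn, fp, fn, tp = confusion_matrix(y[test_idx], clf.predict(X[test_idx])).ravel()
    sens.append(tp / (tp + fn))          # sensitivity (recall on positives)
    spec.append(tn / (tn + fp))          # specificity (recall on negatives)
    acc.append((tp + tn) / (tp + tn + fp + fn))

print(f"accuracy={np.mean(acc):.3f}  sensitivity={np.mean(sens):.3f}  "
      f"specificity={np.mean(spec):.3f}")
```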


Subjects
Neural Networks, Computer, Skin Neoplasms, Skin Neoplasms/diagnosis, Skin Neoplasms/classification, Skin Neoplasms/pathology, Humans, Finches, Animals, Male, Early Detection of Cancer/methods, Female, Sensitivity and Specificity
13.
Sci Rep ; 14(1): 10280, 2024 05 04.
Article in English | MEDLINE | ID: mdl-38704423

ABSTRACT

In modern healthcare, integrating Artificial Intelligence (AI) and the Internet of Medical Things (IoMT) is highly beneficial and has made it possible to effectively manage disease using networks of interconnected sensors worn by individuals. The purpose of this work is to develop an AI-IoMT framework for identifying several chronic diseases from patients' medical records. To that end, the Deep Auto-Optimized Collaborative Learning (DACL) Model, a new AI-IoMT framework, has been developed for rapid diagnosis of chronic diseases like heart disease, diabetes, and stroke. A Deep Auto-Encoder Model (DAEM) is used in the proposed framework to impute and preprocess the data by determining the characteristic fields or information that are missing. To speed up classification training and testing, the Golden Flower Search (GFS) approach is then utilized to choose the best features from the imputed data. In addition, the Collaborative Bias Integrated GAN (ColBGaN) model has been created for precisely recognizing and classifying the types of chronic diseases from patients' medical records. The loss function is optimally estimated during classification using the Water Drop Optimization (WDO) technique, reducing the classifier's error rate. Using well-known benchmarking datasets and performance measures, the proposed DACL's effectiveness and efficiency in identifying diseases are evaluated and compared.


Subjects
Artificial Intelligence, Internet of Things, Humans, Prognosis, Deep Learning, Chronic Disease, Algorithms
14.
Sci Rep ; 14(1): 7520, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38553492

ABSTRACT

The consumption of water underpins the physical health of most living species, so managing its purity and quality is extremely essential, as contaminated water has the potential to create adverse health and environmental consequences. This creates a dire necessity to measure, control and monitor the quality of water. The primary contaminant present in water is Total Dissolved Solids (TDS), which is hard to filter out, and various other substances are present beyond mere solids, such as potassium, sodium, chlorides, lead, nitrate, cadmium, arsenic and other pollutants. The proposed work aims to automate water quality estimation through Artificial Intelligence and uses Explainable Artificial Intelligence (XAI) to explain the most significant parameters contributing to the potability of water and the estimation of impurities. XAI offers transparency and justifiability as a white-box approach, since the underlying Machine Learning (ML) model is a black box unable to describe the reasoning behind its classification. The proposed work uses various ML models such as Logistic Regression, Support Vector Machine (SVM), Gaussian Naive Bayes, Decision Tree (DT) and Random Forest (RF) to classify whether the water is drinkable. The various XAI representations, such as the force plot, test patch, summary plot, dependency plot and decision plot generated with the Shapley-based (SHAP) explainer, explain the significant features, prediction score, feature importance and justification behind the water quality estimation. The RF classifier is selected for the explanation and yields optimum Accuracy and F1-Score of 0.9999, with Precision and Recall of 0.9997 and 0.998 respectively. Thus, the work is an exploratory analysis of the estimation and management of water quality, with indicators associated with their significance. This is emerging research with a vision of addressing water quality for the future as well.
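
A minimal version of the explanation step is to fit a random forest on a potability table and rank features by mean absolute SHAP value (the ordering a SHAP summary plot shows). The table below is synthetic with column names chosen to resemble common water-quality features, and the shap return shape varies slightly across library versions, so treat this as a sketch rather than copy-paste-exact code:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical potability table; column names mimic common water-quality features.
rng = np.random.default_rng(11)
cols = ["ph", "Hardness", "Solids", "Chloramines", "Sulfate",
        "Conductivity", "Organic_carbon", "Trihalomethanes", "Turbidity"]
X = pd.DataFrame(rng.normal(size=(500, len(cols))), columns=cols)
y = (X["Solids"] + 0.5 * X["ph"] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=11)
rf = RandomForestClassifier(n_estimators=200, random_state=11).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X_te)
sv = sv[1] if isinstance(sv, list) else sv     # class-1 attributions (older shap API)
if sv.ndim == 3:                               # newer shap: (samples, features, classes)
    sv = sv[..., 1]

# Rank features by mean |SHAP| (same ordering as shap.summary_plot's bar view).
for name, val in sorted(zip(cols, np.abs(sv).mean(axis=0)), key=lambda t: -t[1]):
    print(f"{name:16s} {val:.4f}")
```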

15.
Sci Rep ; 14(1): 6795, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514669

ABSTRACT

Industrial advancement, the use of large amounts of fossil fuels, vehicle pollution, and other calamities drastically increase the Air Quality Index (AQI) of major cities. AQI analysis of major cities is essential so that governments can take proper preventive and proactive measures to reduce air pollution. This research incorporates artificial intelligence in AQI prediction based on air pollution data: an optimized machine learning model that combines Grey Wolf Optimization (GWO) with the Decision Tree (DT) algorithm for accurate prediction of AQI in major cities of India. Air quality data available in the Kaggle repository is used for experimentation, and major cities like Delhi, Hyderabad, Kolkata, Bangalore, Visakhapatnam, and Chennai are considered for analysis. The proposed model's performance is experimentally verified through metrics like R-Square, RMSE, MSE, MAE, and accuracy. Existing machine learning models, such as the k-nearest neighbor, Random Forest regressor, and Support Vector regressor, are compared with the proposed model. The proposed model attains better prediction performance than traditional machine learning algorithms, with maximum accuracy of 88.98% for New Delhi, 91.49% for Bangalore, 94.48% for Kolkata, 97.66% for Hyderabad, 95.22% for Chennai and 97.68% for Visakhapatnam.
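
The GWO-DT pairing boils down to searching the decision tree's hyperparameters against a regression score. The sketch below uses a plain randomized search as a stand-in for the Grey Wolf metaheuristic over a comparable search space, on synthetic AQI-style data rather than the Kaggle set:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic AQI-style regression data (pollutant readings -> AQI); not the Kaggle set.
X, y = make_regression(n_samples=800, n_features=10, noise=10.0, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)

# The paper tunes the tree with Grey Wolf Optimization; here a plain randomized
# search stands in for that metaheuristic over the same kind of search space.
search = RandomizedSearchCV(
    DecisionTreeRegressor(random_state=5),
    param_distributions={"max_depth": list(range(3, 20)),
                         "min_samples_split": list(range(2, 20)),
                         "min_samples_leaf": list(range(1, 10))},
    n_iter=50, cv=5, scoring="neg_root_mean_squared_error", random_state=5,
)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("test R^2:", search.best_estimator_.score(X_te, y_te))
```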

16.
Sci Rep ; 14(1): 2820, 2024 02 03.
Article in English | MEDLINE | ID: mdl-38307901

ABSTRACT

This paper proposes and implements a deep learning-based image processing approach for autonomous apple picking. The system includes a lightweight one-stage detection network for fruit recognition, as well as computer vision to analyze the point class and anticipate a correct approach position for each fruit before grasping. Using the raw inputs from a high-resolution camera, fruit recognition and instance segmentation are performed on RGB photos. The computer vision classification and grasping systems are integrated: outcomes from tree-grown fruit are provided as input information, and output poses for every apple and orange are passed to the robotic arm for execution. The developed vision method is evaluated on RGB image data acquired from laboratory and plantation environments. Robot harvesting experiments are conducted both indoors and outdoors to evaluate the proposed harvesting system's performance. The research findings suggest that the proposed vision technique can control robotic harvesting effectively and precisely, with an identification success rate above 95% after the post-prediction process and a reattempt rate of less than 12%.


Subjects
Robotics, Fruit, Image Processing, Computer-Assisted, Hand Strength, Vision, Ocular
17.
Sci Rep ; 14(1): 4947, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38418484

ABSTRACT

The Internet of Things (IoT) paves the way for modern smart industrial applications and smart cities. A Trusted Authority acts as the sole controller in monitoring and maintaining the communications between the IoT devices and the infrastructure. Communication between IoT devices happens from one trusted entity of an area to another by way of generating security certificates, but establishing trust by generating security certificates for IoT devices in a smart city application can be costly. To address this, a secure group authentication scheme that creates trust amongst a group of IoT devices owned by several entities is proposed. The majority of existing authentication techniques are designed for individual device authentication and then reused for group authentication; a dedicated solution for group authentication is the Dickson polynomial based secure group authentication scheme. The secret keys used in the proposed authentication technique are generated using the Dickson polynomial, which enables the group to authenticate without generating excessive network traffic overhead. Blockchain technology is employed to enable secure, efficient, and fast data transfer among the unique IoT devices of each group deployed at different places. The proposed scheme is resistant to replay, man-in-the-middle, tampering, side-channel, signature-forgery, impersonation, and ephemeral key secret leakage attacks. To accomplish this, a hardware-based physically unclonable function is implemented. The implementation has been carried out in Python and deployed and tested on a blockchain using the Ethereum Goerli testnet framework. Performance analysis against various benchmarks shows that the proposed framework outperforms its counterparts across multiple metrics, and additional parameters used to assess the proposed blockchain framework show better performance in terms of computation, communication, storage and latency.
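
Dickson polynomials satisfy D_m(D_n(x, 1), 1) = D_mn(x, 1) = D_n(D_m(x, 1), 1), which is the algebraic property that key-agreement constructions can lean on. The toy below only demonstrates that property with small parameters; the paper's full group scheme, certificates, PUF, and blockchain steps are not reproduced, and real deployments would use far larger parameters:

```python
# Toy Dickson-polynomial key agreement (illustration of the commuting property only).

def dickson(n, x, a, p):
    """D_n(x, a) mod p via the recurrence D_k = x*D_{k-1} - a*D_{k-2}."""
    d_prev, d_curr = 2 % p, x % p            # D_0 = 2, D_1 = x
    if n == 0:
        return d_prev
    for _ in range(n - 1):
        d_prev, d_curr = d_curr, (x * d_curr - a * d_prev) % p
    return d_curr

p, x, a = 2_147_483_647, 123_456, 1          # public parameters (toy sizes)
n_secret, m_secret = 2025, 3391              # private exponents of two group members

pub_n = dickson(n_secret, x, a, p)           # published by member A
pub_m = dickson(m_secret, x, a, p)           # published by member B
shared_a = dickson(n_secret, pub_m, a, p)    # A computes D_n(D_m(x))
shared_b = dickson(m_secret, pub_n, a, p)    # B computes D_m(D_n(x))

assert shared_a == shared_b                  # both equal D_{nm}(x, 1) mod p
print("shared group secret:", shared_a)
```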

18.
Sci Rep ; 14(1): 843, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38191643

ABSTRACT

Detection and classification of epileptic seizures from EEG signals have gained significant attention in recent decades. Among other signals, EEG signals are extensively used by medical experts for diagnosis. Most existing research works therefore develop automated mechanisms for designing EEG-based epileptic seizure detection systems. Machine learning techniques are widely used for their reduced time consumption, high accuracy, and optimal performance, yet they are limited by high complexity in algorithm design, increased error values, and reduced detection efficacy. Thus, the proposed work intends to develop an automated epileptic seizure detection system with an improved performance rate. Here, the Finite Linear Haar wavelet-based Filtering (FLHF) technique is used to filter the input signals, and the relevant set of features is extracted from the normalized output with the help of Fractal Dimension (FD) analysis. Then, the Grasshopper Bio-Inspired Swarm Optimization (GBSO) technique is employed to select the optimal features by computing the best fitness value, and the Temporal Activation Expansive Neural Network (TAENN) mechanism is used to classify the EEG signals as normal or seizure-affected. Numerous intelligent algorithms for preprocessing, optimization, and classification are used in the literature to identify epileptic seizures based on EEG signals; the primary issues facing the majority of optimization approaches are reduced convergence rates and higher computational complexity, while the problems with machine learning approaches include significant method complexity, intricate mathematical calculations, and decreased training speed. Therefore, the goal of the proposed work is to put into practice efficient algorithms for the recognition and categorization of epileptic seizures based on EEG signals. The combined effect of the proposed FLHF, FD, GBSO, and TAENN models might dramatically improve disease detection accuracy while decreasing system complexity and time consumption compared with prior techniques. Using the proposed methodology, the overall average epileptic seizure detection performance is increased to 99.6%, with an F-measure of 99% and a G-mean of 98.9%.
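
The first two stages, Haar-wavelet filtering and fractal-dimension features, can be illustrated with generic stand-ins: a single-level Haar decomposition and the Katz fractal dimension applied to a toy EEG-like trace. The paper's FLHF filter and FD variant are not specified here, so the code below is only an assumption-labeled sketch of that pipeline step:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar wavelet transform (approximation, detail)."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                      # even length required for pairing
        s = s[:-1]
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def katz_fd(x):
    """Katz fractal dimension: FD = log10(n) / (log10(d/L) + log10(n))."""
    x = np.asarray(x, dtype=float)
    dists = np.abs(np.diff(x))
    L = dists.sum()                     # total curve length
    d = np.abs(x - x[0]).max()          # max distance from the first sample
    n = len(dists)
    return np.log10(n) / (np.log10(d / L) + np.log10(n))

rng = np.random.default_rng(2)
t = np.arange(0, 4, 1 / 256)            # 4 s of a 256 Hz toy "EEG" trace
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

approx, detail = haar_dwt(eeg)           # crude Haar-based band split
print("FD of approximation band:", round(katz_fd(approx), 3))
print("FD of detail band:       ", round(katz_fd(detail), 3))
```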


Subjects
Epilepsy, Grasshoppers, Animals, Seizures/diagnosis, Epilepsy/diagnosis, Neural Networks, Computer, Electroencephalography
19.
Sci Rep ; 14(1): 386, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38172185

ABSTRACT

The Internet of Things (IoT) is extensively used in modern-day life, such as in smart homes, intelligent transportation, etc. However, present security measures cannot fully protect the IoT due to its vulnerability to malicious assaults. Intrusion detection, as a security tool, can protect IoT devices from the most harmful attacks. Nevertheless, the time and detection efficiency of conventional intrusion detection methods need improvement. The main contribution of this paper is to develop a simple yet intelligent security framework for protecting the IoT from cyber-attacks. For this purpose, a combination of Decisive Red Fox (DRF) Optimization and Descriptive Back Propagated Radial Basis Function (DBRF) classification is developed in the proposed work. The novelty of this work is that a recently developed DRF optimization methodology, incorporated with the machine learning algorithm, is utilized to maximize the security level of IoT systems. First, data preprocessing and normalization operations are performed to generate a balanced IoT dataset that improves the detection accuracy of classification. Then, the DRF optimization algorithm is applied to optimally tune the features required for accurate intrusion detection and classification; it also helps increase the training speed and reduce the error rate of the classifier. Moreover, the DBRF classification model is deployed to categorize normal and attacking data flows using the optimized features. The proposed DRF-DBRF security model's performance is validated and tested using five different, popular IoT benchmarking datasets. Finally, the results are compared with previous anomaly detection approaches using various evaluation parameters.
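
A radial-basis-function classifier of the general kind named here can be sketched as Gaussian activations around learned centres followed by a linear read-out. The centre count, kernel width, and synthetic data below are assumptions, and this is not the paper's DBRF model or its DRF-optimized feature set:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a (preprocessed, balanced) intrusion-detection feature set.
X, y = make_classification(n_samples=2000, n_features=15, n_informative=10,
                           random_state=9)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=9)

# RBF layer: Gaussian activations around K-means centres, then a linear read-out.
k, gamma = 30, 0.1                                    # centres and width (assumed)
centres = KMeans(n_clusters=k, n_init=10, random_state=9).fit(X_tr).cluster_centers_

def rbf_features(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)                        # (n_samples, k)

readout = LogisticRegression(max_iter=2000).fit(rbf_features(X_tr), y_tr)
pred = readout.predict(rbf_features(X_te))
print("RBF-network accuracy:", accuracy_score(y_te, pred))
```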

20.
Sci Rep ; 14(1): 2487, 2024 01 30.
Article in English | MEDLINE | ID: mdl-38291130

ABSTRACT

Pneumonia is a widespread and acute respiratory infection that impacts people of all ages. Early detection and treatment of pneumonia are essential for avoiding complications and enhancing clinical outcomes. By devising and deploying effective detection methods, we can reduce mortality, improve healthcare efficiency, and contribute to the global battle against a disease that has plagued humanity for centuries. Detecting pneumonia is not only a medical necessity but also a humanitarian imperative and a technological frontier. Chest X-rays are a frequently used imaging modality for diagnosing pneumonia. This paper examines in detail a method for detecting pneumonia built on the Vision Transformer (ViT) architecture, using a public dataset of chest X-rays available on Kaggle. To acquire global context and spatial relationships from chest X-ray images, the proposed framework deploys the ViT model, which integrates self-attention mechanisms and the transformer architecture. In our experiments, the proposed Vision Transformer-based framework achieves an accuracy of 97.61%, a sensitivity of 95%, and a specificity of 98% in detecting pneumonia from chest X-rays. The ViT model is well suited to capturing global context, comprehending spatial relationships, and processing images of different resolutions. The framework establishes its efficacy as a robust pneumonia detection solution by surpassing convolutional neural network (CNN) based architectures.
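
The essential ViT mechanics, patch embedding, a class token, and self-attention over all patches, fit in a few lines of PyTorch. The module below is a deliberately tiny stand-in (not the paper's configuration); patch size, embedding width, depth, and the single-channel input are assumptions for a grayscale chest X-ray:

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal ViT-style classifier: patch embedding + transformer encoder + head."""
    def __init__(self, img=224, patch=16, dim=192, depth=4, heads=3, n_classes=2):
        super().__init__()
        n_patches = (img // patch) ** 2
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                          # x: (B, 1, 224, 224) chest X-ray
        p = self.patchify(x).flatten(2).transpose(1, 2)        # (B, 196, dim) patches
        tok = torch.cat([self.cls.expand(x.size(0), -1, -1), p], dim=1) + self.pos
        enc = self.encoder(tok)                    # self-attention mixes all patches
        return self.head(enc[:, 0])                # classify from the [CLS] token

logits = TinyViT()(torch.randn(2, 1, 224, 224))    # normal vs pneumonia logits
print(logits.shape)                                 # torch.Size([2, 2])
```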


Subjects
Pneumonia, Respiratory Tract Infections, Humans, X-Rays, Pneumonia/diagnostic imaging, Humanities, Radiography