Results 1 - 20 of 27

1.
Sci Rep ; 14(1): 20331, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223231

ABSTRACT

A green building (GB) is a design concept that integrates environmentally conscious technology and sustainable procedures throughout the building's life cycle. However, because diverse green requirements and performance targets must be integrated into the building design, the GB design process typically takes longer than that of conventional structures. Machine learning (ML) and other advanced artificial intelligence (AI) techniques, such as deep learning (DL), are frequently used to help designers complete their work more quickly and precisely. This study therefore develops a GB design predictive model using ML and DL techniques to optimize resource consumption, improve occupant comfort, and lessen the environmental effect of the built environment during the GB design process. The ASHRAE-884 dataset is applied to the suggested models. Exploratory Data Analysis (EDA) is performed, which involves cleaning and sorting the data and converting categorical values into numerical ones via label encoding. In data preprocessing, Z-score normalization is applied to standardize the data. After analysis and preprocessing, the preprocessed data are used as input to ML techniques such as Random Forest (RF), Decision Tree (DT), Extreme Gradient Boosting, and stacking, and to DL techniques such as GNN, LSTM, and RNN, for green building design that enhances environmental sustainability across the different criteria of the GB design process. The performance of the proposed models is assessed using evaluation metrics such as accuracy, precision, recall, and F1-score. The experimental results indicate that the proposed GNN and LSTM models perform more accurately and efficiently than conventional DL techniques for environmental sustainability in green buildings.
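As a hedged illustration of the two preprocessing steps named above (label encoding of categorical columns, then Z-score normalization), the following minimal Python sketch applies both transforms to toy data; the column values are hypothetical and not drawn from the dataset used in the paper.

```python
# Minimal sketch of label encoding followed by Z-score normalization.
# Assumes a non-constant numeric column (sigma > 0).
from statistics import mean, pstdev

def label_encode(values):
    """Map each distinct category to an integer code (sorted for determinism)."""
    codes = {v: i for i, v in enumerate(sorted(set(values)))}
    return [codes[v] for v in values]

def z_score(values):
    """Standardize values to zero mean and unit (population) std."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

building_type = ["office", "school", "office", "residential"]
encoded = label_encode(building_type)      # office -> 0, residential -> 1, school -> 2
normalized = z_score([21.0, 23.0, 25.0, 27.0])
```

In a real pipeline the fitted category-to-code mapping and the mean/std would be stored so the same transform can be replayed on held-out data.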

2.
Sci Rep ; 14(1): 21842, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39294219

ABSTRACT

This study introduces an optimized hybrid deep learning approach that leverages meteorological data to improve short-term wind energy forecasting in desert regions. Over a year, various machine learning and deep learning models were tested across different wind speed categories, with multiple performance metrics used for evaluation. Hyperparameter optimization was performed for the LSTM and Conv-Dual Attention Long Short-Term Memory (Conv-DA-LSTM) architectures. A comparison of the techniques indicates that the deep learning methods consistently outperform the classical techniques, with Conv-DA-LSTM yielding the best overall performance by a clear margin. This method obtained the lowest error rate (RMSE: 71.866) and the highest accuracy (R2: 0.93). The optimization is especially effective at higher wind speeds, achieving a notable improvement of 22.9%. Monthly performance shows at least some consistent enhancement in every month (RRMSE reductions of 1.6% to 10.2%). These findings highlight the potential of advanced deep learning techniques to enhance wind energy forecasting accuracy, particularly in challenging desert environments. The hybrid method developed in this study presents a promising direction for improving renewable energy management, allowing more efficient resource allocation and better wind resource predictability.
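The two headline metrics reported above, RMSE and R2, can be sketched in a few lines; the numbers below are toy values, not the paper's wind-speed data.

```python
# Hedged sketch of the RMSE and R^2 evaluation metrics on toy data.
import math

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [100.0, 120.0, 140.0]   # hypothetical observed values
y_pred = [102.0, 118.0, 143.0]   # hypothetical forecasts
```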

3.
Sci Rep ; 14(1): 21812, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39294389

ABSTRACT

The evaluation of slope stability is of crucial importance in geotechnical engineering and has significant implications for infrastructure safety, natural hazard mitigation, and environmental protection. This study aimed to identify the most influential factors affecting slope stability and evaluate the performance of various machine learning models for classifying slope stability. Through correlation analysis and feature importance evaluation using a random forest regressor, cohesion, unit weight, slope height, and friction angle were identified as the most critical parameters influencing slope stability. This research assessed the effectiveness of machine learning techniques combined with modern feature selection algorithms and conventional feature analysis methods. The performance of deep learning models, including recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and generative adversarial networks (GANs), in slope stability classification was evaluated. The GAN model demonstrated superior performance, achieving the highest overall accuracy of 0.913 and the highest area under the ROC curve (AUC) of 0.9285. Integration of the binary GGO (bGGO) technique for feature selection with the GAN model led to significant improvements in classification performance, with the bGGO-GAN model showing enhanced sensitivity, positive predictive value, negative predictive value, and F1 score compared to the classical GAN model. The bGGO-GAN model achieved 95% accuracy on a substantial dataset of 627 samples, demonstrating competitive performance against other models in the literature while offering strong generalizability. This study highlights the potential of advanced machine learning techniques and feature selection methods for improving slope stability classification and provides valuable insights for geotechnical engineering applications.
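The paper ranks geotechnical parameters with a random forest regressor; as a hedged stand-in for that step, the sketch below computes permutation importance (importance = error increase after permuting one feature column) against a toy surrogate model. The data, feature names, and cyclic-shift "permutation" are all illustrative assumptions.

```python
# Permutation-importance sketch with a toy surrogate model.
def permutation_importance(model, X, y, n_features):
    """Importance of feature j = MSE increase after permuting column j."""
    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)
    base = mse(X)
    scores = []
    for j in range(n_features):
        permuted = [row[:] for row in X]
        col = [row[j] for row in permuted]
        col = col[1:] + col[:1]            # cyclic shift as a simple permutation
        for row, v in zip(permuted, col):
            row[j] = v
        scores.append(mse(permuted) - base)
    return scores

# Toy data: the target depends only on feature 0 (say, "cohesion"),
# so permuting feature 1 should leave the error unchanged.
X = [[float(i), float(i % 3)] for i in range(12)]
y = [2.0 * row[0] for row in X]
model = lambda row: 2.0 * row[0]           # perfect surrogate for the toy target
importances = permutation_importance(model, X, y, 2)
```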

4.
Heliyon ; 10(15): e35269, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39170130

ABSTRACT

Vehicle communication is one of the most vital aspects of modern transportation systems because it enables real-time data transmission between vehicles and infrastructure to improve traffic flow and road safety. The next generation of mobile technology, 5G, was created to address earlier generations' growing need for high data rates and their quality-of-service issues. 5G cellular technology aims to eliminate penetration loss by treating outdoor and indoor settings separately and to allow extremely high transmission speeds, achieved by installing hundreds of dispersed antenna arrays in a distributed antenna system (DAS). Massive multiple-input multiple-output (MIMO) systems are realized via such DASs, in which hundreds of dispersed antenna arrays are deployed. Deep learning (DL) techniques, which employ artificial neural networks with at least one hidden layer, are used in this study for vehicle recognition because they can swiftly process vast quantities of labeled training data to identify features. This paper therefore employs the VGG19 DL model through transfer learning for vehicle detection and obstacle identification. It also proposes a novel horizontal handover prediction method based on channel characteristics. The suggested techniques are designed for horizontal handovers in heterogeneous networks using DL. In the designated surrounding regions of 5G environments, the proposed detection and handover algorithms identified vehicles with a success rate of 97% and predicted the next station for handover.
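The paper's handover predictor is DL-based and its details are not in the abstract; as a hedged illustration of the underlying decision it learns, this sketch applies the classic hysteresis rule on channel strength: hand over only when a neighbor station beats the serving one by a margin. The RSRP values and the 3 dB margin are assumptions.

```python
# Hysteresis-based horizontal handover decision (a classical baseline, not
# the paper's DL predictor).
def predict_handover(serving_rsrp_dbm, neighbor_rsrp_dbm, hysteresis_db=3.0):
    """Return the index of the neighbor to hand over to, or None to stay."""
    best = max(range(len(neighbor_rsrp_dbm)), key=lambda i: neighbor_rsrp_dbm[i])
    if neighbor_rsrp_dbm[best] >= serving_rsrp_dbm + hysteresis_db:
        return best
    return None
```

The hysteresis margin suppresses ping-pong handovers when serving and neighbor signal levels fluctuate around each other.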

5.
Molecules ; 29(8)2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38675617

ABSTRACT

Nanoemulsions are gaining interest in a variety of products as a means of integrating easily degradable bioactive compounds, preserving them from oxidation, and increasing their bioavailability. However, preparing stable emulsion compositions with the desired characteristics is a difficult task. The aim of this study was to encapsulate Tinospora cordifolia aqueous extract (TCAE) into a water-in-oil (W/O) nanoemulsion and identify its critical process and formulation variables, namely oil (27-29.4 mL), surfactant concentration (0.6-3 mL), and sonication amplitude (40-100%), using response surface methodology (RSM). The responses of this formulation were studied through analysis of particle size (PS), free fatty acids (FFAs), and encapsulation efficiency (EE). In addition, a fishbone diagram was used for risk assessment and preliminary research. The optimized conditions for forming a stable nanoemulsion using quality by design were a surfactant concentration of 2.43 mL, an oil concentration of 27.61 mL, and a sonication amplitude of 88.6%, yielding a PS of 171.62 nm, an FFA content of 0.86 meq/kg oil, and a viscosity of 0.597 Pa.s for the blank sample, compared with the TCAE-enriched nanoemulsion's PS of 243.60 nm, FFA content of 0.27 meq/kg oil, and viscosity of 0.22 Pa.s. The EE increased with increasing concentrations of TCAE, from 56.88% to 85.45%. The RSM response demonstrated that both composition variables had a considerable impact on the properties of the W/O nanoemulsion. Furthermore, after the storage period, the TCAE-enriched nanoemulsion showed better stability than the blank nanoemulsion, especially in FFAs: the blank increased from 0.142 to 1.22 meq/kg oil, while TCAE rose from 0.266 to 0.82 meq/kg.


Subjects
Emulsions, Particle Size, Plant Extracts, Tinospora, Water, Emulsions/chemistry, Plant Extracts/chemistry, Tinospora/chemistry, Water/chemistry, Sonication, Nanoparticles/chemistry, Oils/chemistry, Surfactants/chemistry
6.
Sci Rep ; 13(1): 21796, 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38066104

ABSTRACT

Vehicular Ad Hoc Networks (VANETs) are an emerging field that employs a wireless local area network (WLAN) characterized by an ad hoc topology. VANETs comprise diverse entities integrated to establish effective communication among themselves and with other associated services. They commonly encounter a range of obstacles, such as routing complexities and excessive control overhead, and most prior efforts have failed to deliver an integrated approach that addresses routing and the minimization of control overhead together. The present study introduces an Improved Deep Reinforcement Learning (IDRL) approach for routing, with the aim of reducing the augmented control overhead. The proposed IDRL routing technique optimizes the routing path while reducing convergence time in the context of dynamic vehicle density. IDRL effectively monitors, analyzes, and predicts routing behavior by leveraging transmission capacity and vehicle data; transmission delay is reduced by using adjacent vehicles to transport packets through Vehicle-to-Infrastructure (V2I) communication. Simulations were executed to assess the resilience and scalability of the model in delivering efficient routing while mitigating the amplified overheads. The method demonstrates a high level of efficacy in transmitting messages safeguarded through V2I communication. The simulation results indicate that the proposed IDRL routing approach decreases latency, increases the packet delivery ratio, and improves data reliability in comparison with other available routing techniques.
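The IDRL internals are not given in the abstract; as a hedged sketch of the idea deep RL routing generalizes, the snippet below shows the tabular Q-learning update, where the value of forwarding a packet to a neighbor is updated from delay feedback. The topology and reward are toy assumptions.

```python
# Tabular Q-learning update for next-hop selection (illustrative, not IDRL).
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy topology: node A can forward to B or C; B reaches the destination.
q = {"A": {"B": 0.0, "C": 0.0}, "B": {"dst": 0.0}}
q_update(q, "A", "B", reward=-1.0, next_state="B")   # -1 models one hop of delay
```

Repeating such updates along observed routes makes low-delay next hops accumulate higher Q-values, which is the behavior a deep network approximates when the state space (vehicle density, channel load) is too large for a table.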

7.
Diagnostics (Basel) ; 13(22)2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37998575

ABSTRACT

The paper focuses on the hepatitis C virus (HCV) infection in Egypt, which has one of the highest rates of HCV in the world. The high prevalence is linked to several factors, including the use of injection drugs, poor sterilization practices in medical facilities, and low public awareness. This paper introduces the hyOPTGB model, which employs an optimized gradient boosting (GB) classifier to predict HCV disease in Egypt. The model's accuracy is enhanced by optimizing hyperparameters with the OPTUNA framework. Min-Max normalization is used as a preprocessing step to scale the dataset values, and the forward selection (FS) wrapper method is used to identify essential features. The dataset used in the study contains 1385 instances and 29 features and is available at the UCI machine learning repository. The authors compare the performance of five machine learning models, including decision tree (DT), support vector machine (SVM), dummy classifier (DC), ridge classifier (RC), and bagging classifier (BC), with the hyOPTGB model. The system's efficacy is assessed using various metrics, including accuracy, recall, precision, and F1-score. The hyOPTGB model outperformed the other machine learning models, achieving a 95.3% accuracy rate. The authors also compared the hyOPTGB model against models proposed by other authors who used the same dataset.
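Two of the preprocessing steps named above lend themselves to a short sketch: Min-Max scaling, and a tiny forward-selection wrapper that greedily adds whichever feature most improves a scoring function. The scorer and feature names below are stand-ins, not the paper's classifier or dataset.

```python
# Min-Max scaling plus a greedy forward-selection wrapper (illustrative).
def min_max(values):
    """Rescale values to [0, 1]; assumes a non-constant column."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def forward_select(features, score, k):
    """Greedily pick k feature names maximizing score(subset)."""
    chosen = []
    for _ in range(k):
        best = max((f for f in features if f not in chosen),
                   key=lambda f: score(chosen + [f]))
        chosen.append(best)
    return chosen

# Hypothetical per-feature utilities standing in for validation accuracy.
weights = {"alt": 3.0, "ast": 2.0, "age": 0.5}
selected = forward_select(list(weights), lambda s: sum(weights[f] for f in s), 2)
```

In a wrapper method proper, `score` would retrain and validate the downstream classifier on each candidate subset, which is what makes forward selection expensive but classifier-aware.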

8.
Biomimetics (Basel) ; 8(7)2023 Nov 04.
Article in English | MEDLINE | ID: mdl-37999166

ABSTRACT

This study introduces ETLBOCBL-CNN, an automated approach for optimizing convolutional neural network (CNN) architectures to address classification tasks of varying complexities. ETLBOCBL-CNN employs an effective encoding scheme to optimize network and learning hyperparameters, enabling the discovery of innovative CNN structures. To enhance the search process, it incorporates a competency-based learning concept inspired by mixed-ability classrooms during the teacher phase. This categorizes learners into competency-based groups, guiding each learner's search process by utilizing the knowledge of the predominant peers, the teacher solution, and the population mean. This approach fosters diversity within the population and promotes the discovery of innovative network architectures. During the learner phase, ETLBOCBL-CNN integrates a stochastic peer interaction scheme that encourages collaborative learning among learners, enhancing the optimization of CNN architectures. To preserve valuable network information and promote long-term population quality improvement, ETLBOCBL-CNN introduces a tri-criterion selection scheme that considers fitness, diversity, and learners' improvement rates. The performance of ETLBOCBL-CNN is evaluated on nine different image datasets and compared to state-of-the-art methods. Notably, ETLBOCBL-CNN achieves outstanding accuracies on various datasets, including MNIST (99.72%), MNIST-RD (96.67%), MNIST-RB (98.28%), MNIST-BI (97.22%), MNIST-RD+BI (83.45%), Rectangles (99.99%), Rectangles-I (97.41%), Convex (98.35%), and MNIST-Fashion (93.70%). These results highlight the remarkable classification accuracy of ETLBOCBL-CNN, underscoring its potential for advancing smart device infrastructure development.
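As a hedged sketch of the family of algorithms ETLBOCBL-CNN builds on, the snippet below implements the teacher phase of plain teaching-learning-based optimization (TLBO): each learner moves toward the teacher (best solution) relative to the population mean, keeping the move only if it improves fitness. The competency grouping, learner phase, and CNN encoding of the paper are omitted; the sphere objective is a toy.

```python
# Teacher phase of textbook TLBO on a toy continuous objective.
import random

def teacher_phase(pop, fitness, rng):
    dim = len(pop[0])
    teacher = min(pop, key=fitness)                      # best learner
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    out = []
    for x in pop:
        tf = rng.choice([1, 2])                          # teaching factor
        cand = [x[d] + rng.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        out.append(cand if fitness(cand) < fitness(x) else x)  # greedy keep
    return out

rng = random.Random(0)
sphere = lambda x: sum(v * v for v in x)                 # toy objective
pop = [[rng.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(8)]
best_before = min(map(sphere, pop))
for _ in range(30):
    pop = teacher_phase(pop, sphere, rng)
best_after = min(map(sphere, pop))
```

The greedy keep makes each learner's fitness non-increasing, so the population's best value can only improve across iterations.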

9.
Biomimetics (Basel) ; 8(7)2023 Nov 17.
Article in English | MEDLINE | ID: mdl-37999193

ABSTRACT

The COVID-19 epidemic poses a worldwide threat that transcends provincial, philosophical, spiritual, political, social, and educational borders. By using a connected network, a healthcare system with Internet of Things (IoT) functionality can effectively monitor COVID-19 cases. IoT helps a COVID-19 patient recognize symptoms and receive better therapy more quickly. Artificial intelligence (AI) is a critical component in measuring, evaluating, and diagnosing the risk of infection; it can be used to anticipate cases and to forecast incidence numbers, recovered cases, and casualties. In the context of COVID-19, IoT technologies are employed in specific patient monitoring and diagnostic processes to reduce COVID-19 exposure to others. This work uses an Indian dataset to create an enhanced convolutional neural network with a gated recurrent unit (CNN-GRU) model for COVID-19 death prediction via IoT. The data were also subjected to normalization and imputation. The 4692 cases and eight characteristics in the dataset were utilized in this research. The performance of the CNN-GRU model for COVID-19 death prediction was assessed using five evaluation metrics: median absolute error (MedAE), mean absolute error (MAE), root mean squared error (RMSE), mean square error (MSE), and coefficient of determination (R2). ANOVA and Wilcoxon signed-rank tests were used to determine the statistical significance of the presented model. The experimental findings showed that the CNN-GRU model outperformed other models in COVID-19 death prediction.
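The abstract mentions data imputation as a preprocessing step without specifying the method; a common, minimal choice is mean imputation, sketched below on toy case counts (the values are hypothetical, not from the Indian dataset).

```python
# Mean imputation for missing values, represented here as None.
def mean_impute(column):
    """Replace each None with the mean of the present values."""
    present = [v for v in column if v is not None]
    fill = sum(present) / len(present)
    return [fill if v is None else v for v in column]

cases = [10.0, None, 14.0, None, 18.0]   # hypothetical daily counts
imputed = mean_impute(cases)
```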

10.
Heliyon ; 9(9): e19809, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37809959

ABSTRACT

Medical video watermarking is a beneficial and efficient tool to protect sensitive patient data from illicit copying and redistribution. In this paper, a new blind watermarking scheme is proposed to improve the confidentiality, integrity, authenticity, and perceptual quality of a medical video with minimum distortion. The proposed scheme is based on 2D-DWT and dual Hessenberg-QR decomposition, where the input medical video is initially processed into frames. The processed frames are transformed into sub-bands using 2D-DWT, and Hessenberg-QR decomposition is then applied to the selected wavelet HL2 sub-band. The watermark is scrambled via the Arnold cat map to raise confidentiality and then concealed in the modified selected features. The watermark is extracted in a fully blind mode without referencing the original video, which reduces extraction time. The proposed scheme maintains a fundamental tradeoff between robustness and visual imperceptibility compared with existing methods against many commonly encountered attacks. Visual imperceptibility was evaluated using the well-known metrics PSNR, SSIM, Q-index, and histogram analysis. The scheme achieves a high PSNR value of 70.6899 dB with minimal distortion and a high robustness level, with an average NC value of 0.9998 and a BER value of 0.0023, while conserving a large payload capacity. The limitation of this scheme is the time elapsed during the embedding process, since dual Hessenberg-QR decomposition is utilized; one possible way to reduce time consumption is to use simpler decompositions, such as bound-constrained SVM or similar approaches. The obtained results show superior performance over similar video watermarking methods.
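The Arnold cat map scrambling step mentioned above is simple enough to sketch: a pixel at (x, y) in an N x N image moves to ((x + y) mod N, (x + 2y) mod N). The map is a bijection, and iterating it is periodic, so descrambling is just continuing to iterate until the period completes. The 2 x 2 "image" below is a toy.

```python
# Arnold cat map scrambling of a square "image" (list of lists).
def arnold_cat(img):
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def scramble(img, iterations):
    for _ in range(iterations):
        img = arnold_cat(img)
    return img
```

For N = 2 the map has period 3, so three applications restore the original; real schemes pick an iteration count below the period of the watermark's dimensions and keep it as a key.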

11.
Sensors (Basel) ; 23(15)2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37571793

ABSTRACT

Artificial intelligence (AI) systems are increasingly used in corporate security measures to predict the status of assets and suggest appropriate procedures, and they are also designed to reduce repair time. One way to create an efficient system is to integrate physical repair agents with a computerized management system to form an intelligent system. To address this, a new technique is needed to assist operators in interacting with a predictive system using natural language. The system also uses dual convolutional neural network models to analyze device data. For fault prioritization, a technique utilizing fuzzy logic is presented; this strategy ranks faults based on the harm or expense they produce. However, the method's success relies on ongoing improvement in spoken language comprehension through language modification and query processing. Carrying out this technique requires a conversation-driven design, in which actual experiences with the assistants provide efficient learning data for the language and interaction models, which can then be trained to hold more natural conversations. To improve accuracy, researchers should construct and maintain publicly usable training sets to update word vectors. We propose a model combining the dataset (DS) with the Adam (AD) optimizer, Ridge Regression (RR), and Feature Mapping (FP), abbreviated DSADRRFP. The proposed approach leverages each component's benefits to enhance the predictive model's overall performance and precision, ensuring that the model remains up to date and accurate. In conclusion, an AI system integrated with physical repair agents is a useful tool in corporate security measures, but it must be refined to extract data from the operating system and interact with users in natural language, and it must be constantly updated to improve accuracy.
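The fuzzy fault-prioritization step can be sketched in miniature: fault severity and cost are fuzzified into membership degrees of a "high" set, combined by a rule, and faults are ranked by the resulting score. The triangular membership shapes, the combination rule, and the fault values below are all assumptions for illustration, not the paper's rule base.

```python
# Toy fuzzy-logic fault prioritization.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def priority(severity, cost):
    """Combine 'high severity' and 'high cost' memberships into one score."""
    high_sev = tri(severity, 0.4, 1.0, 1.6)
    high_cost = tri(cost, 0.4, 1.0, 1.6)
    return max(min(high_sev, high_cost), 0.5 * max(high_sev, high_cost))

faults = {"f1": (0.9, 0.8), "f2": (0.5, 0.4)}   # hypothetical (severity, cost)
ranked = sorted(faults, key=lambda f: priority(*faults[f]), reverse=True)
```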

12.
Biomimetics (Basel) ; 8(4)2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37622956

ABSTRACT

Parkinson's disease (PD) affects a large proportion of elderly people. Symptoms include tremors, slow movement, rigid muscles, and trouble speaking. With the aging of the developed world's population, this number is expected to rise. The early detection of PD and the avoidance of its severe consequences require a precise and efficient system. Our goal is to create an accurate AI model that can identify PD using human voices. We developed a transformer-based method for detecting PD by retrieving dysphonia measures from a subject's voice recording. It is uncommon to use a neural network (NN)-based solution for tabular vocal characteristics, but it has several advantages over a tree-based approach, including compatibility with continuous learning and the network's potential to be linked with an image/voice encoder for a more accurate multimodal solution. Shifting the state-of-the-art (SOTA) approach from tree-based models to NNs is therefore crucial for advancing research on multimodal solutions. Our method outperforms the SOTA, namely Gradient-Boosted Decision Trees (GBDTs), by at least 1% AUC, and the precision and recall scores are also improved. In addition to the solution network, we offer an XGBoost-based feature-selection method and a fully connected NN layer technique for incorporating continuous dysphonia measures. We also discuss numerous important findings relating to the proposed solution, deep learning (DL), and its application to dysphonia measures, such as how a transformer-based network is more resilient to increased depth than a simple MLP network. The performance of the proposed approach is also compared with conventional machine learning techniques such as MLP, SVM, and Random Forest (RF). A detailed performance comparison matrix is included in this article, along with the proposed solution's space and time complexity.

13.
Biomimetics (Basel) ; 8(3)2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37504158

ABSTRACT

Breast cancer is one of the most common cancers in women, with an estimated 287,850 new cases identified in 2022. There were 43,250 female deaths attributed to this malignancy. The high death rate associated with this type of cancer can be reduced with early detection. Nonetheless, a skilled professional is always necessary to manually diagnose this malignancy from mammography images. Many researchers have proposed several approaches based on artificial intelligence. However, they still face several obstacles, such as overlapping cancerous and noncancerous regions, extracting irrelevant features, and inadequate training models. In this paper, we developed a novel computationally automated biological mechanism for categorizing breast cancer. Using a new optimization approach based on the Advanced Al-Biruni Earth Radius (ABER) optimization algorithm, the classification of breast cancer cases is boosted. The stages of the proposed framework include data augmentation, feature extraction using AlexNet based on transfer learning, and optimized classification using a convolutional neural network (CNN). Using transfer learning and an optimized CNN for classification improved the accuracy when the results are compared to recent approaches. Two publicly available datasets are utilized to evaluate the proposed framework, and the average classification accuracy is 97.95%. To ensure the statistical significance of the proposed methodology and its difference from current methods, additional tests are conducted, such as analysis of variance (ANOVA) and Wilcoxon, in addition to evaluating various statistical analysis metrics. The results of these tests emphasized the effectiveness and statistical difference of the proposed methodology compared to current methods.

14.
Biomimetics (Basel) ; 8(3)2023 Jul 16.
Article in English | MEDLINE | ID: mdl-37504202

ABSTRACT

The virus that causes monkeypox has been observed in Africa for several years, and it has been linked to the development of skin lesions. Public panic and anxiety have resulted from the deadly repercussions of virus infections following the COVID-19 pandemic. Rapid detection approaches are crucial since COVID-19 has reached a pandemic level. This study's overarching goal is to use metaheuristic optimization to boost the performance of feature selection and classification methods to identify skin lesions as indicators of monkeypox in the event of a pandemic. Deep learning and transfer learning approaches are used to extract the necessary features. The GoogLeNet network is the deep learning framework used for feature extraction. In addition, a binary implementation of the dipper throated optimization (DTO) algorithm is used for feature selection. The decision tree classifier is then used to label the selected set of features. The decision tree classifier is optimized using the continuous version of the DTO algorithm to improve the classification accuracy. Various evaluation methods are used to compare and contrast the proposed approach and the other competing methods using the following metrics: accuracy, sensitivity, specificity, p-Value, N-Value, and F1-score. Through feature selection and a decision tree classifier, the following results are achieved using the proposed approach: F1-score of 0.92, sensitivity of 0.95, specificity of 0.61, p-Value of 0.89, and N-Value of 0.79. The overall accuracy of the proposed methodology after optimizing the parameters of the decision tree classifier is 94.35%. Furthermore, analysis of variance (ANOVA) and Wilcoxon signed-rank tests have been applied to the results to investigate the statistical distinction between the proposed methodology and the alternatives. This comparison verified the uniqueness and importance of the proposed approach to monkeypox case detection.

15.
Biomimetics (Basel) ; 8(3)2023 Jul 20.
Article in English | MEDLINE | ID: mdl-37504209

ABSTRACT

Wind patterns can change due to climate change, causing more storms, hurricanes, and quiet spells. These changes can dramatically affect wind power system performance and predictability. Researchers and practitioners are creating more advanced wind power forecasting algorithms that combine more parameters and data sources, using advanced numerical weather prediction models, machine learning techniques, and real-time meteorological sensor and satellite data. This paper proposes a Recurrent Neural Network (RNN) forecasting model incorporating a Dynamic Fitness Al-Biruni Earth Radius (DFBER) algorithm to predict wind power data patterns. The performance of this model is compared with several other popular models, including BER, Jaya Algorithm (JAYA), Fire Hawk Optimizer (FHO), Whale Optimization Algorithm (WOA), Grey Wolf Optimizer (GWO), and Particle Swarm Optimization (PSO)-based models. The evaluation uses various metrics, including relative root mean squared error (RRMSE), Nash-Sutcliffe Efficiency (NSE), mean absolute error (MAE), mean bias error (MBE), Pearson's correlation coefficient (r), coefficient of determination (R2), and Willmott's index of agreement (WI). According to the evaluation metrics and analysis presented in the study, the proposed RNN-DFBER-based model outperforms the other models considered. This suggests that the RNN model, combined with the DFBER algorithm, predicts wind power data patterns more effectively than the alternative models. To support the findings, visualizations are provided to demonstrate the effectiveness of the RNN-DFBER model. Additionally, statistical analyses, such as the ANOVA test and the Wilcoxon Signed-Rank test, are conducted to assess the significance and reliability of the results.

16.
Sensors (Basel) ; 23(13)2023 Jun 24.
Article in English | MEDLINE | ID: mdl-37447712

ABSTRACT

BACKGROUND: In our current digital world, smartphones are no longer limited to communication but are used in various real-world applications. In the healthcare industry, smartphones have sensors that can record data about our daily activities. Such data can be used for many healthcare purposes, such as elderly healthcare services, early disease diagnoses, and archiving patient data for further use. However, the data collected from the various sensors involve high-dimensional features, which are not equally helpful in human activity recognition (HAR). METHODS: This paper proposes an algorithm for selecting the most relevant subset of features that will contribute efficiently to the HAR process. The proposed method is based on a hybrid version of the recent Coronavirus Disease Optimization Algorithm (COVIDOA) with Simulated Annealing (SA). The SA algorithm is merged with COVIDOA to improve its performance and help escape the local optima problem. RESULTS: The UCI-HAR dataset from the UCI machine learning repository is used to assess the proposed algorithm's performance. A comparison is conducted with eight well-known feature selection algorithms, including the Arithmetic Optimization Algorithm (AOA), Gray Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Reptile Search Algorithm (RSA), Zebra Optimization Algorithm (ZOA), Gradient-Based Optimizer (GBO), Seagull Optimization Algorithm (SOA), and Coyote Optimization Algorithm (COA), regarding fitness, STD, accuracy, size of the selected subset, and processing time. CONCLUSIONS: The results proved that the proposed approach outperforms state-of-the-art HAR techniques, achieving an average performance of 97.82% in accuracy and a reduction ratio in feature selection of 52.7%.
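The Simulated Annealing component merged into COVIDOA to escape local optima can be sketched on its own: worse solutions are accepted with probability exp(-delta/T), and the temperature T cools each step. The 1-D toy objective, step size, and cooling schedule below are assumptions, not the paper's configuration.

```python
# Minimal simulated annealing on a toy objective, tracking the best-so-far.
import math
import random

def anneal(x0, objective, rng, t0=1.0, cooling=0.95, steps=200):
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        fc = objective(cand)
        # Accept improvements always; accept worse moves with prob exp(-delta/T).
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                     # geometric cooling schedule
    return best_x, best_f

rng = random.Random(42)
best_x, best_f = anneal(5.0, lambda v: v * v, rng)
```

Early on, high T lets the search cross barriers between local optima; as T shrinks, the acceptance rule degenerates to greedy descent, which is the property the hybrid exploits.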


Subjects
Coronavirus Infections, Coronavirus, Aged, Humans, Coronavirus Infections/diagnosis, Algorithms, Human Activities, Internet
17.
Heliyon ; 9(7): e17622, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37424589

ABSTRACT

The Internet of Things (IoT) is a network of smart gadgets connected through the Internet, including computers, cameras, smart sensors, and mobile phones. Recent developments in the industrial IoT (IIoT) have enabled a wide range of applications, from small businesses to smart cities, which have become indispensable to many facets of human existence. In a system with only a few devices, the short lifespan of conventional batteries (which raises maintenance costs, necessitates more replacements, and has a negative environmental impact) does not present a problem; in networks with millions or even billions of devices, however, it poses a serious one. The rapid expansion of the IoT paradigm is threatened by these battery restrictions, so academics and businesses are now interested in prolonging the lifespan of IoT devices while retaining optimal performance. Resource management is an important aspect of IIoT because resources are scarce and limited. This paper therefore proposes an efficient algorithm based on federated learning. Firstly, the optimization problem is decomposed into several sub-problems. Then, the particle swarm optimization algorithm is deployed to solve the energy budget. Finally, communication resources are optimized by an iterative matching algorithm. Simulation results show that the proposed algorithm performs better than existing algorithms.
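The abstract applies particle swarm optimization to the energy-budget subproblem without giving the objective; as a hedged stand-in, this sketch runs textbook PSO on a toy 1-D quadratic (the real objective, constraints, and coefficients are not in the abstract).

```python
# Textbook PSO on a toy 1-D objective; returns the swarm's best solution,
# its fitness, and the best fitness in the initial swarm for comparison.
import random

def pso(objective, rng, n_particles=10, steps=50, w=0.7, c1=1.5, c2=1.5):
    xs = [rng.uniform(-10.0, 10.0) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                           # each particle's personal best
    gbest = min(xs, key=objective)          # swarm-wide best
    init_f = objective(gbest)
    for _ in range(steps):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i]
            if objective(xs[i]) < objective(gbest):
                gbest = xs[i]
    return gbest, objective(gbest), init_f

rng = random.Random(0)
best, best_f, init_f = pso(lambda x: (x - 3.0) ** 2, rng)   # minimum at x = 3
```

Since `gbest` is only ever replaced by strictly better points, the returned fitness can never be worse than the initial swarm's best.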

18.
Bioengineering (Basel) ; 10(7)2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37508907

ABSTRACT

This study aims to develop a predictive model for SARS-CoV-2 using machine-learning techniques and to explore various feature selection methods to enhance the accuracy of predictions. A precise forecast of the spread of SARS-CoV-2 respiratory infections can help with efficient planning and resource allocation. The proposed model utilizes stochastic regression to capture the stochastic nature of the virus's transmission, considering data uncertainties. Feature selection techniques are employed to identify the most relevant and informative features contributing to prediction accuracy. Furthermore, the study explores the use of neighbor embedding and Sammon mapping algorithms to visualize high-dimensional SARS-CoV-2 respiratory infection data in a lower-dimensional space, enabling better interpretation and understanding of the underlying patterns. The study covers the application of machine-learning techniques for predicting SARS-CoV-2 respiratory infections, the use of statistical measures in healthcare (including confirmed cases, deaths, and recoveries), and an analysis of the country-wise dynamics of the pandemic using machine-learning models. Our analysis involves the performance of various algorithms, including neural networks (NN), decision trees (DT), random forests (RF), the Adam optimizer (AD), hyperparameters (HP), stochastic regression (SR), neighbor embedding (NE), and Sammon mapping (SM). In the proposed model, a pre-processed and feature-extracted SARS-CoV-2 respiratory infection dataset is combined with these components (abbreviated ADHPSRNESM) to form a new orchestration that increases prediction accuracy. The findings of this research can contribute to public health efforts by enabling policymakers and healthcare professionals to make informed decisions based on accurate predictions, ultimately aiding in managing and controlling the SARS-CoV-2 pandemic.

19.
Biomimetics (Basel) ; 8(2)2023 Jun 07.
Article in English | MEDLINE | ID: mdl-37366836

ABSTRACT

Metamaterials have unique physical properties. They are made of several elements structured in repeating patterns at a scale smaller than the wavelength of the phenomena they affect. The exact structure, geometry, size, orientation, and arrangement of metamaterials allow them to manipulate electromagnetic waves by blocking, absorbing, amplifying, or bending them, achieving benefits not possible with ordinary materials. Microwave invisibility cloaks, invisible submarines, revolutionary electronics, microwave components, filters, and antennas with a negative refractive index all utilize metamaterials. This paper proposes an improved dipper-throated-based ant colony optimization (DTACO) algorithm for forecasting the bandwidth of a metamaterial antenna. The first experimental scenario covers the feature selection capabilities of the proposed binary DTACO algorithm on the evaluated dataset, and the second illustrates the algorithm's regression skills. The state-of-the-art DTO, ACO, particle swarm optimization (PSO), grey wolf optimizer (GWO), and whale optimization (WOA) algorithms were explored and compared with DTACO. The basic multilayer perceptron (MLP) regressor, support vector regression (SVR), and random forest (RF) regressor models were contrasted with the proposed optimal ensemble DTACO-based model. To assess the consistency of the developed DTACO-based model, the statistical analysis used Wilcoxon's rank-sum and ANOVA tests.
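The binary feature-selection scenario can be sketched with a plain binary ant colony optimizer, where each bit's pheromone level is the probability of selecting that feature. This is not the hybrid DTACO of the paper, only the ACO half of the idea, and the toy fitness function (three "informative" features, a penalty per selected feature) is an invented illustration.

```python
import random

def binary_aco(fitness, n_bits, n_ants=15, iters=60, rho=0.2, seed=0):
    """Minimal binary ant colony optimizer (maximization): pheromone tau[d]
    is the sampling probability of bit d; evaporate, then reinforce toward
    the best solution found so far."""
    rnd = random.Random(seed)
    tau = [0.5] * n_bits
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        for _ in range(n_ants):
            sol = [1 if rnd.random() < t else 0 for t in tau]
            f = fitness(sol)
            if f > best_fit:
                best, best_fit = sol, f
        # evaporation + deposit along the best-so-far solution
        tau = [(1 - rho) * t + rho * b for t, b in zip(tau, best)]
    return best, best_fit

# Toy feature-selection fitness: reward "informative" bits 0-2,
# penalize every selected feature to favor small subsets.
informative = {0, 1, 2}
fitness = lambda s: sum(s[i] for i in informative) - 0.3 * sum(s)
best, best_fit = binary_aco(fitness, n_bits=8)
```

In the antenna setting, the fitness would be the cross-validated error of a regressor trained on the selected feature subset rather than this hand-made score.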

20.
Diagnostics (Basel) ; 13(12)2023 Jun 12.
Article in English | MEDLINE | ID: mdl-37370932

ABSTRACT

INTRODUCTION: In public health, machine learning algorithms have been used to predict or diagnose chronic epidemiological disorders such as diabetes mellitus, which has reached epidemic proportions due to its widespread occurrence around the world. Diabetes is just one of several diseases for which machine learning techniques can be applied in diagnosis, prognosis, and assessment. METHODOLOGY: In this paper, we propose a new approach for boosting the classification of diabetes based on a new metaheuristic optimization algorithm. The approach introduces a new feature selection algorithm based on a dynamic Al-Biruni earth radius and dipper-throated optimization algorithm (DBERDTO). The selected features are then classified using a random forest classifier whose parameters are optimized with the proposed DBERDTO. RESULTS: The proposed methodology is evaluated and compared with recent optimization methods and machine learning models to demonstrate its efficiency and superiority. The overall accuracy of diabetes classification achieved by the proposed approach is 98.6%. In addition, statistical tests based on analysis of variance (ANOVA) and Wilcoxon signed-rank tests were conducted to assess the significance and statistical difference of the proposed approach. CONCLUSIONS: The results of these tests confirmed the superiority of the proposed approach compared to the other classification and optimization methods.
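The Wilcoxon signed-rank test used here (and the rank-sum test in the previous abstract) compares paired model scores without assuming normality. A minimal sketch of the test statistic follows; the fold-wise accuracy numbers are hypothetical, not the paper's results.

```python
def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank statistic W for paired samples: drop zero
    differences, rank |a_i - b_i| (ties get the average rank), and return
    W = min(sum of positive ranks, sum of negative ranks)."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    absd = sorted((abs(d), i) for i, d in enumerate(diffs))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(absd):
        j = i
        while j + 1 < len(absd) and absd[j + 1][0] == absd[i][0]:
            j += 1                       # extend over a run of tied values
        avg = (i + j) / 2 + 1            # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[absd[k][1]] = avg
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_pos, w_neg)

# Hypothetical paired accuracies of two classifiers over 6 folds.
acc_a = [0.96, 0.97, 0.95, 0.98, 0.96, 0.97]
acc_b = [0.93, 0.94, 0.95, 0.92, 0.94, 0.93]
w = wilcoxon_signed_rank(acc_a, acc_b)
```

A small W (here every non-zero difference favors the first classifier, so W = 0) is what gets compared against the critical value or converted to a p-value to claim a significant difference.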
