ABSTRACT
BACKGROUND AND AIM: Due to the rapid growth of data communication and multimedia system applications, security has become a critical issue in the communication and storage of images. This study aims to improve encryption and decryption for various types of images by decreasing time consumption and strengthening security. METHODOLOGY: An algorithm is proposed for encrypting images based on the Carlisle Adams and Stafford Tavares (CAST) block cipher combined with 3D and 2D logistic maps. A chaotic function is introduced to increase the randomness of the encrypted data and images, thereby breaking the sequential relations formed during the encryption procedure. Processing time is decreased by using three secure, private S-Boxes rather than the six S-Boxes of the traditional method. Moreover, the CAST encryption algorithm is modified in its private keys and substitution stage (S-Boxes), with the keys and S-Boxes generated from the 2D and 3D chaotic map functions. The proposed system passed all evaluation criteria, including MSE, PSNR, EQ, MD, SC, NC, AD, SNR, SIM, MAE, time, CC, entropy, and histograms. RESULTS: The results also show that the generated S-Boxes passed all evaluation criteria and that, compared with the traditional S-Box creation method, the proposed method achieved better results than those reported in related work. The proposed solution improves entropy (between 7.991 and 7.999), reduces processing time (between 0.5 and 11 s per image), and improves NPCR (between 0.991 and 1). CONCLUSIONS: The proposed solution focuses on reducing the total processing time for encryption and decryption and on improving transmission security, providing a fast security system for surgical telepresence with secure real-time communication. The security of the scheme rests on an attacker needing to know the S-Box creation method, the chaotic map, the values of the chaotic parameters, and which of these was used in the encryption process.
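The S-Box generation step can be illustrated with a minimal Python sketch: a key-dependent chaotic orbit is sampled and ranked to obtain a bijective 8-bit substitution table. The 1D logistic map, the initial value x0, and the control parameter r below are illustrative stand-ins for the paper's 2D/3D logistic maps and secret key material, not the actual construction.

```python
import numpy as np

def chaotic_sbox(x0=0.31, r=3.99, warmup=1000):
    """Derive an 8-bit bijective S-Box from a chaotic sequence.

    The 1D logistic map x <- r*x*(1-x) is a simplified stand-in for the
    2D/3D logistic maps used in the paper; x0 and r play the role of
    secret key material.
    """
    x = x0
    for _ in range(warmup):          # discard the transient of the orbit
        x = r * x * (1.0 - x)
    samples = np.empty(256)
    for i in range(256):
        x = r * x * (1.0 - x)
        samples[i] = x
    # Ranking the chaotic samples yields a permutation of 0..255,
    # i.e. a bijective substitution table.
    return np.argsort(samples).astype(np.uint8)

sbox = chaotic_sbox()
assert sorted(sbox.tolist()) == list(range(256))   # bijectivity check
print(sbox[:16])
```

Three such tables, seeded from different key-dependent initial conditions, would play the role of the three private S-Boxes mentioned above.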
Subjects
Algorithms, Computer Security, Entropy
ABSTRACT
Technology for anticipating wind speed can improve the safety and stability of power networks with heavy wind penetration. Due to the unpredictability and instability of the wind, it is challenging to forecast wind power and speed accurately. Several approaches based on processing time-series data have been developed to improve this accuracy. This work proposes a method for predicting wind speed with high accuracy based on a novel weighted ensemble model. The weight values in the proposed model are optimized using an adaptive dynamic grey wolf-dipper throated optimization (ADGWDTO) algorithm. The original GWO algorithm is redesigned to emulate dynamic group-based cooperation, addressing the difficulty of balancing exploration and exploitation. The quick bowing movements and white breast that distinguish the dipper-throated bird's hunting method are employed to improve the proposed algorithm's exploration capability. The proposed ADGWDTO algorithm optimizes the hyperparameters of the multi-layer perceptron (MLP), K-nearest neighbors regressor (KNR), and Long Short-Term Memory (LSTM) regression models. A Kaggle dataset entitled Global Energy Forecasting Competition 2012 is employed to assess the proposed algorithm. The findings confirm that the proposed ADGWDTO algorithm outperforms state-of-the-art wind speed forecasting algorithms reported in the literature. The proposed binary ADGWDTO algorithm achieved an average fitness of 0.9209 with a standard deviation of 0.7432 for feature selection, and the proposed weighted optimized ensemble model (ensemble using ADGWDTO) achieved a root mean square error of 0.0035 compared to state-of-the-art algorithms. The proposed algorithm's stability and robustness are confirmed by statistical analyses, including one-way analysis of variance (ANOVA) and Wilcoxon's rank-sum test.
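To make the weighted-ensemble idea concrete, the sketch below blends three base regressors with weights tuned on a validation split. It is a simplified illustration under stated assumptions: synthetic data stands in for the Kaggle wind dataset, a gradient-boosting regressor stands in for the LSTM, and scipy's Nelder-Mead optimizer stands in for the ADGWDTO metaheuristic.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the wind-speed dataset.
X, y = make_regression(n_samples=800, n_features=6, noise=0.3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Base models; gradient boosting stands in for the LSTM used in the paper.
models = [MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
          KNeighborsRegressor(n_neighbors=7),
          GradientBoostingRegressor(random_state=0)]
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_val) for m in models])

def rmse_of_weights(w):
    # Normalize the weights to sum to one before blending predictions.
    w = np.abs(w) / np.abs(w).sum()
    return np.sqrt(mean_squared_error(y_val, preds @ w))

# A generic local optimizer stands in for the ADGWDTO metaheuristic.
res = minimize(rmse_of_weights, x0=np.ones(3) / 3, method="Nelder-Mead")
best_w = np.abs(res.x) / np.abs(res.x).sum()
print("ensemble weights:", best_w, "validation RMSE:", rmse_of_weights(res.x))
```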
Subjects
Heuristics, Wind, Algorithms, Neural Networks (Computer), Forecasting
ABSTRACT
Breast cancer is one of the most common cancers in women, with an estimated 287,850 new cases identified in 2022 and 43,250 female deaths attributed to this malignancy. The high death rate associated with this type of cancer can be reduced with early detection. Nonetheless, a skilled professional is always necessary to diagnose this malignancy manually from mammography images. Many researchers have proposed approaches based on artificial intelligence; however, these still face several obstacles, such as overlapping cancerous and noncancerous regions, extraction of irrelevant features, and inadequately trained models. In this paper, we developed a novel computationally automated biological mechanism for categorizing breast cancer. A new optimization approach based on the Advanced Al-Biruni Earth Radius (ABER) algorithm is used to boost the classification of breast cancer cases. The stages of the proposed framework include data augmentation, feature extraction using AlexNet based on transfer learning, and optimized classification using a convolutional neural network (CNN). Using transfer learning and an optimized CNN for classification improved the accuracy compared with recent approaches. Two publicly available datasets are utilized to evaluate the proposed framework, and the average classification accuracy is 97.95%. To verify the statistical significance of the proposed methodology, additional tests, such as analysis of variance (ANOVA) and the Wilcoxon test, are conducted, along with various statistical analysis metrics. The results of these tests emphasize the effectiveness and statistical distinctiveness of the proposed methodology compared with current methods.
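A minimal PyTorch sketch of the transfer-learning stage described above is given below: a pretrained AlexNet is frozen and used as a 4096-dimensional feature extractor, while a small two-class head stands in for the paper's optimized CNN classifier. The head sizes and dummy inputs are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained AlexNet as a fixed feature extractor (transfer learning).
backbone = models.alexnet(weights="DEFAULT")    # ImageNet weights (torchvision >= 0.13)
backbone.classifier[6] = nn.Identity()          # expose the 4096-d fc7 features
for p in backbone.parameters():
    p.requires_grad = False                     # freeze the transferred weights

# Simplified stand-in for the optimized CNN classification head.
head = nn.Sequential(nn.Linear(4096, 256), nn.ReLU(), nn.Dropout(0.5),
                     nn.Linear(256, 2))         # benign vs. malignant logits

# Example forward pass on a dummy mammography batch (3-channel, 224x224).
dummy = torch.randn(4, 3, 224, 224)
with torch.no_grad():
    feats = backbone(dummy)
logits = head(feats)
print(logits.shape)   # torch.Size([4, 2])
```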
ABSTRACT
The virus that causes monkeypox has been observed in Africa for several years, and it has been linked to the development of skin lesions. Public panic and anxiety have resulted from the deadly repercussions of virus infections following the COVID-19 pandemic. Rapid detection approaches are crucial since COVID-19 has reached a pandemic level. This study's overarching goal is to use metaheuristic optimization to boost the performance of feature selection and classification methods to identify skin lesions as indicators of monkeypox in the event of a pandemic. Deep learning and transfer learning approaches are used to extract the necessary features. The GoogLeNet network is the deep learning framework used for feature extraction. In addition, a binary implementation of the dipper throated optimization (DTO) algorithm is used for feature selection. The decision tree classifier is then used to label the selected set of features. The decision tree classifier is optimized using the continuous version of the DTO algorithm to improve the classification accuracy. Various evaluation methods are used to compare and contrast the proposed approach and the competing methods using the following metrics: accuracy, sensitivity, specificity, p-Value, N-Value, and F1-score. Through feature selection and a decision tree classifier, the proposed approach achieves an F1-score of 0.92, sensitivity of 0.95, specificity of 0.61, p-Value of 0.89, and N-Value of 0.79. The overall accuracy of the proposed methodology after optimizing the parameters of the decision tree classifier is 94.35%. Furthermore, analysis of variance (ANOVA) and Wilcoxon signed-rank tests have been applied to the results to investigate the statistical distinction between the proposed methodology and the alternatives. This comparison verified the uniqueness and importance of the proposed approach to monkeypox case detection.
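The feature-selection-plus-decision-tree pipeline can be sketched as follows. Synthetic features stand in for the GoogLeNet embeddings, and a simple bit-flip hill climber stands in for the binary DTO optimizer; only the overall wrapper structure (binary mask, decision-tree fitness) reflects the approach described above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for GoogLeNet-extracted image features.
X, y = make_classification(n_samples=400, n_features=64, n_informative=12,
                           random_state=0)

def fitness(mask):
    """Decision-tree accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

# A simple bit-flip hill climber stands in for the binary DTO optimizer.
best_mask = rng.random(X.shape[1]) < 0.5
best_fit = fitness(best_mask)
for _ in range(200):
    cand = best_mask.copy()
    cand[rng.integers(X.shape[1])] ^= True   # flip one feature's in/out bit
    f = fitness(cand)
    if f > best_fit:
        best_mask, best_fit = cand, f

print(f"selected {best_mask.sum()} features, CV accuracy {best_fit:.3f}")
```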
ABSTRACT
Vehicular Ad Hoc Networks (VANETs) are an emerging technology that employs a wireless local area network (WLAN) characterized by an ad-hoc topology. VANETs comprise diverse entities that are integrated to establish effective communication among themselves and with other associated services. VANETs commonly encounter a range of obstacles, such as routing complexities and excessive control overhead, yet most existing efforts have failed to deliver an integrated approach that addresses both routing and the minimization of control overhead. The present study introduces an Improved Deep Reinforcement Learning (IDRL) approach for routing, with the aim of reducing the augmented control overhead. The proposed IDRL routing technique optimizes the routing path while simultaneously reducing the convergence time under dynamic vehicle density. The IDRL effectively monitors, analyzes, and predicts routing behavior by leveraging transmission capacity and vehicle data. As a result, transmission delay is reduced by utilizing adjacent vehicles to transport packets through Vehicle-to-Infrastructure (V2I) communication. Simulations were executed to assess the resilience and scalability of the model in delivering efficient routing while mitigating the amplified overheads. The method demonstrates a high level of efficacy in transmitting messages that are safeguarded through V2I communication. The simulation results indicate that the proposed IDRL routing approach decreases latency, increases the packet delivery ratio, and improves data reliability compared with other available routing techniques.
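A greatly simplified, tabular sketch of reinforcement-learning-based next-hop selection is shown below. It is not the paper's IDRL model (which uses deep networks and transmission/vehicle-density state), but it illustrates the same learn-from-reward routing idea; the node names and reward values are hypothetical.

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch of next-hop selection. States are current nodes
# and actions are neighbouring vehicles or roadside units (RSUs).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = defaultdict(float)

def choose_next_hop(node, neighbours):
    """Epsilon-greedy choice among the node's current neighbours."""
    if random.random() < EPSILON:
        return random.choice(neighbours)
    return max(neighbours, key=lambda n: Q[(node, n)])

def update(node, next_hop, reward, next_neighbours):
    """Reward could combine link delay, delivery success, and load."""
    best_future = max((Q[(next_hop, n)] for n in next_neighbours), default=0.0)
    Q[(node, next_hop)] += ALPHA * (reward + GAMMA * best_future - Q[(node, next_hop)])

# Toy usage: forward a packet one hop and reinforce a low-delay link.
nbrs = ["v2", "v3", "rsu1"]
hop = choose_next_hop("v1", nbrs)
update("v1", hop, reward=1.0, next_neighbours=["v4", "rsu1"])
print("learned Q-value for chosen hop:", Q[("v1", hop)])
```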
ABSTRACT
Metamaterials have unique physical properties. They are made of several elements and are structured in repeating patterns at a scale smaller than the wavelengths of the phenomena they affect. Their exact structure, geometry, size, orientation, and arrangement allow them to manipulate electromagnetic waves by blocking, absorbing, amplifying, or bending them, achieving benefits not possible with ordinary materials. Metamaterials are used in microwave invisibility cloaks, invisible submarines, revolutionary electronics, microwave components, filters, and antennas with a negative refractive index. This paper proposes an improved dipper throated-based ant colony optimization (DTACO) algorithm for forecasting the bandwidth of a metamaterial antenna. The first test scenario covers the feature selection capabilities of the proposed binary DTACO algorithm on the evaluated dataset, and the second illustrates the algorithm's regression capabilities. The state-of-the-art DTO, ACO, particle swarm optimization (PSO), grey wolf optimizer (GWO), and whale optimization algorithm (WOA) were explored and compared with the DTACO algorithm. The proposed optimal DTACO-based ensemble model was contrasted with the basic multilayer perceptron (MLP) regressor, the support vector regression (SVR) model, and the random forest (RF) regressor. To assess the consistency of the developed DTACO-based model, the statistical analysis used Wilcoxon's rank-sum and ANOVA tests.
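As a rough illustration of the baseline comparison described above, the sketch below cross-validates the three named regressors on a synthetic stand-in for the metamaterial-antenna bandwidth dataset; the DTACO-weighted ensemble itself is not reproduced here.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# Synthetic placeholder for the antenna-bandwidth dataset.
X, y = make_regression(n_samples=500, n_features=10, noise=0.2, random_state=1)

baselines = {
    "MLP": MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=1),
    "SVR": SVR(C=10.0),
    "RF":  RandomForestRegressor(n_estimators=200, random_state=1),
}
for name, model in baselines.items():
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: cross-validated RMSE = {rmse:.3f}")
```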
ABSTRACT
INTRODUCTION: In public health, machine learning algorithms have been used to predict or diagnose chronic epidemiological disorders such as diabetes mellitus, which has reached epidemic proportions due to its widespread occurrence around the world. Diabetes is just one of several diseases for which machine learning techniques can be used in diagnosis, prognosis, and assessment procedures. METHODOLOGY: In this paper, we propose a new approach for boosting the classification of diabetes based on a new metaheuristic optimization algorithm. The approach introduces a new feature selection algorithm based on a dynamic Al-Biruni earth radius and dipper-throated optimization algorithm (DBERDTO). The selected features are then classified using a random forest classifier with its parameters optimized by the proposed DBERDTO. RESULTS: The proposed methodology is evaluated and compared with recent optimization methods and machine learning models to prove its efficiency and superiority. The overall accuracy of diabetes classification achieved by the proposed approach is 98.6%. In addition, statistical tests have been conducted to assess the significance and statistical difference of the proposed approach based on the analysis of variance (ANOVA) and Wilcoxon signed-rank tests. CONCLUSIONS: The results of these tests confirmed the superiority of the proposed approach compared with the other classification and optimization methods.
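A minimal sketch of the classification pipeline is shown below, with standard scikit-learn tools standing in for the DBERDTO steps: SelectKBest replaces the metaheuristic feature selection, RandomizedSearchCV replaces the metaheuristic tuning of the random forest, and the synthetic data is a placeholder for the diabetes dataset.

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic stand-in for the diabetes dataset.
X, y = make_classification(n_samples=768, n_features=8, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Feature selection (stand-in for the DBERDTO feature selection step).
selector = SelectKBest(f_classif, k=5).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Random forest with tuned parameters (stand-in for DBERDTO tuning).
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions={"n_estimators": randint(100, 500),
                         "max_depth": randint(3, 15),
                         "min_samples_split": randint(2, 10)},
    n_iter=20, cv=5, random_state=42)
search.fit(X_tr_sel, y_tr)
print("test accuracy:", search.best_estimator_.score(X_te_sel, y_te))
```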
ABSTRACT
The paper focuses on hepatitis C virus (HCV) infection in Egypt, which has one of the highest rates of HCV in the world. The high prevalence is linked to several factors, including the use of injection drugs, poor sterilization practices in medical facilities, and low public awareness. This paper introduces the hyOPTGB model, which employs an optimized gradient boosting (GB) classifier to predict HCV disease in Egypt. The model's accuracy is enhanced by optimizing hyperparameters with the OPTUNA framework. Min-Max normalization is used as a preprocessing step to scale the dataset values, and the forward selection (FS) wrapper method is used to identify essential features. The dataset used in the study contains 1385 instances and 29 features and is available at the UCI machine learning repository. The authors compare the performance of five machine learning models, including decision tree (DT), support vector machine (SVM), dummy classifier (DC), ridge classifier (RC), and bagging classifier (BC), with the hyOPTGB model. The system's efficacy is assessed using various metrics, including accuracy, recall, precision, and F1-score. The hyOPTGB model outperformed the other machine learning models, achieving a 95.3% accuracy rate. The authors also compared the hyOPTGB model against models proposed by other authors who used the same dataset.
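Because OPTUNA, Min-Max scaling, and forward selection are standard tools, the pipeline can be sketched directly. The search space, the number of selected features, and the synthetic data below are illustrative assumptions rather than the paper's exact settings.

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the UCI HCV dataset (1385 instances, 29 features).
X, y = make_classification(n_samples=1385, n_features=29, n_informative=10,
                           random_state=0)
X = MinMaxScaler().fit_transform(X)                 # Min-Max normalization

# Forward (wrapper) feature selection around a small GB model.
sfs = SequentialFeatureSelector(
    GradientBoostingClassifier(n_estimators=50, random_state=0),
    n_features_to_select=10, direction="forward", cv=3)
X_sel = sfs.fit_transform(X, y)

def objective(trial):
    # Hyperparameter ranges are illustrative, not the paper's search space.
    clf = GradientBoostingClassifier(
        n_estimators=trial.suggest_int("n_estimators", 50, 400),
        learning_rate=trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        max_depth=trial.suggest_int("max_depth", 2, 8),
        random_state=0)
    return cross_val_score(clf, X_sel, y, cv=5).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print("best CV accuracy:", study.best_value, "params:", study.best_params)
```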
ABSTRACT
The COVID-19 epidemic poses a worldwide threat that transcends provincial, philosophical, spiritual, radical, social, and educational borders. By using a connected network, a healthcare system with Internet of Things (IoT) functionality can effectively monitor COVID-19 cases. IoT helps a COVID-19 patient recognize symptoms and receive better therapy more quickly. Artificial intelligence (AI) is a critical component in measuring, evaluating, and diagnosing the risk of infection; it can be used to anticipate cases and forecast the number of new incidences, recovered cases, and casualties. In the context of COVID-19, IoT technologies are employed in specific patient monitoring and diagnosing processes to reduce COVID-19 exposure to others. This work uses an Indian dataset to create an enhanced convolutional neural network with a gated recurrent unit (CNN-GRU) model for COVID-19 death prediction via IoT. The data were subjected to normalization and imputation. The 4692 cases and eight features in the dataset were utilized in this research. The performance of the CNN-GRU model for COVID-19 death prediction was assessed using five evaluation metrics: median absolute error (MedAE), mean absolute error (MAE), root mean squared error (RMSE), mean square error (MSE), and coefficient of determination (R²). ANOVA and Wilcoxon signed-rank tests were used to determine the statistical significance of the presented model. The experimental findings showed that the CNN-GRU model outperformed other models in COVID-19 death prediction.
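A minimal Keras sketch of a CNN-GRU regressor of the kind described above is shown below; the window length, layer sizes, and random data are illustrative assumptions, not the paper's architecture or the Indian dataset.

```python
import numpy as np
from tensorflow.keras import layers, models

# Sliding-window inputs: WINDOW past days, eight features per day (as in the
# described dataset); the synthetic data is a placeholder.
WINDOW, N_FEATURES = 14, 8
X = np.random.rand(500, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(500, 1).astype("float32")

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local temporal patterns
    layers.MaxPooling1D(pool_size=2),
    layers.GRU(64),                                       # longer-range dynamics
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                                      # predicted daily deaths
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [MSE, MAE] on the toy data
```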
ABSTRACT
Diagnosing a brain tumor takes a long time and relies heavily on the radiologist's abilities and experience. The amount of data that must be handled has increased dramatically as the number of patients has grown, making old procedures both costly and ineffective. Many researchers have investigated a variety of algorithms for detecting and classifying brain tumors that are both accurate and fast. Deep Learning (DL) approaches have recently become popular in developing automated systems capable of accurately diagnosing or segmenting brain tumors in less time. DL enables the use of a pre-trained Convolutional Neural Network (CNN) model on medical images, specifically for classifying brain cancers. The proposed CNN-based Brain Tumor Classification Model (BCM-CNN) optimizes the CNN hyperparameters using an adaptive dynamic sine-cosine fitness grey wolf optimizer (ADSCFGWO) algorithm. Hyperparameter optimization is followed by training a model built with Inception-ResnetV2. The model employs the commonly used pre-trained Inception-ResnetV2 to improve brain tumor diagnosis, and its output is binary (0: Normal, 1: Tumor). There are primarily two types of hyperparameters: (i) hyperparameters that determine the underlying network structure and (ii) hyperparameters that govern how the network is trained. The ADSCFGWO algorithm draws from both the sine cosine and grey wolf algorithms in an adaptable framework that exploits both algorithms' strengths. The experimental results show that BCM-CNN achieved the best results as a classifier due to the enhancement of the CNN's performance by hyperparameter optimization. The BCM-CNN achieved 99.98% accuracy on the BRaTS 2021 Task 1 dataset.
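The transfer-learning part of BCM-CNN can be sketched as follows: a frozen Inception-ResNetV2 backbone with a binary sigmoid head (0: Normal, 1: Tumor). The learning rate and dropout passed to build_model are examples of the kind of training hyperparameters the ADSCFGWO optimizer would tune; the values shown are illustrative, and the optimizer itself is not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

def build_model(learning_rate=1e-4, dropout=0.4):
    """Frozen Inception-ResNetV2 backbone with a binary classification head."""
    base = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=(299, 299, 3), pooling="avg")
    base.trainable = False                      # transfer learning: freeze backbone
    model = models.Sequential([
        base,
        layers.Dropout(dropout),                # tunable regularization
        layers.Dense(1, activation="sigmoid"),  # 0: Normal, 1: Tumor
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```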
ABSTRACT
Evapotranspiration is an important quantity required in many applications, such as hydrology and agricultural and irrigation planning. Reference evapotranspiration is particularly important, and predicting its variations is beneficial for analyzing the needs and management of water resources. In this paper, we explore the predictive ability of hybrid ensemble learning to predict daily reference evapotranspiration (RET) under a semi-arid climate by using meteorological datasets at 12 locations in the Andalusia province in southern Spain. The datasets comprise mean, maximum, and minimum air temperatures, mean relative humidity, and mean wind speed. A new modified variant of the grey wolf optimizer, named the PRSFGWO algorithm, is proposed to maximize the ensemble's prediction accuracy through optimal weight tuning and to evaluate the proposed model's capacity when climate data are limited. The performance of the proposed approach, based on weighted ensemble learning, is compared with various algorithms commonly adopted in relevant studies. A diverse set of statistical measurements, alongside ANOVA tests, was used to evaluate the predictive performance of the prediction models. The proposed model showed high-accuracy statistics, with relative root mean errors lower than 0.999% and a minimum R² of 0.99. The model inputs were also reduced from six variables to only two for cost-effective predictions of daily RET. This shows that the PRSFGWO algorithm is a good RET prediction model for the semi-arid climate region in southern Spain. The results obtained from this research are very promising compared with existing models in the literature.
Subjects
Desert Climate, Wind, Water Resources, Hydrology, Machine Learning
ABSTRACT
Human skin diseases have become increasingly prevalent in recent decades, with millions of individuals in developed countries experiencing monkeypox. Such conditions often carry less obvious but no less devastating risks, including increased vulnerability to monkeypox, cancer, and low self-esteem. Due to the low visual resolution of monkeypox disease images, medical specialists with high-level tools are typically required for a proper diagnosis. The manual diagnosis of monkeypox disease is subjective, time-consuming, and labor-intensive. Therefore, it is necessary to create a computer-aided approach for the automated diagnosis of monkeypox disease. Most research articles on monkeypox disease have relied on convolutional neural networks (CNNs) with classical loss functions to pick up discriminative elements in monkeypox images. To enhance this, a novel framework using Al-Biruni Earth radius (BER) optimization-based stochastic fractal search (BERSFS) is proposed to fine-tune the deep CNN layers for classifying monkeypox disease from images. As a first step in the proposed approach, we use deep CNN-based models to learn the embedding of input images in Euclidean space. In the second step, we use an optimized classification model based on the triplet loss function to calculate the distance between pairs of images in Euclidean space and learn features that can be used to distinguish between different cases, including monkeypox cases. The proposed approach uses images of human skin diseases obtained from an African hospital. The experimental results demonstrate the proposed framework's efficacy, as it outperforms numerous prior studies on skin disease problems. In addition, statistical experiments with Wilcoxon and analysis of variance (ANOVA) tests are conducted to evaluate the proposed approach in terms of effectiveness and stability. The recorded results confirm the superiority of the proposed method when compared with other optimization algorithms and machine learning models.
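The embedding-plus-triplet-loss step can be sketched in PyTorch as below; ResNet-18 is used here as a generic backbone and the anchor/positive/negative batches are random tensors, so this is only an illustration of the second step described above, not the BERSFS-tuned model.

```python
import torch
import torch.nn as nn
from torchvision import models

# A CNN maps skin-lesion images into a 128-d Euclidean embedding space; the
# triplet loss pulls same-class images together and pushes other classes apart.
backbone = models.resnet18(weights="DEFAULT")
backbone.fc = nn.Linear(backbone.fc.in_features, 128)   # 128-d embedding head

triplet_loss = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# One training step on a dummy (anchor, positive, negative) batch.
anchor = torch.randn(8, 3, 224, 224)     # e.g. monkeypox lesions
positive = torch.randn(8, 3, 224, 224)   # same class as the anchors
negative = torch.randn(8, 3, 224, 224)   # a different skin condition

optimizer.zero_grad()
emb_a, emb_p, emb_n = backbone(anchor), backbone(positive), backbone(negative)
loss = triplet_loss(emb_a, emb_p, emb_n)
loss.backward()
optimizer.step()
print("triplet loss:", loss.item())
```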
ABSTRACT
The chest X-ray is considered a significant clinical utility for basic examination and diagnosis. The human lung area can be affected by various infections, such as bacteria and viruses, leading to pneumonia. An efficient and reliable classification method facilitates the diagnosis of such infections. Deep transfer learning has been introduced for pneumonia detection from chest X-rays in different models. However, there is still a need for further improvement in the feature extraction and advanced classification stages. This paper proposes a two-stage classification method for classifying different cases from chest X-ray images based on a proposed Advanced Squirrel Search Optimization Algorithm (ASSOA). The first stage performs feature learning and extraction using a Convolutional Neural Network (CNN) model, ResNet-50, with image augmentation and dropout. The ASSOA algorithm is then applied to the extracted features for feature selection. Finally, the Multi-layer Perceptron (MLP) Neural Network's connection weights are optimized by the proposed ASSOA algorithm (using the selected features) to classify input cases. A Kaggle chest X-ray (pneumonia) dataset consisting of 5,863 X-rays is employed in the experiments. The proposed ASSOA algorithm is compared with the basic Squirrel Search (SS) optimization algorithm, the Grey Wolf Optimizer (GWO), and the Genetic Algorithm (GA) for feature selection to validate its efficiency. The proposed (ASSOA + MLP) approach is also compared with classifiers based on (SS + MLP), (GWO + MLP), and (GA + MLP) in terms of performance metrics. The proposed (ASSOA + MLP) algorithm achieved a classification mean accuracy of 99.26%, and it achieved a classification mean accuracy of 99.7% on a chest X-ray COVID-19 dataset obtained from GitHub. The results and statistical tests demonstrate the high effectiveness of the proposed method in determining infected cases.
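A minimal Keras sketch of the first stage (ResNet-50 feature learning with augmentation and dropout) is shown below; the augmentation settings and the 256-dimensional feature layer are illustrative assumptions, and the ASSOA feature-selection and MLP weight-optimization stages are not reproduced.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# On-the-fly image augmentation, applied during training only.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Frozen ResNet-50 backbone used as the feature extractor.
base = ResNet50(include_top=False, weights="imagenet",
                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = base(x)
x = layers.Dropout(0.3)(x)
features = layers.Dense(256, activation="relu", name="features")(x)
outputs = layers.Dense(1, activation="sigmoid")(features)   # pneumonia vs. normal

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

The activations of the "features" layer would be the vectors handed to the ASSOA feature-selection step in the described pipeline.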
ABSTRACT
Diagnosis is a critical preventive step in coronavirus research, since COVID-19 has manifestations similar to other types of pneumonia. CT scans and X-rays play an important role in that direction. However, processing chest CT images and using them to accurately diagnose COVID-19 is a computationally expensive task. Machine learning techniques have the potential to overcome this challenge. This article proposes two optimization algorithms for feature selection and classification of COVID-19. The proposed framework has three cascaded phases. Firstly, features are extracted from the CT scans using a Convolutional Neural Network (CNN) named AlexNet. Secondly, a proposed feature selection algorithm, the Guided Whale Optimization Algorithm (Guided WOA) based on Stochastic Fractal Search (SFS), is applied, followed by balancing the selected features. Finally, a proposed voting classifier, Guided WOA based on Particle Swarm Optimization (PSO), aggregates different classifiers' predictions to choose the most voted class. This increases the chance that individual classifiers, e.g., Support Vector Machine (SVM), Neural Networks (NN), k-Nearest Neighbor (KNN), and Decision Trees (DT), show significant discrepancies. Two datasets are used to test the proposed model: CT images with clinical findings positive for COVID-19 and CT images negative for COVID-19. The proposed feature selection algorithm (SFS-Guided WOA) is compared with other optimization algorithms widely used in recent literature to validate its efficiency. The proposed voting classifier (PSO-Guided-WOA) achieved an AUC (area under the curve) of 0.995, which is superior to other voting classifiers in terms of performance metrics. Wilcoxon rank-sum, ANOVA, and T-test statistical tests are also applied to statistically assess the quality of the proposed algorithms.
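The voting stage can be sketched with scikit-learn's VotingClassifier over the four named classifiers; the soft-voting weights below are fixed by hand for illustration, whereas in the paper they would be selected by the PSO-Guided WOA, and the synthetic data stands in for the extracted CT-scan features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the CT-scan feature vectors.
X, y = make_classification(n_samples=600, n_features=40, n_informative=15,
                           random_state=0)

# Soft voting over SVM, NN, KNN, and DT; weights shown here are illustrative,
# not the PSO-Guided WOA-optimized weights.
voter = VotingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("nn", MLPClassifier(max_iter=1500, random_state=0)),
                ("knn", KNeighborsClassifier()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="soft", weights=[0.4, 0.3, 0.2, 0.1])

print("CV accuracy:", cross_val_score(voter, X, y, cv=5).mean())
```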