1.
Sensors (Basel) ; 23(6)2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36991884

ABSTRACT

Terminal neurological conditions affect millions of people worldwide and hinder them from performing their daily tasks and movements normally. The brain-computer interface (BCI) is the best hope for many individuals with motor deficiencies, as it can help patients interact with the outside world and handle daily tasks without assistance. Machine learning-based BCI systems have therefore emerged as non-invasive techniques for reading signals from the brain and translating them into commands that help such people perform diverse limb motor tasks. This paper proposes an improved machine learning-based BCI system that analyzes EEG signals obtained from motor imagery to distinguish among limb motor tasks, based on BCI Competition III dataset IVa. The proposed EEG signal-processing pipeline performs two major steps. The first step uses a meta-heuristic optimization technique, the whale optimization algorithm (WOA), to select the optimal features for discriminating between neural activity patterns. The second step applies machine learning models such as LDA, k-NN, DT, RF, and LR to the chosen features to enhance the precision of EEG signal analysis. The proposed BCI system, which merges WOA-based feature selection with an optimized k-NN classifier, achieved an overall accuracy of 98.6%, outperforming other machine learning models and previous techniques on BCI Competition III dataset IVa. Additionally, the contribution of each EEG feature to the classification model is reported using Explainable AI (XAI) tools, which provide insight into the individual contributions of the features to the model's predictions. By incorporating XAI techniques, the results offer greater transparency into the relationship between the EEG features and the model's predictions. The proposed method shows potential for controlling diverse limb motor tasks, supporting people with limb impairments and enhancing their quality of life.
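As a rough illustration of the WOA-plus-k-NN wrapper described above, the sketch below runs a simplified binary whale optimization loop over synthetic stand-in data, scoring each candidate feature mask by cross-validated k-NN accuracy. The dataset, whale and iteration counts, and the 0.5 binarization threshold are illustrative assumptions, not the paper's configuration.

```python
# Simplified binary WOA wrapper around a k-NN fitness function (a sketch,
# not the authors' implementation; synthetic data stands in for EEG features).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           random_state=0)  # stand-in for EEG features

def fitness(mask):
    """Cross-validated k-NN accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

n_whales, n_iter, dim = 10, 20, X.shape[1]
pos = rng.random((n_whales, dim))            # continuous whale positions in [0, 1]
best_pos, best_fit = pos[0].copy(), -1.0

for t in range(n_iter):
    a = 2 - 2 * t / n_iter                   # linearly decreasing WOA coefficient
    for i in range(n_whales):
        A = 2 * a * rng.random(dim) - a
        C = 2 * rng.random(dim)
        if rng.random() < 0.5:               # encircling-prey update
            pos[i] = best_pos - A * np.abs(C * best_pos - pos[i])
        else:                                # spiral update toward the best whale
            l = rng.uniform(-1, 1)
            D = np.abs(best_pos - pos[i])
            pos[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best_pos
        pos[i] = np.clip(pos[i], 0, 1)
        mask = (pos[i] > 0.5).astype(int)    # binarize: keep feature if > 0.5
        f = fitness(mask)
        if f > best_fit:
            best_fit, best_pos = f, pos[i].copy()

print(f"best CV accuracy {best_fit:.3f} with {(best_pos > 0.5).sum()} features")
```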


Subject(s)
Brain-Computer Interfaces, Quality of Life, Electroencephalography/methods, Algorithms, Machine Learning
2.
Sensors (Basel) ; 22(11)2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35684871

ABSTRACT

Alzheimer's disease (AD) is a chronic disease that affects the elderly. Although there are many types of dementia, Alzheimer's disease is one of the leading causes of death. AD is a chronic brain disorder that leads to problems with language, disorientation, bodily functions, memory loss, cognitive decline, mood or personality changes, and ultimately death from dementia. Unfortunately, no cure has yet been developed, and its causes remain unknown. Clinically, imaging tools can aid diagnosis, and deep learning has recently emerged as an important component of these tools. Deep learning requires little or no image preprocessing and can infer an optimal data representation from raw images without prior feature selection, producing a more objective and less biased process. The performance of a convolutional neural network (CNN) is primarily affected by the chosen hyperparameters and the dataset used. A deep learning model for classifying Alzheimer's patients was developed using transfer learning and optimized with the Artificial Gorilla Troops Optimizer (GTO) for early diagnosis. This study proposes the A3C-TL-GTO framework for MRI image classification and AD detection. A3C-TL-GTO is an empirical quantitative framework for accurate and automatic AD classification, developed and evaluated on the Alzheimer's Dataset (four image classes) and the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, an online repository from which the magnetic resonance imaging (MRI) brain images were obtained. The proposed framework reduces the bias and variability that preprocessing steps and hyperparameter optimization introduce into the classifier and dataset used. Our strategy, evaluated on MRIs, is easily adaptable to other imaging modalities. The experimental results demonstrate that the proposed framework achieves 96.65% accuracy on the Alzheimer's Dataset and 96.25% on the ADNI dataset, outperforming other state-of-the-art approaches in accuracy and showing significant potential for patient care.
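As a hedged sketch of the transfer-learning side of A3C-TL-GTO, the snippet below builds a frozen pre-trained backbone with a new classification head whose hyperparameters (here just the learning rate and dense-layer width) an outer optimizer such as GTO could search over. The backbone choice, class count, and parameter values are assumptions for illustration.

```python
# Transfer-learning candidate builder; an outer metaheuristic (e.g. GTO)
# would score many such candidates on a validation split and keep the best.
import tensorflow as tf

def build_model(learning_rate: float, dense_units: int,
                n_classes: int = 4) -> tf.keras.Model:
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False                       # freeze the pre-trained features
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(dense_units, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# One candidate configuration; the hyperparameter values are placeholders.
model = build_model(learning_rate=1e-3, dense_units=128)
model.summary()
```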


Subject(s)
Alzheimer Disease, Aged, Alzheimer Disease/diagnostic imaging, Humans, Machine Learning, Neuroimaging
3.
Sensors (Basel) ; 21(22)2021 Nov 16.
Article in English | MEDLINE | ID: mdl-34833680

ABSTRACT

The human brain effortlessly performs vision processes using the visual system, which helps solve multi-object tracking (MOT) problems. However, few algorithms simulate human strategies for solving MOT. Devising a method that simulates human visual activity has therefore become a good route to improving MOT results, especially under occlusion. Eight brain strategies were studied from a cognitive perspective and imitated to build a novel algorithm. Two of these strategies, rescue saccades and stimulus attributes, gave our algorithm novel and outstanding results. First, rescue saccades were imitated by detecting the occlusion state in each frame, the critical situation toward which the human brain saccades. Then, stimulus attributes were mimicked by using semantic attributes to re-identify persons in these occlusion states. Our algorithm performs favourably on the MOT17 dataset compared with state-of-the-art trackers. In addition, we created a new dataset of 40,000 images, 190,000 annotations, and 4 classes to train the detection model to detect occlusion and semantic attributes. The experimental results demonstrate that our new dataset achieves outstanding performance with the Scaled-YOLOv4 detection model, reaching 0.89 mAP@0.5.
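The occlusion-state detection that triggers the rescue-saccade strategy can be pictured as an overlap test between tracked bounding boxes. Below is a minimal sketch under that assumption; the box format, the 0.3 IoU threshold, and the sample frame are illustrative, not the paper's actual detector output.

```python
# Flag an occlusion event when two tracked boxes overlap beyond a threshold.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def occluded_pairs(boxes, threshold=0.3):
    """Return index pairs of boxes whose overlap suggests occlusion."""
    return [(i, j) for i in range(len(boxes)) for j in range(i + 1, len(boxes))
            if iou(boxes[i], boxes[j]) > threshold]

frame = [(10, 10, 50, 80), (20, 12, 60, 82), (200, 30, 240, 100)]
print(occluded_pairs(frame))   # -> [(0, 1)]: the first two people overlap
```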


Subject(s)
Algorithms, Semantics, Brain, Humans, Saccades
4.
Sensors (Basel) ; 21(13)2021 Jul 04.
Article in English | MEDLINE | ID: mdl-34283139

ABSTRACT

There is a crucial need to process patients' data immediately to make sound decisions rapidly; these data are very large and have excessive features. Recently, many cloud-based IoT healthcare systems have been proposed in the literature. However, several challenges remain regarding processing time and overall system efficiency for big healthcare data. This paper introduces a novel approach for processing healthcare data and predicting useful information at minimal computational cost. The main objective is to accept several types of data, improve accuracy, and reduce processing time. The proposed approach uses a hybrid algorithm consisting of two phases. The first phase minimizes the number of features in the big data by using the whale optimization algorithm (WOA) as a feature selection technique. The second phase then performs real-time data classification using a Naïve Bayes classifier. The approach is based on fog computing for better business agility, better security, deeper insights with privacy, and reduced operating cost. The experimental results demonstrate that the proposed approach reduces the number of dataset features, improves accuracy, and reduces processing time. Accuracy improved by an average of 3.6% (3.34% for diabetes, 2.94% for heart disease, 3.77% for heart attack prediction, and 4.15% for sonar). It also enhances processing speed, reducing processing time by an average of 8.7% (28.96% for diabetes, 1.07% for heart disease, 3.31% for heart attack prediction, and 1.4% for sonar).
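A minimal sketch of the second phase described above: once phase one has produced a feature mask, a lightweight Naïve Bayes model classifies the reduced data, which suits resource-constrained fog nodes. The dataset and the hand-made mask below are stand-ins for the WOA output.

```python
# Phase two of the hybrid pipeline: fast probabilistic classification on the
# feature subset selected in phase one (here a hand-made stand-in mask).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
mask = np.zeros(X.shape[1], dtype=bool)
mask[:10] = True          # pretend phase one kept the first 10 features

X_tr, X_te, y_tr, y_te = train_test_split(X[:, mask], y, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print(f"accuracy on the reduced feature set: "
      f"{accuracy_score(y_te, clf.predict(X_te)):.3f}")
```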


Subject(s)
Algorithms, Whales, Animals, Bayes Theorem, Big Data, Delivery of Health Care
5.
Chaos Solitons Fractals ; 138: 110137, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32834583

ABSTRACT

Nowadays, a significant number of infectious diseases, such as human coronavirus disease (COVID-19), are threatening the world by spreading at an alarming rate. Some of the literature points out that the pandemic exhibits seasonal patterns in its spread, incidence, and distribution. In connection with the spread and distribution of the infection, scientific analysis is required to answer whether the next summer can save people from COVID-19. Many researchers have asked whether high temperatures during summer can slow the spread of COVID-19 as they do other seasonal flus. Since many questions remain unanswered and many aspects of COVID-19 are still unknown, in-depth study and analysis of associated weather features are required. Moreover, understanding the nature of COVID-19 and forecasting its spread require further investigation of the real effect of weather variables on its transmission among people. In this work, various machine learning regression models are proposed to extract the relationship between different factors and the spreading rate of COVID-19. The machine learning algorithms employed estimate the impact of weather variables such as temperature and humidity on the transmission of COVID-19 by extracting the relationship between the number of confirmed cases and the weather variables in certain regions. To validate the proposed method, we collected the required weather and census datasets and carried out the necessary preprocessing. The experimental results show that the weather variables are more relevant in predicting the mortality rate than census variables such as population, age, and urbanization; thus, temperature and humidity are important features for predicting the COVID-19 mortality rate. Moreover, the results indicate that the higher the temperature, the lower the number of infection cases.
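As a hedged illustration of this regression setup, the sketch below fits a random forest regressor on synthetic weather and census columns and reads off feature importances; the data generator deliberately encodes a negative temperature effect so the example echoes, rather than reproduces, the reported finding.

```python
# Fit a regressor on weather/census features and compare their importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "temperature": rng.uniform(0, 35, 300),
    "humidity": rng.uniform(20, 95, 300),
    "population": rng.uniform(1e5, 1e7, 300),
})
# Synthetic target that falls as temperature rises (an assumption built into
# this toy data, not a measurement).
df["confirmed_cases"] = (4000 - 80 * df["temperature"]
                         + 10 * df["humidity"] + rng.normal(0, 200, 300))

model = RandomForestRegressor(random_state=0).fit(
    df[["temperature", "humidity", "population"]], df["confirmed_cases"])
for name, imp in zip(model.feature_names_in_, model.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```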

6.
Bioengineering (Basel) ; 11(8)2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39199780

ABSTRACT

The global prevalence of cardiovascular diseases (CVDs) as a leading cause of death highlights the imperative need for refined risk assessment and prognostication methods. Traditional approaches, including the Framingham Risk Score, blood tests, imaging techniques, and clinical assessments, although widely utilized, are hindered by limitations such as a lack of precision, reliance on static risk variables, and the inability to adapt to new patient data, necessitating the exploration of alternative strategies. In response, this study introduces CardioRiskNet, a hybrid AI-based model designed to transcend these limitations. The proposed CardioRiskNet consists of eight parts: data preprocessing, feature selection and encoding, eXplainable AI (XAI) integration, active learning, attention mechanisms, risk prediction and prognosis, evaluation and validation, and deployment and integration. First, the patient data are preprocessed by cleaning the data, handling missing values, applying normalization, and extracting features. Next, the most informative features are selected and the categorical variables are converted into numerical form. Distinctively, CardioRiskNet employs active learning to iteratively select informative samples, enhancing its learning efficacy, while its attention mechanism dynamically focuses on the relevant features for precise risk prediction. Additionally, the integration of XAI facilitates interpretability and transparency in the decision-making process. According to the experimental results, CardioRiskNet demonstrates superior performance in accuracy, sensitivity, specificity, and F1-score, with values of 98.7%, 98.7%, 99%, and 98.7%, respectively. These findings show that CardioRiskNet can accurately assess and prognosticate CVD risk, demonstrating the power of active learning and AI to surpass conventional methods. CardioRiskNet's novel approach and high performance thus advance the management of CVDs and provide healthcare professionals with a powerful tool for patient care.
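A minimal sketch of the first two CardioRiskNet stages (preprocessing, then feature encoding) using a standard scikit-learn pipeline; the column names and the tiny sample table are hypothetical.

```python
# Impute missing values, normalize numeric features, and one-hot encode
# categorical variables, mirroring the preprocessing/encoding stages above.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric = ["age", "systolic_bp", "cholesterol"]       # hypothetical columns
categorical = ["sex", "smoker"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]),
     categorical),
])

df = pd.DataFrame({"age": [63, np.nan, 48], "systolic_bp": [145, 130, np.nan],
                   "cholesterol": [240, 190, 210],
                   "sex": ["M", "F", "F"], "smoker": ["yes", "no", np.nan]})
print(preprocess.fit_transform(df))
```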

7.
Biomimetics (Basel) ; 9(6)2024 Jun 16.
Article in English | MEDLINE | ID: mdl-38921244

ABSTRACT

The COVID-19 pandemic has exposed the need for non-interactive human recognition systems that ensure safe isolation between users and biometric equipment. This study introduces a novel Multi-Scaled Deep Convolutional Structure for Punctilious Human Gait Authentication (MSDCS-PHGA). The proposed MSDCS-PHGA segments, preprocesses, and resizes silhouette images into three scales. Gait features are extracted from these multi-scale images using custom convolutional layers and fused to form an integrated feature set. This multi-scaled deep convolutional approach significantly enhances accuracy, demonstrating its efficacy in gait recognition. The proposed convolutional neural network (CNN) architecture is assessed on three benchmark datasets: CASIA, OU-ISIR, and OU-MVLP. Moreover, the proposed model is evaluated against other pre-trained models using key performance metrics such as precision, accuracy, sensitivity, specificity, and training time. The results indicate that the proposed deep CNN model outperforms existing models for human gait recognition, achieving an accuracy of approximately 99.9% on both the CASIA and OU-ISIR datasets and 99.8% on the OU-MVLP dataset while maintaining a minimal training time of around 3 min.
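A hedged sketch of the multi-scale idea: the same silhouette is fed at three resolutions through small convolutional branches whose pooled features are fused before classification. The layer sizes, the three scales, and the subject count are illustrative assumptions, not the MSDCS-PHGA architecture itself.

```python
# Three convolutional branches at different silhouette scales, fused into one
# feature set before the final classification layer.
import tensorflow as tf

def branch(size: int) -> tf.keras.Model:
    inp = tf.keras.Input(shape=(size, size, 1))
    x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inp)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    return tf.keras.Model(inp, x)

branches = [branch(s) for s in (64, 96, 128)]       # three silhouette scales
fused = tf.keras.layers.Concatenate()([b.output for b in branches])
out = tf.keras.layers.Dense(100, activation="softmax")(fused)  # e.g. 100 subjects
model = tf.keras.Model([b.input for b in branches], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```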

8.
Heliyon ; 9(11): e21530, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38027906

ABSTRACT

Autism Spectrum Disorder (ASD) treatment requires accurate diagnosis and effective rehabilitation. Artificial intelligence (AI) techniques in medical diagnosis and rehabilitation can help doctors detect a wide range of diseases more effectively. Nevertheless, owing to its highly heterogeneous symptoms and complicated nature, ASD diagnosis remains a challenge for researchers. This study introduces an intelligent system based on the Artificial Gorilla Troops Optimizer (GTO) metaheuristic to detect ASD using deep learning and machine learning. Kaggle and the UCI ML Repository are the data sources used. The first dataset is the Autistic Children Data Set, which contains 3,374 facial images of children divided into Autistic and Non-Autistic categories. The second dataset is a compilation of three numerical repositories: (1) Autism Screening Adults, (2) Autistic Spectrum Disorder Screening Data for Adolescents, and (3) Autistic Spectrum Disorder Screening Data for Children. For the image dataset experiments, the most notable results are that (1) a transfer learning ratio greater than or equal to 50 is recommended, (2) all models recommend data augmentation, and (3) the DenseNet169 model reports the lowest loss value, 0.512. Concerning the numerical dataset, five experiments recommend standardization, and the final five attributes are optional in the classification process. The performance metrics demonstrate that the proposed GTO-based feature selection technique outperforms its counterparts in the literature.

9.
Biomimetics (Basel) ; 8(6)2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37887629

ABSTRACT

The early detection of oral cancer is pivotal for improving patient survival rates. However, the high cost of manual initial screenings poses a challenge, especially in resource-limited settings. Deep learning offers an enticing solution by enabling automated and cost-effective screening. This study introduces an empirical framework for the accurate and automatic classification of oral cancer from microscopic histopathology slide images. The system capitalizes on convolutional neural networks (CNNs), strengthened by transfer learning (TL) and fine-tuned using the Aquila Optimizer (AO) and the Artificial Gorilla Troops Optimizer (GTO), two recent metaheuristic optimization algorithms. This integration addresses the bias and unpredictability commonly encountered in the preprocessing and optimization phases. The experiments harnessed well-established pre-trained TL models, including VGG19, VGG16, MobileNet, MobileNetV3Small, MobileNetV2, MobileNetV3Large, NASNetMobile, and DenseNet201, all initialized with 'ImageNet' weights. The experimental dataset was the Histopathologic Oral Cancer Detection dataset, which includes a 'normal' class with 2,494 images and an 'OSCC' (oral squamous cell carcinoma) class with 2,698 images. The results reveal a clear performance distinction between the AO and the GTO, with the AO consistently outperforming the GTO across all models except Xception. The DenseNet201 model stands out as the most accurate, achieving an average accuracy of 99.25% with the AO and 97.27% with the GTO. This framework marks a significant step toward automating oral cancer detection and showcases the potential of optimized deep learning models in healthcare diagnostics; integrating the AO and GTO into the CNN-based system not only raises classification accuracy but also underscores the impact of metaheuristic optimization in medical image analysis.

10.
Front Med (Lausanne) ; 10: 1106717, 2023.
Article in English | MEDLINE | ID: mdl-37089598

ABSTRACT

Renal diseases are common health problems that affect millions of people around the world. Among these are kidney stones, which affect anywhere from 1% to 15% of the global population and are thus considered one of the leading causes of chronic kidney disease (CKD). In addition, renal cancer is the tenth most prevalent type of cancer, accounting for 2.5% of all cancers. Artificial intelligence (AI) in medical systems can assist radiologists and other healthcare professionals in diagnosing different renal diseases (RD) with high reliability. This study proposes an AI-based transfer learning framework to detect RD at an early stage. The framework, applied to CT scans and images from microscopic histopathological examinations, automatically and accurately classifies patients with RD using convolutional neural networks (CNNs), pre-trained models, and an optimization algorithm. The pre-trained CNN models used are VGG16, VGG19, Xception, DenseNet201, MobileNet, MobileNetV2, MobileNetV3Large, and NASNetMobile. In addition, the Sparrow Search Algorithm (SpaSA) is used to find the best configuration and enhance the pre-trained models' performance. Two datasets were used: the first comprises four classes (cyst, normal, stone, and tumor), while the second contains five categories relating to tumor severity (Grade 0 through Grade 4). DenseNet201 and MobileNet are the best pre-trained models for the four-class dataset. The SGD optimizer with Nesterov momentum is recommended by three models, while two models recommend only AdaGrad and AdaMax. Among the pre-trained models on the five-class dataset, DenseNet201 and Xception are the best. Experimental results prove the superiority of the proposed framework over other state-of-the-art classification models, with recorded accuracies of 99.98% (four classes) and 100% (five classes).

11.
ISA Trans ; 122: 281-293, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33962793

ABSTRACT

Shrink and swell is a phenomenon that causes transient variability in the water level when boiler load varies. The leading causes of the swell effect are changes in steam demand and the physical arrangement of the steam-generating tubes in the boiler. Steam bubbles beneath the drum water of a heat recovery steam generator (HRSG) make level control very difficult, particularly under significant disturbances in the input heat to the HRSG. Plant shutdown may occur in some situations, and combined-cycle plant efficiency is diminished. The control methods currently applied in industry are single-element and three-element control with PID controllers, but these methods are not well suited to substantial load changes. The main aim of this paper is to investigate the shrink and swell phenomenon in HRSG power plants. In addition to the existing PID loops, two standalone controllers, a fractional-order PID (FOPID) controller and a fuzzy controller, are implemented with the HRSG model, and the Artificial Bee Colony (ABC) algorithm is used to tune the FOPID efficiently. The three controllers were compared using overshoot, rise time, ISE, IAE, and ITAE as performance measures. Simulations show how efficiently the ABC optimization algorithm tunes the PID and FOPID controllers, and the proposed method improves system responses compared with the conventional optimal controller.
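For readers unfamiliar with the error integrals used as performance measures here, the sketch below computes ISE, IAE, and ITAE from a sampled error signal; the synthetic signal stands in for the drum-level error produced by each controller.

```python
# Discrete approximations of the three error integrals used to rank controllers.
import numpy as np

t = np.linspace(0, 10, 1001)                 # 10 s of response, 10 ms steps
e = np.exp(-t) * np.cos(2 * np.pi * t)       # stand-in drum-level error e(t)

dt = t[1] - t[0]
ise = np.sum(e**2) * dt            # Integral of Squared Error
iae = np.sum(np.abs(e)) * dt       # Integral of Absolute Error
itae = np.sum(t * np.abs(e)) * dt  # time-weighted IAE penalizes late errors
print(f"ISE={ise:.4f}  IAE={iae:.4f}  ITAE={itae:.4f}")
```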

12.
Environ Sci Pollut Res Int ; 29(60): 90632-90655, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35871191

ABSTRACT

This research work aims to enhance stepped double-slope solar still performance through an experimental assessment of adding linen wicks and cobalt oxide nanoparticles to the stepped double-slope solar still to improve water evaporation and water production. The results illustrated that the cotton wicks and cobalt oxide (Co3O4) nanofluid at 1 wt% increased the hourly freshwater output (HP) and instantaneous thermal efficiency (ITE). In addition, this study compares four machine learning methods for building a prediction model of tubular solar still performance. The methods developed and compared, all based on experimental data, are a support vector regressor (SVR), a decision tree regressor, a neural network, and a deep neural network. This is a multi-output prediction problem with HP and ITE as targets. The prediction performance of the SVR was the lowest, with a mean absolute error (MAE) of 70 ml/m2 h for HP and 4.5% for ITE. The decision tree regressor predicted HP better, with an MAE of 33 ml/m2 h, and almost the same MAE for ITE. The neural network improved HP prediction further, with an MAE of 28 ml/m2 h, but predicted ITE slightly worse, at 5.7%. The best model used the deep neural network, with an MAE of 1.94 ml/m2 h for HP and 0.67% for ITE.
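As a hedged sketch of this two-output prediction task, the snippet below trains a small scikit-learn neural network to predict HP and ITE jointly and reports per-output MAE; the synthetic features and targets are stand-ins for the experimental measurements.

```python
# Multi-output regression (HP and ITE together) with per-target MAE.
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.uniform([600, 25], [1000, 45], size=(400, 2))  # irradiance, ambient T
hp = 0.8 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 20, 400)     # ml/m2 h
ite = 0.03 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 1, 400)  # percent
Y = np.column_stack([hp, ite])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
).fit(X_tr, Y_tr)
pred = model.predict(X_te)
print("MAE HP :", mean_absolute_error(Y_te[:, 0], pred[:, 0]))
print("MAE ITE:", mean_absolute_error(Y_te[:, 1], pred[:, 1]))
```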


Subject(s)
Neural Networks, Computer; Water
13.
Comput Biol Med ; 144: 105383, 2022 05.
Article in English | MEDLINE | ID: mdl-35290811

ABSTRACT

Researchers have developed more intelligent, highly responsive, and efficient detection methods owing to the COVID-19 demand for more widespread diagnosis. This work develops an AI-based framework that can help radiologists and other healthcare professionals diagnose COVID-19 cases with a high level of accuracy. However, in the absence of publicly available CT datasets, developing such AI tools can prove challenging. Therefore, an algorithm was proposed for automatic and accurate COVID-19 classification from CT lung images using a convolutional neural network (CNN), pre-trained models, and the Sparrow Search Algorithm (SSA). The pre-trained CNN models used are SeResNeXt50, SeResNeXt101, SENet154, MobileNet, MobileNetV2, MobileNetV3Small, and MobileNetV3Large. In addition, the SSA optimizes the various CNN and transfer learning (TL) hyperparameters to find the best configuration for the pre-trained model used and enhance its performance. Two datasets are used in the experiments: the first has two classes, the second three. The authors combined two publicly available COVID-19 collections, the COVID-19 Lung CT Scans and the COVID-19 CT Scan Dataset, as the first dataset, totalling 14,486 images. For the second dataset, the authors analyzed the Large COVID-19 CT scan slice dataset, which comprises 17,104 images. On the two-class dataset, MobileNetV3Large is the best pre-trained model; on the three-class dataset, SENet154 performs best. Results show that, compared with other CNN models such as LeNet-5 CNN, COVID Faster R-CNN, Light CNN, Fuzzy + CNN, Dynamic CNN, CNN, and Optimized CNN, the proposed framework achieves the best accuracy: 99.74% (two classes) and 98% (three classes).


Subject(s)
COVID-19; Deep Learning; COVID-19/diagnostic imaging; Humans; Neural Networks, Computer; SARS-CoV-2; Tomography, X-Ray Computed/methods
14.
PeerJ Comput Sci ; 8: e1070, 2022.
Article in English | MEDLINE | ID: mdl-36092010

ABSTRACT

Many people worldwide suffer from mental illnesses such as major depressive disorder (MDD), which affect their thoughts, behavior, and quality of life. When treatment is not received, suicide is regarded as the second leading cause of death among teenagers. Twitter is a platform where people express their emotions and thoughts on many subjects. Many studies, including this one, suggest using social media data to track depression and other mental illnesses. Even though Arabic is widely spoken and has a complex syntax, depression detection methods have not been applied to the language. An Arabic tweet dataset must first be scraped and annotated. This study then proposes a complete framework for categorizing tweets into two classes (Normal or Suicide). The article also proposes an Arabic tweet preprocessing algorithm that contrasts lemmatization, stemming, and various lexical analysis methods. Experiments are conducted using Twitter data scraped from the Internet and annotated by five different annotators. Performance metrics are reported on the suggested dataset using recent Bidirectional Encoder Representations from Transformers (BERT) and Universal Sentence Encoder (USE) models. The measured performance metrics are balanced accuracy, specificity, F1-score, IoU, ROC, Youden Index, NPV, and a weighted sum metric (WSM). For USE models, the best WSM is 80.2%; for Arabic BERT models, the best WSM is 95.26%.
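The abstract does not give the WSM's exact composition, so the sketch below simply assumes an equally weighted average over illustrative metric values to show the general shape of such a combined score.

```python
# Combine several classification metrics into one weighted sum metric (WSM).
# Equal weights and the metric values below are assumptions, not the paper's.
metrics = {"balanced_accuracy": 0.93, "specificity": 0.95, "f1": 0.94,
           "iou": 0.89, "roc_auc": 0.97, "youden": 0.88, "npv": 0.92}
weights = {name: 1 / len(metrics) for name in metrics}   # equal-weight assumption
wsm = sum(weights[name] * value for name, value in metrics.items())
print(f"WSM = {wsm:.4f}")
```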

15.
PeerJ Comput Sci ; 8: e1054, 2022.
Article in English | MEDLINE | ID: mdl-36092017

ABSTRACT

Due to its high prevalence and wide dissemination, breast cancer is a particularly dangerous disease. Breast cancer survival chances can be improved by early detection and diagnosis. For medical image analysts, diagnosis is tough, time-consuming, routine, and repetitive, and medical image analysis could be a useful method for detecting such a disease. Recently, artificial intelligence technology has been utilized to help radiologists identify breast cancer more rapidly and reliably. Convolutional neural networks, among other technologies, are promising medical image recognition and classification tools. This study proposes a framework for automatic and reliable breast cancer classification based on histological and ultrasound data. The system is built on CNNs and employs transfer learning and metaheuristic optimization. The Manta Ray Foraging Optimization (MRFO) approach is deployed to improve the framework's adaptability. Using the Breast Cancer Dataset (two classes) and the Breast Ultrasound Dataset (three classes), eight modern pre-trained CNN architectures are examined with the transfer learning technique. The framework uses MRFO to improve the performance of the CNN architectures by optimizing their hyperparameters. Extensive experiments recorded performance metrics including accuracy, AUC, precision, F1-score, sensitivity, Dice, recall, IoU, and cosine similarity. The proposed framework scored 97.73% accuracy on histopathological data and 99.01% on ultrasound data. The experimental results show that the proposed framework is superior to other state-of-the-art approaches in the literature.

16.
Diagnostics (Basel) ; 11(10)2021 Oct 19.
Article in English | MEDLINE | ID: mdl-34679634

ABSTRACT

Human brain tumors arise from the growth of abnormal cells in the brain, and identifying the tumor type is crucial for the patient's prognosis and treatment. Data from cancer microarrays typically comprise few samples with many gene expression levels as features, reflecting the curse of dimensionality and making microarray data challenging to classify. In most of the examined studies, cancer classification accuracy (malignant vs. benign) was examined without disclosing biological information related to the classification process. A new approach was proposed to bridge the gap between cancer classification and the interpretation of the biological studies of the genes implicated in cancer. This study aims to develop a new hybrid model for cancer classification, using mRMRe feature selection as a key step to improve the performance of the classification methods, together with distributed hyperparameter optimization for gradient boosting ensemble methods. To evaluate the proposed method, NB, RF, and SVM classifiers were chosen. In terms of AUC, sensitivity, and specificity, the optimized CatBoost classifier performed better than the optimized XGBoost in 5-, 6-, 8-, and 10-fold cross-validation. With an accuracy of 0.91±0.12, the optimized CatBoost classifier is more accurate than the unoptimized CatBoost classifier at 0.81±0.24. With the hybrid algorithms, SVM, RF, and NB automatically become more accurate; furthermore, SVM and RF achieve equivalent classification accuracy (0.97±0.08), higher than NB (0.91±0.12). The findings of relevant biomedical studies confirm the findings for the selected genes.
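A minimal sketch of the minimum-redundancy/maximum-relevance idea behind mRMRe: greedily add the gene whose mutual information with the label is high while its average correlation with already-selected genes is low. The synthetic data and the difference-style criterion are illustrative; the study itself uses the mRMRe package.

```python
# Greedy mRMR-style gene selection: relevance (MI with label) minus
# redundancy (mean |correlation| with already-selected genes).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=100, n_features=50, n_informative=6,
                           random_state=0)   # stand-in for microarray data
relevance = mutual_info_classif(X, y, random_state=0)

selected = [int(np.argmax(relevance))]       # start with the most relevant gene
while len(selected) < 10:
    best, best_score = None, -np.inf
    for j in range(X.shape[1]):
        if j in selected:
            continue
        redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                              for s in selected])
        score = relevance[j] - redundancy
        if score > best_score:
            best, best_score = j, score
    selected.append(best)
print("selected gene indices:", selected)
```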

17.
Neural Comput Appl ; 33(7): 2929-2948, 2021.
Article in English | MEDLINE | ID: mdl-33132535

ABSTRACT

Globally, much research is under way to study the infectious nature of COVID-19, and every day we learn something new about it from the flood of data accumulating hourly rather than daily, which instantly opens hot research avenues for artificial intelligence researchers. However, the public's concern by now is to find answers to two questions: (1) When will the COVID-19 pandemic be over? (2) After coming to its end, will COVID-19 return in what is known as a second rebound of the pandemic? In this work, we developed a predictive model that can estimate the expected period over which the virus can be stopped and the risk of a second rebound of the COVID-19 pandemic. We adopted the SARIMA model to predict the spread of the virus in several selected countries and used it to forecast the COVID-19 pandemic life cycle and its end. The study can be applied to other countries, as the nature of the virus is the same everywhere. The proposed model investigates the statistical estimation of the pandemic's slowdown period, extracted based on the concept of the normal distribution. The advantage of this study is that it can help governments act, make sound decisions, and plan for the future, so that public anxiety can be minimized and people can be mentally prepared for the next phases of the pandemic. Based on the experimental results and simulation, the most striking finding is that the expected COVID-19 infections for the countries with the highest numbers of confirmed cases will manifest between Dec-2020 and Apr-2021. Moreover, our study forecasts that there may be a second rebound of the pandemic within a year if the precautions currently in place are eased completely. We must account for the uncertain nature of the current COVID-19 pandemic and a growing, interconnected, and complex world that ultimately demands flexibility, robustness, and resilience to cope with unexpected future events and scenarios.
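As a hedged sketch of the SARIMA modelling step, the snippet below fits a seasonal ARIMA to a synthetic daily case series with statsmodels and produces a 30-day forecast; the series and the (p, d, q)(P, D, Q, s) orders are illustrative, not the paper's per-country fits.

```python
# Fit a SARIMA model to a daily case series and forecast 30 days ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
idx = pd.date_range("2020-03-01", periods=120, freq="D")
cases = pd.Series(np.cumsum(rng.poisson(50, 120)).astype(float), index=idx)

# Weekly seasonality (s=7) is a common assumption for daily COVID-19 counts.
model = SARIMAX(cases, order=(1, 1, 1), seasonal_order=(1, 0, 1, 7))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=30)        # 30-day-ahead trajectory
print(forecast.tail())
```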

18.
ISA Trans ; 99: 252-269, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31733889

ABSTRACT

This paper proposes a harmony search (HS)-based H-infinity (H∞) control method to improve on the conventional droop control method. The proposed method enhances the performance of the voltage/frequency (V/F) controller: it regulates both voltage and frequency to their rated values while enhancing the power quality of an autonomous microgrid (MG). To show the applicability of the proposed controller, its results were compared with those achieved using the model predictive control (MPC) technique, and a comparison among all the controllers presented in this paper is performed.
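A minimal harmony search sketch of the kind that could tune the H∞ controller parameters as above; the quadratic test objective and all HS settings (memory size, HMCR, PAR, bandwidth) are illustrative assumptions.

```python
# Basic harmony search: improvise new parameter vectors from a harmony memory
# and keep them when they beat the worst stored harmony.
import numpy as np

def objective(x):                    # stand-in for a closed-loop control cost
    return np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(4)
dim, hms, hmcr, par, bw, iters = 3, 10, 0.9, 0.3, 0.05, 500
memory = rng.uniform(-1, 1, (hms, dim))          # harmony memory
costs = np.array([objective(h) for h in memory])

for _ in range(iters):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:                  # draw from memory...
            new[d] = memory[rng.integers(hms), d]
            if rng.random() < par:               # ...with optional pitch adjust
                new[d] += rng.uniform(-bw, bw)
        else:                                    # or improvise a fresh value
            new[d] = rng.uniform(-1, 1)
    c = objective(new)
    worst = int(np.argmax(costs))
    if c < costs[worst]:                         # replace the worst harmony
        memory[worst], costs[worst] = new, c

print("best parameters:", memory[np.argmin(costs)])
```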
