Results 1 - 20 of 86
1.
Nutrients ; 16(14)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39064657

ABSTRACT

INTRODUCTION: Type 1 Diabetes (T1D) affects over 9 million people worldwide and necessitates meticulous self-management for blood glucose (BG) control. BG prediction technology can improve BG control and reduce the diabetes burden imposed by self-management requirements. This paper reviews BG prediction models in T1D that include nutritional components. METHOD: A systematic search following the PRISMA guidelines identified articles focusing on BG prediction algorithms for T1D that incorporate nutritional variables. Eligible studies were screened and analyzed for model type, inclusion of additional aspects in the model, prediction horizon, patient population, inputs, and accuracy. RESULTS: The study categorizes 138 blood glucose prediction models into data-driven (54%), physiological (14%), and hybrid (33%) types. Prediction horizons of ≤30 min are used in 36% of models, 31-60 min in 34%, 61-90 min in 11%, 91-120 min in 10%, and >120 min in 9%. Neural networks are the most commonly used data-driven technique (47%), and simple carbohydrate intake is frequently included in models (data-driven: 72%, physiological: 52%, hybrid: 67%). Real or free-living data are predominantly used (83%). CONCLUSION: The primary goal of blood glucose prediction in T1D is to enable informed decisions and maintain safe BG levels, considering the impact of all nutrients for meal planning and clinical relevance.


Subject(s)
Blood Glucose , Diabetes Mellitus, Type 1 , Diabetes Mellitus, Type 1/blood , Diabetes Mellitus, Type 1/diet therapy , Humans , Blood Glucose/metabolism , Glycemic Control/methods , Algorithms , Neural Networks, Computer , Blood Glucose Self-Monitoring/methods
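To make the data-driven category above concrete, the following is a minimal sketch (not taken from any reviewed model) of a neural-network BG predictor whose inputs include a nutritional variable, carbohydrate intake. The feature layout, window length, layer sizes, and synthetic data are illustrative assumptions only.

```python
# Minimal sketch of a data-driven BG predictor that includes a nutritional
# input (carbohydrate intake). Feature layout, window length, layer sizes,
# and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder samples: [6 recent CGM readings (mg/dL), insulin bolus (U),
# carbohydrate intake (g)] -> BG 30 minutes ahead.
X = np.hstack([
    rng.normal(140, 40, size=(1000, 6)),    # CGM history
    rng.exponential(2.0, size=(1000, 1)),   # insulin bolus
    rng.exponential(30.0, size=(1000, 1)),  # carbohydrates
])
# Synthetic target: last CGM value pushed up by carbs and down by insulin.
y = X[:, 5] + 0.5 * X[:, 7] - 10.0 * X[:, 6] + rng.normal(0, 10, 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", round(model.score(X_test, y_test), 3))
```
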
2.
Metab Eng ; 85: 61-72, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39038602

ABSTRACT

Advances in synthetic biology and artificial intelligence (AI) have provided new opportunities for modern biotechnology. High-performance cell factories, the backbone of industrial biotechnology, are ultimately responsible for determining whether a bio-based product succeeds or fails in the fierce competition with petroleum-based products. To date, one of the greatest challenges in synthetic biology is the creation of high-performance cell factories in a consistent and efficient manner. As so-called white-box models, numerous metabolic network models have been developed and used in computational strain design. Moreover, great progress has been made in AI-powered strain engineering in recent years. Both approaches have advantages and disadvantages. Therefore, the deep integration of AI with metabolic models is crucial for the construction of superior cell factories with higher titres, yields and production rates. The detailed applications of the latest advanced metabolic models and AI in computational strain design are summarized in this review. Additionally, approaches for the deep integration of AI and metabolic models are discussed. It is anticipated that advanced mechanistic metabolic models powered by AI will pave the way for the efficient construction of powerful industrial chassis strains in the coming years.

3.
Entropy (Basel) ; 26(6)2024 May 31.
Article in English | MEDLINE | ID: mdl-38920492

ABSTRACT

Given the rapid advancement of artificial intelligence, understanding the foundations of intelligent behaviour is increasingly important. Active inference, regarded as a general theory of behaviour, offers a principled approach to probing the basis of sophistication in planning and decision-making. This paper examines two decision-making schemes in active inference based on "planning" and "learning from experience". We also introduce a mixed model that navigates the data-complexity trade-off between these strategies, leveraging the strengths of both to facilitate balanced decision-making. We evaluate the proposed model in a challenging grid-world scenario that requires adaptability from the agent. Additionally, the model allows the evolution of various parameters to be analysed, offering valuable insights and contributing to an explainable framework for intelligent decision-making.

4.
Water Sci Technol ; 89(8): 1891-1912, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38678398

ABSTRACT

Urban flooding has made it necessary to better understand how well gully pots perform when overwhelmed by solids deposition driven by various climatic and anthropogenic variables. This study investigates solids deposition in gully pots through a review of eight models, comprising four deterministic models, two hybrid models, a statistical model, and a conceptual model, representing a wide spectrum of solids depositional processes. Traditional models capture and help manage the impact of climatic and anthropogenic variables on solids deposition, but they are prone to uncertainty due to inadequate handling of complex and non-linear variables, restricted applicability, inflexibility, and data bias. Hybrid models, which integrate traditional models with data-driven approaches, have been shown to improve predictions and support the development of models that are more robust to uncertainty. Despite their effectiveness, hybrid models lack explainability. Hence, this study presents the significance of eXplainable Artificial Intelligence (XAI) tools in addressing the challenges associated with hybrid models. Finally, crossovers between various models and a representative workflow for solids deposition modelling in gully pots are suggested. The paper concludes that explainable hybrid modelling can serve as a valuable tool for gully pot management, as it addresses key limitations of existing models.


Subject(s)
Models, Theoretical , Floods
5.
J Environ Manage ; 358: 120756, 2024 May.
Article in English | MEDLINE | ID: mdl-38599080

ABSTRACT

Water quality indicators (WQIs), such as chlorophyll-a (Chl-a) and dissolved oxygen (DO), are crucial for understanding and assessing the health of aquatic ecosystems. Precise prediction of these indicators is fundamental for the efficient administration of rivers, lakes, and reservoirs. This research utilized two deep learning (DL) algorithms, namely convolutional neural networks (CNNs) and gated recurrent units (GRUs), alongside their combination, CNN-GRU, to estimate the concentration of these indicators within a reservoir. Moreover, to optimize the outcomes of the developed hybrid model, we considered the impact of a decomposition technique, specifically the wavelet transform (WT). In addition, we implemented two machine learning (ML) algorithms, namely random forest (RF) and support vector regression (SVR), to demonstrate the superior performance of the deep learning algorithms over individual ML ones. To achieve this, we initially gathered WQIs from diverse locations and varying depths within the reservoir using an AAQ-RINKO device in the study area. It is important to highlight that, despite the use of diverse data-driven models in water quality estimation, a significant gap persists in the existing literature regarding a comprehensive hybrid algorithm that integrates the wavelet transform, CNN, and GRU methodologies to estimate WQIs accurately within a spatiotemporal framework. Subsequently, the effectiveness of the developed models was assessed using various statistical metrics, encompassing the correlation coefficient (r), root mean square error (RMSE), mean absolute error (MAE), and Nash-Sutcliffe efficiency (NSE), throughout both the training and testing phases. The findings demonstrated that the WT-CNN-GRU model outperformed the other algorithms by 13% (SVR), 13% (RF), 9% (CNN), and 8% (GRU) when R-squared and DO were considered as the evaluation index and WQI, respectively.


Subject(s)
Algorithms , Neural Networks, Computer , Water Quality , Machine Learning , Environmental Monitoring/methods , Lakes , Chlorophyll A/analysis , Wavelet Analysis
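As an illustration of the CNN-GRU hybrid described in this abstract, here is a minimal sketch of a Conv1D + GRU regressor for a single water-quality indicator such as DO. The wavelet-transform (WT) preprocessing step is omitted, and the input shape, layer sizes, and placeholder data are assumptions rather than the study's configuration.

```python
# Minimal sketch of a CNN-GRU regressor for a water-quality indicator
# (e.g. dissolved oxygen) from multivariate sensor windows. Input shape,
# filter counts, and units are illustrative assumptions; the wavelet
# decomposition step used in the study is not included here.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Placeholder data: 500 windows of 24 time steps x 5 sensor variables.
X = np.random.rand(500, 24, 5).astype("float32")
y = np.random.rand(500).astype("float32")

model = tf.keras.Sequential([
    layers.Input(shape=(24, 5)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local feature extraction
    layers.MaxPooling1D(pool_size=2),
    layers.GRU(32),                                        # temporal dynamics
    layers.Dense(1),                                       # predicted DO concentration
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```
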
6.
Heliyon ; 10(7): e28898, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38596134

ABSTRACT

This study uses operational data from a 180 kWp grid-connected solar PV system to train and compare the performance of single and hybrid machine learning models in predicting solar PV production a day ahead, a week ahead, two weeks ahead, and one month ahead. The study also analyses the trend in solar PV production and the effect of temperature on solar PV production. The performance of the models is evaluated using the R2 score, mean absolute error, and root mean square error. The findings revealed that the best-performing model for the day-ahead forecast was the artificial neural network. Random forest gave the best performance for the two-week-ahead and month-ahead forecasts, while a hybrid model composed of XGBoost and random forest gave the best performance for the week-ahead prediction. The study also observed a downward trend in solar PV production, with an average monthly decline of 244.37 kWh. Further, it was observed that increases in module temperature and ambient temperature beyond 47 °C and 25 °C, respectively, resulted in a decline in solar PV production. The study shows that machine learning models perform differently over different time horizons; selecting suitable machine learning models for solar PV forecasts over varying time horizons is therefore essential.
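A minimal sketch of the kind of XGBoost + random forest hybrid mentioned above, implemented here as simple prediction averaging with scikit-learn's VotingRegressor. The features, the synthetic target, and the averaging scheme are illustrative assumptions, and the xgboost package is assumed to be installed.

```python
# Minimal sketch of a simple XGBoost + random forest hybrid (prediction
# averaging) for PV output forecasting. Features, synthetic target, and
# the averaging scheme are illustrative assumptions, not the study's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from xgboost import XGBRegressor  # assumes the xgboost package is installed

rng = np.random.default_rng(1)
# Placeholder features: [hour of day, ambient temp, module temp, irradiance],
# all scaled to [0, 1); the target is a synthetic PV output in kW.
X = rng.random((800, 4))
y = 180.0 * X[:, 3] * (1.0 - 0.004 * (X[:, 2] * 60 - 25)) + rng.normal(0, 2, 800)

hybrid = VotingRegressor([
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("xgb", XGBRegressor(n_estimators=200, learning_rate=0.05, random_state=0)),
])
hybrid.fit(X[:600], y[:600])
print("hybrid prediction (kW):", hybrid.predict(X[600:603]))
```
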

7.
Accid Anal Prev ; 199: 107517, 2024 May.
Article in English | MEDLINE | ID: mdl-38442633

ABSTRACT

Pedestrians represent a group of vulnerable road users who are at a higher risk of sustaining severe injuries than other road users. As such, proactively assessing pedestrian crash risks is of paramount importance. Recently, extreme value theory models have been employed to proactively assess crash risks from traffic conflicts; these models rest on two sampling approaches, namely block maxima and peak over threshold. Earlier studies reported poor accuracy and large uncertainty for these models, which has been largely attributed to limited sample size. Another fundamental reason for such poor performance could be the improper selection of traffic conflict extremes due to the lack of an efficient sampling mechanism. To test this hypothesis and demonstrate the effect of the sampling technique on extreme value theory models, this study develops hybrid models in which unconventional sampling techniques are used to select the extreme vehicle-pedestrian conflicts that are then modelled using extreme value distributions to estimate crash risk. Unconventional sampling techniques here refer to unsupervised machine learning-based anomaly detection techniques. In particular, isolation forest and minimum covariance determinant techniques were used to identify extreme vehicle-pedestrian conflicts characterised by post-encroachment time as the traffic conflict measure. Video data were collected for four weekdays (6 am-6 pm) from three four-legged intersections in Brisbane, Australia, and processed using artificial intelligence-based video analytics. Results indicate that mean crash estimates of the hybrid models were much closer to observed crashes, with narrower confidence intervals, than those of traditional extreme value models. The findings demonstrate the suitability of machine learning-based anomaly detection techniques for augmenting the performance of existing extreme value models in estimating pedestrian crashes from traffic conflicts, and motivate further exploration of more advanced machine learning models for traffic conflict techniques.


Subject(s)
Accidents, Traffic , Pedestrians , Humans , Accidents, Traffic/prevention & control , Artificial Intelligence , Machine Learning , Australia
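The following sketch illustrates the hybrid sampling idea described in this abstract: an Isolation Forest flags extreme vehicle-pedestrian conflicts from conflict features (here, post-encroachment time and speed), and a generalized Pareto distribution is then fitted to the selected extremes. The data, contamination rate, and negation/shift of PET are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch: Isolation Forest selects extreme vehicle-pedestrian
# conflicts from conflict features, then a generalized Pareto distribution
# is fitted to the selected extremes. Data and settings are illustrative.
import numpy as np
from scipy.stats import genpareto
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
pet = rng.gamma(shape=4.0, scale=1.0, size=2000)      # PET in seconds
speed = rng.normal(30.0, 8.0, size=2000)              # vehicle speed, km/h
features = np.column_stack([pet, speed])

iso = IsolationForest(contamination=0.05, random_state=0).fit(features)
extremes = features[iso.predict(features) == -1]      # flagged conflicts

# Negate PET so that smaller PET (riskier) maps to larger extreme values,
# then shift to non-negative exceedances before fitting the GPD.
neg_pet = -extremes[:, 0]
exceedances = neg_pet - neg_pet.min()
c, loc, scale = genpareto.fit(exceedances)
print(f"GPD fit: shape={c:.3f}, loc={loc:.3f}, scale={scale:.3f}")
```
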
8.
Artif Intell Med ; 148: 102754, 2024 02.
Article in English | MEDLINE | ID: mdl-38325932

ABSTRACT

Epilepsy is a highly prevalent chronic neurological disorder with a great negative impact on patients' daily lives. Despite this, there is still no adequate technological support to enable epilepsy detection and continuous outpatient monitoring in everyday life. Hyperdimensional (HD) computing is a promising method for epilepsy detection via wearable devices, characterized by a simpler learning process and lower memory requirements compared to other methods. In this work, we demonstrate additional avenues in which HD computing, and the manner in which its models are built and stored, can be used to better understand, compare, and create more advanced machine learning models for epilepsy detection. These possibilities are not feasible with other state-of-the-art models, such as random forests or neural networks. We compare inter-subject model similarity for different classes (seizure and non-seizure), study the process of creating general models from personal ones, and finally propose a method of combining personal and general models to create hybrid models. This results in improved epilepsy detection performance. We also tested knowledge transfer between models trained on two different datasets. The attained insights are highly interesting not only from an engineering perspective, to create better models for wearables, but also from a neurological perspective, to better understand individual epilepsy patterns.


Subject(s)
Epilepsy , Wearable Electronic Devices , Humans , Epilepsy/diagnosis , Seizures/diagnosis , Neural Networks, Computer , Machine Learning , Electroencephalography
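A minimal sketch of the HD-computing operations this abstract relies on: class prototypes are high-dimensional bipolar vectors, personal models can be bundled into a general model, and a personal and a general model can be bundled into a hybrid model. The dimensionality and the random placeholder vectors (standing in for encoded EEG) are illustrative assumptions.

```python
# Minimal sketch of hyperdimensional (HD) computing model building:
# bundling personal class prototypes into general and hybrid prototypes.
# Dimensionality and placeholder vectors are illustrative assumptions.
import numpy as np

D = 10_000                      # hypervector dimensionality (illustrative)
rng = np.random.default_rng(3)

def bundle(vectors):
    """Majority-vote bundling of bipolar (+1/-1) hypervectors."""
    return np.sign(np.sum(vectors, axis=0) + 1e-9)

def similarity(a, b):
    """Normalized dot-product similarity between hypervectors."""
    return float(np.dot(a, b)) / len(a)

# Placeholder per-patient seizure prototypes (in practice these come from
# encoding EEG features into hypervectors and bundling over seizure windows).
personal_seizure = [rng.choice([-1.0, 1.0], size=D) for _ in range(3)]

general_seizure = bundle(personal_seizure)                       # general model
hybrid_seizure = bundle([personal_seizure[0], general_seizure])  # personal + general

query = rng.choice([-1.0, 1.0], size=D)   # stand-in for an encoded EEG window
print("similarity to personal model:", similarity(query, personal_seizure[0]))
print("similarity to hybrid model:  ", similarity(query, hybrid_seizure))
```
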
9.
Econ Hum Biol ; 52: 101336, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38104358

ABSTRACT

The distribution of obesity tends to shift from rich to poor individuals as countries develop, in a process of shifting sociodemographic patterns of obesity that has been called the 'obesity transition'. This change tends to happen with economic development, but little is known about the specific mechanisms that drive the change. We propose that improvements in childhood circumstances with economic development may be one of the drivers of the obesity transition. We explore whether the social gradient in body weight differs by childhood socioeconomic status (SES), proxied by the respondent's mother having Grade 12, using South Africa's nationally representative panel National Income Dynamics Study. In support of our hypothesis, we find that the social gradient in body weight is less positive for adults who had a high childhood SES, and already appears to have reversed among high-SES women who also had a high childhood SES. Upward social mobility over an individual's life course or across a single generation is associated with higher body weight compared to a stable high SES. But a high SES sustained in childhood and adulthood - or across more than one generation - may decrease adult obesity risk, and result in a reversal of the social gradient in body weight. Random effects within-between models show that the social gradient in body weight and its interaction with childhood SES are driven more by differences in income between individuals than by short-run changes in income within individuals, again suggesting that the obesity transition is driven by long-run changes rather than by very short-run changes. Our results are broadly robust to using several alternative measures of body weight, childhood SES and adult SES. Our results are consistent with the hypothesis that widespread improvements in childhood circumstances and nutrition with economic development may contribute to the shift to later stages of the obesity transition.


Subject(s)
Obesity , Social Mobility , Adult , Humans , Female , South Africa/epidemiology , Obesity/epidemiology , Social Class , Body Weight , Socioeconomic Factors
10.
PeerJ Comput Sci ; 9: e1599, 2023.
Article in English | MEDLINE | ID: mdl-38077566

ABSTRACT

Background: Alzheimer's disease (AD) is a disease that manifests as a deterioration in all mental activities, daily activities, and behaviors, especially memory, due to progressively increasing damage to some parts of the brain as people age. Detecting AD at an early stage is a significant challenge. Various diagnostic devices are used to diagnose AD, and magnetic resonance imaging (MRI) devices are widely used to analyze and classify the stages of AD. However, the time-consuming process of recording the affected areas of the brain in the images obtained from these devices is another challenge; consequently, conventional techniques cannot detect the early stage of AD. Methods: In this study, we proposed a deep learning model, supported by a fusion loss model, that includes fully connected layers and residual blocks to address the above-mentioned challenges. The proposed model was trained and tested on the publicly available T1-weighted MRI-based Kaggle dataset. Data augmentation techniques were applied after various preprocessing operations on the dataset. Results: The proposed model effectively classified the four AD classes in the Kaggle dataset. It reached a test accuracy of 0.973 in binary classification and 0.982 in multi-class classification, providing superior classification performance relative to other studies in the literature. The proposed method can be used online to detect AD and can support doctors in the decision-making process.

11.
Digit Health ; 9: 20552076231204748, 2023.
Article in English | MEDLINE | ID: mdl-37799502

ABSTRACT

Objectives: The rise in new cases and death counts from the mpox virus (MPV) is alarming. To mitigate the impact of MPV, it is essential to anticipate the virus's future trajectory using more precise time series and stochastic models. In the present study, a hybrid forecasting system was developed for new case and death counts of MPV infection using the world daily cumulative confirmed-case and death series. Methods: The original cumulative series was decomposed into two new subseries, a trend component and a stochastic series, using the Hodrick-Prescott filter. To assess the efficacy of the proposed models, a comparative analysis was performed with several widely recognized benchmark models, including the auto-regressive (AR) model, the auto-regressive moving average (ARMA) model, the non-parametric auto-regressive (NPAR) model, and an artificial neural network (ANN). Results: Two novel hybrid models, HPF11 and HPF34, were introduced and demonstrated superior performance compared to all other models, as evidenced by their results on key performance indicators such as root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE); this represents a significant advancement in disease prediction. Conclusion: The new models can be applied to forecasting other diseases in the future. To address the current situation effectively, governments and stakeholders must implement significant changes to ensure strict adherence to standard operating procedures (SOPs) by the public. Given the anticipated continuation of increasing trends in the coming days, these measures are essential for mitigating the impact of the outbreak.
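To illustrate the decomposition-based hybrid idea, the sketch below splits a placeholder cumulative series into trend and stochastic components with the Hodrick-Prescott filter, forecasts each component separately, and recombines them. The component models (a quadratic trend fit and an AR model) are illustrative stand-ins, not the HPF11/HPF34 specifications.

```python
# Minimal sketch of a decomposition-based hybrid forecast: Hodrick-Prescott
# filter splits the series into trend + stochastic parts, each part is
# forecast separately, and the forecasts are recombined. Component models
# and the placeholder series are illustrative assumptions.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(4)
t = np.arange(300)
cumulative = np.cumsum(50 + 0.3 * t + rng.normal(0, 5, 300))  # placeholder series

cycle, trend = hpfilter(cumulative, lamb=1600)   # stochastic + trend components

# Forecast the trend with a quadratic fit and the cycle with an AR model.
coeffs = np.polyfit(t, trend, deg=2)
trend_fc = np.polyval(coeffs, np.arange(300, 310))
cycle_fc = AutoReg(cycle, lags=7).fit().forecast(steps=10)

print("10-day-ahead forecast:", trend_fc + cycle_fc)
```
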

12.
Front Public Health ; 11: 1207624, 2023.
Article in English | MEDLINE | ID: mdl-37808978

ABSTRACT

Malaria is a common and serious disease that primarily affects developing countries and its spread is influenced by a variety of environmental and human behavioral factors; therefore, accurate prevalence prediction has been identified as a critical component of the Global Technical Strategy for Malaria from 2016 to 2030. While traditional differential equation models can perform basic forecasting, supervised machine learning algorithms provide more accurate predictions, as demonstrated by a recent study using an elastic net model (REMPS). Nevertheless, current short-term prediction systems do not achieve the required accuracy levels for routine clinical practice. To improve in this direction, stacked hybrid models have been proposed, in which the outputs of several machine learning models are aggregated by using a meta-learner predictive model. In this paper, we propose an alternative specialist hybrid approach that combines a linear predictive model that specializes in the linear component of the malaria prevalence signal and a recurrent neural network predictive model that specializes in the non-linear residuals of the linear prediction, trained with a novel asymmetric loss. Our findings show that the specialist hybrid approach outperforms the current state-of-the-art stacked models on an open-source dataset containing 22 years of malaria prevalence data from the city of Ibadan in southwest Nigeria. The specialist hybrid approach is a promising alternative to current prediction methods, as well as a tool to improve decision-making and resource allocation for malaria control in high-risk countries.


Subject(s)
Malaria , Neural Networks, Computer , Humans , Prevalence , Nigeria/epidemiology , Algorithms , Malaria/epidemiology
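A minimal sketch of the specialist-hybrid structure described in this abstract: a linear model fits the prevalence signal, a small recurrent network is trained on the residuals of the linear prediction, and the two forecasts are summed. The lag length, network size, synthetic data, and the standard MSE loss (in place of the paper's asymmetric loss) are illustrative assumptions.

```python
# Minimal sketch of a "specialist hybrid": a linear model handles the linear
# component and a recurrent network is trained on the residuals of the linear
# prediction; forecasts are summed. Settings and data are illustrative.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
series = 0.3 + 0.1 * np.sin(np.arange(400) / 12.0) + rng.normal(0, 0.02, 400)

LAGS = 12
X = np.array([series[i:i + LAGS] for i in range(len(series) - LAGS)])
y = series[LAGS:]

linear = LinearRegression().fit(X, y)          # linear specialist
residuals = y - linear.predict(X)              # what the linear model misses

rnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LAGS, 1)),
    tf.keras.layers.GRU(16),
    tf.keras.layers.Dense(1),
])
rnn.compile(optimizer="adam", loss="mse")      # standard loss; the paper uses an asymmetric loss
rnn.fit(X[..., None], residuals, epochs=2, verbose=0)  # non-linear specialist

hybrid_pred = linear.predict(X[-5:]) + rnn.predict(X[-5:][..., None], verbose=0).ravel()
print("hybrid forecasts:", hybrid_pred)
```
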
13.
Diagnostics (Basel) ; 13(17)2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37685321

ABSTRACT

Diabetic retinopathy (DR) is a complication of diabetes that damages the delicate blood vessels of the retina and can lead to blindness. Ophthalmologists rely on imaging the fundus to diagnose the retina; the process takes a long time and requires skilled doctors to diagnose and determine the stage of DR. Therefore, automatic techniques using artificial intelligence play an important role in analyzing fundus images for the detection of the stages of DR development. However, diagnosis using artificial intelligence techniques is a difficult task that passes through many stages, and the extraction of representative features is important for reaching satisfactory results. Convolutional neural network (CNN) models play an important and distinct role in extracting features with high accuracy. In this study, fundus images were used for the detection of the developmental stages of DR by two proposed methods, each with two systems. The first proposed method uses GoogLeNet with SVM and ResNet-18 with SVM. The second method uses feed-forward neural networks (FFNN) based on hybrid features extracted first using GoogLeNet, the fuzzy color histogram (FCH), the gray-level co-occurrence matrix (GLCM), and the local binary pattern (LBP), and then using ResNet-18, FCH, GLCM, and LBP. All the proposed methods obtained superior results. The FFNN with hybrid features of ResNet-18, FCH, GLCM, and LBP obtained 99.7% accuracy, 99.6% precision, 99.6% sensitivity, 100% specificity, and 99.86% AUC.
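As a sketch of the hybrid-feature idea in this abstract, the code below extracts GLCM statistics and an LBP histogram from a placeholder image, concatenates them with simulated CNN features, and feeds the result to a feed-forward classifier. The placeholder image, the simulated CNN features, and the layer sizes are assumptions; FCH extraction is omitted.

```python
# Minimal sketch of hybrid features: handcrafted texture descriptors (GLCM,
# LBP) concatenated with CNN-derived features and classified by an FFNN.
# The image, "CNN features", and classifier settings are illustrative.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
image = (rng.random((128, 128)) * 255).astype(np.uint8)   # placeholder fundus image

glcm = graycomatrix(image, distances=[1], angles=[0], levels=256, normed=True)
glcm_feats = np.array([graycoprops(glcm, p)[0, 0]
                       for p in ("contrast", "homogeneity", "energy", "correlation")])

lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, density=True)

cnn_feats = rng.random(32)       # stand-in for GoogLeNet / ResNet-18 features
hybrid_features = np.concatenate([cnn_feats, glcm_feats, lbp_hist])

# A feed-forward network (FFNN) would then classify DR stages from such vectors.
X = np.tile(hybrid_features, (20, 1)) + rng.normal(0, 0.01, (20, hybrid_features.size))
y = rng.integers(0, 4, 20)       # four DR stages (placeholder labels)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print("predicted stage:", clf.predict(X[:1]))
```
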

14.
Entropy (Basel) ; 25(7)2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37509998

ABSTRACT

This paper proposes a novel hybrid car-following model: the physics-informed conditional generative adversarial network (PICGAN), designed to enhance multi-step car-following modeling in mixed traffic flow scenarios. This hybrid model leverages the strengths of both physics-based and deep-learning-based models. By taking advantage of the inherent structure of GAN, the PICGAN eliminates the need for an explicit weighting parameter typically used in the combination of traditional physics-based and data-driven models. The effectiveness of the proposed model is substantiated through case studies using the NGSIM I-80 dataset. These studies demonstrate the model's superior trajectory reproduction, suggesting its potential as a strong contender to replace conventional models in trajectory prediction tasks. Furthermore, the deployment of PICGAN significantly enhances the stability and efficiency in mixed traffic flow environments. Given its reliable and stable results, the PICGAN framework contributes substantially to the development of efficient longitudinal control strategies for connected autonomous vehicles (CAVs) in real-world mixed traffic conditions.

15.
J Neural Eng ; 20(4)2023 07 06.
Article in English | MEDLINE | ID: mdl-37369194

ABSTRACT

Objective. Peripheral nerve interfaces have the potential to restore sensory, motor, and visceral functions. In particular, intraneural interfaces allow targeting deep neural structures with high selectivity, even if their performance strongly depends upon the implantation procedure and the subject's anatomy. Currently, few alternatives exist for the determination of the target subject's structural and functional anatomy, and statistical characterizations from cadaveric samples are limited because of their high cost. We propose an optimization workflow that can guide both the pre-surgical planning and the determination of maximally selective multisite stimulation protocols for implants consisting of several intraneural electrodes, and we characterize its performance in silico. We show that the availability of structural and functional information leads to very high performance and allows taking informed decisions on neuroprosthetic design. Approach. We employ hybrid models (HMs) of neuromodulation in conjunction with a machine learning-based surrogate model to determine fiber activation under electrical stimulation, and two steps of optimization through particle swarm optimization to optimize in silico the implant geometry, implantation, and stimulation protocols, using morphological data from the human median nerve at a reduced computational cost. Main results. Our method allows establishing the optimal geometry of multi-electrode transverse intra-fascicular multichannel electrode implants, the optimal number of electrodes to implant, their optimal insertion, and a set of multipolar stimulation protocols that lead in silico to selective activation of all the muscles innervated by the human median nerve. Significance. We show how to effectively use HMs for optimizing personalized neuroprostheses for motor function restoration. We provide in-silico evidence of the potential of multipolar stimulation to greatly increase selectivity. We also show that knowledge of the structural and functional anatomy of the target subject leads to very high selectivity, motivating the development of methods for their in vivo characterization.


Subject(s)
Median Nerve , Peripheral Nerves , Humans , Electrodes, Implanted , Electrodes , Peripheral Nerves/physiology , Electric Stimulation/methods , Biophysics
16.
Math Biosci ; 362: 109033, 2023 08.
Article in English | MEDLINE | ID: mdl-37257641

ABSTRACT

We provide a critique of mathematical biology in light of rapid developments in modern machine learning. We argue that out of the three modelling activities - (1) formulating models; (2) analysing models; and (3) fitting or comparing models to data - inherent to mathematical biology, researchers currently focus too much on activity (2) at the cost of (1). This trend, we propose, can be reversed by realising that any given biological phenomenon can be modelled in an infinite number of different ways, through the adoption of a pluralistic approach, where we view a system from multiple, different points of view. We explain this pluralistic approach using fish locomotion as a case study and illustrate some of the pitfalls - universalism, creating models of models, etc. - that hinder mathematical biology. We then ask how we might rediscover a lost art: that of creative mathematical modelling.


Subject(s)
Models, Biological , Models, Theoretical , Animals , Locomotion
17.
Brain Sci ; 13(4)2023 Apr 19.
Article in English | MEDLINE | ID: mdl-37190650

ABSTRACT

The recognition of emotions is one of the most challenging issues in human-computer interaction (HCI). EEG signals are widely adopted as a method for recognizing emotions because of their ease of acquisition, mobility, and convenience. Deep neural networks (DNN) have provided excellent results in emotion recognition studies. Most studies, however, use other methods to extract handcrafted features, such as Pearson correlation coefficient (PCC), Principal Component Analysis, Higuchi Fractal Dimension (HFD), etc., even though DNN is capable of generating meaningful features. Furthermore, most earlier studies largely ignored spatial information between the different channels, focusing mainly on time domain and frequency domain representations. This study utilizes a pre-trained 3D-CNN MobileNet model with transfer learning on the spatio-temporal representation of EEG signals to extract features for emotion recognition. In addition to fully connected layers, hybrid models were explored using other decision layers such as multilayer perceptron (MLP), k-nearest neighbor (KNN), extreme learning machine (ELM), XGBoost (XGB), random forest (RF), and support vector machine (SVM). Additionally, this study investigates the effects of post-processing or filtering output labels. Extensive experiments were conducted on the SJTU Emotion EEG Dataset (SEED) (three classes) and SEED-IV (four classes) datasets, and the results obtained were comparable to the state-of-the-art. Based on the conventional 3D-CNN with ELM classifier, SEED and SEED-IV datasets showed a maximum accuracy of 89.18% and 81.60%, respectively. Post-filtering improved the emotional classification performance in the hybrid 3D-CNN with ELM model for SEED and SEED-IV datasets to 90.85% and 83.71%, respectively. Accordingly, spatial-temporal features extracted from the EEG, along with ensemble classifiers, were found to be the most effective in recognizing emotions compared to state-of-the-art methods.
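The extreme learning machine (ELM) decision layer used in the best-performing hybrid above can be sketched in a few lines: a fixed random hidden layer followed by a closed-form least-squares readout. The stand-in CNN features, class count, and hidden size below are illustrative assumptions.

```python
# Minimal sketch of an extreme learning machine (ELM) decision layer of the
# kind placed on top of CNN-derived EEG features: a fixed random hidden layer
# plus a closed-form least-squares readout. Data and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 128))            # stand-in for 3D-CNN EEG features
y = rng.integers(0, 3, size=300)           # three emotion classes (as in SEED)
Y = np.eye(3)[y]                           # one-hot targets

W = rng.normal(size=(128, 256))            # random, untrained hidden weights
b = rng.normal(size=256)
H = np.tanh(X @ W + b)                     # hidden-layer activations

beta = np.linalg.pinv(H) @ Y               # closed-form output weights
pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```
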

18.
Diagnostics (Basel) ; 13(6)2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36980429

ABSTRACT

Improving the accuracy, efficiency, and precision of time-series forecasts is becoming critical for authorities to predict, monitor, and prevent the spread of coronavirus disease. However, the results obtained from predictive models are imprecise and inefficient because the datasets contain both linear and non-linear patterns. Linear models such as the autoregressive integrated moving average (ARIMA) cannot be used effectively to predict complex time series, so non-linear approaches are better suited for this purpose. Therefore, to obtain COVID-19 predictions closer to the true values, a hybrid approach was implemented, and the objectives of this study are twofold. The first objective is to propose an intelligence-based prediction method, the autoregressive integrated moving average-least-squares support vector machine (ARIMA-LSSVM) hybrid, to achieve better prediction results. The second objective is to investigate the performance of this model by comparing it with the ARIMA, support vector machine (SVM), least-squares support vector machine (LSSVM), and ARIMA-SVM models. The investigation is based on three real COVID-19 datasets: daily new cases, daily new deaths, and daily new recovered cases. Statistical measures, namely the mean square error, root mean square error, mean absolute error, and mean absolute percentage error, were then used to verify that the proposed model outperforms the comparison models. Empirical results using three recent datasets of known COVID-19 cases in Malaysia show that the proposed model produces the smallest error values for both the training and testing datasets across all comparison models, meaning that its predictions are closest to the true values. Compared with the ARIMA, SVM, LSSVM, and ARIMA-SVM models, the proposed model also achieves much larger percent error reductions for training and testing on all datasets. The proposed model is therefore an efficient and effective way to improve predictions of future pandemic behaviour with a higher level of accuracy and efficiency.
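A minimal sketch of the ARIMA-plus-support-vector hybrid structure: ARIMA captures the linear pattern, and a support vector regressor is trained on the ARIMA residuals; the forecasts are summed. Scikit-learn's SVR is used here as a stand-in for a least-squares SVM, and the ARIMA order, lag length, and placeholder series are assumptions.

```python
# Minimal sketch of an ARIMA + support-vector hybrid: ARIMA models the linear
# pattern, an SVR (stand-in for LS-SVM) models the ARIMA residuals, and the
# two forecasts are summed. Order, lags, and data are illustrative.
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
cases = np.cumsum(rng.poisson(120, 200)).astype(float)   # placeholder daily series

arima = ARIMA(cases, order=(2, 1, 2)).fit()
resid = arima.resid

LAGS = 7
Xr = np.array([resid[i:i + LAGS] for i in range(len(resid) - LAGS)])
yr = resid[LAGS:]
svr = SVR(kernel="rbf", C=10.0).fit(Xr, yr)

linear_fc = arima.forecast(steps=1)[0]                       # ARIMA part
nonlinear_fc = svr.predict(resid[-LAGS:].reshape(1, -1))[0]  # residual part
print("hybrid one-step forecast:", linear_fc + nonlinear_fc)
```
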

19.
Heliyon ; 9(2): e13167, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36747538

ABSTRACT

Solar radiation is a free and very useful input for most sectors, such as heat, health, tourism, agriculture, and energy production, and it plays a critical role in the sustainability of biological and chemical processes in nature. In this framework, knowing solar radiation data, or estimating it as accurately as possible, is vital to get the maximum benefit from the sun. From this point of view, many sectors have revised their future investments and plans according to the knowledge or estimation of solar radiation in order to enhance their profit margins for sustainable development. This has notably attracted the attention of researchers to the estimation of solar radiation with low errors, and various types of models have been continuously developed in the literature. The present review paper centres on solar radiation estimation using empirical models, time series, artificial intelligence algorithms, and hybrid models. In general, these models require the atmospheric, geographic, climatic, and historical solar radiation data of a given region. The literature review shows that each model has its advantages and disadvantages in the estimation of solar radiation, and a model that gives the best results for one region may give the worst results for another region. Furthermore, an input parameter that strongly improves model performance for one region may worsen performance for another region. In this direction, the estimation of solar radiation is detailed separately in terms of empirical models, time series, artificial intelligence algorithms, and hybrid algorithms. Accordingly, the research gaps, challenges, and future directions for the estimation of solar radiation are drawn out in the present study. The results show that hybrid models exhibit more accurate and reliable results in most studies, owing to their ability to merge different models and benefit from the advantages of each, while empirical models come to the fore in terms of ease of use and low computational cost.

20.
Bioresour Technol ; 372: 128625, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36642201

ABSTRACT

Given the potential of machine learning algorithms to revolutionize the bioengineering field, this paper examined and summarized the literature related to artificial intelligence (AI) in bioprocessing. Natural language processing (NLP) was employed to explore the direction of the research domain. All papers from 2013 to 2022 with specific keywords related to bioprocessing using AI were extracted from Scopus and grouped into two five-year periods, 2013-2017 and 2018-2022, and the past and recent research directions were compared. Based on this procedure, selected sample papers from the most recent five years were subjected to further review and analysis. The results show that 50% of the publications in the past five years focused on topics related to hybrid models, ANN, biopharmaceutical manufacturing, and biorefinery. The summarization and analysis of the outcomes indicate that implementing AI could improve design and process engineering strategies in bioprocessing.


Subject(s)
Artificial Intelligence , Big Data , Machine Learning , Algorithms , Natural Language Processing