Results 1 - 20 of 53
1.
J Environ Manage ; 358: 120756, 2024 May.
Article in English | MEDLINE | ID: mdl-38599080

ABSTRACT

Water quality indicators (WQIs), such as chlorophyll-a (Chl-a) and dissolved oxygen (DO), are crucial for understanding and assessing the health of aquatic ecosystems. Precise prediction of these indicators is fundamental for the efficient administration of rivers, lakes, and reservoirs. This research utilized two deep learning (DL) algorithms, namely convolutional neural networks (CNNs) and gated recurrent units (GRUs), alongside their combination, CNN-GRU, to estimate the concentration of these indicators within a reservoir. Moreover, to optimize the outcomes of the developed hybrid model, we considered the impact of a decomposition technique, specifically the wavelet transform (WT). In addition, we developed two machine learning (ML) algorithms, namely random forest (RF) and support vector regression (SVR), to demonstrate the superior performance of DL algorithms over individual ML ones. To achieve this, we first gathered WQIs from diverse locations and varying depths within the reservoir using an AAQ-RINKO device in the study area. It is important to highlight that, despite the use of diverse data-driven models in water quality estimation, a significant gap persists in the existing literature regarding a comprehensive hybrid algorithm that integrates the wavelet transform, CNN, and GRU methodologies to estimate WQIs accurately within a spatiotemporal framework. Subsequently, the effectiveness of the developed models was assessed using various statistical metrics, including the correlation coefficient (r), root mean square error (RMSE), mean absolute error (MAE), and Nash-Sutcliffe efficiency (NSE), throughout both the training and testing phases.
The findings demonstrated that the WT-CNN-GRU model outperformed the other algorithms by 13% (SVR), 13% (RF), 9% (CNN), and 8% (GRU) when R-squared was used as the evaluation index and DO as the target WQI.
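The evaluation metrics named in this abstract (r, RMSE, MAE, NSE) are standard and easy to state precisely. A minimal sketch, illustrative only and not the authors' code:

```python
import math

def rmse(obs, sim):
    # Root mean square error
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    # Mean absolute error
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 minus SSE over the variance of the observations
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def pearson_r(obs, sim):
    # Correlation coefficient between observed and simulated series
    mo, ms = sum(obs) / len(obs), sum(sim) / len(sim)
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (so * ss)
```

A perfect model gives RMSE = MAE = 0, NSE = 1, and r = 1; NSE can go negative when the model predicts worse than the observed mean would.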


Subject(s)
Algorithms , Neural Networks, Computer , Water Quality , Machine Learning , Environmental Monitoring/methods , Lakes , Chlorophyll A/analysis , Wavelet Analysis
2.
Sci Rep ; 14(1): 7833, 2024 04 03.
Article in English | MEDLINE | ID: mdl-38570560

ABSTRACT

Heart disease is a leading global cause of mortality and a major public health problem for a large number of individuals. A major issue raised by regular clinical data analysis is the recognition of cardiovascular illnesses, including heart attacks and coronary artery disease, even though early identification of heart disease can save many lives. Accurate forecasting and decision assistance may be achieved effectively with machine learning (ML). Big Data, the vast amounts of data generated by the health sector, may assist models used to make diagnostic choices by revealing hidden information or intricate patterns. This paper describes a big data analysis and visualization approach for heart disease detection using a hybrid deep learning algorithm. The proposed approach is intended for use with big data systems such as Apache Hadoop. An extensive medical data collection is first subjected to an improved k-means clustering (IKC) method to remove outliers, and the remaining class distribution is then balanced using the synthetic minority over-sampling technique (SMOTE). The next step is to forecast the disease using a bio-inspired hybrid mutation-based swarm intelligence (HMSI) with an attention-based gated recurrent unit network (AttGRU) model, after recursive feature elimination (RFE) has determined which features are most important. In our implementation, we compare four machine learning algorithms: SAE + ANN (sparse autoencoder + artificial neural network), LR (logistic regression), KNN (K-nearest neighbour), and naïve Bayes. The experimental results indicate that the hybrid model attains a 95.42% accuracy rate for heart disease prediction, effectively outperforming the compared methods and addressing the research gap identified in the related work.
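SMOTE, used here for class balancing, interpolates between a minority sample and one of its nearest minority neighbours. A bare-bones sketch of that idea (parameter names are illustrative, not the paper's implementation):

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples by interpolating
    between a random base sample and one of its k nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours by squared Euclidean distance (excluding base)
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, nb)))
    return synthetic
```

Because each synthetic point lies on a segment between two real minority points, it stays inside the minority region rather than duplicating existing samples.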


Subject(s)
Coronary Artery Disease , Deep Learning , Heart Diseases , Humans , Bayes Theorem , Heart Diseases/diagnosis , Heart Diseases/genetics , Coronary Artery Disease/diagnosis , Coronary Artery Disease/genetics , Algorithms , Intelligence
3.
Sci Rep ; 14(1): 6942, 2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38521848

ABSTRACT

Watermarking is one of the crucial techniques in the domain of information security, preventing the exploitation of 3D Mesh models in the Internet era. In 3D Mesh watermark embedding, moderately perturbing the vertices is commonly required to retain them in a certain pre-arranged relationship with their neighboring vertices. This paper proposes a novel watermarking authentication method, called Nearest Centroid Discrete Gaussian and Levenberg-Marquardt (NCDG-LV), for distortion detection and recovery using salient point detection. In this method, the salient points are selected using the Nearest Centroid and Discrete Gaussian Geometric (NC-DGG) salient point detection model. Map segmentation is applied to the 3D Mesh model to segment it into distinct sub-regions according to the selected salient points. Finally, the watermark is embedded by employing the Multi-function Barycenter in each spatially selected and segmented region. In the extraction process, the embedded watermark is extracted from each re-segmented region by means of Levenberg-Marquardt deep neural network watermark extraction. In the authentication stage, watermark bits are extracted by analyzing the geometry via Levenberg-Marquardt back-propagation. Based on a performance evaluation, the proposed method exhibits high imperceptibility and tolerance against attacks such as smoothing, cropping, translation, and rotation. The experimental results further demonstrate that the proposed method is superior in terms of salient point detection time, distortion rate, true positive rate, peak signal-to-noise ratio, bit error rate, and root mean square error compared to state-of-the-art methods.

4.
Eur J Intern Med ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38458880

ABSTRACT

It is important to determine the risk for admission to the intensive care unit (ICU) in patients with COVID-19 presenting at the emergency department. Using artificial neural networks, we propose a new Data Ensemble Refinement Greedy Algorithm (DERGA) based on 15 easily accessible hematological indices. A database of 1596 patients with COVID-19 was used; it was divided into a training dataset of 1257 patients (80% of the database) for training the algorithms and a testing dataset of 339 patients (20% of the database) to check their reliability. The optimal combination of hematological indicators, which gives the best prediction, consists of only four indicators: neutrophil-to-lymphocyte ratio (NLR), lactate dehydrogenase, ferritin, and albumin. The best prediction corresponds to a particularly high accuracy of 97.12%. In conclusion, our novel approach provides a robust model based only on basic hematological parameters for predicting the risk for ICU admission and optimizing COVID-19 patient management in clinical practice.
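DERGA itself is the authors' algorithm, but its greedy flavour can be illustrated with generic forward feature selection: repeatedly add the feature that most improves a score until no addition helps. A hypothetical sketch; the score function in the test below is a toy stand-in, not a clinical model:

```python
def greedy_forward_selection(features, score_fn):
    """Greedily add the feature that most improves score_fn(subset);
    stop when no remaining feature improves the score."""
    selected, best = [], float("-inf")
    remaining = list(features)
    while remaining:
        # score every one-feature extension of the current subset
        cand_scores = [(score_fn(selected + [f]), f) for f in remaining]
        s, f = max(cand_scores)
        if s <= best:
            break  # no candidate improves the score
        best = s
        selected.append(f)
        remaining.remove(f)
    return selected, best
```

With a score that rewards informative features and mildly penalizes subset size, the search stops at a small subset, mirroring how four of fifteen indices sufficed here.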

5.
Sci Rep ; 14(1): 4877, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38418500

ABSTRACT

Differential evolution (DE) is a robust optimizer designed for solving complex research problems in the computational intelligence community. In the present work, a multi-hybrid DE (MHDE) is proposed for improving the overall working capability of the algorithm without compromising solution quality. Adaptive parameters, enhanced mutation, enhanced crossover, population reduction, iterative division, and Gaussian random sampling are the major characteristics of the proposed MHDE algorithm. First, iterative division is used for improved exploration and exploitation; then an adaptive proportional population-size reduction mechanism is followed to reduce computational complexity. Weibull distribution and Gaussian random sampling are also incorporated to mitigate premature convergence. The proposed framework is validated using the IEEE CEC benchmark suites (CEC 2005, CEC 2014, and CEC 2017). The algorithm is applied to four engineering design problems and to the weight minimization of three frame design problems. Experimental results are analysed and compared with recent hybrid algorithms such as Laplacian biogeography-based optimization, adaptive differential evolution with archive (JADE), success-history-based DE, self-adaptive DE, LSHADE, MVMO, the fractional-order calculus-based flower pollination algorithm, the sine cosine crow search algorithm, and others. Statistically, the Friedman and Wilcoxon rank-sum tests confirm that the proposed algorithm fares better than the others.
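For readers unfamiliar with the baseline that MHDE extends, classic DE/rand/1/bin fits in a few lines. This is the textbook algorithm, not the proposed MHDE:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimise f with classic DE/rand/1/bin inside box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                        bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jrand)
                else pop[i][j]
                for j in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a smooth unimodal function such as the 2-D sphere, this baseline already converges near the optimum; MHDE's additions target harder multimodal and constrained landscapes.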

6.
Sci Rep ; 14(1): 4816, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38413614

ABSTRACT

Many real-world optimization problems, particularly engineering ones, involve constraints that make finding a feasible solution challenging. Numerous researchers have investigated this challenge for constrained single- and multi-objective optimization problems. In particular, this work extends the boundary update (BU) method proposed by Gandomi and Deb (Comput. Methods Appl. Mech. Eng. 363:112917, 2020) for constrained optimization. BU is an implicit constraint-handling technique that cuts the infeasible search space over iterations so that the feasible region is found faster. In doing so, the search space is twisted, which can make the optimization problem more challenging. In response, two switching mechanisms are implemented that transform the landscape, along with the variables, back to the original problem once the feasible region is found. To achieve this, two thresholds, representing distinct switching methods, are considered. In the first approach, the optimization process transitions to a BU-free state when constraint violations reach zero. In the second, it shifts to a BU-free optimization phase when no further change is observed in the objective space. For validation, benchmark and engineering problems are solved with well-known evolutionary single- and multi-objective optimization algorithms. Herein, the proposed method is benchmarked against runs with and without the BU approach over the whole search process. The results show that the proposed method can significantly improve both convergence speed and solution quality for constrained optimization problems.

7.
J Cell Mol Med ; 28(4): e18105, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38339761

ABSTRACT

Complement inhibition has shown promise in various disorders, including COVID-19. A prediction tool that includes complement genetic variants is therefore vital. This study aims to identify crucial complement-related variants and determine an optimal pattern for accurate disease outcome prediction. Genetic data from 204 COVID-19 patients hospitalized between April 2020 and April 2021 at three referral centres were analysed using an artificial intelligence-based algorithm to predict disease outcome (ICU vs. non-ICU admission). A recently introduced alpha-index identified the 30 most predictive genetic variants. The DERGA algorithm, which employs multiple classification algorithms, determined the optimal pattern of these key variants, resulting in 97% accuracy for predicting disease outcome. Individual variation ranged from 40 to 161 variants per patient, with 977 total variants detected. This study demonstrates the utility of the alpha-index in ranking a substantial number of genetic variants. This approach enables the implementation of well-established classification algorithms that effectively determine the relevance of genetic variants in predicting outcomes with high accuracy.


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , COVID-19/genetics , Artificial Intelligence , Algorithms
8.
Sci Rep ; 14(1): 534, 2024 01 04.
Article in English | MEDLINE | ID: mdl-38177156

ABSTRACT

The most widely used method for detecting Coronavirus Disease 2019 (COVID-19) is real-time polymerase chain reaction. However, this method has several drawbacks, including high cost, lengthy turnaround time for results, and the potential for false-negative results due to limited sensitivity. To address these issues, additional technologies such as computed tomography (CT) or X-rays have been employed for diagnosing the disease. Chest X-rays are more commonly used than CT scans due to the widespread availability of X-ray machines, lower ionizing radiation, and lower cost of equipment. COVID-19 presents certain radiological biomarkers that can be observed through chest X-rays, making it necessary for radiologists to manually search for these biomarkers. However, this process is time-consuming and prone to errors. Therefore, there is a critical need to develop an automated system for evaluating chest X-rays. Deep learning techniques can be employed to expedite this process. In this study, a deep learning-based method called Custom Convolutional Neural Network (Custom-CNN) is proposed for identifying COVID-19 infection in chest X-rays. The Custom-CNN model consists of eight weighted layers and utilizes strategies like dropout and batch normalization to enhance performance and reduce overfitting. The proposed approach achieved a classification accuracy of 98.19% and aims to accurately classify COVID-19, normal, and pneumonia samples.


Subject(s)
COVID-19 , Humans , X-Rays , Radiography , COVID-19/diagnostic imaging , Neural Networks, Computer , Biomarkers
9.
Sci Rep ; 14(1): 676, 2024 01 05.
Article in English | MEDLINE | ID: mdl-38182607

ABSTRACT

Melanoma is a severe skin cancer that involves abnormal cell development. This study aims to provide a new feature fusion framework for melanoma classification that includes a novel 'F' flag feature for early detection. This novel 'F' indicator efficiently distinguishes benign skin lesions from malignant ones, known as melanoma. The article proposes an architecture built on a Double Decker Convolutional Neural Network (DDCNN) feature fusion approach. The network's first deck, a Convolutional Neural Network (CNN), finds difficult-to-classify hairy images using a confidence factor termed the intra-class variance score. These hirsute image samples are combined to form a Baseline Separated Channel (BSC). By eliminating hair and applying data augmentation techniques, the BSC is prepared for analysis. The network's second deck trains on the pre-processed BSC and generates bottleneck features. The bottleneck features are merged with features generated from the ABCDE clinical bio-indicators to promote classification accuracy. The resulting hybrid fused features, together with the novel 'F' flag feature, are fed to different types of classifiers. The proposed system was trained using the ISIC 2019 and ISIC 2020 datasets to assess its performance. The empirical findings show that the DDCNN feature fusion strategy for detecting malignant melanoma achieved a specificity of 98.4%, accuracy of 93.75%, precision of 98.56%, and Area Under Curve (AUC) value of 0.98. This study proposes a novel approach that can accurately identify and diagnose fatal skin cancer and outperform other state-of-the-art techniques, which is attributed to the DDCNN 'F' feature fusion framework. This research also found improvements in several classifiers when utilising the 'F' indicator, with specificity gains of up to 7.34%.


Subject(s)
Melanoma , Skin Neoplasms , Humans , Melanoma/diagnostic imaging , Skin Neoplasms/diagnostic imaging , Skin , Area Under Curve , Neural Networks, Computer
10.
iScience ; 27(1): 108709, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38269095

ABSTRACT

The increasing demand for food production due to the growing population is raising the need for more food-productive environments for plants. The genetic behaviour of plant traits differs across growing environments, yet monitoring individual plant component traits manually is tedious and impractical. Plant breeders therefore need computer vision-based plant monitoring systems to analyze the productivity and environmental suitability of different plants. Such systems enable feasible quantitative analysis, geometric analysis, and yield-rate analysis of the plants. Plant breeders have used many data collection methods according to their needs; in the presented review, most of them are discussed along with their corresponding challenges and limitations. Furthermore, the traditional approaches to segmentation and classification in plant phenotyping are also discussed. Data limitation problems and the computer vision solutions currently adopted for them are highlighted; these solutions mitigate the problem only partially. The available datasets and current issues are also outlined. The presented study covers plant phenotyping problems, suggested solutions, and current challenges from data collection to classification.

11.
BMC Bioinformatics ; 25(1): 33, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38253993

ABSTRACT

Breast cancer remains a major public health challenge worldwide. The identification of accurate biomarkers is critical for the early detection and effective treatment of breast cancer. This study utilizes an integrative machine learning approach to analyze breast cancer gene expression data for superior biomarker and drug target discovery. Gene expression datasets, obtained from the GEO database, were merged post-preprocessing. From the merged dataset, differential expression analysis between breast cancer and normal samples revealed 164 differentially expressed genes. Meanwhile, a separate gene expression dataset revealed 350 differentially expressed genes. Additionally, the BGWO_SA_Ens algorithm, integrating binary grey wolf optimization and simulated annealing with an ensemble classifier, was employed on gene expression datasets to identify predictive genes including TOP2A, AKR1C3, EZH2, MMP1, EDNRB, S100B, and SPP1. From over 10,000 genes, BGWO_SA_Ens identified 1404 in the merged dataset (F1 score: 0.981, PR-AUC: 0.998, ROC-AUC: 0.995) and 1710 in the GSE45827 dataset (F1 score: 0.965, PR-AUC: 0.986, ROC-AUC: 0.972). The intersection of DEGs and BGWO_SA_Ens selected genes revealed 35 superior genes that were consistently significant across methods. Enrichment analyses uncovered the involvement of these superior genes in key pathways such as AMPK, Adipocytokine, and PPAR signaling. Protein-protein interaction network analysis highlighted subnetworks and central nodes. Finally, a drug-gene interaction investigation revealed connections between superior genes and anticancer drugs. Collectively, the machine learning workflow identified a robust gene signature for breast cancer, illuminated their biological roles, interactions and therapeutic associations, and underscored the potential of computational approaches in biomarker discovery and precision oncology.
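The "intersection of DEGs and feature-selected genes" step described above is plain set algebra: keep only genes flagged in every differential-expression list and also chosen by the selector. An illustrative sketch with made-up gene lists:

```python
def superior_genes(deg_lists, selected):
    """Genes differentially expressed in every DEG list AND chosen
    by the feature-selection step, returned in sorted order."""
    common_degs = set.intersection(*(set(lst) for lst in deg_lists))
    return sorted(common_degs & set(selected))
```

Requiring agreement across independent methods, as here, is a simple way to filter out genes that any single analysis might flag spuriously.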


Subject(s)
Biomarkers, Tumor , Breast Neoplasms , Humans , Female , Biomarkers, Tumor/genetics , Precision Medicine , Algorithms , Drug Delivery Systems , Breast Neoplasms/drug therapy , Breast Neoplasms/genetics
12.
Sci Rep ; 14(1): 2215, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38278836

ABSTRACT

Detecting potholes and traffic signs is crucial for driver assistance systems and autonomous vehicles, emphasizing real-time and accurate recognition. In India, approximately 2500 fatalities occur annually due to accidents linked to hidden potholes and overlooked traffic signs. Existing methods often overlook water-filled and illuminated potholes, as well as those shaded by trees. Additionally, they neglect the perspective and illuminated (nighttime) traffic signs. To address these challenges, this study introduces a novel approach employing a cascade classifier along with a vision transformer. A cascade classifier identifies patterns associated with these elements, and Vision Transformers conducts detailed analysis and classification. The proposed approach undergoes training and evaluation on ICTS, GTSRDB, KAGGLE, and CCSAD datasets. Model performance is assessed using precision, recall, and mean Average Precision (mAP) metrics. Compared to state-of-the-art techniques like YOLOv3, YOLOv4, Faster RCNN, and SSD, the method achieves impressive recognition with a mAP of 97.14% for traffic sign detection and 98.27% for pothole detection.

13.
Sci Rep ; 14(1): 1333, 2024 01 16.
Article in English | MEDLINE | ID: mdl-38228772

ABSTRACT

In previous studies, replicated and multiple types of speech data have been used for Parkinson's disease (PD) detection. However, two main problems in these studies are lower PD detection accuracy and inappropriate validation methodologies leading to unreliable results. This study discusses the effects of inappropriate validation methodologies used in previous studies and highlights the use of appropriate alternative validation methods that would ensure generalization. To enhance PD detection accuracy, we propose a two-stage diagnostic system that refines the extracted set of features through [Formula: see text] regularized linear support vector machine and classifies the refined subset of features through a deep neural network. To rigorously evaluate the effectiveness of the proposed diagnostic system, experiments are performed on two different voice recording-based benchmark datasets. For both datasets, the proposed diagnostic system achieves 100% accuracy under leave-one-subject-out (LOSO) cross-validation (CV) and 97.5% accuracy under k-fold CV. The results show that the proposed system outperforms the existing methods regarding PD detection accuracy. The results suggest that the proposed diagnostic system is essential to improving non-invasive diagnostic decision support in PD.
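The validation point made above hinges on grouping all recordings of one subject into the same fold, so no subject appears in both training and test data. A minimal LOSO split generator (illustrative, index-based rather than any specific library's API):

```python
def loso_splits(subject_ids):
    """Leave-one-subject-out CV: each unique subject forms one test fold;
    every sample from that subject is held out together."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield train, test
```

Plain k-fold over recordings, by contrast, can leak a subject's other recordings into training, inflating accuracy — the methodological flaw this abstract criticizes.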


Subject(s)
Parkinson Disease , Voice , Humans , Algorithms , Parkinson Disease/diagnosis , Support Vector Machine , Neural Networks, Computer
14.
Sci Rep ; 13(1): 18335, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37884584

ABSTRACT

OAuth2.0 is a Single Sign-On approach that helps to authorize users to log into multiple applications without re-entering the credentials. Here, the OAuth service provider controls the central repository where data is stored, which may lead to third-party fraud and identity theft. To circumvent this problem, we need a distributed framework to authenticate and authorize the user without third-party involvement. This paper proposes a distributed authentication and authorization framework using a secret-sharing mechanism that comprises a blockchain-based decentralized identifier and a private distributed storage via an interplanetary file system. We implemented our proposed framework in Hyperledger Fabric (permissioned blockchain) and Ethereum TestNet (permissionless blockchain). Our performance analysis indicates that secret sharing-based authentication takes negligible time for generation and a combination of shares for verification. Moreover, security analysis shows that our model is robust, end-to-end secure, and compliant with the Universal Composability Framework.
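The secret-sharing mechanism referenced here is commonly Shamir's scheme: a random degree-(k-1) polynomial over a prime field whose constant term is the secret, with shares as point evaluations and Lagrange interpolation for recovery. A compact sketch (the field prime and parameters are illustrative, not the paper's choices):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def make_shares(secret, k, n, seed=None):
    """Split secret into n shares; any k of them reconstruct it."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # modular inverse via Fermat's little theorem
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Fewer than k shares reveal nothing about the secret, which is what lets verification proceed without any single trusted repository.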

15.
Sci Rep ; 13(1): 11052, 2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37422487

ABSTRACT

The considerable improvement of technology produced for various applications has resulted in a growth in data sizes, such as healthcare data, which is renowned for its large number of variables and data samples. Artificial neural networks (ANN) have demonstrated adaptability and effectiveness in classification, regression, and function approximation tasks, and are used extensively in these areas. Irrespective of the task, an ANN learns from the data by adjusting the edge weights to minimize the error between the actual and predicted values. Backpropagation is the most common technique used to learn the weights of an ANN. However, this approach is prone to sluggish convergence, which is especially problematic in the case of Big Data. In this paper, we propose a Distributed Genetic Algorithm based ANN Learning Algorithm for addressing the challenges associated with ANN learning for Big Data. The genetic algorithm is a well-utilized bio-inspired combinatorial optimization method. It can also be parallelized at multiple stages, and this can be done in an extremely effective manner for the distributed learning process. The proposed model is tested with various datasets to evaluate its realizability and efficiency. The results obtained from the experiments show that, after a specific volume of data, the proposed learning method outperformed the traditional methods in terms of convergence time and accuracy, with almost an 80% improvement in computational time.
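Training weights by a genetic algorithm instead of backpropagation can be shown on a one-neuron toy problem: evolve [w, b] of y = w*x + b by selection, crossover, and mutation. A sketch of the idea only; the paper's contribution is distributing and parallelizing these stages, which this toy omits:

```python
import random

def ga_train(data, n_weights=2, pop_size=30, gens=150, seed=3):
    """Evolve the weights of a single linear unit by an elitist GA,
    using mean squared error on `data` as the (negated) fitness."""
    rng = random.Random(seed)
    def mse(w):
        return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)
    pop = [[rng.uniform(-4, 4) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mse)
        elite = pop[: pop_size // 3]           # selection (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n_weights)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:             # Gaussian mutation of one gene
                g = rng.randrange(n_weights)
                child[g] += rng.gauss(0, 0.1)
            children.append(child)
        pop = elite + children
    best = min(pop, key=mse)
    return best, mse(best)
```

No gradients are computed anywhere, which is what makes the approach easy to distribute: fitness evaluations of different individuals are independent.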


Subject(s)
Big Data , Neural Networks, Computer , Algorithms
16.
Environ Sci Pollut Res Int ; 30(35): 84110-84125, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37355508

ABSTRACT

Effectual air quality monitoring network (AQMN) design plays a prominent role in environmental engineering. An optimal AQMN design should consider stations' mutual information and system uncertainties for effectiveness. This study develops a novel optimization model using a non-dominated sorting genetic algorithm II (NSGA-II). The Bayesian maximum entropy (BME) method generates potential stations as the input of a framework based on the transinformation entropy (TE) method to maximize the coverage and minimize the probability of selecting stations. Also, the fuzzy degree of membership and the nonlinear interval number programming (NINP) approaches are used to survey the uncertainty of the joint information. To obtain the best Pareto optimal solution of the AQMN characterization, a robust ranking technique, called Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) approach, is utilized to select the most appropriate AQMN properties. This methodology is applied to Los Angeles, Long Beach, and Anaheim in California, USA. Results suggest using 4, 4, and 5 stations to monitor CO, NO2, and ozone, respectively; however, implementing this recommendation reduces coverage by 3.75, 3.75, and 3 times for CO, NO2, and ozone, respectively. On the positive side, this substantially decreases TE for CO, NO2, and ozone concentrations by 8.25, 5.86, and 4.75 times, respectively.
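Transinformation between two monitoring stations is the mutual information of their (discretized) readings: redundant stations share high mutual information, which is why the design minimizes it while maximizing coverage. A small sketch estimating it from empirical joint frequencies (binning of continuous readings is assumed to have happened already):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Transinformation I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )
```

Identical series share all their entropy (I equals H(X)), while independent series share none (I = 0), which bounds the redundancy score between stations.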


Subject(s)
Air Pollution , Ozone , Models, Theoretical , Bayes Theorem , Environmental Monitoring/methods , Entropy , Nitrogen Dioxide/analysis , Air Pollution/analysis , Ozone/analysis
17.
Sci Rep ; 13(1): 8517, 2023 May 25.
Article in English | MEDLINE | ID: mdl-37231039

ABSTRACT

Large-scale solar energy production is still greatly hindered by the unpredictability of solar power. The intermittent, chaotic, and random nature of the solar energy supply has to be handled by comprehensive solar forecasting technologies. Beyond long-term forecasting, it is even more essential to produce short-term forecasts minutes or even seconds ahead, because key factors such as sudden cloud movement, instantaneous deviations in ambient temperature, increased relative humidity, uncertain wind velocities, haze, and rain cause undesired up- and down-ramping rates, thereby strongly affecting solar power generation. This paper presents an extended solar forecasting algorithm using an artificial neural network (ANN). A three-layer system is suggested, consisting of an input layer, a hidden layer, and an output layer, operating feed-forward in conjunction with backpropagation. The output forecast from 5 min earlier is fed back to the input layer to reduce the error and obtain a more precise forecast. Weather remains the most vital input for this type of ANN modelling. Forecasting errors can grow considerably because of variations in solar irradiation and temperature on the forecast day, affecting the solar power supply accordingly. Prior approximation of solar radiation carries some uncertainty depending on climatic conditions such as temperature, shading conditions, soiling effects, relative humidity, etc. All these environmental factors introduce uncertainty into the prediction of the output parameter. In such a case, approximating the PV output directly can be more suitable than approximating solar radiation. This paper applies Gradient Descent (GD) and Levenberg-Marquardt artificial neural network (LM-ANN) techniques to data recorded at millisecond intervals from a 100 W solar panel.
The essential purpose of this paper is to establish the most useful time horizon for the output forecast of small solar power utilities. It has been observed that a 5 ms to 12 h horizon gives the best short- to medium-term prediction for April. A case study was conducted in the Peer Panjal region. Data collected over four months with various parameters were applied randomly as input to the GD and LM types of artificial neural network and compared with actual solar energy data. The proposed ANN-based algorithm has been used for reliable short-term forecasting. The model output is reported in terms of root mean square error and mean absolute percentage error. The results exhibit an improved agreement between the forecasted and real models. Forecasting solar energy and load variations assists in fulfilling cost-effectiveness objectives.
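The gradient-descent half of the comparison reduces, for a single linear unit, to repeatedly stepping the weights against the MSE gradient. A toy sketch, not the paper's full ANN:

```python
def gd_fit(data, lr=0.05, epochs=500):
    """Batch gradient descent for y ≈ w*x + b on (x, y) pairs."""
    w = b = 0.0
    n = len(data)
    for _ in range(epochs):
        # gradients of mean squared error w.r.t. w and b
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / n
        gb = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Levenberg-Marquardt, the other technique compared, replaces these fixed-step updates with damped Gauss-Newton steps and typically converges in far fewer iterations on small networks.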

18.
Inform Med Unlocked ; 38: 101235, 2023.
Article in English | MEDLINE | ID: mdl-37033412

ABSTRACT

In this paper, a mathematical model for assessing the impact of COVID-19 on tuberculosis disease is proposed and analysed. There is evidence that patients with tuberculosis (TB) have a higher chance of developing SARS-CoV-2 infection. The mathematical model is qualitatively and quantitatively analysed using the theory of stability analysis. The dynamic system shows an endemic equilibrium point, which is stable when R0 < 1 and unstable when R0 > 1. The global stability of the endemic point is analysed by constructing a Lyapunov function. The dynamics also exhibit bifurcation behaviour. Optimal control theory is used to find an optimal solution to the problem in the mathematical model. A sensitivity analysis is performed to identify the parameters that affect the reproduction number the most. Numerical simulation is carried out to assess the effect of various biological parameters on the dynamics of both the tuberculosis and COVID-19 classes. Our simulation results show that the COVID-19 and TB infections can be mitigated by controlling the transmission rate γ.
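The reproduction-number threshold behaviour discussed above can be demonstrated numerically with a forward-Euler integration of the classic SIR system, where R0 = beta/gamma. This is a generic single-disease sketch, not the paper's coupled TB-COVID model:

```python
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, days=400, dt=0.1):
    """Forward-Euler integration of SIR fractions; returns the peak
    infected fraction and the final infected fraction."""
    s, i = s0, i0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i            # new infections leave S
        di = beta * s * i - gamma * i  # infections grow, recoveries leave I
        s += ds * dt
        i += di * dt
        peak = max(peak, i)
    return peak, i
```

With beta/gamma > 1 the infected fraction rises to a substantial peak before dying out; with beta/gamma < 1 it decays monotonically from the initial seed, which is the mitigation-by-lowering-transmission argument in numerical form.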

19.
J Environ Manage ; 338: 117842, 2023 Jul 15.
Article in English | MEDLINE | ID: mdl-37004487

ABSTRACT

Groundwater vulnerability mapping is essential in environmental management since contamination is increasing with excessive population growth. However, to our knowledge, little research has been dedicated to optimizing groundwater vulnerability models under risk conditions using a robust multi-objective optimization algorithm coupled with a multi-criteria decision-making model (MCDM). This study filled this knowledge gap by developing an innovative hybrid risk-based multi-objective optimization model comprising three distinct models. The first model generated two series of scenarios for rate modifications associated with two common contaminants, Nitrate and Sulfate, based on the susceptibility index (SI) and DRASTICA models. The second model was a multi-objective optimization framework using non-dominated sorting genetic algorithms II and III (NSGA-II and NSGA-III), considering uncertainties in the input rates through the conditional value-at-risk (CVaR) technique. Finally, the third model was a well-known MCDM model, the COmplex PRoportional ASsessment (COPRAS), which identified the best compromise solution among the Pareto-optimal solutions for the weights of the contaminants. Regarding the Sulfate results, although the optimized DRASTICA model led to the same correlation as the initial model, 0.7, the optimized SI model increased the correlation to 0.8 from an initial 0.58. For Nitrate, the optimized SI and optimized DRASTICA models raised the correlation to 0.6 and 0.7, respectively, compared to the initial model's correlation of 0.36. Hence, the highest and lowest correlations among the optimized models were between SI and Sulfate concentration and between SI and Nitrate concentration, respectively.
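NSGA-II and NSGA-III both start from non-dominated sorting: peeling off successive Pareto fronts of objective vectors (here assumed to be minimized). A direct, unoptimized sketch of that first phase:

```python
def non_dominated_sort(points):
    """Split minimisation objective vectors into successive Pareto fronts."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, better in at least one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

Front 0 is the Pareto-optimal set from which a decision model such as COPRAS then picks a compromise; later fronts rank the dominated solutions.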


Subject(s)
Groundwater , Nitrates , Nitrates/analysis , Algorithms , Uncertainty
20.
J Environ Manage ; 334: 117463, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36801802

ABSTRACT

As a critical element in preserving the health of urban populations, water distribution systems (WDSs) must be ready to implement emergency plans when catastrophic events such as contamination events occur. A risk-based simulation-optimization framework (EPANET-NSGA-III) combined with a decision support model (GMCR) is proposed in this study to determine optimal locations for contaminant flushing hydrants under an array of potentially hazardous scenarios. Risk-based analysis using Conditional Value-at-Risk (CVaR)-based objectives can address uncertainties regarding the mode of WDS contamination, thereby providing a robust plan to minimize the associated risks at a 95% confidence level. Conflict modeling by GMCR achieved an optimal compromise solution within the Pareto front by identifying a final stable consensus among the decision-makers involved. A novel hybrid contamination event grouping-parallel water quality simulation technique was incorporated into the integrated model to reduce model runtime, the main deterrent in optimization-based methods. The nearly 80% reduction in model runtime made the proposed model a viable solution for online simulation-optimization problems. The framework's capacity to address real-world problems was evaluated for the WDS operating in Lamerd, a city in Fars Province, Iran. Results showed that the proposed framework was capable of highlighting a single flushing strategy, which not only optimally reduced the risks associated with contamination events but also provided acceptable coverage against such threats, flushing 35-61.3% of the input contamination mass on average and reducing the average time-to-return to normal conditions by 14.4-60.2%, while employing less than half of the initial potential hydrants.
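CVaR at the 95% level, as used in this framework, is the mean of the worst 5% of outcomes, a tail-focused alternative to optimizing the average. A one-function sketch on sampled losses:

```python
def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: mean of the worst (1 - alpha) fraction
    of sampled losses (at least one sample is always included)."""
    ordered = sorted(losses, reverse=True)  # worst outcomes first
    k = max(1, int(round(len(ordered) * (1 - alpha))))
    return sum(ordered[:k]) / k
```

Minimizing CVaR rather than the mean pushes the optimizer to hedge against the rare, severe contamination scenarios instead of just the typical ones.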


Subject(s)
Computer Simulation , Water Pollution , Water Supply , Cities , Water Pollution/prevention & control , Water Quality , Iran , Water Supply/methods