Results 1 - 20 of 228
1.
Article in English | MEDLINE | ID: mdl-39086252

ABSTRACT

Estimation of mental workload from electroencephalogram (EEG) signals aims to accurately measure the cognitive demands placed on an individual during multitasking mental activities. By analyzing the brain activity of the subject, we can determine the level of mental effort required to perform a task and optimize the workload to prevent cognitive overload or underload. This information can be used to enhance performance and productivity in various fields such as healthcare, education, and aviation. In this paper, we propose a method that uses EEG and deep neural networks to estimate the mental workload of human subjects during multitasking mental activities. Notably, our proposed method employs subject-independent classification. We use the "STEW" dataset, which consists of two tasks, namely "No task" and "simultaneous capacity (SIMKAP)-based multitasking activity". We estimate the different workload levels of the two tasks using a composite framework consisting of brain connectivity and deep neural networks. After the initial preprocessing of the EEG signals, the relationships between the 14 EEG channels are analyzed to evaluate effective brain connectivity. This assessment illustrates the information flow between various brain regions using the direct Directed Transfer Function (dDTF) method. Then, we propose a deep hybrid model based on pre-trained Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) for the classification of workload levels. The proposed deep model achieved an accuracy of 83.12% under the subject-independent leave-subject-out (LSO) approach. The pre-trained CNN + LSTM approach has proven to be an accurate method for assessing mental workload from EEG data.
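
Below is a minimal sketch, in Keras/TensorFlow, of the kind of pre-trained-CNN + LSTM hybrid described above, evaluated with a leave-one-subject-out split. The paper's dDTF connectivity features and its specific pre-trained CNN are not reproduced; an ImageNet-pre-trained MobileNetV2 stands in as the frozen per-window feature extractor, and the input shape, class count, and training settings are illustrative assumptions.

```python
# Hypothetical sketch: frozen pre-trained CNN applied per window + LSTM across
# windows for workload classification; shapes and settings are assumptions.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from tensorflow.keras import layers, models, applications

N_WIN, H, W, N_CLASSES = 10, 96, 96, 2            # assumed input dimensions

def build_model():
    # ImageNet-pre-trained MobileNetV2 as a frozen per-window feature extractor.
    cnn = applications.MobileNetV2(weights="imagenet", include_top=False,
                                   pooling="avg", input_shape=(H, W, 3))
    cnn.trainable = False
    inp = layers.Input(shape=(N_WIN, H, W, 3))
    x = layers.TimeDistributed(cnn)(inp)          # one feature vector per window
    x = layers.LSTM(64)(x)                        # temporal integration across windows
    out = layers.Dense(N_CLASSES, activation="softmax")(x)
    m = models.Model(inp, out)
    m.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    return m

# Leave-one-subject-out evaluation on synthetic placeholder data.
X = np.random.rand(40, N_WIN, H, W, 3).astype("float32")
y = np.random.randint(0, N_CLASSES, 40)
groups = np.repeat(np.arange(8), 5)               # 8 subjects x 5 trials each
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    model = build_model()                         # fresh model per held-out subject
    model.fit(X[train_idx], y[train_idx], epochs=1, batch_size=8, verbose=0)
    print("held-out accuracy:", model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
```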

2.
Sci Rep ; 14(1): 17968, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095527

ABSTRACT

As Europe integrates more renewable energy resources, notably offshore wind power, into its super meshed grid, the demand for reliable long-distance High Voltage Direct Current (HVDC) transmission systems has surged. This paper addresses the intricacies of HVDC systems built upon Modular Multi-Level Converters (MMCs), especially concerning the rapid rise of DC fault currents. We propose a novel fault identification and classification scheme, applicable to DC transmission lines only, that employs Long Short-Term Memory (LSTM) networks integrated with the Discrete Wavelet Transform (DWT) for feature extraction. Our LSTM-based algorithm operates effectively under challenging environmental conditions and ensures detection of high-resistance faults. A unique three-level relay system with multiple time windows (1 ms, 1.5 ms, and 2 ms) ensures accurate fault detection over large distances. Bayesian Optimization is employed for hyperparameter tuning, streamlining the model's training process. The study shows that our proposed framework exhibits 100% resilience against external faults and disturbances, achieving an average recognition accuracy of 99.04% in diverse testing scenarios. Unlike traditional schemes that rely on multiple manual thresholds, our approach utilizes a single intelligently tuned model to detect faults with resistances up to 480 ohms, enhancing the efficiency and robustness of DC grid protection.
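
A hedged sketch of the DWT-plus-LSTM idea described above: each measurement window is split into segments, per-band wavelet energies are extracted with PyWavelets, and an LSTM classifies the resulting feature sequence. The wavelet family, decomposition level, window length, class count, relay logic, and Bayesian hyperparameter tuning are not taken from the paper and are assumptions for illustration.

```python
# Hypothetical sketch: per-segment DWT band energies feed an LSTM classifier.
# Wavelet, level, window length, and class count are illustrative assumptions.
import numpy as np
import pywt
from tensorflow.keras import layers, models

def dwt_features(segment, wavelet="db4", level=3):
    """RMS of each wavelet band (approximation + details) for one segment."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    return np.array([np.sqrt(np.mean(c ** 2)) for c in coeffs])

def window_to_sequence(window, n_segments=10):
    """Split a measurement window into segments; one feature vector per segment."""
    return np.stack([dwt_features(seg) for seg in np.array_split(window, n_segments)])

X_raw = np.random.randn(200, 800)                     # 200 placeholder current windows
X = np.stack([window_to_sequence(w) for w in X_raw])  # (200, 10 segments, 4 bands)
y = np.random.randint(0, 3, 200)                      # e.g. 3 assumed fault classes

model = models.Sequential([
    layers.Input(shape=X.shape[1:]),
    layers.LSTM(32),
    layers.Dense(3, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```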

3.
BMC Med Inform Decis Mak ; 24(1): 198, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039464

ABSTRACT

Genes, expressed as sequences of nucleotides, are susceptible to mutations, some of which can lead to cancer. Machine learning and deep learning methods have emerged as vital tools in identifying mutations associated with cancer. Thyroid cancer ranks as the 5th most prevalent cancer in the USA, with thousands diagnosed annually. This paper presents an ensemble learning model leveraging deep learning techniques such as Long Short-Term Memory (LSTM), Gated Recurrent Units (GRUs), and Bi-directional LSTM (Bi-LSTM) to detect thyroid cancer mutations early. The model is trained on a dataset sourced from asia.ensembl.org and IntOGen.org, consisting of 633 samples with 969 mutations across 41 genes, collected from individuals of various demographics. Feature extraction encompasses techniques including Hahn moments, central moments, raw moments, and various matrix-based methods. Evaluation employs three testing methods: the self-consistency test (SCT), the independent set test (IST), and the 10-fold cross-validation test (10-FCVT). The proposed ensemble learning model demonstrates promising performance, achieving 96% accuracy in the independent set test (IST). Statistical measures such as training accuracy, testing accuracy, recall, sensitivity, specificity, the Matthews Correlation Coefficient (MCC), loss, F1 score, and Cohen's kappa are utilized for comprehensive evaluation.
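
A minimal sketch of a soft-voting ensemble of LSTM, GRU, and Bi-LSTM branches of the kind the abstract describes, assuming Keras. The moment-based feature extraction (Hahn, central, and raw moments), the real gene-mutation data, and the exact ensemble rule are not reproduced; shapes and labels are placeholders.

```python
# Hypothetical sketch: soft-voting ensemble of LSTM, GRU, and Bi-LSTM branches.
# Feature shapes and labels are placeholders, not the paper's mutation data.
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES, N_CLASSES = 20, 16, 2            # assumed shapes

def recurrent_model(cell):
    return models.Sequential([
        layers.Input(shape=(TIMESTEPS, FEATURES)),
        cell,
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

members = [
    recurrent_model(layers.LSTM(32)),
    recurrent_model(layers.GRU(32)),
    recurrent_model(layers.Bidirectional(layers.LSTM(32))),
]

X = np.random.rand(100, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, 100)
for m in members:
    m.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    m.fit(X, y, epochs=2, verbose=0)

# Soft voting: average the class probabilities of the three members.
probs = np.mean([m.predict(X, verbose=0) for m in members], axis=0)
print("ensemble training accuracy:", (probs.argmax(axis=1) == y).mean())
```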


Subject(s)
Deep Learning , Mutation , Thyroid Neoplasms , Humans , Thyroid Neoplasms/genetics , Thyroid Neoplasms/diagnosis , Disease Progression
4.
Med Biol Eng Comput ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39028484

ABSTRACT

Stroke is a neurological condition that usually results in the loss of voluntary control of body movements, making it difficult for individuals to perform activities of daily living (ADLs). Brain-computer interfaces (BCIs) integrated into robotic systems, such as motorized mini exercise bikes (MMEBs), have been demonstrated to be suitable for restoring gait-related functions. However, kinematic estimation of continuous motion in BCI systems based on electroencephalography (EEG) remains a challenge for the scientific community. This study proposes a comparative analysis to evaluate two artificial neural network (ANN)-based decoders to estimate three lower-limb kinematic parameters: x- and y-axis position of the ankle and knee joint angle during pedaling tasks. Long short-term memory (LSTM) was used as a recurrent neural network (RNN), which reached Pearson correlation coefficient (PCC) scores close to 0.58 by reconstructing kinematic parameters from the EEG features on the delta band using a time window of 250 ms. These estimates were evaluated through kinematic variance analysis, where our proposed algorithm showed promising results for identifying pedaling and rest periods, which could increase the usability of classification tasks. Additionally, negative linear correlations were found between pedaling speed and decoder performance, indicating that kinematic parameters at slower speeds may be easier to estimate. The results support the conclusion that deep learning (DL)-based methods are feasible for estimating lower-limb kinematic parameters during pedaling tasks using EEG signals. This study opens new possibilities for implementing more robust controllers for MMEBs and BCIs based on continuous decoding, which may allow for maximizing the degrees of freedom and for personalized rehabilitation.
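
A minimal sketch, assuming Keras and SciPy, of an LSTM regressor that maps short windows of EEG-derived features to continuous lower-limb kinematic targets and reports Pearson correlation coefficients, as in the study above. Window length, feature count, and network size are illustrative assumptions, not the authors' settings.

```python
# Hypothetical sketch: LSTM regression from windowed EEG features to continuous
# kinematics, scored with Pearson's r. Shapes and sizes are assumptions.
import numpy as np
from scipy.stats import pearsonr
from tensorflow.keras import layers, models

WIN, CHANNELS, TARGETS = 32, 8, 3        # ~250 ms window; ankle x, ankle y, knee angle
X = np.random.rand(500, WIN, CHANNELS).astype("float32")   # placeholder delta-band features
Y = np.random.rand(500, TARGETS).astype("float32")         # placeholder kinematics

model = models.Sequential([
    layers.Input(shape=(WIN, CHANNELS)),
    layers.LSTM(64),
    layers.Dense(TARGETS),               # linear outputs for continuous targets
])
model.compile("adam", "mse")
model.fit(X, Y, epochs=2, batch_size=32, verbose=0)

Y_hat = model.predict(X, verbose=0)
for i, name in enumerate(["ankle_x", "ankle_y", "knee_angle"]):
    r, _ = pearsonr(Y[:, i], Y_hat[:, i])
    print(f"PCC {name}: {r:.2f}")
```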

5.
Micromachines (Basel) ; 15(7)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39064375

ABSTRACT

Morse code recognition plays an important role in human-machine interaction applications. In this paper, based on a carbon nanotube (CNT) and polyurethane sponge (PUS) composite material, a flexible tactile CNT/PUS sensor with excellent piezoresistive characteristics is developed for detecting Morse code precisely. Thirty-six types of Morse code, including 26 letters (A-Z) and 10 numbers (0-9), are applied to the sensor. Each Morse code was repeated 60 times, and 2160 (36 × 60) groups of voltage time-sequential signals were collected to construct the dataset. Then, smoothing and normalization methods are used to preprocess and optimize the raw data. On this basis, a long short-term memory (LSTM) model with excellent feature extraction and self-adaptive ability is constructed to precisely recognize the different types of Morse code detected by the sensor. The recognition accuracies for the 10-number Morse code, the 26-letter Morse code, and the full 36-type Morse code are 99.17%, 95.37%, and 93.98%, respectively. Meanwhile, Gated Recurrent Unit (GRU), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Random Forest (RF) models are built to distinguish the 36-type Morse code (letters A-Z and numbers 0-9) on the same dataset and achieve accuracies of 91.37%, 88.88%, 87.04%, and 90.97%, respectively, all lower than the 93.98% accuracy of the LSTM model. The experimental results show that the CNT/PUS sensor can precisely detect the tactile features of Morse code, and that the LSTM model is highly effective in recognizing Morse code detected by the CNT/PUS sensor.
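
A minimal sketch of the signal pipeline described above, assuming Keras: moving-average smoothing and min-max normalization of the voltage sequences, followed by an LSTM classifier over the 36 Morse-code classes. Sequence length, smoothing window, and layer sizes are assumptions; the sensor data are replaced by random placeholders.

```python
# Hypothetical sketch: smoothing + min-max normalization, then an LSTM over
# voltage sequences for 36 Morse-code classes. Data are random placeholders.
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN, N_CLASSES = 300, 36

def preprocess(signal, k=5):
    smoothed = np.convolve(signal, np.ones(k) / k, mode="same")   # moving average
    rng = smoothed.max() - smoothed.min()
    return (smoothed - smoothed.min()) / (rng + 1e-9)             # min-max normalize

X_raw = np.random.rand(2160, SEQ_LEN)                 # 36 codes x 60 repetitions
X = np.stack([preprocess(s) for s in X_raw])[..., None]           # (N, SEQ_LEN, 1)
y = np.repeat(np.arange(N_CLASSES), 60)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 1)),
    layers.LSTM(64),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```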

6.
Mar Pollut Bull ; 206: 116698, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39002215

ABSTRACT

The escalating growth of the global population has led to degraded water quality, particularly in seawater environments. Water quality monitoring is crucial to understanding the dynamic changes and implementing effective management strategies. In this study, water samples from the southwestern regions of Iran were spatially analyzed in a GIS environment using geostatistical methods. Subsequently, a water quality map was generated employing large and small fuzzy membership functions. Additionally, advanced prediction models using neural networks were employed to forecast future water pollution trends. Fuzzy method results indicated higher pollution levels in the northern regions of the study area compared to the southern parts. Furthermore, the water quality prediction models demonstrated that the LSTM model exhibited superior predictive performance (R2 = 0.93, RMSE = 0.007). The findings also underscore the impact of urbanization, power plant construction (2010 to 2020), and inadequate urban wastewater management on water pollution in the studied region.

7.
Environ Res ; 258: 119248, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38823615

ABSTRACT

To ensure the structural integrity of concrete and prevent unanticipated fracturing, real-time monitoring of early-age concrete's strength development is essential, mainly through advanced techniques such as nano-enhanced sensors. The piezoelectric-based electro-mechanical impedance (EMI) method with nano-enhanced sensors is emerging as a practical solution for such monitoring requirements. This study presents a strength estimation method that combines Non-Destructive Testing (NDT) techniques with Long Short-Term Memory (LSTM) networks and artificial neural networks (ANNs) in a hybrid framework (NDT-LSTMs-ANN), incorporating several types of concrete strength-related factors. Input data include the water-to-cement ratio, temperature, curing time, and maturity based on interior temperature, allowing experimental monitoring of concrete strength development from the early stages of hydration and casting through the final stages of hardening 28 days after casting. The study investigated the impact of various factors on concrete strength development, utilizing a cutting-edge approach that combines traditional models with nano-enhanced piezoelectric sensors and the nanotechnology-enhanced NDT-LSTMs-ANN. The results demonstrate that the hybrid model provides highly accurate concrete strength estimation for construction safety and efficiency. Adopting the piezoelectric-based EMI technique with these advanced sensors offers a viable and effective monitoring solution, presenting a significant leap forward for the construction industry's structural health monitoring practices.

8.
Network ; : 1-36, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38855971

ABSTRACT

Stock market prediction is a significant task, and successful prediction of stock prices helps in making correct decisions. It remains a major challenge due to noisy, chaotic, and non-stationary data. In this research, a support vector machine (SVM) is devised for effective stock market prediction. At first, the input time series data are considered, and the data are pre-processed by employing a standard scaler. Then, the time-intrinsic features are extracted, and suitable features are selected in the feature selection stage by eliminating other features using recursive feature elimination. Afterwards, Long Short-Term Memory (LSTM)-based prediction is done, wherein the LSTM is trained employing Aquila circle-inspired optimization (ACIO), newly introduced by merging the Aquila optimizer (AO) with the circle-inspired optimization algorithm (CIOA). On the other hand, delay-based matrix formation is conducted by considering the input time series data. After that, convolutional neural network (CNN)-based prediction is performed, where the CNN is tuned by the same ACIO. Finally, stock market prediction is executed utilizing the SVM by fusing the predicted outputs attained from the LSTM-based and CNN-based predictions. Furthermore, the SVM attains better outcomes, with a minimum mean absolute percentage error (MAPE) of about 0.378 and a normalized root-mean-square error (RMSE) of about 0.294.
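
A hedged sketch of the fusion step described above: LSTM-based and CNN-based predictions are stacked and combined by a support vector model (support vector regression here, since the target is continuous). The ACIO optimizer, recursive feature elimination, and delay-based matrix formation are not reproduced; data shapes and hyperparameters are assumptions.

```python
# Hypothetical sketch: stack LSTM and CNN predictions and fuse them with a
# support vector regressor. ACIO tuning and feature selection are omitted.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_percentage_error
from tensorflow.keras import layers, models

WIN = 30
X = np.random.rand(400, WIN, 1).astype("float32")     # windowed, scaled price series
y = np.random.rand(400).astype("float32")             # next-step target (placeholder)

lstm = models.Sequential([layers.Input(shape=(WIN, 1)),
                          layers.LSTM(32), layers.Dense(1)])
cnn = models.Sequential([layers.Input(shape=(WIN, 1)),
                         layers.Conv1D(16, 3, activation="relu"),
                         layers.GlobalAveragePooling1D(), layers.Dense(1)])
for m in (lstm, cnn):
    m.compile("adam", "mse")
    m.fit(X, y, epochs=2, verbose=0)

# SVM-level fusion of the two base predictions.
stacked = np.column_stack([lstm.predict(X, verbose=0).ravel(),
                           cnn.predict(X, verbose=0).ravel()])
svm = SVR().fit(stacked, y)
print("MAPE:", mean_absolute_percentage_error(y, svm.predict(stacked)))
```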

9.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38856168

ABSTRACT

Nucleic acid-binding proteins (NABPs), including DNA-binding proteins (DBPs) and RNA-binding proteins (RBPs), play important roles in essential biological processes. To facilitate functional annotation and accurate prediction of different types of NABPs, many machine learning-based computational approaches have been developed. However, the datasets used for training and testing, as well as the prediction scopes in these studies, have limited their applications. In this paper, we developed new strategies to overcome these limitations by generating more accurate and robust datasets and developing deep learning-based methods, including both hierarchical and multi-class approaches, to predict the types of NABPs for any given protein. The deep learning models employ two convolutional neural network layers and one long short-term memory layer. Our approaches outperform existing DBP and RBP predictors with a balanced prediction between DBPs and RBPs, and are more practically useful in identifying novel NABPs. The multi-class approach greatly improves the prediction accuracy of DBPs and RBPs, especially for DBPs, with an improvement of ~12%. Moreover, we explored the prediction accuracy of single-stranded DNA-binding proteins and their effect on the overall prediction accuracy of NABP predictions.
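
A minimal sketch, assuming Keras, of the stated layer layout (two convolutional layers followed by one LSTM layer) applied to one-hot-encoded protein sequences for a multi-class output such as DBP vs. RBP vs. non-NABP. The encoding scheme, pooling layers, sequence length, and layer sizes are illustrative assumptions rather than the authors' configuration.

```python
# Hypothetical sketch: two Conv1D layers followed by one LSTM layer over
# one-hot-encoded protein sequences; encoding and sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

MAX_LEN, N_AA, N_CLASSES = 500, 20, 3                 # e.g. DBP / RBP / non-NABP

def one_hot(seq, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Fixed-length one-hot encoding of an amino-acid sequence (truncate/pad)."""
    m = np.zeros((MAX_LEN, N_AA), dtype="float32")
    for i, ch in enumerate(seq[:MAX_LEN]):
        if ch in alphabet:
            m[i, alphabet.index(ch)] = 1.0
    return m

model = models.Sequential([
    layers.Input(shape=(MAX_LEN, N_AA)),
    layers.Conv1D(64, 7, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.LSTM(32),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

X = np.stack([one_hot("MKT" * 100), one_hot("GAVL" * 80)])   # toy sequences
y = np.array([0, 1])
model.fit(X, y, epochs=1, verbose=0)
```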


Subject(s)
Computational Biology , DNA-Binding Proteins , Deep Learning , RNA-Binding Proteins , RNA-Binding Proteins/metabolism , DNA-Binding Proteins/metabolism , Computational Biology/methods , Neural Networks, Computer , Humans
10.
Sensors (Basel) ; 24(12)2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38931706

ABSTRACT

The remarkable human ability to predict others' intent during physical interactions develops at a very early age and is crucial for development. Intent prediction, defined as the simultaneous recognition and generation of human-human interactions, has many applications such as in assistive robotics, human-robot interaction, video and robotic surveillance, and autonomous driving. However, models for solving the problem are scarce. This paper proposes two attention-based agent models to predict the intent of interacting 3D skeletons by sampling them via a sequence of glimpses. The novelty of these agent models is that they are inherently multimodal, consisting of perceptual and proprioceptive pathways. The action (attention) is driven by the agent's generation error, and not by reinforcement. At each sampling instant, the agent completes the partially observed skeletal motion and infers the interaction class. It learns where and what to sample by minimizing the generation and classification errors. Extensive evaluation of our models is carried out on benchmark datasets and in comparison to a state-of-the-art model for intent prediction, which reveals that classification and generation accuracies of one of the proposed models are comparable to those of the state of the art even though our model contains fewer trainable parameters. The insights gained from our model designs can inform the development of efficient agents, the future of artificial intelligence (AI).


Subject(s)
Algorithms , Humans , Robotics/methods , Attention/physiology
11.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931751

ABSTRACT

This work addresses the challenge of classifying multiclass visual EEG signals into 40 classes for brain-computer interface applications using deep learning architectures. The visual multiclass classification approach offers BCI applications a significant advantage since it allows the supervision of more than one BCI interaction, considering that each class label supervises a BCI task. However, because of the nonlinearity and nonstationarity of EEG signals, using multiclass classification based on EEG features remains a significant challenge for BCI systems. In the present work, mutual information-based discriminant channel selection and minimum-norm estimate algorithms were implemented to select discriminant channels and enhance the EEG data. Hence, deep EEGNet and convolutional recurrent neural networks were separately implemented to classify the EEG data for image visualization into 40 labels. Using the k-fold cross-validation approach, average classification accuracies of 94.8% and 89.8% were obtained by implementing the aforementioned network architectures. The satisfactory results obtained with this method offer a new implementation opportunity for multitask embedded BCI applications utilizing a reduced number of both channels (<50%) and network parameters (<110 K).
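
A minimal sketch of mutual-information-based discriminant channel selection, assuming scikit-learn: each channel is scored by the mutual information between a simple per-trial log-power feature and the class label, and the top-ranked channels are retained. The minimum-norm estimate step and the EEGNet / convolutional-recurrent classifiers are not reproduced; shapes and the number of retained channels are assumptions.

```python
# Hypothetical sketch: rank EEG channels by the mutual information between a
# per-trial log-power feature and the class label, then keep the top half.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

N_TRIALS, N_CH, N_SAMPLES, KEEP = 400, 64, 256, 32
X = np.random.randn(N_TRIALS, N_CH, N_SAMPLES)        # placeholder EEG trials
y = np.random.randint(0, 40, N_TRIALS)                # 40 visual classes

power = np.log(np.mean(X ** 2, axis=2))               # (trials, channels) feature

mi = mutual_info_classif(power, y, random_state=0)    # score per channel
selected = np.sort(np.argsort(mi)[::-1][:KEEP])       # most informative channels
print("selected channel indices:", selected)

X_reduced = X[:, selected, :]                         # pass on to EEGNet / CRNN
print("reduced data shape:", X_reduced.shape)
```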


Subject(s)
Algorithms , Brain-Computer Interfaces , Deep Learning , Electroencephalography , Neural Networks, Computer , Electroencephalography/methods , Humans , Signal Processing, Computer-Assisted
12.
Sci Rep ; 14(1): 12033, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38797765

ABSTRACT

High-speed side-view videos of sliding drops enable researchers to investigate drop dynamics and surface properties. However, understanding the physics of sliding requires knowledge of the drop width, for which a front-view perspective of the drop is necessary. In particular, the drop's width is a crucial parameter owing to its association with the friction force. Incorporating extra cameras or mirrors to monitor changes in the width of drops from a front-view perspective is cumbersome and limits the viewing area. This limitation impedes a comprehensive analysis of sliding drops, especially when they interact with surface defects. Our study explores the use of various regression and multivariate sequence analysis (MSA) models to estimate the drop width at a solid surface solely from side-view videos. This approach eliminates the need to incorporate additional equipment into the experimental setup. In addition, it ensures an unlimited viewing area of sliding drops. The Long Short-Term Memory (LSTM) model with a sliding window size of 20 has the best performance, with the lowest root mean square error (RMSE) of 67 µm. Within the spectrum of drop widths in our dataset, ranging from 1.6 to 4.4 mm, this RMSE indicates that we can predict the width of sliding drops with an error of 2.4%. Furthermore, the applied LSTM model provides a drop width across the whole sliding length of 5 cm, previously unattainable.

13.
Sensors (Basel) ; 24(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38793895

ABSTRACT

Brain-computer interface (BCI) systems include signal acquisition, preprocessing, feature extraction, classification, and an application phase. In fNIRS-BCI systems, deep learning (DL) algorithms play a crucial role in enhancing accuracy. Unlike traditional machine learning (ML) classifiers, DL algorithms eliminate the need for manual feature extraction. DL neural networks automatically extract hidden patterns/features within a dataset to classify the data. In this study, a hand-gripping (closing and opening) two-class motor activity dataset from twenty healthy participants is acquired, and an integrated contextual gate network (ICGN) algorithm (proposed) is applied to that dataset to enhance the classification accuracy. The proposed algorithm extracts the features from the filtered data and generates the patterns based on the information from the previous cells within the network. Accordingly, classification is performed based on the similar generated patterns within the dataset. The accuracy of the proposed algorithm is compared with the long short-term memory (LSTM) and bidirectional long short-term memory (Bi-LSTM). The proposed ICGN algorithm yielded a classification accuracy of 91.23 ± 1.60%, which is significantly (p < 0.025) higher than the 84.89 ± 3.91% and 88.82 ± 1.96% achieved by LSTM and Bi-LSTM, respectively. An open-access, three-class (right- and left-hand finger tapping and dominant foot tapping) dataset of 30 subjects is used to validate the proposed algorithm. The results show that ICGN can be efficiently used for the classification of two- and three-class problems in fNIRS-based BCI applications.


Subject(s)
Algorithms , Brain-Computer Interfaces , Deep Learning , Neural Networks, Computer , Spectroscopy, Near-Infrared , Humans , Spectroscopy, Near-Infrared/methods , Male , Adult , Female , Young Adult , Brain/physiology , Brain/diagnostic imaging
14.
Sensors (Basel) ; 24(10)2024 May 18.
Article in English | MEDLINE | ID: mdl-38794070

ABSTRACT

The production of multivariate time-series data facilitates the continuous monitoring of production assets. Modelling multivariate time series can reveal the ways in which parameters evolve as well as the influences amongst themselves. These data can be used in tandem with artificial intelligence methods to create insight into the condition of production equipment, hence potentially increasing the sustainability of existing manufacturing and production systems by optimizing resource utilization, waste, and production downtime. In this context, a predictive maintenance method is proposed based on the combination of LSTM-Autoencoders and a Transformer encoder in order to enable the forecasting of asset failures through spatial and temporal time series. These neural networks are implemented in a software prototype. The dataset used for training and testing the models is derived from a metal processing industry case study. Ultimately, the goal is to train a remaining useful life (RUL) estimation model.
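
A minimal sketch of the LSTM-Autoencoder component described above, assuming Keras: multivariate sensor windows are compressed and reconstructed, and the reconstruction error serves as a condition indicator. The Transformer-encoder branch and the RUL estimation target are omitted; window size, feature count, and the threshold rule are assumptions.

```python
# Hypothetical sketch: LSTM autoencoder over multivariate sensor windows;
# reconstruction error acts as a condition indicator. Sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

WIN, FEATURES = 50, 8
X = np.random.rand(300, WIN, FEATURES).astype("float32")   # "healthy" windows

inp = layers.Input(shape=(WIN, FEATURES))
z = layers.LSTM(16)(inp)                          # compressed representation
x = layers.RepeatVector(WIN)(z)                   # expand back to sequence length
x = layers.LSTM(16, return_sequences=True)(x)
out = layers.TimeDistributed(layers.Dense(FEATURES))(x)

autoencoder = models.Model(inp, out)
autoencoder.compile("adam", "mse")
autoencoder.fit(X, X, epochs=3, batch_size=32, verbose=0)

# Threshold derived from reconstruction errors on healthy data.
errors = np.mean((autoencoder.predict(X, verbose=0) - X) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()
print("anomaly threshold:", threshold)
```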

15.
J Biomed Inform ; 154: 104648, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38692464

ABSTRACT

BACKGROUND: Advances in artificial intelligence (AI) have realized the potential of revolutionizing healthcare, such as predicting disease progression via longitudinal inspection of Electronic Health Records (EHRs) and lab tests from patients admitted to Intensive Care Units (ICU). Although substantial literature exists addressing broad subjects, including the prediction of mortality, length-of-stay, and readmission, studies focusing on forecasting Acute Kidney Injury (AKI), specifically dialysis anticipation like Continuous Renal Replacement Therapy (CRRT) are scarce. The technicality of how to implement AI remains elusive. OBJECTIVE: This study aims to elucidate the important factors and methods that are required to develop effective predictive models of AKI and CRRT for patients admitted to ICU, using EHRs in the Medical Information Mart for Intensive Care (MIMIC) database. METHODS: We conducted a comprehensive comparative analysis of established predictive models, considering both time-series measurements and clinical notes from MIMIC-IV databases. Subsequently, we proposed a novel multi-modal model which integrates embeddings of top-performing unimodal models, including Long Short-Term Memory (LSTM) and BioMedBERT, and leverages both unstructured clinical notes and structured time series measurements derived from EHRs to enable the early prediction of AKI and CRRT. RESULTS: Our multimodal model achieved a lead time of at least 12 h ahead of clinical manifestation, with an Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.888 for AKI and 0.997 for CRRT, as well as an Area Under the Precision Recall Curve (AUPRC) of 0.727 for AKI and 0.840 for CRRT, respectively, which significantly outperformed the baseline models. Additionally, we performed a SHapley Additive exPlanation (SHAP) analysis using the expected gradients algorithm, which highlighted important, previously underappreciated predictive features for AKI and CRRT. CONCLUSION: Our study revealed the importance and the technicality of applying longitudinal, multimodal modeling to improve early prediction of AKI and CRRT, offering insights for timely interventions. The performance and interpretability of our model indicate its potential for further assessment towards clinical applications, to ultimately optimize AKI management and enhance patient outcomes.
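
A hedged sketch of the late-fusion idea described above, assuming Keras and scikit-learn: an LSTM branch embeds the structured time-series measurements, a pre-computed fixed-size vector stands in for the BioMedBERT note embedding, and the two are concatenated for a binary prediction scored by AUROC. Dimensions, the fusion head, and the synthetic data are illustrative assumptions.

```python
# Hypothetical sketch: LSTM embedding of structured time series fused with a
# fixed-size note embedding (stand-in for BioMedBERT) for a binary outcome.
import numpy as np
from sklearn.metrics import roc_auc_score
from tensorflow.keras import layers, models

T, LABS, NOTE_DIM = 48, 20, 768        # 48 hourly steps, 20 lab features (assumed)

ts_in = layers.Input(shape=(T, LABS), name="labs")
note_in = layers.Input(shape=(NOTE_DIM,), name="note_embedding")
ts_emb = layers.LSTM(64)(ts_in)                        # structured-data branch
note_emb = layers.Dense(64, activation="relu")(note_in)
fused = layers.Concatenate()([ts_emb, note_emb])
out = layers.Dense(1, activation="sigmoid")(fused)     # e.g. AKI within the lead time

model = models.Model([ts_in, note_in], out)
model.compile("adam", "binary_crossentropy")

X_ts = np.random.rand(256, T, LABS).astype("float32")
X_note = np.random.rand(256, NOTE_DIM).astype("float32")  # placeholder note vectors
y = np.random.randint(0, 2, 256)
model.fit([X_ts, X_note], y, epochs=2, batch_size=32, verbose=0)
print("AUROC:", roc_auc_score(y, model.predict([X_ts, X_note], verbose=0).ravel()))
```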


Subject(s)
Acute Kidney Injury , Electronic Health Records , Intensive Care Units , Acute Kidney Injury/therapy , Humans , Longitudinal Studies , Renal Replacement Therapy , Artificial Intelligence , Forecasting , Length of Stay , Male , Databases, Factual , Female
16.
Heliyon ; 10(7): e27860, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38689959

ABSTRACT

Time series forecasting across different domains has received massive attention as it eases intelligent decision-making activities. Recurrent neural networks and various deep learning algorithms have been applied to modeling and forecasting multivariate time series data. Due to intricate non-linear patterns and significant variations in the randomness of characteristics across various categories of real-world time series data, achieving effectiveness and robustness simultaneously poses a considerable challenge for specific deep-learning models. We have proposed a novel prediction framework with a multi-phase feature selection technique, a long short-term memory-based autoencoder, and a temporal convolution-based autoencoder to fill this gap. The multi-phase feature selection is applied to retrieve the optimal feature subset and the optimal lag window length for different features. Moreover, a customized stacked autoencoder strategy is employed in the model. The first autoencoder is used to resolve the random weight initialization problem, and the second autoencoder models the temporal relations between non-linearly correlated features with convolution networks and recurrent neural networks. Finally, the model's ability to generalize, predict accurately, and perform effectively is validated through experimentation with three distinct real-world time series datasets: Energy Appliances, Beijing PM2.5 Concentration, and Solar Radiation. The Energy Appliances dataset consists of 29 attributes with a training size of 15,464 instances and a testing size of 4239 instances. The Beijing PM2.5 Concentration dataset has 18 attributes, with 34,952 instances in the training set and 8760 instances in the testing set. The Solar Radiation dataset comprises 11 attributes, with 22,857 instances in the training set and 9797 instances in the testing set. The experimental setup evaluated the forecasting models using two error measures, root mean square error and mean absolute error, calculated on the identical scale of the data to ensure robust evaluation. The results demonstrate the superiority of the proposed model compared to existing models, as evidenced by significant advantages in these metrics. For the PM2.5 air quality data, the proposed model's mean absolute error is 7.51 versus 12.45, an improvement of about 40%. Similarly, the mean square error for the dataset is improved from 23.75 to 11.62, an improvement of about 51%. For the solar radiation dataset, the proposed model yielded an improvement of about 34.7% in mean squared error and about 75% in mean absolute error. The proposed framework demonstrates outstanding generalization capabilities and outperforms existing models on datasets spanning multiple domains.

17.
medRxiv ; 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38559064

ABSTRACT

Background: Advances in artificial intelligence (AI) have realized the potential of revolutionizing healthcare, such as predicting disease progression via longitudinal inspection of Electronic Health Records (EHRs) and lab tests from patients admitted to Intensive Care Units (ICU). Although substantial literature exists addressing broad subjects, including the prediction of mortality, length-of-stay, and readmission, studies focusing on forecasting Acute Kidney Injury (AKI), specifically dialysis anticipation like Continuous Renal Replacement Therapy (CRRT) are scarce. The technicality of how to implement AI remains elusive. Objective: This study aims to elucidate the important factors and methods that are required to develop effective predictive models of AKI and CRRT for patients admitted to ICU, using EHRs in the Medical Information Mart for Intensive Care (MIMIC) database. Methods: We conducted a comprehensive comparative analysis of established predictive models, considering both time-series measurements and clinical notes from MIMIC-IV databases. Subsequently, we proposed a novel multi-modal model which integrates embeddings of top-performing unimodal models, including Long Short-Term Memory (LSTM) and BioMedBERT, and leverages both unstructured clinical notes and structured time series measurements derived from EHRs to enable the early prediction of AKI and CRRT. Results: Our multimodal model achieved a lead time of at least 12 hours ahead of clinical manifestation, with an Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.888 for AKI and 0.997 for CRRT, as well as an Area Under the Precision Recall Curve (AUPRC) of 0.727 for AKI and 0.840 for CRRT, respectively, which significantly outperformed the baseline models. Additionally, we performed a SHapley Additive exPlanation (SHAP) analysis using the expected gradients algorithm, which highlighted important, previously underappreciated predictive features for AKI and CRRT. Conclusion: Our study revealed the importance and the technicality of applying longitudinal, multimodal modeling to improve early prediction of AKI and CRRT, offering insights for timely interventions. The performance and interpretability of our model indicate its potential for further assessment towards clinical applications, to ultimately optimize AKI management and enhance patient outcomes.

18.
Heliyon ; 10(6): e27752, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38560675

ABSTRACT

This study worked with Chunghwa Telecom to collect data from 17 rooftop solar photovoltaic plants installed on top of office buildings, warehouses, and computer rooms in northern, central and southern Taiwan from January 2021 to June 2023. A data pre-processing method combining linear regression and K Nearest Neighbor (k-NN) was proposed to estimate missing values for weather and power generation data. Outliers were processed using historical data and parameters highly correlated with power generation volumes were used to train an artificial intelligence (AI) model. To verify the reliability of this data pre-processing method, this study developed multilayer perceptron (MLP) and long short-term memory (LSTM) models to make short-term and medium-term power generation forecasts for the 17 solar photovoltaic plants. Study results showed that the proposed data pre-processing method reduced normalized root mean square error (nRMSE) for short- and medium-term forecasts in the MLP model by 17.47% and 11.06%, respectively, and also reduced the nRMSE for short- and medium-term forecasts in the LSTM model by 20.20% and 8.03%, respectively.
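
A minimal sketch of the k-NN portion of the described pre-processing, assuming scikit-learn and pandas: missing weather and generation values are imputed from similar rows, and a small helper computes the normalized RMSE (nRMSE) used to score forecasts. Column names and values are placeholders; the linear-regression imputation stage and the MLP/LSTM forecasters are not reproduced.

```python
# Hypothetical sketch: k-NN imputation of missing values plus an nRMSE helper.
# Column names and values are placeholders, not Chunghwa Telecom data.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

df = pd.DataFrame({
    "irradiance": [820, np.nan, 640, 710, np.nan, 905],
    "temperature": [31.2, 30.8, np.nan, 29.5, 28.9, 32.0],
    "generation_kwh": [410, 395, 320, np.nan, 300, 455],
})

imputer = KNNImputer(n_neighbors=2)                     # fill gaps from similar rows
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(filled)

def nrmse(y_true, y_pred):
    """RMSE normalized by the observed range, expressed as a percentage."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100 * rmse / (y_true.max() - y_true.min())

print("nRMSE (%):", nrmse(filled["generation_kwh"], filled["generation_kwh"] * 0.97))
```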

19.
Environ Monit Assess ; 196(5): 453, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38619639

ABSTRACT

This study seeks to investigate the impact of COVID-19 lockdown measures on air quality in the city of Mashhad employing two strategies. We initiated our research using basic statistical methods such as paired sample t-tests to compare hourly PM2.5 data in two scenarios: before and during quarantine, and pre- and post-lockdown. This initial analysis provided a broad understanding of potential changes in air quality. Notably, only a modest reduction of 2.40% in PM2.5 was recorded compared to air quality prior to the lockdown period. This finding highlights the wide range of factors that impact the levels of particulate matter in urban settings, with the transportation sector often being widely recognized as one of the principal causes. Nevertheless, throughout the period after the quarantine, a remarkable decrease in air quality was observed, characterized by distinct seasonal patterns, in contrast to previous years. This finding demonstrates a significant correlation between changes in human mobility patterns and their influence on the air quality of urban areas. It also emphasizes the need to use air pollution modeling as a fundamental tool to evaluate and understand these linkages to support long-term plans for reducing air pollution. To obtain a more quantitative understanding, we then employed cutting-edge machine learning methods, such as random forest and long short-term memory algorithms, to accurately determine the effect of the lockdown on PM2.5 levels. Our models demonstrated remarkable efficacy in assessing the pollutant concentration in Mashhad during lockdown measures. The test set yielded an R-squared value of 0.82 for the long short-term memory network model, whereas the random forest model showed a cross-validated R-squared of 0.78. The computational cost for training the LSTM and the RF models across all data was 25 min and 3 s, respectively. In summary, through the integration of statistical methods and machine learning, this research attempts to provide a comprehensive understanding of the impact of human interventions on air quality dynamics.
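
A minimal sketch, assuming scikit-learn, of the random-forest baseline with cross-validated R² reported above; the data are synthetic placeholders and the LSTM counterpart is not reproduced.

```python
# Hypothetical sketch: random-forest baseline with 5-fold cross-validated R^2.
# Features and the PM2.5 proxy target are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 4))          # e.g. temperature, humidity, wind speed, traffic index
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 0.5, 500)   # synthetic PM2.5 proxy

rf = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", round(scores.mean(), 2))
```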


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Communicable Disease Control , Environmental Monitoring , Machine Learning , Particulate Matter
20.
J Environ Manage ; 359: 120931, 2024 May.
Article in English | MEDLINE | ID: mdl-38678895

ABSTRACT

A deep learning architecture, denoted as CNNsLSTM, is proposed for hourly rainfall-runoff modeling in this study. The architecture involves a serial coupling of the one-dimensional convolutional neural network (1D-CNN) and the long short-term memory (LSTM) network. In the proposed framework, multiple layers of the CNN component process long-term hourly meteorological time series data, while the LSTM component handles short-term meteorological time series data and utilizes the extracted features from the 1D-CNN. In order to demonstrate the effectiveness of the proposed approach, it was implemented for hourly rainfall-runoff modeling in the Ishikari River watershed, Japan. A meteorological dataset, including precipitation, air temperature, evapotranspiration, longwave radiation, and shortwave radiation, was utilized as input. The results of the proposed approach (CNNsLSTM) were compared with those of previously proposed deep learning approaches used in hydrologic modeling, such as 1D-CNN, LSTM with only hourly inputs (LSTMwHour), a parallel architecture of 1D-CNN and LSTM (CNNpLSTM), and the LSTM architecture that uses both daily and hourly input data (LSTMwDpH). Meteorological and runoff datasets were separated into training, validation, and test periods to train the deep learning model without overfitting and to evaluate the model with an independent dataset. The proposed approach clearly improved estimation accuracy compared to previously utilized deep learning approaches in rainfall-runoff modeling. In comparison with the observed flows, the median values of the Nash-Sutcliffe efficiency for the test period were 0.455-0.469 for 1D-CNN, 0.639-0.656 for CNNpLSTM, 0.745 for LSTMwHour, 0.831 for LSTMwDpH, and 0.865-0.873 for the proposed CNNsLSTM. Furthermore, the proposed CNNsLSTM reduced the median root mean square error (RMSE) of 1D-CNN by 50.2%-51.4%, of CNNpLSTM by 37.4%-40.8%, of LSTMwHour by 27.3%-29.5%, and of LSTMwDpH by 10.6%-13.4%. In particular, the proposed CNNsLSTM improved the estimates for high flows (≥75th percentile) and peak flows (≥95th percentile). The computational speed of LSTMwDpH is the fastest among the five architectures. Although the computation speed of CNNsLSTM is slower than that of LSTMwDpH, it is still 6.9-7.9 times faster than that of LSTMwHour. Therefore, the proposed CNNsLSTM would be an effective approach for flood management and hydraulic structure design, mainly under climate change conditions that require estimating hourly river flows from meteorological datasets.
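
A hedged sketch of a serially coupled 1D-CNN + LSTM in the spirit of CNNsLSTM, assuming Keras: convolutional layers summarize a long hourly meteorological history, their output is concatenated with a short recent window that feeds the LSTM, and a Nash-Sutcliffe efficiency helper scores simulated flows. Sequence lengths, variable counts, and layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch of a serially coupled 1D-CNN + LSTM rainfall-runoff model.
# Sequence lengths, variables, and layer sizes are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

LONG_T, SHORT_T, METEO = 720, 24, 5       # ~30 days hourly vs. last 24 h, 5 variables

long_in = layers.Input(shape=(LONG_T, METEO), name="long_history")
x = layers.Conv1D(32, 7, strides=2, activation="relu")(long_in)
x = layers.Conv1D(32, 7, strides=2, activation="relu")(x)
long_feat = layers.GlobalAveragePooling1D()(x)            # long-term summary vector

short_in = layers.Input(shape=(SHORT_T, METEO), name="short_history")
# Broadcast the CNN summary to every recent timestep and feed both to the LSTM.
tiled = layers.RepeatVector(SHORT_T)(long_feat)
merged = layers.Concatenate(axis=-1)([short_in, tiled])
h = layers.LSTM(64)(merged)
runoff = layers.Dense(1)(h)                                # hourly discharge estimate

model = models.Model([long_in, short_in], runoff)
model.compile("adam", "mse")
model.summary()

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("NSE of a perfect simulation:", nse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```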


Subject(s)
Neural Networks, Computer , Rain , Hydrology , Models, Theoretical , Japan , Deep Learning