Results 1 - 20 of 37
1.
Sci Rep ; 14(1): 23711, 2024 Oct 10.
Article in English | MEDLINE | ID: mdl-39390008

ABSTRACT

The oil and gas industry has undergone a comprehensive digital transformation in which digital twins are leveraged for real-time data analysis, providing predictive and diagnostic engineering insights. Implementing digital twins offers substantial potential for developing intelligent oil and gas fields. This paper proposes a digital twin framework for gear rack drilling rigs, built on an understanding of the digital twin composition and characteristics of the gear rack drilling rig lifting system. The framework describes the digital twin characteristics specific to drilling rigs, the application environment, and the behavioral rules. The modeling approach integrates mechanism modeling, real-time performance response, instantaneous data transmission, and data visualization. To illustrate the framework, case studies of the transmission unit and support unit of the lifting system are presented. Mechanism models are constructed to analyze dynamic gear performance and support unit response. Real-time data transmission is facilitated through sensor-based monitoring, and the prediction speed and accuracy of dynamic performance are enhanced through a synergy of mechanism modeling, machine learning, and real-time data analysis. The digital twin of the lifting system is visualized on the Unity3D platform. Furthermore, functionalities for data acquisition, processing, and visualization across diverse application scenarios are encapsulated into modular components, streamlining the creation of high-fidelity digital twins. The framework and modeling methodologies presented here can serve as a foundational and methodological guide for exploring and implementing digital twin technology in the oil and gas industry, fostering its advancement in this sector.

2.
J Biomed Inform ; 156: 104665, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38852777

ABSTRACT

OBJECTIVE: Develop a new method for continuous prediction that utilizes a single temporal pattern ending with an event of interest and its multiple instances detected in the temporal data. METHODS: Use temporal abstraction to transform time series, instantaneous events, and time intervals into a uniform representation based on symbolic time intervals (STIs). Introduce a new approach to event prediction using a single time intervals-related pattern (TIRP), which can learn models to predict whether and when an event of interest will occur, based on multiple instances of a pattern that end with the event. RESULTS: The proposed methods achieved an average improvement of 5% AUROC over LSTM-FCN, the best-performing of the evaluated baseline models (RawXGB, ResNet, LSTM-FCN, and ROCKET) applied to real-life datasets. CONCLUSION: The proposed methods for continuous event prediction have the potential to be used in a wide range of real-world and real-time applications in diverse domains with heterogeneous multivariate temporal data; for example, they could be used to predict panic attacks early from wearable-device data or to predict complications early in intensive care unit patients.
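
To make the temporal-abstraction step concrete, below is a minimal Python sketch (not the authors' code) that turns a numeric series into symbolic time intervals by value binning and merging runs of identical symbols; the bin thresholds and symbols are illustrative assumptions.

import numpy as np

def to_symbolic_intervals(times, values, bins=(0.0, 0.33, 0.66, 1.0), symbols="LMH"):
    """Map each sample to a symbol, then merge runs into (start, end, symbol) intervals."""
    idx = np.clip(np.digitize(values, bins[1:-1]), 0, len(symbols) - 1)
    intervals = []
    start = times[0]
    for i in range(1, len(values)):
        if idx[i] != idx[i - 1]:
            intervals.append((start, times[i], symbols[idx[i - 1]]))
            start = times[i]
    intervals.append((start, times[-1], symbols[idx[-1]]))
    return intervals

t = np.arange(10.0)
v = np.array([0.1, 0.2, 0.5, 0.55, 0.6, 0.9, 0.95, 0.4, 0.3, 0.1])
print(to_symbolic_intervals(t, v))   # e.g. [(0.0, 2.0, 'L'), (2.0, 5.0, 'M'), ...]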


Subject(s)
Algorithms; Humans; Neural Networks, Computer
3.
Biomed Eng Lett ; 14(3): 393-405, 2024 May.
Article in English | MEDLINE | ID: mdl-38645587

ABSTRACT

Transcranial magnetic stimulation (TMS) is a device-based neuromodulation technique increasingly used to treat brain diseases. Electric field (E-field) modeling is an important technique in several TMS clinical applications, including the precise stimulation of brain targets with accurate stimulation density for the treatment of mental disorders and the localization of functional brain areas for neurosurgical planning. Classical methods for E-field modeling usually require long computation times. Fast algorithms are typically developed at significantly lower spatial resolutions, which reduces prediction accuracy and limits their use in real-time or near-real-time TMS applications. This review discusses several modern algorithms for real-time or near-real-time TMS E-field modeling, along with their advantages and limitations. The reviewed methods include basis-representation techniques and deep neural network-based approaches. The paper also reviews software tools that integrate E-field modeling with navigated TMS, including a recent tool for real-time navigated E-field mapping based on deep neural network models.

4.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38483283

ABSTRACT

It is difficult to characterize complex variations of biological processes, often longitudinally measured using biomarkers that yield noisy data. While joint modeling with a longitudinal submodel for the biomarker measurements and a survival submodel for assessing the hazard of events can alleviate measurement error issues, the continuous longitudinal submodel often uses random intercepts and slopes to estimate both between- and within-patient heterogeneity in biomarker trajectories. To overcome longitudinal submodel challenges, we replace random slopes with scaled integrated fractional Brownian motion (IFBM). As a more generalized version of integrated Brownian motion, IFBM reasonably depicts noisily measured biological processes. From this longitudinal IFBM model, we derive novel target functions to monitor the risk of rapid disease progression as real-time predictive probabilities. Predicted biomarker values from the IFBM submodel are used as inputs in a Cox submodel to estimate event hazard. This two-stage approach to fit the submodels is performed via Bayesian posterior computation and inference. We use the proposed approach to predict dynamic lung disease progression and mortality in women with a rare disease called lymphangioleiomyomatosis who were followed in a national patient registry. We compare our approach to those using integrated Ornstein-Uhlenbeck or conventional random intercepts-and-slopes terms for the longitudinal submodel. In the comparative analysis, the IFBM model consistently demonstrated superior predictive performance.
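
As an illustration of the longitudinal building block, the following sketch (not the authors' code) simulates scaled integrated fractional Brownian motion by Cholesky sampling of fBm from its covariance kernel and numerically integrating the path; the Hurst index and scale are arbitrary assumptions.

import numpy as np

def simulate_ifbm(t, H=0.7, sigma=1.0, seed=None):
    """Sample fBm on grid t from its covariance, then integrate with the trapezoidal rule."""
    rng = np.random.default_rng(seed)
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))  # fBm covariance kernel
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(t)))             # jitter for stability
    fbm = sigma * (L @ rng.standard_normal(len(t)))
    dt = np.diff(t, prepend=t[0])                                   # dt[0] = 0
    return np.cumsum(0.5 * (fbm + np.roll(fbm, 1)) * dt)            # integrated path (IFBM)

t = np.linspace(1e-3, 5.0, 200)   # start slightly above 0 so the covariance is positive definite
path = simulate_ifbm(t, H=0.7, seed=0)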


Subject(s)
Nonoxynol; Humans; Female; Bayes Theorem; Probability; Biomarkers; Disease Progression
5.
Sci Rep ; 14(1): 3498, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38347034

ABSTRACT

Tunnel boring machine (TBM) vibration is difficult to monitor on site, and research on prediction methods is rare. Based on field tunnelling tests of a TBM in the Xinjiang Ehe project, vibration data from the TBM main beam were collected under different surrounding rock conditions, and the relationships among tunnelling parameters, surrounding rock parameters and vibration parameters were studied. The results show that penetration, cutter head speed, torque and thrust are the main parameters affecting TBM vibration. In addition, the field penetration index and the cutter head driving power index are significantly related to the root mean square of acceleration. On this basis, a multiple regression model for predicting TBM vibration was established. The model was verified and analysed on field projects, and the relative prediction error was less than 12%. The method can predict TBM vibration in real time from characteristic parameters without a traditional monitoring system, which is valuable for determining the status of TBM equipment in real time.
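
A hedged sketch of this kind of multiple regression is shown below; the predictor names follow the abstract, but the data are synthetic, not measurements from the Ehe project.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))      # penetration, cutter head speed, torque, thrust (scaled)
y = 1.0 + X @ np.array([2.0, 1.5, 0.8, 0.5]) + rng.normal(0, 0.1, 200)  # RMS acceleration proxy

model = LinearRegression().fit(X[:150], y[:150])                  # multiple regression fit
rel_err = mean_absolute_percentage_error(y[150:], model.predict(X[150:]))
print(f"relative prediction error: {rel_err:.1%}")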

6.
Accid Anal Prev ; 195: 107407, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38056024

ABSTRACT

Driven by advancements in data-driven methods, recent developments in proactive crash prediction models have primarily focused on implementing machine learning and artificial intelligence. However, from a causal perspective, statistical models are preferred for their ability to estimate effect sizes using variable coefficients and elasticity effects. Most statistical framework-based crash prediction models adopt a case-control approach, matching crashes to non-crash events. However, accurately defining the crash-to-non-crash ratio and incorporating crash severities pose challenges. Few studies have ventured beyond the case-control approach to develop proactive crash prediction models, such as the duration-based framework. This study extends the duration-based modeling framework to create a novel framework for predicting crashes and their severity. Addressing the increased computational complexity resulting from incorporating crash severities, we explore a tradeoff between model performance and estimation time. Results indicate that a 15 % sample drawn at the epoch level strikes a reasonable balance, reducing data size while maintaining reasonable predictive accuracy. Furthermore, stability analysis of predictor variables across different samples reveals that variables such as Time of day (Early afternoon), Weather condition (Clear), Lighting condition (Daytime), Illumination (Illuminated), and Volume require larger samples for more accurate coefficient estimation. Conversely, Time of day (Early morning, Late morning, Late afternoon), Lighting condition (Dark lighted), Terrain (Flat), Land use (Commercial, Rural), Number of lanes, and Speed converge towards true estimates with small incremental increases in sample size. The validation reveals that the model performs better in highway segments experiencing more frequent crashes (segments where the duration between crashes is less than 100 h, or approximately 4 days).
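
For readers unfamiliar with duration-based crash modelling, the minimal sketch below fits a hazard model to the time between crashes on a segment; it uses the lifelines package with a small hypothetical dataset and illustrative covariates, not the study's data or specification.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical segment-level records: time between crashes, censoring flag, covariates.
df = pd.DataFrame({
    "hours_to_crash": [12.0, 96.0, 40.0, 150.0, 8.0, 60.0, 30.0, 110.0],
    "observed":       [1, 1, 1, 0, 1, 1, 1, 0],        # 0 = no crash observed (censored)
    "volume":         [2400, 600, 1800, 300, 1500, 1200, 900, 450],
    "num_lanes":      [4, 2, 3, 2, 3, 2, 2, 3],
})
cph = CoxPHFitter().fit(df, duration_col="hours_to_crash", event_col="observed")
cph.print_summary()   # coefficients provide the effect sizes a statistical framework offers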


Subject(s)
Accidents, Traffic; Artificial Intelligence; Humans; Models, Statistical; Rural Population; Sample Size; Logistic Models
7.
IUBMB Life ; 76(1): 53-68, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37606159

ABSTRACT

Long non-coding RNAs (lncRNAs) play a significant role in various biological processes, so elucidating their functions is essential for understanding the molecular mechanisms of complex biological systems. These versatile RNA molecules have diverse modes of interaction, one of which is the lncRNA-mRNA interaction; identifying the target mRNAs is therefore essential to explicitly understand an lncRNA's function. Existing lncRNA target prediction tools mainly adopt thermodynamic approaches, whose long execution times and inability to perform real-time prediction limit their usage. Furthermore, the lack of a negative training dataset has hindered the development of machine learning (ML)-based lncRNA target prediction tools. In this work, we developed an ML-based lncRNA-mRNA target prediction model, 'LncRTPred'. We addressed the existing problems by generating a reliable negative dataset and building robust ML models. Non-interacting lncRNAs and mRNAs were identified from the unlabelled dataset using BLAT and further filtered to obtain a reliable set of outliers. LncRTPred provides a cumulative_model_score as the final output for each query. In terms of prediction accuracy, LncRTPred outperforms other popular target prediction protocols such as LncTar. We also tested its performance against experimentally validated disease-specific lncRNA-mRNA interactions. Overall, the performance of LncRTPred depends heavily on the size of the training dataset, as reflected by the difference between its human and mouse results: on unseen data it performs better for human than for mouse because the mouse training dataset is smaller. The availability of more lncRNA-mRNA interaction data for mouse will improve LncRTPred's performance in the future. Both web server and standalone versions of LncRTPred are available. Web server: http://bicresources.jcbose.ac.in/zhumur/lncrtpred/index.html. GitHub: https://github.com/zglabDIB/LncRTPred.
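
The abstract does not define how the cumulative_model_score is computed; the hedged sketch below assumes one plausible construction - averaging the positive-class probabilities of several classifiers trained on lncRNA-mRNA pair features - purely to illustrate the idea. The features, labels and 0.5 threshold are synthetic assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))             # pair features (e.g. sequence/alignment statistics)
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # 1 = interacting pair, 0 = BLAT-derived negative

models = [RandomForestClassifier(random_state=0),
          GradientBoostingClassifier(random_state=0),
          LogisticRegression(max_iter=1000)]
for m in models:
    m.fit(X[:250], y[:250])

score = np.mean([m.predict_proba(X[250:])[:, 1] for m in models], axis=0)  # "cumulative" score
prediction = (score >= 0.5).astype(int)    # score above threshold -> predicted target mRNA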


Subject(s)
RNA, Long Noncoding; Humans; Animals; Mice; RNA, Long Noncoding/genetics; RNA, Messenger/genetics; Computational Biology/methods
8.
Water Res ; 250: 121018, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38113592

ABSTRACT

Ensuring the safety and reliability of drinking water supply requires accurate prediction of water quality in water distribution networks (WDNs). However, existing hydraulic model-based approaches for system state prediction face challenges in model calibration with limited sensor data and in their intensive computing requirements, while current machine learning models lack the capacity to predict system states at sites that are not monitored or included in model training. To address these gaps, this study proposes a novel gated graph neural network (GGNN) model for real-time water quality prediction in WDNs. The GGNN model integrates hydraulic flow directions and water quality data to represent the network topology and system dynamics, and employs a masking operation during training to enhance prediction accuracy. Evaluation on a real-world WDN demonstrates that the GGNN model can achieve accurate water quality prediction across the entire network. Despite being trained with water quality data from a limited number of sensor sites, the model achieves high predictive accuracy (Mean Absolute Error = 0.07 mg L-1 and Mean Absolute Percentage Error = 10.0 %) across the entire network, including unmonitored sites. Furthermore, water quality-based sensor placement significantly improves predictive accuracy, emphasizing the importance of careful sensor location selection. This research advances water quality prediction in WDNs by offering a practical and effective machine learning solution to the challenges of limited sensor data and network complexity, and provides a first step towards developing machine learning models that can replace hydraulic models in WDN modelling.
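
The masking operation can be illustrated with a short PyTorch sketch: the model predicts water quality at every node, but the training loss is computed only at nodes that have sensors. A plain linear layer stands in for the gated graph neural network, and all sizes and data are hypothetical.

import torch

n_nodes, n_feat = 50, 8
monitored = torch.zeros(n_nodes, dtype=torch.bool)
monitored[torch.randperm(n_nodes)[:10]] = True            # only 10 nodes carry sensors

model = torch.nn.Linear(n_feat, 1)                         # stand-in for the gated GNN
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(n_nodes, n_feat)                           # per-node hydraulic/quality features
y = torch.randn(n_nodes, 1)                                # e.g. chlorine concentration targets

for _ in range(100):
    opt.zero_grad()
    pred = model(x)                                        # predictions for every node
    loss = torch.mean((pred[monitored] - y[monitored]) ** 2)   # masked MSE on sensor nodes only
    loss.backward()
    opt.step()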


Subject(s)
Neural Networks, Computer; Water Quality; Reproducibility of Results; Water Supply
9.
Data Brief ; 51: 109767, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38075623

ABSTRACT

Monitoring of milk composition can support several dimensions of dairy management such as identification of the health status of individual dairy cows and the safeguarding of dairy quality. The quantification of milk composition has been traditionally executed employing destructive chemical or laboratory Fourier-transform infrared (FTIR) spectroscopy analyses which can incur high costs and prolonged waiting times for continuous monitoring. Therefore, modern technology for milk composition quantification relies on non-destructive near-infrared (NIR) spectroscopy which is not invasive and can be performed on-farm, in real-time. The current dataset contains NIR spectral measurements in transmittance mode in the wavelength range from 960 nm to 1690 nm of 1224 individual raw milk samples, collected on-farm over an eight-week span in 2017, at the experimental dairy farm of the province of Antwerp, 'Hooibeekhoeve' (Geel, Belgium). For these spectral measurements, laboratory reference values corresponding to the three main components of raw milk (fat, protein and lactose), urea and somatic cell count (SCC) are included. This data has been used to build multivariate calibration models to predict the three milk compounds, as well as develop strategies to monitor the prediction performance of the calibration models.
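
As an example of the multivariate calibration such a dataset supports, the sketch below fits a PLS regression from synthetic NIR spectra to a synthetic fat reference value; the wavelength count, number of components and data are assumptions, not the published calibration.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
spectra = rng.normal(size=(1224, 256))        # transmittance spectra (256 synthetic wavelengths)
fat = spectra[:, :5].sum(axis=1) + rng.normal(0, 0.1, 1224)   # invented reference values

X_tr, X_te, y_tr, y_te = train_test_split(spectra, fat, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
print("R^2 on held-out samples:", pls.score(X_te, y_te))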

10.
Waste Manag ; 170: 93-102, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37562201

ABSTRACT

The inability to measure dioxin emissions in real time is the principal limitation to controlling and reducing dioxin emissions in municipal solid waste incineration (MSWI). Existing methods for dioxin emission prediction are based on machine learning with inadequate dioxin datasets. In this study, deep learning models are trained on a larger set of online dioxin emission data from a waste incinerator to predict real-time dioxin emissions. First, data are collected and the operating data are preprocessed. Then, the dioxin emission prediction performance of machine learning and deep learning models, including long short-term memory (LSTM) and convolutional neural networks (CNN), is compared with normal input and with time-series input. Evaluating the applicability of each model, we find that with time-series input the performance of the deep learning models (LSTM and CNN) improves by 36.5% and 30.4%, respectively, in terms of mean square error (MSE). Moreover, feature analysis shows that temperature, airflow, and the time dimension are important for dioxin prediction. These results are meaningful for optimizing the control of dioxins from MSWI.
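
A hedged Keras sketch of an LSTM with time-series input of operating variables is shown below; the window length, layer sizes and synthetic data are illustrative assumptions rather than the study's configuration.

import numpy as np
import tensorflow as tf

window, n_features = 30, 12                       # 30 past time steps of 12 operating variables
X = np.random.rand(1000, window, n_features).astype("float32")
y = np.random.rand(1000, 1).astype("float32")     # dioxin emission proxy

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")       # MSE, as reported in the abstract
model.fit(X, y, epochs=2, batch_size=32, verbose=0)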

11.
Heliyon ; 9(4): e15163, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37095970

ABSTRACT

Early purchase prediction plays a vital role for an e-commerce website: it allows the site to target consumers with product suggestions, discounts and other interventions. Considerable work has analysed customer behaviour from session logs to determine whether or not a customer will purchase a product, but in most cases it is difficult to identify such customers and offer them a discount before their session ends. In this paper, we propose a purchase intention prediction model that detects a customer's intent early. First, we apply a feature selection technique to select the best features; the selected features are then used to train supervised learning models. Several classifiers, including support vector machine (SVM), random forest (RF), multilayer perceptron (MLP), decision tree (DT) and XGBoost, were applied together with an oversampling method to balance the dataset. The experiments were performed on a standard benchmark dataset. The results show that the XGBoost classifier with feature selection and oversampling achieves significantly higher area under the ROC curve (auROC) and area under the precision-recall curve (auPR) scores of 0.937 and 0.754, respectively. The accuracies achieved by XGBoost and the decision tree are also significantly improved, at 90.65% and 90.54%, respectively. Overall, the gradient boosting method significantly outperforms the other classifiers and state-of-the-art methods. In addition, a method for explainable analysis of the problem is outlined.
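
A minimal sketch of the described pipeline - feature selection, oversampling and an XGBoost classifier evaluated by auROC - is shown below on synthetic data; SMOTE and SelectKBest are assumed stand-ins for the unspecified oversampling and feature selection techniques.

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)   # keep the most informative features
X_bal, y_bal = SMOTE(random_state=0).fit_resample(selector.transform(X_tr), y_tr)

clf = XGBClassifier(eval_metric="logloss").fit(X_bal, y_bal)
print("auROC:", roc_auc_score(y_te, clf.predict_proba(selector.transform(X_te))[:, 1]))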

12.
Nephrol Dial Transplant ; 38(7): 1761-1769, 2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37055366

ABSTRACT

BACKGROUND: In maintenance hemodialysis patients, intradialytic hypotension (IDH) is a frequent complication that has been associated with poor clinical outcomes. Prediction of IDH may facilitate timely interventions and eventually reduce IDH rates. METHODS: We developed a machine learning model to predict IDH in in-center hemodialysis patients 15-75 min in advance. IDH was defined as systolic blood pressure (SBP) <90 mmHg. Demographic, clinical, treatment-related and laboratory data were retrieved from electronic health records and merged with intradialytic machine data that were sent in real-time to the cloud. For model development, dialysis sessions were randomly split into training (80%) and testing (20%) sets. The area under the receiver operating characteristic curve (AUROC) was used as a measure of the model's predictive performance. RESULTS: We utilized data from 693 patients who contributed 42 656 hemodialysis sessions and 355 693 intradialytic SBP measurements. IDH occurred in 16.2% of hemodialysis treatments. Our model predicted IDH 15-75 min in advance with an AUROC of 0.89. Top IDH predictors were the most recent intradialytic SBP and IDH rate, as well as mean nadir SBP of the previous 10 dialysis sessions. CONCLUSIONS: Real-time prediction of IDH during an ongoing hemodialysis session is feasible and has a clinically actionable predictive performance. If and to what degree this predictive information facilitates the timely deployment of preventive interventions and translates into lower IDH rates and improved patient outcomes warrants prospective studies.
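
A small pandas sketch of the labelling logic implied by the abstract is given below: an SBP reading under 90 mmHg within the next 15-75 minutes marks the current time step as positive. The readings are invented.

import pandas as pd

sbp = pd.DataFrame({
    "minute": [0, 10, 20, 30, 40, 50, 60, 70, 80],
    "sbp":    [132, 128, 120, 111, 102, 95, 88, 92, 97],   # IDH occurs at minute 60 (< 90 mmHg)
})
lo, hi = 15, 75                                            # prediction horizon in minutes
sbp["idh_ahead"] = [
    bool(((sbp["minute"] > m + lo) & (sbp["minute"] <= m + hi) & (sbp["sbp"] < 90)).any())
    for m in sbp["minute"]
]
print(sbp)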


Subject(s)
Hypotension; Kidney Failure, Chronic; Humans; Kidney Failure, Chronic/therapy; Kidney Failure, Chronic/complications; Prospective Studies; Cloud Computing; Hypotension/diagnosis; Hypotension/etiology; Renal Dialysis/adverse effects; Blood Pressure
13.
JMIR Form Res ; 7: e42452, 2023 Mar 31.
Article in English | MEDLINE | ID: mdl-37000488

ABSTRACT

BACKGROUND: Sepsis is a leading cause of death in patients with trauma, and the risk of mortality increases significantly for each hour of delay in treatment. A hypermetabolic baseline and explosive inflammatory immune response mask clinical signs and symptoms of sepsis in trauma patients, making early diagnosis of sepsis more challenging. Machine learning-based predictive modeling has shown great promise in evaluating and predicting sepsis risk in the general intensive care unit (ICU) setting, but there has been no sepsis prediction model specifically developed for trauma patients so far. OBJECTIVE: To develop a machine learning model to predict the risk of sepsis at an hourly scale among ICU-admitted trauma patients. METHODS: We extracted data from adult trauma patients admitted to the ICU at Beth Israel Deaconess Medical Center between 2008 and 2019. A total of 42 raw variables were collected, including demographics, vital signs, arterial blood gas, and laboratory tests. We further derived a total of 485 features, including measurement pattern features, scoring features, and time-series variables, from the raw variables by feature engineering. The data set was randomly split into 70% for model development with stratified 5-fold cross-validation, 15% for calibration, and 15% for testing. An Extreme Gradient Boosting (XGBoost) model was developed to predict the hourly risk of sepsis at prediction windows of 4, 6, 8, 12, and 24 hours. We evaluated model performance for discrimination and calibration both at time-step and outcome levels. Clinical applicability of the model was evaluated with varying levels of precision, and the potential clinical net benefit was assessed with decision curve analysis (DCA). A Shapley additive explanation algorithm was applied to show the effect of features on the prediction model. In addition, we trained an L2-regularized logistic regression model to compare its performance with XGBoost. RESULTS: We included 4603 trauma patients in the study, 1196 (26%) of whom developed sepsis. The XGBoost model achieved an area under the receiver operating characteristics curve (AUROC) ranging from 0.83 to 0.88 at the 4-to-24-hour prediction window in the test set. With a ratio of 9 false alerts for every true alert, it predicted 73% (386/529) of sepsis-positive timesteps and 91% (163/179) of sepsis events in the subsequent 6 hours. The DCA showed our model had a positive net benefit in the threshold probability range of 0 to 0.6. In comparison, the logistic regression model achieved lower performance, with AUROC ranging from 0.76 to 0.84 at the 4-to-24-hour prediction window. CONCLUSIONS: The machine learning-based model had good discrimination and calibration performance for sepsis prediction in critical trauma patients. Using the model in clinical practice might help to identify patients at risk of sepsis in a time window that enables personalized intervention and early treatment.
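
The decision curve analysis mentioned above rests on the standard net-benefit formula NB(pt) = TP/N - (FP/N) * pt / (1 - pt); the sketch below computes it for synthetic predictions and is not code from the study.

import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    """Net benefit of treating patients whose predicted risk exceeds each threshold pt."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    n, out = len(y_true), []
    for pt in thresholds:
        pred = y_prob >= pt
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        out.append(tp / n - fp / n * pt / (1 - pt))
    return np.array(out)

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 500)
p = np.clip(y * 0.6 + rng.normal(0.2, 0.2, 500), 0, 1)      # synthetic risk estimates
print(net_benefit(y, p, thresholds=np.linspace(0.05, 0.6, 12)))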

14.
J Biomed Inform ; 139: 104310, 2023 03.
Article in English | MEDLINE | ID: mdl-36773821

ABSTRACT

It is extremely important to identify patients with acute pancreatitis who are at high risk for developing persistent organ failures early in the course of the disease. Due to the irregularity of longitudinal data and the poor interpretability of complex models, many models used to identify acute pancreatitis patients with a high risk of organ failure tended to rely on simple statistical models and limited their application to the early stages of patient admission. With the success of recurrent neural networks in modeling longitudinal medical data and the development of interpretable algorithms, these problems can be well addressed. In this study, we developed a novel model named Multi-task and Time-aware Gated Recurrent Unit RNN (MT-GRU) to directly predict organ failure in patients with acute pancreatitis based on irregular medical EMR data. Our proposed end-to-end multi-task model achieved significantly better performance compared to two-stage models. In addition, our model not only provided an accurate early warning of organ failure for patients throughout their hospital stay, but also demonstrated individual and population-level important variables, allowing physicians to understand the scientific basis of the model for decision-making. By providing early warning of the risk of organ failure, our proposed model is expected to assist physicians in improving outcomes for patients with acute pancreatitis.
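
The sketch below shows one common way to make a GRU 'time-aware' for irregularly sampled records - decaying the hidden state by the elapsed time between observations - as a hedged illustration; it is not necessarily the authors' MT-GRU architecture, and all dimensions are arbitrary.

import torch
import torch.nn as nn

class TimeAwareGRU(nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.cell = nn.GRUCell(n_in, n_hidden)
        self.decay = nn.Linear(1, n_hidden)

    def forward(self, x, deltas):                 # x: (T, B, n_in), deltas: (T, B, 1)
        h = torch.zeros(x.size(1), self.cell.hidden_size)
        for t in range(x.size(0)):
            gamma = torch.exp(-torch.relu(self.decay(deltas[t])))   # decay factor in (0, 1]
            h = self.cell(x[t], gamma * h)                          # decay state, then update
        return h

model = TimeAwareGRU(n_in=20, n_hidden=32)
x = torch.randn(48, 4, 20)                        # 48 irregular steps, batch of 4 patients
dt = torch.rand(48, 4, 1) * 6                     # hours since the previous observation
hidden = model(x, dt)                             # could feed two task heads (multi-task)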


Subject(s)
Pancreatitis; Humans; Acute Disease; Length of Stay; Neural Networks, Computer; Algorithms
15.
Mar Pollut Bull ; 186: 114423, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36495609

ABSTRACT

The Secchi disk depth (SD) is an important parameter in aquatic ecosystem monitoring. As algal growth depends on solar irradiation, the SD - a measure of light extinction - gives an indirect indication of the chlorophyll concentration. However, most SD measurements are made manually and are too sparse to resolve water quality variations during algal blooms. A remotely controlled automatic system for field measurement of light extinction has been developed and installed in three marine fish culture zones in Hong Kong. The system captures images of the disk at prescribed depths and of the surrounding water; based on contrast theory and image analysis, the recorded light intensity distributions yield the SD and the light extinction coefficient. The method has been extensively verified with field data over a wide range of water quality and hydro-meteorological conditions. The proposed system enables high-frequency, on-demand SD measurements for environmental management and emergency response.
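
To make the contrast-theory step concrete, the sketch below fits an exponential contrast decay C(z) = C0*exp(-c*z) to invented image-derived contrast values, then reads off the extinction coefficient and an SD estimate at an assumed visibility threshold; the numbers are illustrative only.

import numpy as np
from scipy.optimize import curve_fit

depths = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])            # prescribed disk depths (m)
contrast = np.array([0.60, 0.37, 0.22, 0.14, 0.08, 0.05])    # from image analysis (invented)

decay = lambda z, c0, c: c0 * np.exp(-c * z)
(c0, c), _ = curve_fit(decay, depths, contrast, p0=(1.0, 1.0))
threshold = 0.02                                              # assumed contrast detection limit
secchi_depth = np.log(c0 / threshold) / c
print(f"extinction coefficient ~ {c:.2f} 1/m, Secchi depth ~ {secchi_depth:.1f} m")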


Subject(s)
Ecosystem; Environmental Monitoring; Animals; Environmental Monitoring/methods; Chlorophyll/analysis; Water Quality; Eutrophication
16.
Food Chem ; 404(Pt B): 134632, 2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36279783

ABSTRACT

Detection and prevention of fish food fraud are of ever-increasing importance, prompting the need for rapid, high-throughput fish speciation techniques. Rapid Evaporative Ionisation Mass Spectrometry (REIMS) has quickly established itself as a powerful technique for the instant in situ analysis of foodstuffs. In the current study, a total of 1736 samples (2015-2021) - comprising 17 different commercially valuable fish species - were analysed using iKnife-REIMS, followed by classification with various multivariate and machine learning strategies. The results demonstrated that multivariate models, i.e. PCA-LDA and (O)PLS-DA, delivered accuracies from 92.5 to 100.0%, while RF and SVM-based classification generated accuracies from 88.7 to 96.3%. Real-time recognition on a separate test set of 432 samples (2022) generated correct speciation between 89.6 and 99.5% for the multivariate models, while the ML models underperformed (22.3-95.1%), in particular for the white fish species. As such, we propose a real-time validated modelling strategy using directly amenable PCA-LDA for rapid industry-proof large-scale fish speciation.
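
A minimal scikit-learn sketch of the PCA-LDA strategy is given below, with synthetic data standing in for the REIMS spectral profiles; the sample count mirrors the abstract, but the features, class structure and component number are assumptions.

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1736, n_features=500, n_informative=40,
                           n_classes=17, n_clusters_per_class=1, random_state=0)
pca_lda = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(pca_lda, X, y, cv=5).mean())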


Subject(s)
Machine Learning; Seafood; Animals; Mass Spectrometry/methods; Spectrum Analysis; Fishes
17.
Sci Total Environ ; 847: 157542, 2022 Nov 15.
Article in English | MEDLINE | ID: mdl-35878857

ABSTRACT

Selective catalytic reduction (SCR) denitration technology is widely used in coal-fired generating units. The NOx concentration at the boiler outlet is an important parameter in the feedforward control of SCR denitration; however, the lag in its measurement leads to wide fluctuations in NOx emissions, which affects the safe and economic operation of the unit. To address this measurement lag and improve the timeliness of the response to fluctuations in denitration control, many studies have reported NOx concentration prediction models based on algorithms such as long short-term memory (LSTM) networks and support vector machines (SVM). However, there are no reports on online modeling, and in particular none in which the boiler outlet NOx concentration is predicted ahead of the measured values. In this study, a 1000 MW ultra-supercritical coal-fired boiler was selected and 2404 sets of measured samples were collected to predict NOx concentration. A novel online modeling method for the boiler outlet NOx concentration is proposed, and a high-precision online real-time prediction model is established, for the first time, based on an improved long short-term memory network (ILSTM). A feature weight analysis based on the RReliefF algorithm is adopted, and the change rates of the features are used as model inputs. The results show that the root mean square error (δR) and the computation time of the ILSTM model are reduced by 17.97 % and 1.97 s, respectively. The online model reaches satisfactory accuracy after training for 1 s on the most recent data from the distributed control system (DCS). The NOx concentration predicted by the online model leads the measured concentration by 22 s, and the prediction accuracy remains as high as 96 % without intervention after two years of operation. Used as feedforward for the SCR denitration control system, the predicted NOx concentration can significantly improve the timeliness of the control response, and the online model provides theoretical support for suppressing large fluctuations in NOx emissions.
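
The online-modelling loop can be sketched as follows; a lightweight ridge regressor stands in for the ILSTM network, and the window size, feature count and update cadence are assumptions rather than the study's settings.

from collections import deque
import numpy as np
from sklearn.linear_model import Ridge

window = deque(maxlen=2404)                  # rolling buffer of the most recent DCS samples
model = Ridge()

def on_new_dcs_sample(features, nox_label):
    """Retrain on the latest window whenever the DCS delivers a new sample."""
    window.append((features, nox_label))
    X = np.array([f for f, _ in window])
    y = np.array([t for _, t in window])
    model.fit(X, y)                          # quick refit keeps the online model current
    return model.predict(features.reshape(1, -1))[0]   # ahead-of-measurement NOx estimate

rng = np.random.default_rng(0)
for _ in range(100):                         # simulated stream of DCS samples
    on_new_dcs_sample(rng.normal(size=8), rng.normal())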


Subject(s)
Air Pollutants; Coal; Air Pollutants/analysis; Algorithms; Catalysis; Coal/analysis; Power Plants
18.
Comput Biol Med ; 147: 105559, 2022 08.
Article in English | MEDLINE | ID: mdl-35635901

ABSTRACT

Continuous monitoring of high-risk patients and early prediction of severe outcomes are crucial to prevent avoidable deaths. Current clinical monitoring is based primarily on intermittent observation of vital signs and early warning scores (EWS); its drawback is the lack of time-series dynamics and of correlations among vital signs. This study presents an approach to real-time outcome prediction based on machine learning from continuously recorded vital signs. Systolic blood pressure, diastolic blood pressure, heart rate, pulse rate, respiration rate and peripheral blood oxygen saturation were continuously acquired by wearable devices from 292 post-operative high-risk patients. Outcomes from serious complications were evaluated by review of the patients' medical records. Descriptive statistics of the vital signs and patient demographic information were used as features. Four machine learning models, K-Nearest Neighbors (KNN), Decision Trees (DT), Random Forest (RF) and Boosted Ensemble (BE), were trained and tested. In static evaluation, all four models had prediction performance comparable to the state of the art. In dynamic evaluation, the models trained in the static evaluation were tested with continuous data: RF and BE obtained the lowest false positive rates (FPR) of 0.073 and 0.055 on no-outcome patients, respectively, and KNN, DT, RF and BE achieved areas under the receiver operating characteristic curve (AUROC) of 0.62, 0.64, 0.65 and 0.64, respectively, on outcome patients. RF was the optimal model, with a lower FPR on no-outcome patients and a higher AUROC on outcome patients. These findings are encouraging and indicate that further investigations should focus on validating performance in a clinical setting before real-time outcome prediction is deployed.
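
The feature construction implied by the abstract - descriptive statistics of continuously recorded vital signs feeding a classifier - can be sketched as follows on synthetic recordings; the chosen statistics and the random forest stand-in are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def summarize(recording):                    # recording: (n_samples, n_vital_signs)
    return np.concatenate([recording.mean(0), recording.std(0),
                           recording.min(0), recording.max(0)])

X = np.array([summarize(rng.normal(size=(600, 6))) for _ in range(292)])   # 292 patients
y = rng.integers(0, 2, 292)                  # 1 = serious complication (synthetic)

clf = RandomForestClassifier(random_state=0).fit(X[:230], y[:230])
print("AUROC:", roc_auc_score(y[230:], clf.predict_proba(X[230:])[:, 1]))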


Subject(s)
Machine Learning; Vital Signs; Area Under Curve; Humans; Oximetry; ROC Curve
19.
J Supercomput ; 78(5): 7078-7105, 2022.
Article in English | MEDLINE | ID: mdl-34754141

ABSTRACT

COronaVIrus Disease 2019 (COVID-19) is unfortunately highly transmissible between people. To detect and track suspected COVID-19 cases and thereby limit the spread of the pandemic, this paper presents a framework integrating machine learning (ML), cloud, fog, and Internet of Things (IoT) technologies into a novel smart COVID-19 monitoring and prognosis system. The proposal leverages IoT devices that collect streaming data from both medical equipment (e.g., X-ray and lung ultrasound machines) and non-medical devices (e.g., bracelets and smartwatches). The proposed hybrid fog-cloud framework provides two kinds of federated ML as a service (federated MLaaS): (i) distributed batch MLaaS, implemented in the cloud for long-term decision-making, and (ii) distributed stream MLaaS, deployed in a hybrid fog-cloud environment for short-term decision-making. The stream MLaaS uses a shared federated prediction model stored in the cloud, whereas real-time symptom data processing and COVID-19 prediction are performed in the fog. The federated ML models are selected after evaluating a set of batch and stream ML algorithms from Python libraries. The evaluation considers both quantitative metrics (performance in terms of accuracy, precision, root mean squared error, and F1 score) and qualitative metrics (quality of service in terms of server latency, response time, and network latency). It shows that stream ML algorithms have the potential to be integrated into COVID-19 prognosis, enabling early prediction of suspected COVID-19 cases.
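
As an illustration of the stream-MLaaS idea, the sketch below incrementally updates a classifier as mini-batches of symptom data arrive, using scikit-learn's partial_fit as a stand-in for the (unspecified) federated stream algorithms; all data and features are synthetic.

import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])                       # 1 = suspected COVID-19 case

rng = np.random.default_rng(5)
for _ in range(200):                             # streaming mini-batches from IoT devices
    X_batch = rng.normal(size=(16, 10))          # symptom / vital-sign features
    y_batch = (X_batch[:, 0] > 0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=classes)   # incremental (stream) update

print(clf.predict(rng.normal(size=(1, 10))))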

20.
Front Med (Lausanne) ; 8: 775047, 2021.
Article in English | MEDLINE | ID: mdl-34926518

ABSTRACT

Sepsis-associated coagulation dysfunction greatly increases the mortality of sepsis, and irregular clinical time-series data remain a major challenge for AI medical applications. To detect and manage sepsis-induced coagulopathy (SIC) and sepsis-associated disseminated intravascular coagulation (DIC) early, we developed an interpretable real-time sequential warning model for real-world irregular data. Eight machine learning models, including novel algorithms, were devised to detect SIC and sepsis-associated DIC 8n hours (1 ≤ n ≤ 6) prior to onset. Models were developed on data from Xi'an Jiaotong University Medical College (XJTUMC) and verified on data from Beth Israel Deaconess Medical Center (BIDMC). A total of 12,154 SIC and 7,878 International Society on Thrombosis and Haemostasis (ISTH) overt-DIC labels were annotated in the training set according to the SIC and ISTH overt-DIC scoring systems. The area under the receiver operating characteristic curve (AUROC) was used as the model evaluation metric. The eXtreme Gradient Boosting (XGBoost) model predicted SIC and sepsis-associated DIC events up to 48 h in advance with AUROCs of 0.929 and 0.910, respectively, and reached 0.973 and 0.955 at 8 h in advance, the highest performance to date. The novel ODE-RNN model achieved continuous prediction at arbitrary time points, with AUROCs of 0.962 and 0.936 for SIC and DIC predicted 8 h in advance, respectively. In conclusion, our model can predict SIC and sepsis-associated DIC onset up to 48 h in advance, helping to maximize the time window for early management by physicians.
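
For context on the label-annotation step, a small helper implementing commonly cited ISTH overt-DIC thresholds is sketched below; the cut-offs are an assumption and should be checked against the scoring system actually used in the study.

def isth_overt_dic_score(platelets_e9_l, d_dimer_increase, pt_prolongation_s, fibrinogen_g_l):
    """Return (score, overt_dic_flag) using commonly cited ISTH thresholds (verify before use)."""
    score = 0
    score += 0 if platelets_e9_l > 100 else (1 if platelets_e9_l >= 50 else 2)
    score += {"none": 0, "moderate": 2, "strong": 3}[d_dimer_increase]
    score += 0 if pt_prolongation_s < 3 else (1 if pt_prolongation_s <= 6 else 2)
    score += 0 if fibrinogen_g_l >= 1.0 else 1
    return score, score >= 5                 # a score of 5 or more is compatible with overt DIC

print(isth_overt_dic_score(45, "strong", 7.2, 0.8))   # (8, True)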
