Results 1 - 5 of 5
1.
Anesth Analg; 138(3): 645-654, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38364244

ABSTRACT

BACKGROUND: Transfusion of packed red blood cells (pRBCs) is still associated with risks. This study aims to determine whether renal function deterioration following individual transfusions in individual patients can be predicted using machine learning, and to identify recipient and donor characteristics linked to increased risk. METHODS: This study was registered at ClinicalTrials.gov (NCT05466370) and was conducted after local ethics committee approval. We evaluated 3366 transfusion episodes from a university hospital between October 31, 2016, and August 31, 2020. Random forest models were tuned and trained via the Python auto-sklearn package to predict acute kidney injury (AKI). The models included recipients' and donors' demographic parameters and laboratory values, donor questionnaire results, and the age of the pRBCs. Bootstrapping on the test dataset was used to calculate the means and standard deviations of various performance metrics. RESULTS: AKI, as defined by a modified Kidney Disease: Improving Global Outcomes (KDIGO) criterion, developed after 17.4% of transfusion episodes (base rate). AKI could be predicted with an area under the receiver operating characteristic curve (AUC-ROC) of 0.73 ± 0.02. The negative (NPV) and positive (PPV) predictive values were 0.90 ± 0.02 and 0.32 ± 0.03, respectively. Feature importance and relative risk analyses revealed that donor features were far less important than recipient features for predicting posttransfusion AKI. CONCLUSIONS: Surprisingly, only the recipients' characteristics played a decisive role in AKI prediction. Based on this result, we speculate that the selection of a specific pRBC unit may have less influence than recipient characteristics.
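
The bootstrap evaluation described in the METHODS could look roughly like the sketch below; the synthetic data, the feature set, and the use of a plain scikit-learn RandomForestClassifier in place of the auto-sklearn-tuned model are illustrative assumptions, not the authors' actual code.

```python
# Minimal sketch of bootstrapped test-set evaluation for an AKI classifier.
# Assumptions: synthetic features and labels; a plain RandomForestClassifier
# stands in for the auto-sklearn-tuned model described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# X: recipient/donor features, y: post-transfusion AKI label (hypothetical data)
X = rng.normal(size=(3366, 20))
y = rng.binomial(1, 0.174, size=3366)          # ~17.4% base rate, as reported

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

aucs, npvs, ppvs = [], [], []
for _ in range(1000):                          # resample the test set with replacement
    idx = rng.integers(0, len(y_te), len(y_te))
    proba = model.predict_proba(X_te[idx])[:, 1]
    pred = (proba >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te[idx], pred, labels=[0, 1]).ravel()
    aucs.append(roc_auc_score(y_te[idx], proba))
    npvs.append(tn / (tn + fn) if (tn + fn) else np.nan)
    ppvs.append(tp / (tp + fp) if (tp + fp) else np.nan)

print(f"AUC-ROC {np.mean(aucs):.2f} ± {np.std(aucs):.2f}, "
      f"NPV {np.nanmean(npvs):.2f} ± {np.nanstd(npvs):.2f}, "
      f"PPV {np.nanmean(ppvs):.2f} ± {np.nanstd(ppvs):.2f}")
```

Resampling the held-out test set with replacement is what yields the reported mean ± standard deviation for AUC-ROC, NPV, and PPV.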


Subject(s)
Acute Kidney Injury, Kidney, Humans, Acute Kidney Injury/diagnosis, Acute Kidney Injury/etiology, Acute Kidney Injury/therapy, Blood Transfusion, Retrospective Studies, Risk Assessment/methods, ROC Curve
2.
Sci Rep; 13(1): 22641, 2023 Dec 19.
Article in English | MEDLINE | ID: mdl-38114635

ABSTRACT

Machine learning (ML) has revolutionized data processing in recent years. This study presents the results of the first prediction models based on a long-term monocentric data registry of patients with microsurgically treated unruptured intracranial aneurysms (UIAs) using a temporal train-test split. Temporal train-test splits make it possible to simulate prospective validation and therefore provide more accurate estimates of a model's predictive quality when applied to future patients. ML models for the prediction of the Glasgow Outcome Scale (GOS), modified Rankin Scale (mRS), and new transient or permanent neurological deficits (output variables) were created from all UIA patients who underwent microsurgery at the Kepler University Hospital Linz (Austria) between 2002 and 2020 (n = 466), based on 18 patient- and 10 aneurysm-specific preoperative parameters (input variables). Train-test splitting was performed with a temporal split for outcome prediction in microsurgical therapy of UIAs. Moreover, an external validation was conducted on an independent external data set (n = 256) from the Department of Neurosurgery, University Medical Centre Hamburg-Eppendorf. In total, 722 aneurysms were included in this study. A postoperative mRS > 2 was best predicted by a quadratic discriminant analysis (QDA) estimator in the internal test set, with an area under the receiver operating characteristic curve (ROC-AUC) of 0.87 ± 0.03 and a sensitivity and specificity of 0.83 ± 0.08 and 0.71 ± 0.07, respectively. A multilayer perceptron predicted a post- to preoperative mRS difference > 1 with a ROC-AUC of 0.70 ± 0.02 and a sensitivity and specificity of 0.74 ± 0.07 and 0.50 ± 0.04, respectively. The QDA was the best model for predicting a permanent new neurological deficit, with a ROC-AUC of 0.71 ± 0.04 and a sensitivity and specificity of 0.65 ± 0.24 and 0.60 ± 0.12, respectively. Furthermore, these models performed significantly better than classic logistic regression models (p < 0.0001). The present results showed good performance in predicting functional and clinical outcomes after microsurgical therapy of UIAs in the internal data set, especially for the main outcome parameters, mRS and permanent neurological deficit. The external validation showed poor discrimination, with ROC-AUC values of 0.61, 0.53, and 0.58, respectively, for predicting a postoperative mRS > 2, a pre- to postoperative difference in mRS > 1 point, and a GOS < 5. Therefore, the generalizability of the models could not be demonstrated in the external validation. A SHapley Additive exPlanations (SHAP) analysis revealed that this is due to the most important features being distributed quite differently in the internal and external data sets. The incorporation of newly available data and the merging of larger databases to form more broadly based predictive models are imperative in the future.
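
A temporal train-test split of the kind used here can be sketched as follows; the CSV file name, the column names, and the 2016 cutoff year are hypothetical, and scikit-learn's QuadraticDiscriminantAnalysis stands in for the tuned QDA estimator.

```python
# Minimal sketch of a temporal train-test split with a QDA classifier.
# Assumptions: a DataFrame with a "surgery_year" column, preoperative feature
# columns, and a binary "mrs_gt_2" outcome; file name and cutoff are hypothetical.
import pandas as pd
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

df = pd.read_csv("uia_cohort.csv")             # hypothetical file name
features = [c for c in df.columns if c not in ("surgery_year", "mrs_gt_2")]

# Train on earlier patients, test on later ones to mimic prospective validation.
train = df[df["surgery_year"] < 2016]
test = df[df["surgery_year"] >= 2016]

qda = QuadraticDiscriminantAnalysis().fit(train[features], train["mrs_gt_2"])
proba = qda.predict_proba(test[features])[:, 1]
print("temporal-test ROC-AUC:", round(roc_auc_score(test["mrs_gt_2"], proba), 3))
```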


Subject(s)
Intracranial Aneurysm, Humans, Intracranial Aneurysm/diagnosis, Intracranial Aneurysm/surgery, Prognosis, Glasgow Outcome Scale, Neurosurgical Procedures/methods, Machine Learning, Retrospective Studies
3.
Eur J Emerg Med; 30(6): 408-416, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37578440

ABSTRACT

AIMS: Patient admission is a decision that relies on sparsely available data. This study aims to provide prediction models for discharge versus admission for ward observation or intensive care, and for 30-day mortality, in patients triaged with the Manchester Triage System. METHODS: This is a single-centre, observational, retrospective cohort study using data available within ten minutes of patient presentation at the interdisciplinary emergency department of the Kepler University Hospital, Linz, Austria. We trained machine learning models, including random forests and neural networks, individually to predict discharge versus ward observation or intensive care admission, and 30-day mortality. To analyse the relevance of the features, we used permutation feature importance. RESULTS: A total of 58,323 adult patients between 1 December 2015 and 31 August 2020 were included. Neural networks and random forests predicted admission to ward observation with an AUC-ROC of 0.842 ± 0.00, with the most important features being age and chief complaint. For admission to intensive care, the models had an AUC-ROC of 0.819 ± 0.002, with the most important features being the Manchester Triage category and heart rate, and for the outcome 30-day mortality an AUC-ROC of 0.925 ± 0.001. The most important features for the prediction of 30-day mortality were age and general ward admission. CONCLUSION: Machine learning can predict discharge versus admission to general wards and intensive care and can inform about the risk of 30-day mortality for patients in the emergency department.
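
Permutation feature importance, used above to rank predictors such as age, chief complaint, and Manchester Triage category, can be computed with scikit-learn as in the following sketch; the file and column names are assumptions for illustration.

```python
# Minimal sketch of permutation feature importance for an admission classifier.
# Assumptions: hypothetical file and column names; a binary "admitted_ward" label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("ed_triage.csv")              # hypothetical file name
X = pd.get_dummies(df.drop(columns=["admitted_ward"]))
y = df["admitted_ward"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature on the held-out set and measure the drop in AUC-ROC.
result = permutation_importance(clf, X_te, y_te, scoring="roc_auc",
                                n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean),
                 key=lambda t: t[1], reverse=True)
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```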


Subject(s)
Hospitalization, Triage, Adult, Humans, Retrospective Studies, Emergency Service, Hospital, Machine Learning
4.
JMIR Med Inform; 10(10): e38557, 2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36269654

ABSTRACT

Electronic health records (EHRs) have been successfully used in data science and machine learning projects. However, most of these data are collected for clinical use rather than for retrospective analysis. This means that researchers typically face many different issues when attempting to access and prepare the data for secondary use. We aimed to investigate how raw EHR data can be accessed and prepared in retrospective data science projects in a disciplined, effective, and efficient way. We report our experience and findings from a large-scale data science project analyzing routinely acquired retrospective data from the Kepler University Hospital in Linz, Austria. The project involved data collection from more than 150,000 patients over a period of 10 years. It included diverse data modalities, such as static demographic data, irregularly acquired laboratory test results, regularly sampled vital signs, and high-frequency physiological waveform signals. Raw medical data can be corrupted in many unexpected ways that demand thorough manual inspection and highly individualized data cleaning solutions. We present a general data preparation workflow, which was shaped in the course of our project and consists of the following 7 steps: obtain a rough overview of the available EHR data, define clinically meaningful labels for supervised learning, extract relevant data from the hospital's data warehouses, match data extracted from different sources, deidentify the matched data, detect errors and inconsistencies through careful exploratory analysis, and implement a suitable data processing pipeline in actual code. Only a few of the data preparation issues encountered in our project were addressed by the generic medical data preprocessing tools that have been proposed recently. Instead, highly individualized solutions for the specific data used in one's own research seem inevitable. We believe that the proposed workflow can serve as guidance for practitioners, helping them to identify and address potential problems early and avoid some common pitfalls.
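
A skeletal version of the data extraction, matching, deidentification, consistency-check, and pipeline steps might look like the sketch below; all table names, join keys, and the salted-hash deidentification are illustrative assumptions rather than the tooling described in the paper.

```python
# Skeletal sketch of part of the EHR preparation workflow described above.
# All file names, join keys, columns, and hashing details are illustrative assumptions.
import hashlib
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Pull a raw export from the hospital data warehouse (hypothetical CSV)."""
    return pd.read_csv(path)

def match(demo: pd.DataFrame, labs: pd.DataFrame) -> pd.DataFrame:
    """Link data from different sources on a shared patient ID."""
    return demo.merge(labs, on="patient_id", how="inner")

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and replace the ID with a salted hash (sketch only)."""
    df = df.drop(columns=["name", "birth_date"], errors="ignore")
    df["patient_id"] = df["patient_id"].astype(str).map(
        lambda s: hashlib.sha256(("secret-salt" + s).encode()).hexdigest()[:16])
    return df

def check_consistency(df: pd.DataFrame) -> pd.DataFrame:
    """Flag or drop obviously implausible values found during exploratory analysis."""
    return df[(df["creatinine"] > 0) & (df["creatinine"] < 20)]

# Chain the steps into one reproducible pipeline.
cohort = check_consistency(deidentify(match(extract("demographics.csv"),
                                            extract("lab_results.csv"))))
```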

5.
J Med Syst; 46(5): 23, 2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35348909

ABSTRACT

Many previous studies claim to have developed machine learning models that diagnose COVID-19 from blood tests. However, we hypothesize that changes in the underlying distribution of the data, so-called domain shifts, affect predictive performance and reliability and are a reason for the failure of such machine learning models in clinical application. Domain shifts can be caused, for example, by changes in disease prevalence (spreading of the disease or changes in the tested population), by refined RT-PCR testing procedures (sampling methods, laboratory procedures), or by virus mutations. Therefore, machine learning models for diagnosing COVID-19 or other diseases may not be reliable and may degrade in performance over time. We investigate whether domain shifts are present in COVID-19 datasets and how they affect machine learning methods. We further set out to estimate the mortality risk based on routinely acquired blood tests in a hospital setting throughout pandemics and under domain shifts. We reveal domain shifts by evaluating the models on a large-scale dataset with different assessment strategies, such as temporal validation. We present the novel finding that domain shifts strongly affect machine learning models for COVID-19 diagnosis and deteriorate their predictive performance and credibility. Therefore, frequent re-training and re-assessment are indispensable for robust models that enable clinical utility.
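
The effect of a domain shift can be made visible by contrasting a random split with a temporal split, as in the following sketch; the dataset, column names, and the gradient boosting classifier are assumptions for illustration, not the study's actual setup.

```python
# Minimal sketch of contrasting random vs. temporal validation to expose
# domain shift. File name, column names, and the split point are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("covid_blood_tests.csv", parse_dates=["sample_date"])  # hypothetical
features = [c for c in df.columns if c not in ("sample_date", "pcr_positive")]

def auc(train, test):
    clf = GradientBoostingClassifier(random_state=0)
    clf.fit(train[features], train["pcr_positive"])
    return roc_auc_score(test["pcr_positive"],
                         clf.predict_proba(test[features])[:, 1])

# Random split: optimistic because train and test share the same time period.
tr, te = train_test_split(df, test_size=0.3, random_state=0)
print("random split AUC:  ", round(auc(tr, te), 3))

# Temporal split: train on early pandemic data, test on later data.
cut = df["sample_date"].quantile(0.7)
print("temporal split AUC:", round(auc(df[df["sample_date"] <= cut],
                                       df[df["sample_date"] > cut]), 3))
```

A markedly lower AUC on the temporal split than on the random split is the signature of the domain shift reported above.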


Subject(s)
COVID-19, COVID-19/diagnosis, COVID-19 Testing, Hematologic Tests, Humans, Machine Learning, Reproducibility of Results