Results 1 - 18 of 18
1.
Anesth Analg ; 138(3): 645-654, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38364244

ABSTRACT

BACKGROUND: Transfusion of packed red blood cells (pRBCs) is still associated with risks. This study aims to determine whether renal function deterioration in the context of individual transfusions in individual patients can be predicted using machine learning. Recipient and donor characteristics linked to increased risk are identified. METHODS: This study was registered at ClinicalTrials.gov (NCT05466370) and was conducted after local ethics committee approval. We evaluated 3366 transfusion episodes from a university hospital between October 31, 2016, and August 31, 2020. Random forest models were tuned and trained via the Python auto-sklearn package to predict acute kidney injury (AKI). The models included recipients' and donors' demographic parameters and laboratory values, donor questionnaire results, and the age of the pRBCs. Bootstrapping on the test dataset was used to calculate the means and standard deviations of various performance metrics. RESULTS: AKI as defined by a modified Kidney Disease: Improving Global Outcomes (KDIGO) criterion developed after 17.4% of transfusion episodes (base rate). AKI could be predicted with an area under the receiver operating characteristic curve (AUC-ROC) of 0.73 ± 0.02. The negative (NPV) and positive (PPV) predictive values were 0.90 ± 0.02 and 0.32 ± 0.03, respectively. Feature importance and relative risk analyses revealed that donor features were far less important than recipient features for predicting posttransfusion AKI. CONCLUSIONS: Surprisingly, only the recipients' characteristics played a decisive role in AKI prediction. Based on this result, we speculate that the selection of a specific pRBC unit may have less influence on posttransfusion AKI than recipient characteristics.
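Illustrative sketch only (not the authors' auto-sklearn pipeline): the abstract describes bootstrapping the test set to report mean ± SD of AUC-ROC, PPV, and NPV. The sketch below shows that resampling idea with a plain scikit-learn random forest on synthetic data; all data and parameters are placeholders.

```python
# Minimal sketch: bootstrap a held-out test set to report mean +/- SD of
# AUC-ROC, PPV, and NPV. Synthetic data stand in for the transfusion episodes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3366, weights=[0.826], random_state=0)  # ~17.4% base rate
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

rng = np.random.default_rng(0)
aucs, ppvs, npvs = [], [], []
for _ in range(1000):  # bootstrap resamples of the test set
    idx = rng.integers(0, len(y_test), len(y_test))
    proba = model.predict_proba(X_test[idx])[:, 1]
    pred = (proba >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test[idx], pred, labels=[0, 1]).ravel()
    aucs.append(roc_auc_score(y_test[idx], proba))
    ppvs.append(tp / (tp + fp) if (tp + fp) else np.nan)
    npvs.append(tn / (tn + fn) if (tn + fn) else np.nan)

print(f"AUC-ROC {np.mean(aucs):.2f} ± {np.std(aucs):.2f}, "
      f"PPV {np.nanmean(ppvs):.2f} ± {np.nanstd(ppvs):.2f}, "
      f"NPV {np.nanmean(npvs):.2f} ± {np.nanstd(npvs):.2f}")
```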


Subject(s)
Acute Kidney Injury, Kidney, Humans, Acute Kidney Injury/diagnosis, Acute Kidney Injury/etiology, Acute Kidney Injury/therapy, Blood Transfusion, Retrospective Studies, Risk Assessment/methods, ROC Curve
2.
Anesth Analg ; 135(3): 524-531, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35977362

ABSTRACT

Machine learning (ML) and artificial intelligence (AI) are widely used in many fields of modern medicine. The first part of this narrative review gives a brief overview of the ML and AI methods used in patient blood management (PBM); the second part describes which fields of PBM have been analyzed with these methods so far. A total of 442 articles were identified by a literature search, and 47 of them were judged to be qualified articles that applied ML and AI techniques in PBM. We assembled the eligible articles to provide insights into the areas of application, the quality measures of these studies, and treatment outcomes, which can pave the way for further adoption of this promising technology and its possible use in routine clinical decision making. The topics investigated most often were the prediction of transfusion (30%), bleeding (28%), and laboratory studies (15%). Although a steadily increasing number of ML questions in PBM has been investigated over the last 3 years, there remains vast scientific potential for applying ML and AI to other fields of PBM.


Subject(s)
Artificial Intelligence, Machine Learning, Clinical Decision-Making, Humans
3.
Eur J Anaesthesiol ; 39(9): 766-773, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35852544

ABSTRACT

BACKGROUND: Massive perioperative allogeneic blood transfusion, that is, perioperative transfusion of more than 10 units of packed red blood cells (pRBC), is one of the main contributors to perioperative morbidity and mortality in cardiac surgery. Prediction of perioperative blood transfusion might enable preemptive treatment strategies to reduce risk and improve patient outcomes while reducing resource utilisation. We therefore investigated the precision of five different machine learning algorithms in predicting the occurrence of massive perioperative allogeneic blood transfusion in cardiac surgery at our centre. OBJECTIVE: Is it possible to predict massive perioperative allogeneic blood transfusion using machine learning? DESIGN: Retrospective, observational study. SETTING: Single adult cardiac surgery centre in Austria between 01 January 2010 and 31 December 2019. PATIENTS: Patients undergoing cardiac surgery. MAIN OUTCOME MEASURES: Primary outcome measures were the number of patients receiving at least 10 units of pRBC, the area under the receiver operating characteristics curve, the F1 score, and the negative (NPV) and positive predictive values (PPV) of the five machine learning algorithms used to predict massive perioperative allogeneic blood transfusion. RESULTS: A total of 3782 patients (1124 female) were enrolled, and 139 received at least 10 pRBC units. Using all features available at hospital admission, massive perioperative allogeneic blood transfusion could be excluded rather accurately. The best area under the curve was achieved by random forests: 0.810 (0.76 to 0.86), with a high NPV of 0.99. This remained true when using only the eight most important features [area under the curve 0.800 (0.75 to 0.85)]. CONCLUSION: Machine learning models may provide clinical decision support on which patients to focus perioperative preventive treatment, preemptively reducing massive perioperative allogeneic blood transfusion by predicting which patients are not at risk. TRIAL REGISTRATION: Johannes Kepler University Ethics Committee Study Number 1091/2021, ClinicalTrials.gov identifier NCT04856618.
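A hedged sketch of the "eight most important features" analysis mentioned above: rank features by random-forest importance and retrain on the top eight, comparing AUC-ROC. This is not the study code; the data are synthetic placeholders with a similarly rare outcome.

```python
# Illustrative only: retrain on the top-8 features ranked by RF importance
# and compare discrimination with the full-feature model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3782, n_features=40, n_informative=10,
                           weights=[0.963], random_state=1)  # rare outcome, ~3.7%
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

full = RandomForestClassifier(n_estimators=400, random_state=1).fit(X_tr, y_tr)
auc_full = roc_auc_score(y_te, full.predict_proba(X_te)[:, 1])

top8 = np.argsort(full.feature_importances_)[::-1][:8]       # indices of the top-8 features
small = RandomForestClassifier(n_estimators=400, random_state=1).fit(X_tr[:, top8], y_tr)
auc_top8 = roc_auc_score(y_te, small.predict_proba(X_te[:, top8])[:, 1])

print(f"AUC all features: {auc_full:.3f} | AUC top-8 features: {auc_top8:.3f}")
```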


Subject(s)
Cardiac Surgical Procedures, Hematopoietic Stem Cell Transplantation, Adult, Blood Transfusion, Cardiac Surgical Procedures/adverse effects, Female, Humans, Machine Learning, Retrospective Studies
4.
J Med Syst ; 46(5): 23, 2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35348909

ABSTRACT

Many previous studies claim to have developed machine learning models that diagnose COVID-19 from blood tests. However, we hypothesize that changes in the underlying distribution of the data, so-called domain shifts, affect predictive performance and reliability and are a reason for the failure of such machine learning models in clinical application. Domain shifts can be caused, for example, by changes in disease prevalence (spreading or tested population), by refined RT-PCR testing procedures (way of taking samples, laboratory procedures), or by virus mutations. Therefore, machine learning models for diagnosing COVID-19 or other diseases may not be reliable and may degrade in performance over time. We investigate whether domain shifts are present in COVID-19 datasets and how they affect machine learning methods. We further set out to estimate mortality risk based on routinely acquired blood tests in a hospital setting throughout pandemics and under domain shifts. We reveal domain shifts by evaluating the models on a large-scale dataset with different assessment strategies, such as temporal validation. We present the novel finding that domain shifts strongly affect machine learning models for COVID-19 diagnosis and deteriorate their predictive performance and credibility. Therefore, frequent re-training and re-assessment are indispensable for robust models that enable clinical utility.
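A minimal sketch of the temporal-validation strategy named in the abstract (not the study code, column names are placeholders): train on the earliest time window and evaluate on successive later windows; a falling AUC over time is consistent with a domain shift.

```python
# Sketch of temporal validation under a simulated domain shift.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 6000
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"lab_{i}" for i in range(5)])
df["date"] = pd.date_range("2020-03-01", periods=n, freq="h")
drift = np.linspace(0, 1.5, n)                                  # simulated shift over time
df["covid_pcr"] = (df["lab_0"] + drift * df["lab_1"] + rng.normal(size=n) > 1).astype(int)

df["quarter"] = df["date"].dt.to_period("Q")
quarters = sorted(df["quarter"].unique())
train = df[df["quarter"] == quarters[0]]
features = [c for c in df.columns if c.startswith("lab_")]
model = GradientBoostingClassifier(random_state=0).fit(train[features], train["covid_pcr"])

for q in quarters[1:]:                                          # temporal validation
    test = df[df["quarter"] == q]
    auc = roc_auc_score(test["covid_pcr"], model.predict_proba(test[features])[:, 1])
    print(f"{q}: AUC = {auc:.3f}")
```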


Subject(s)
COVID-19, COVID-19/diagnosis, COVID-19 Testing, Hematologic Tests, Humans, Machine Learning, Reproducibility of Results
5.
Wien Med Wochenschr ; 172(9-10): 211-219, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34185216

ABSTRACT

BACKGROUND: In December 2019, the new viral infection coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), emerged. Simple clinical risk scores may improve the management of COVID-19 patients. The aim of this pilot study was therefore to evaluate the quick Sequential Organ Failure Assessment (qSOFA) score, which is well established for other diseases, as an early risk assessment tool to predict a severe course of COVID-19. METHODS: We retrospectively analyzed data from adult COVID-19 patients hospitalized between March and July 2020. Critical disease progression was defined as admission to the intensive care unit (ICU) or death. RESULTS: Of 64 COVID-19 patients, 33% (21/64) had critical disease progression, of whom 13 had to be transferred to the ICU. The COVID-19-associated mortality rate was 20%, increasing to 39% after ICU admission. All patients without critical progression had a qSOFA score ≤ 1 at admission. Among patients with critical progression, only 14% (3/21) had a qSOFA score ≥ 2 at admission (p = 0.023) and only 20% (3/15) when measured directly before critical progression, whereas 95% (20/21) had impaired oxygen saturation (SO2) at admission requiring oxygen supplementation. CONCLUSION: A low qSOFA score cannot be used to assume short-term stable or noncritical disease status in COVID-19.
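For orientation, a small sketch of the qSOFA score as commonly defined (one point each for respiratory rate ≥ 22/min, systolic blood pressure ≤ 100 mmHg, and altered mentation, i.e. GCS < 15, with ≥ 2 flagging high risk). The thresholds follow the standard qSOFA definition, not code from this study.

```python
# qSOFA as commonly defined; a score >= 2 is the usual high-risk cut-off.
def qsofa(respiratory_rate: float, systolic_bp: float, gcs: int) -> int:
    score = 0
    score += respiratory_rate >= 22     # tachypnoea
    score += systolic_bp <= 100         # hypotension
    score += gcs < 15                   # altered mentation
    return int(score)

if __name__ == "__main__":
    # Example patient at admission: RR 24/min, SBP 95 mmHg, GCS 15
    s = qsofa(24, 95, 15)
    print(f"qSOFA = {s}, high risk: {s >= 2}")
```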


Subject(s)
COVID-19, Sepsis, Adult, COVID-19/diagnosis, Hospital Mortality, Humans, Intensive Care Units, Organ Dysfunction Scores, Pilot Projects, Prognosis, Retrospective Studies, SARS-CoV-2
6.
Crit Care ; 25(1): 175, 2021 05 25.
Article in English | MEDLINE | ID: mdl-34034782

ABSTRACT

BACKGROUND: Uncertainty about the optimal respiratory support strategies in critically ill COVID-19 patients is widespread. While the risks and benefits of noninvasive techniques versus early invasive mechanical ventilation (IMV) are intensely debated, actual evidence is lacking. We sought to assess the risks and benefits of different respiratory support strategies employed in intensive care units during the first months of the COVID-19 pandemic with respect to intubation and intensive care unit (ICU) mortality rates. METHODS: Subanalysis of a prospective, multinational registry of critically ill COVID-19 patients. Patients were subclassified into standard oxygen therapy ≥10 L/min (SOT), high-flow oxygen therapy (HFNC), noninvasive positive-pressure ventilation (NIV), and early IMV, according to the respiratory support strategy employed on the day of admission to the ICU. Propensity score matching was performed to ensure comparability between groups. RESULTS: Initially, 1421 patients were assessed for possible study inclusion. Of these, 351 patients (85 SOT, 87 HFNC, 87 NIV, and 92 IMV) remained eligible for full analysis after propensity score matching. Overall, 55% of patients initially receiving noninvasive respiratory support required IMV. The intubation rate was lower in patients initially ventilated with HFNC and NIV compared with those who received SOT (SOT: 64%, HFNC: 52%, NIV: 49%, p = 0.025). Compared with the other respiratory support strategies, NIV was associated with higher overall ICU mortality (SOT: 18%, HFNC: 20%, NIV: 37%, IMV: 25%, p = 0.016). CONCLUSION: In this cohort of critically ill patients with COVID-19, a trial of HFNC appeared to be the most balanced initial respiratory support strategy, given the reduced intubation rate and comparable ICU mortality rate. Nonetheless, considering the uncertainty and stress associated with the COVID-19 pandemic, SOT and early IMV represented safe initial respiratory support strategies. The presented findings, in agreement with classic ARDS literature, suggest that NIV should be avoided whenever possible due to the elevated ICU mortality risk.
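A rough sketch of the propensity score matching technique named in the abstract, using 1:1 greedy nearest-neighbour matching with a caliper, which is a common implementation choice. This is not the registry's analysis code; all variables are synthetic placeholders.

```python
# 1:1 greedy nearest-neighbour propensity score matching with a caliper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1421
X = rng.normal(size=(n, 4))                      # baseline covariates (placeholders)
treated = (X[:, 0] + rng.normal(size=n) > 0.5)   # e.g. one support strategy vs. comparator

ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

caliper = 0.2 * np.std(np.log(ps / (1 - ps)))    # common choice: 0.2 SD of the logit
controls = list(np.flatnonzero(~treated))
pairs = []
for t in np.flatnonzero(treated):
    if not controls:
        break
    dist = np.abs(ps[controls] - ps[t])
    j = int(np.argmin(dist))
    if dist[j] <= caliper:                       # accept the match only within the caliper
        pairs.append((t, controls.pop(j)))

print(f"Matched {len(pairs)} treated/control pairs out of {treated.sum()} treated patients")
```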


Subject(s)
COVID-19/therapy, Critical Illness/therapy, Respiratory Therapy/methods, Respiratory Therapy/statistics & numerical data, Aged, COVID-19/mortality, Critical Illness/mortality, Disease Progression, Female, Hospital Mortality, Humans, Intensive Care Units, Male, Middle Aged, Prospective Studies, Registries, Retrospective Studies, Time Factors, Treatment Outcome
7.
Transfusion ; 60(9): 1977-1986, 2020 09.
Article in English | MEDLINE | ID: mdl-32596877

ABSTRACT

BACKGROUND: The ability to predict transfusions arising during a hospital admission might enable more economical blood supply management and might furthermore increase patient safety by ensuring a sufficient stock of red blood cells (RBCs) for a specific patient. We therefore investigated the precision of four different machine learning-based prediction algorithms in predicting transfusion, massive transfusion, and the number of transfusions in patients admitted to a hospital. STUDY DESIGN AND METHODS: This was a retrospective, observational study in three adult tertiary care hospitals in Western Australia between January 2008 and June 2017. Primary outcome measures for the classification tasks were the area under the receiver operating characteristics curve, the F1 score, and the average precision of the four machine learning algorithms used: neural networks (NNs), logistic regression (LR), random forests (RFs), and gradient boosting (GB) trees. RESULTS: Using our four predictive models, transfusion of at least 1 unit of RBCs could be predicted rather accurately (sensitivity for NN, LR, RF, and GB: 0.898, 0.894, 0.584, and 0.872, respectively; specificity: 0.958, 0.966, 0.964, and 0.965). Using the four methods to predict massive transfusion was less successful (sensitivity for NN, LR, RF, and GB: 0.780, 0.721, 0.002, and 0.797, respectively; specificity: 0.994, 0.995, 0.993, and 0.995). As a consequence, prediction of the total number of packed RBC units transfused was also rather inaccurate. CONCLUSION: This study demonstrates that the need for in-hospital transfusion can be forecasted reliably; however, the number of RBC units transfused during a hospital stay is more difficult to predict.
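A sketch of the four-model comparison described above (neural network, logistic regression, random forest, gradient boosting) reporting sensitivity, specificity, and AUC-ROC. Synthetic data only; this is not the study pipeline.

```python
# Compare NN, LR, RF, and GB on sensitivity, specificity, and AUC-ROC.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=20000, n_features=30, weights=[0.8], random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

models = {
    "NN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=3)),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=3),
    "GB": GradientBoostingClassifier(random_state=3),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, proba >= 0.5).ravel()
    print(f"{name}: sensitivity={tp/(tp+fn):.3f} specificity={tn/(tn+fp):.3f} "
          f"AUC={roc_auc_score(y_te, proba):.3f}")
```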


Subject(s)
Computer-Assisted Decision Making, Hospitalization, Machine Learning, Adult, Blood Transfusion, Female, Humans, Male, Predictive Value of Tests, Retrospective Studies, Western Australia
8.
Wien Klin Wochenschr ; 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755419

ABSTRACT

Critical illness is an exquisitely time-sensitive condition and follows a disease continuum, which always starts before admission to the intensive care unit (ICU), in the majority of cases even before hospital admission. Reflecting the common practice in many healthcare systems that critical care is mainly provided in the confined areas of an ICU, any delay in ICU admission of critically ill patients is associated with increased morbidity and mortality. However, if appropriate critical care interventions are provided before ICU admission, this association is not observed. Emergency critical care refers to critical care provided outside of the ICU. It encompasses the delivery of critical care interventions to and monitoring of patients at the place and time closest to the onset of critical illness as well as during transfer to the ICU. Thus, emergency critical care covers the most time-sensitive phase of critical illness and constitutes one missing link in the chain of survival of the critically ill patient. Emergency critical care is delivered whenever and wherever critical illness occurs such as in the pre-hospital setting, before and during inter-hospital transfers of critically ill patients, in the emergency department, in the operating theatres, and on hospital wards. By closing the management gap between onset of critical illness and ICU admission, emergency critical care improves patient safety and can avoid early deaths, reverse mild-to-moderate critical illness, avoid ICU admission, attenuate the severity of organ dysfunction, shorten ICU length of stay, and reduce short- and long-term mortality of critically ill patients. Future research is needed to identify effective models to implement emergency critical care systems in different healthcare systems.

9.
J Clin Med ; 12(2), 2023 Jan 16.
Article in English | MEDLINE | ID: mdl-36675651

ABSTRACT

The first clinical impression of emergency patients conveys a myriad of information that has been incompletely elucidated. In this prospective, observational study, the value of the first clinical impression, assessed by 18 observations, to predict the need for timely medical attention, the need for hospital admission, and in-hospital mortality in 1506 adult patients presenting to the triage desk of an emergency department was determined. Machine learning models were used for statistical analysis. The first clinical impression could predict the need for timely medical attention [area under the receiver operating characteristic curve (AUC ROC), 0.73; p = 0.01] and hospital admission (AUC ROC, 0.8; p = 0.004), but not in-hospital mortality (AUC ROC, 0.72; p = 0.13). The five most important features informing the prediction models were age, ability to walk, admission by emergency medical services, lying on a stretcher, breathing pattern, and bringing a suitcase. The inability to walk at triage presentation was highly predictive of both the need for timely medical attention (p < 0.001) and the need for hospital admission (p < 0.001). In conclusion, the first clinical impression of emergency patients presenting to the triage desk can predict the need for timely medical attention and hospital admission. Important components of the first clinical impression were identified.

10.
Eur J Emerg Med ; 30(4): 252-259, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37115946

ABSTRACT

BACKGROUND AND IMPORTANCE: Guidelines recommend that hospital emergency teams locally validate criteria for termination of cardiopulmonary resuscitation in patients with in-hospital cardiac arrest (IHCA). OBJECTIVE: To determine the value of a machine learning algorithm to predict failure to achieve return of spontaneous circulation (ROSC) and unfavourable functional outcome from IHCA using only data readily available at emergency team arrival. DESIGN: Retrospective cohort study. SETTING AND PARTICIPANTS: Adults who experienced an IHCA and were attended to by the emergency team. OUTCOME MEASURES AND ANALYSIS: Demographic and clinical data typically available at the arrival of the emergency team were extracted from the institutional IHCA database. In addition, outcome data, including the Cerebral Performance Category (CPC) score at hospital discharge, were collected. A model selection procedure for random forests with a hyperparameter search was employed to develop two classification algorithms to predict failure to achieve ROSC and unfavourable (CPC 3-5) functional outcome. MAIN RESULTS: Six hundred thirty patients were included, of whom 390 failed to achieve ROSC (61.9%). The final classification model to predict failure to achieve ROSC had an area under the receiver operating characteristic curve of 0.9 [95% confidence interval (CI), 0.89-0.9], a balanced accuracy of 0.77 (95% CI, 0.75-0.79), an F1-score of 0.78 (95% CI, 0.76-0.79), a positive predictive value of 0.88 (0.86-0.91), a negative predictive value of 0.61 (0.6-0.63), a sensitivity of 0.69 (0.66-0.72), and a specificity of 0.84 (0.8-0.88). Five hundred fifty-nine subjects experienced an unfavourable outcome (88.7%). The final classification model to predict unfavourable functional outcome from IHCA at hospital discharge had an area under the receiver operating characteristic curve of 0.93 (95% CI, 0.92-0.93), a balanced accuracy of 0.59 (95% CI, 0.57-0.61), an F1-score of 0.94 (95% CI, 0.94-0.95), a positive predictive value of 0.91 (0.9-0.91), a negative predictive value of 0.57 (0.48-0.66), a sensitivity of 0.98 (0.97-0.99), and a specificity of 0.2 (0.16-0.24). CONCLUSION: Using data readily available at emergency team arrival, machine learning algorithms had high predictive power to forecast failure to achieve ROSC and unfavourable functional outcomes from IHCA while cardiopulmonary resuscitation was still ongoing; however, the positive predictive value of both models was not high enough to allow early termination of resuscitation efforts.
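A hedged sketch of the model-selection step described above: a cross-validated hyperparameter search for a random forest scored by balanced accuracy, followed by evaluation on a held-out set. Data, class balance, and search ranges are illustrative only, not those of the IHCA cohort.

```python
# Random-forest hyperparameter search with cross-validation (illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = make_classification(n_samples=630, n_features=20, weights=[0.38], random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=5)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=5),
    param_distributions={
        "n_estimators": [200, 400, 800],
        "max_depth": [None, 4, 8, 16],
        "min_samples_leaf": [1, 5, 10],
        "class_weight": [None, "balanced"],
    },
    n_iter=20, scoring="balanced_accuracy", cv=5, random_state=5,
)
search.fit(X_tr, y_tr)

best = search.best_estimator_
proba = best.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)
print(search.best_params_)
print(f"AUC={roc_auc_score(y_te, proba):.2f} "
      f"balanced accuracy={balanced_accuracy_score(y_te, pred):.2f} "
      f"F1={f1_score(y_te, pred):.2f}")
```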


Subject(s)
Cardiopulmonary Resuscitation, Heart Arrest, Adult, Humans, Retrospective Studies, Heart Arrest/diagnosis, Heart Arrest/therapy, Cardiopulmonary Resuscitation/methods, Algorithms, Hospitals
11.
Eur J Emerg Med ; 30(6): 408-416, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37578440

ABSTRACT

AIMS: Patient admission is a decision relying on sparsely available data. This study aims to provide prediction models for discharge versus admission for ward observation or intensive care, and for 30-day mortality, in patients triaged with the Manchester Triage System. METHODS: This is a single-centre, observational, retrospective cohort study using data available within ten minutes of patient presentation at the interdisciplinary emergency department of the Kepler University Hospital, Linz, Austria. We trained machine learning models, including random forests and neural networks, individually to predict discharge versus ward observation or intensive care admission, and 30-day mortality. For analysis of feature relevance, we used permutation feature importance. RESULTS: A total of 58,323 adult patients between 1 December 2015 and 31 August 2020 were included. Neural networks and random forests predicted admission to ward observation with an AUC-ROC of 0.842 ± 0.00, with the most important features being age and chief complaint. For admission to intensive care, the models had an AUC-ROC of 0.819 ± 0.002, with the most important features being the Manchester Triage category and heart rate, and for 30-day mortality an AUC-ROC of 0.925 ± 0.001. The most important features for the prediction of 30-day mortality were age and general ward admission. CONCLUSION: Machine learning can predict discharge versus admission to general wards or intensive care and can inform about the risk of 30-day mortality for patients in the emergency department.
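A short sketch of permutation feature importance, the feature-relevance method named in the abstract, using scikit-learn's permutation_importance on a held-out set. Data and feature names are placeholders, not the triage dataset.

```python
# Permute each feature on a held-out set and measure the drop in AUC-ROC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "heart_rate", "triage_category", "resp_rate", "spo2", "temp"]
X, y = make_classification(n_samples=5000, n_features=len(feature_names),
                           n_informative=4, random_state=9)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=9)

model = RandomForestClassifier(n_estimators=300, random_state=9).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                                n_repeats=20, random_state=9)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{feature_names[i]:>16}: {result.importances_mean[i]:.4f} "
          f"± {result.importances_std[i]:.4f}")
```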


Subject(s)
Hospitalization, Triage, Adult, Humans, Retrospective Studies, Hospital Emergency Service, Machine Learning
12.
Comput Biol Med ; 150: 106086, 2022 11.
Article in English | MEDLINE | ID: mdl-36191392

ABSTRACT

There have been several attempts to quantify the diagnostic distortion caused by algorithms that perform low-dimensional electrocardiogram (ECG) representation. However, there is no universally accepted quantitative measure that allows the diagnostic distortion arising from denoising, compression, and ECG beat representation algorithms to be determined. Hence, the main objective of this work was to develop a framework to enable biomedical engineers to efficiently and reliably assess diagnostic distortion resulting from ECG processing algorithms. We propose a semiautomatic framework for quantifying the diagnostic resemblance between original and denoised/reconstructed ECGs. Evaluation of the ECG must be done manually, but is kept simple and does not require medical training. In a case study, we quantified the agreement between raw and reconstructed (denoised) ECG recordings by means of kappa-based statistical tests. The proposed methodology takes into account that the observers may agree by chance alone. Consequently, for the case study, our statistical analysis reports the "true", beyond-chance agreement in contrast to other, less robust measures, such as simple percent agreement calculations. Our framework allows efficient assessment of clinically important diagnostic distortion, a potential side effect of ECG (pre-)processing algorithms. Accurate quantification of a possible diagnostic loss is critical to any subsequent ECG signal analysis, for instance, the detection of ischemic ST episodes in long-term ECG recordings.
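To illustrate the "beyond-chance" agreement idea mentioned above, the sketch below contrasts simple percent agreement with Cohen's kappa, one common kappa statistic that corrects for agreement expected by chance alone. The two rating vectors are made-up placeholders (e.g. diagnoses from raw versus denoised ECGs), not data from the study.

```python
# Percent agreement vs. Cohen's kappa on two synthetic rating vectors.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(11)
labels = ["normal", "ischemia", "arrhythmia"]
ratings_raw = rng.choice(labels, size=200, p=[0.7, 0.2, 0.1])
# Denoised-ECG ratings: mostly identical, with a few random disagreements.
ratings_denoised = ratings_raw.copy()
flip = rng.random(200) < 0.1
ratings_denoised[flip] = rng.choice(labels, size=flip.sum())

percent_agreement = float(np.mean(ratings_raw == ratings_denoised))
kappa = cohen_kappa_score(ratings_raw, ratings_denoised)
print(f"percent agreement = {percent_agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```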


Subject(s)
Data Compression, Computer-Assisted Signal Processing, Electrocardiography/methods, Algorithms, Biomedical Engineering
13.
J Patient Saf ; 18(5): 494-498, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35026794

ABSTRACT

OBJECTIVES: The ability to predict in-hospital mortality from data available at hospital admission would identify patients at risk and thereby assist hospital-wide patient safety initiatives. Our aim was to use modern machine learning tools to predict in-hospital mortality from standardized data sets available at hospital admission. METHODS: This was a retrospective, observational study in 3 adult tertiary care hospitals in Western Australia between January 2008 and June 2017. Primary outcome measures were the area under the receiver operating characteristics curve, the F1 score, and the average precision of the 4 machine learning algorithms used: logistic regression, neural networks, random forests, and gradient boosting trees. RESULTS: Using our 4 predictive models, in-hospital mortality could be predicted satisfactorily (areas under the curve for neural networks, logistic regression, random forests, and gradient boosting trees: 0.932, 0.936, 0.935, and 0.935, respectively), with moderate F1 scores: 0.378, 0.367, 0.380, and 0.380, respectively. Average precision values were 0.312, 0.321, 0.334, and 0.323, respectively. It remains unknown whether additional features might improve our models; however, this would require additional data acquisition efforts in daily clinical practice. CONCLUSIONS: This study demonstrates that, using only a limited, standardized data set, in-hospital mortality can be predicted satisfactorily at the time of hospital admission. More parameters describing patients' health are likely needed to improve our model.


Subject(s)
Hospitalization, Machine Learning, Adult, Hospital Mortality, Hospitals, Humans, Retrospective Studies, Risk Assessment
14.
JMIR Med Inform ; 10(10): e38557, 2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36269654

ABSTRACT

Electronic health records (EHRs) have been successfully used in data science and machine learning projects. However, most of these data are collected for clinical use rather than for retrospective analysis. This means that researchers typically face many different issues when attempting to access and prepare the data for secondary use. We aimed to investigate how raw EHRs can be accessed and prepared in retrospective data science projects in a disciplined, effective, and efficient way. We report our experience and findings from a large-scale data science project analyzing routinely acquired retrospective data from the Kepler University Hospital in Linz, Austria. The project involved data collected from more than 150,000 patients over a period of 10 years. It included diverse data modalities, such as static demographic data, irregularly acquired laboratory test results, regularly sampled vital signs, and high-frequency physiological waveform signals. Raw medical data can be corrupted in many unexpected ways that demand thorough manual inspection and highly individualized data cleaning solutions. We present a general data preparation workflow, which was shaped in the course of our project and consists of the following 7 steps: obtain a rough overview of the available EHR data, define clinically meaningful labels for supervised learning, extract relevant data from the hospital's data warehouses, match data extracted from different sources, deidentify them, detect errors and inconsistencies therein through careful exploratory analysis, and implement a suitable data processing pipeline in actual code. Only a few of the data preparation issues encountered in our project were addressed by the generic medical data preprocessing tools that have been proposed recently. Instead, highly individualized solutions for the specific data used in one's own research seem inevitable. We believe that the proposed workflow can serve as guidance for practitioners, helping them to identify and address potential problems early and avoid common pitfalls.
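A skeleton of the seven-step workflow summarised above, written as plain function stubs so the order of steps is explicit. Function names and bodies are illustrative placeholders only, not the project's actual code.

```python
# Seven-step EHR preparation workflow as a stub pipeline (illustrative).
import pandas as pd

def overview(sources: list[str]) -> None:
    """Step 1: get a rough overview of the available EHR data."""

def define_labels(df: pd.DataFrame) -> pd.Series:
    """Step 2: define clinically meaningful labels for supervised learning."""
    return pd.Series(dtype="int64")

def extract(warehouse_uri: str) -> pd.DataFrame:
    """Step 3: extract relevant data from the hospital's data warehouses."""
    return pd.DataFrame()

def match_sources(frames: list[pd.DataFrame]) -> pd.DataFrame:
    """Step 4: match data extracted from different sources (e.g. on patient/case IDs)."""
    return pd.concat(frames, axis=1)

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    """Step 5: remove or pseudonymise identifying information."""
    return df.drop(columns=["name", "address"], errors="ignore")

def exploratory_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Step 6: detect errors and inconsistencies through exploratory analysis."""
    return df.drop_duplicates()

def build_pipeline(df: pd.DataFrame) -> pd.DataFrame:
    """Step 7: implement the actual, reproducible processing pipeline in code."""
    return df

def prepare(warehouse_uris: list[str]) -> pd.DataFrame:
    frames = [extract(uri) for uri in warehouse_uris]
    return build_pipeline(exploratory_checks(deidentify(match_sources(frames))))
```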

15.
Lab Med ; 52(2): 146-149, 2021 Mar 15.
Article in English | MEDLINE | ID: mdl-33340312

ABSTRACT

OBJECTIVE: The diagnosis of COVID-19 is based on the detection of SARS-CoV-2 in respiratory secretions, blood, or stool. Currently, reverse transcription polymerase chain reaction (RT-PCR) is the most commonly used method to test for SARS-CoV-2. METHODS: In this retrospective cohort analysis, we evaluated whether machine learning could exclude SARS-CoV-2 infection using routinely available laboratory values. A random forest algorithm with 28 unique features was trained to predict the RT-PCR results. RESULTS: Of 12,848 patients undergoing SARS-CoV-2 testing, routine blood tests were simultaneously performed in 1357 patients. The machine learning model could predict SARS-CoV-2 test results with an accuracy of 86% and an area under the receiver operating characteristic curve of 0.74. CONCLUSION: Machine learning methods can reliably predict a negative SARS-CoV-2 RT-PCR test result using standard blood tests.
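Since the conclusion emphasises ruling out infection, here is a hedged sketch (not the study's code) of one common way to operationalise that: on a validation split, choose the highest probability cut-off that still achieves a target sensitivity, so that predictions below the cut-off exclude infection with high NPV. Data are synthetic placeholders.

```python
# Pick a rule-out threshold that keeps sensitivity high, then report NPV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1357, n_features=28, weights=[0.9], random_state=13)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=13)

model = RandomForestClassifier(n_estimators=300, random_state=13).fit(X_tr, y_tr)
proba = model.predict_proba(X_val)[:, 1]

fpr, tpr, thresholds = roc_curve(y_val, proba)
cutoff = thresholds[np.argmax(tpr >= 0.95)]   # highest cut-off still reaching 95% sensitivity

pred_neg = proba < cutoff                     # below the cut-off = ruled out
npv = np.mean(y_val[pred_neg] == 0) if pred_neg.any() else float("nan")
print(f"cut-off={cutoff:.3f}, ruled out {pred_neg.mean():.0%} of patients, NPV={npv:.3f}")
```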


Subject(s)
COVID-19/blood, Machine Learning, Adult, Aged, Aged 80 and over, COVID-19 Nucleic Acid Testing, Female, Humans, Male, Middle Aged, Retrospective Studies, SARS-CoV-2/isolation & purification, Sensitivity and Specificity
16.
ESC Heart Fail ; 8(1): 37-46, 2021 02.
Article in English | MEDLINE | ID: mdl-33350605

ABSTRACT

AIMS: COVID-19, a respiratory viral disease causing severe pneumonia, also affects the heart and other organs. Whether its cardiac involvement is a specific feature consisting of myocarditis, or simply due to microvascular injury and systemic inflammation, is as yet unclear and presently debated. Because myocardial injury is also common in other kinds of pneumonia, we investigated and compared its occurrence in severe pneumonias due to COVID-19 and other causes. METHODS AND RESULTS: We analysed data from 156 critically ill patients requiring mechanical ventilation in four European tertiary hospitals, including all n = 76 COVID-19 patients with a severe disease course requiring at least ventilatory support, matched to n = 76 patients from a retrospective consecutive cohort of severe pneumonias of other origin (matched for age, gender, and type of ventilator therapy). Compared with the non-COVID-19 group, mortality (COVID-19 = 38.2% vs. non-COVID-19 = 51.3%, P = 0.142) and impairment of systolic function were not significantly different. Surprisingly, myocardial injury was even more frequent in the non-COVID-19 group (96.4% vs. 78.1%, P = 0.004). Although inflammatory activity [C-reactive protein (CRP) and interleukin-6] did not differ, d-dimer levels and thromboembolic incidence (COVID-19 = 23.7% vs. non-COVID-19 = 5.3%, P = 0.002), driven by pulmonary embolism rates (COVID-19 = 17.1% vs. non-COVID-19 = 2.6%, P = 0.005), were higher in COVID-19. CONCLUSIONS: Myocardial injury was frequent in severe COVID-19 requiring mechanical ventilation, but still less frequent than in similarly severe pneumonias of other origin, indicating that cardiac involvement may not be a specific feature of COVID-19. While mortality was also similar, COVID-19 is characterized by increased thrombogenicity and high pulmonary embolism rates.


Subject(s)
COVID-19/complications, Cardiomyopathies/etiology, Acute Disease, Aged, COVID-19/mortality, COVID-19/therapy, Cardiomyopathies/mortality, Case-Control Studies, Female, Humans, Intensive Care Units/statistics & numerical data, Length of Stay/statistics & numerical data, Male, Myocarditis/etiology, Myocarditis/mortality, Pneumonia/complications, Artificial Respiration, Retrospective Studies, Tertiary Care Centers