Results 1 - 20 of 37
1.
Pancreatology ; 22(1): 43-50, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34690046

ABSTRACT

BACKGROUND: Acute pancreatitis (AP) is one of the most common causes of gastrointestinal-related hospitalizations in the United States. Severe AP (SAP) is associated with a mortality rate of nearly 30% and is distinguished from milder forms of AP. Risk stratification to identify SAP cases needing inpatient treatment is an important aspect of AP diagnosis. METHODS: We developed machine learning algorithms to predict which patients presenting with AP would require treatment for SAP. Three models were developed using logistic regression, neural networks, and XGBoost. Models were assessed in terms of area under the receiver operating characteristic (AUROC) and compared to the Harmless Acute Pancreatitis Score (HAPS) and Bedside Index for Severity in Acute Pancreatitis (BISAP) scores for AP risk stratification. RESULTS: 61,894 patients were used to train and test the machine learning models. With an AUROC value of 0.921, the model developed using XGBoost outperformed the logistic regression and neural network-based models. The XGBoost model also achieved a higher AUROC than both HAPS and BISAP for identifying patients who would be diagnosed with SAP. CONCLUSIONS: Machine learning may be able to improve the accuracy of AP risk stratification methods and allow for more timely treatment and initiation of interventions.
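The AUROC values compared above reduce to a simple rank statistic: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch of that computation (illustrative only, not the study's code; the scores below are invented):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank interpretation:
    P(score of a random positive > score of a random negative),
    counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for four patients (1 = severe AP, 0 = mild AP).
print(auroc([0.92, 0.81, 0.33, 0.25], [1, 1, 0, 0]))  # perfect ranking -> 1.0
```

The quadratic pairwise loop is fine for illustration; production code would use a sorted-rank formulation such as `sklearn.metrics.roc_auc_score`.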


Subjects
Machine Learning, Pancreatitis/diagnosis, Acute Disease, Adolescent, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Prognosis, ROC Curve, Retrospective Studies, Severity of Illness Index
2.
BMC Med Inform Decis Mak ; 20(1): 276, 2020 10 27.
Article in English | MEDLINE | ID: mdl-33109167

ABSTRACT

BACKGROUND: Severe sepsis and septic shock are among the leading causes of death in the United States and sepsis remains one of the most expensive conditions to diagnose and treat. Accurate early diagnosis and treatment can reduce the risk of adverse patient outcomes, but the efficacy of traditional rule-based screening methods is limited. The purpose of this study was to develop and validate a machine learning algorithm (MLA) for severe sepsis prediction up to 48 h before onset using a diverse patient dataset. METHODS: Retrospective analysis was performed on datasets composed of de-identified electronic health records collected between 2001 and 2017, including 510,497 inpatient and emergency encounters from 461 health centers collected between 2001 and 2015, and 20,647 inpatient and emergency encounters collected in 2017 from a community hospital. MLA performance was compared to commonly used disease severity scoring systems and was evaluated at 0, 4, 6, 12, 24, and 48 h prior to severe sepsis onset. RESULTS: 270,438 patients were included in analysis. At time of onset, the MLA demonstrated an AUROC of 0.931 (95% CI 0.914, 0.948) and a diagnostic odds ratio (DOR) of 53.105 on a testing dataset, exceeding MEWS (0.725, P < .001; DOR 4.358), SOFA (0.716; P < .001; DOR 3.720), and SIRS (0.655; P < .001; DOR 3.290). For prediction 48 h prior to onset, the MLA achieved an AUROC of 0.827 (95% CI 0.806, 0.848) on a testing dataset. On an external validation dataset, the MLA achieved an AUROC of 0.948 (95% CI 0.942, 0.954) at the time of onset, and 0.752 at 48 h prior to onset. CONCLUSIONS: The MLA accurately predicts severe sepsis onset up to 48 h in advance using only readily available vital signs extracted from the existing patient electronic health records. Relevant implications for clinical practice include improved patient outcomes from early severe sepsis detection and treatment.
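The diagnostic odds ratios (DORs) reported above can be recovered from sensitivity and specificity alone. A hedged sketch of the arithmetic (the input numbers below are made up for illustration, not taken from the study):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (TP/FN) / (FP/TN), rewritten in terms of the two
    likelihood ratios: LR+ divided by LR-."""
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
    return lr_pos / lr_neg

# A hypothetical screening tool with 80% sensitivity and 90% specificity:
print(round(diagnostic_odds_ratio(0.80, 0.90), 1))  # -> 36.0
```

Because the DOR combines both error rates into one figure, it lets a single number summarize how much better the MLA separates cases from non-cases than MEWS, SOFA, or SIRS at the same time point.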


Subjects
Decision Support Systems, Clinical, Machine Learning/standards, Sepsis/diagnosis, Algorithms, Datasets as Topic, Female, Forecasting, Hospital Mortality, Humans, Intensive Care Units, Male, Predictive Value of Tests, Reproducibility of Results, Retrospective Studies, Sepsis/mortality, Severity of Illness Index, Time Factors, Time-to-Treatment
3.
Nat Med ; 12(7): 793-800, 2006 Jul.
Article in English | MEDLINE | ID: mdl-16799557

ABSTRACT

Vascular endothelial growth factor (VEGF) exerts crucial functions during pathological angiogenesis and normal physiology. We observed increased hematocrit (60-75%) after high-grade inhibition of VEGF by diverse methods, including adenoviral expression of soluble VEGF receptor (VEGFR) ectodomains, recombinant VEGF Trap protein and the VEGFR2-selective antibody DC101. Increased production of red blood cells (erythrocytosis) occurred in both mouse and primate models, and was associated with near-complete neutralization of VEGF corneal micropocket angiogenesis. High-grade inhibition of VEGF induced hepatic synthesis of erythropoietin (Epo, encoded by Epo) >40-fold through a HIF-1alpha-independent mechanism, in parallel with suppression of renal Epo mRNA. Studies using hepatocyte-specific deletion of the Vegfa gene and hepatocyte-endothelial cell cocultures indicated that blockade of VEGF induced hepatic Epo by interfering with homeostatic VEGFR2-dependent paracrine signaling involving interactions between hepatocytes and endothelial cells. These data indicate that VEGF is a previously unsuspected negative regulator of hepatic Epo synthesis and erythropoiesis and suggest that levels of Epo and erythrocytosis could represent noninvasive surrogate markers for stringent blockade of VEGF in vivo.


Subjects
Erythropoietin/physiology, Liver/physiology, Vascular Endothelial Growth Factor A/physiology, Animals, Hematocrit, Hypoxia-Inducible Factor 1, alpha Subunit/genetics, Hypoxia-Inducible Factor 1, alpha Subunit/physiology, Mice, Mice, Inbred C57BL, Mice, SCID, Mice, Transgenic, Models, Animal, Polycythemia/physiopathology, Receptors, Vascular Endothelial Growth Factor/physiology, Retinal Vessels/physiology
4.
Dev Dyn ; 240(6): 1412-21, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21520329

ABSTRACT

Angiogenesis is a highly organized process under the control of guidance cues that direct endothelial cell (EC) migration. Recently, many molecules that were initially described as regulators of neural guidance were subsequently shown to also direct EC migration. Here, we report a novel protein, thrombospondin type I domain containing 7A (Thsd7a), that is a neural molecule required for directed EC migration during embryonic angiogenesis in zebrafish. Thsd7a is a vertebrate conserved protein. Zebrafish thsd7a transcript was detected along the ventral edge of the neural tube in the developing zebrafish embryos, correlating with the growth path of angiogenic intersegmental vessels (ISVs). Morpholino-knockdown of Thsd7a caused a lateral deviation of angiogenic ECs below the thsd7a-expressing sites, resulting in aberrant ISV patterning. Collectively, our study shows that zebrafish Thsd7a is a neural protein required for ISV angiogenesis, and suggests an important role of Thsd7a in the neurovascular interaction during zebrafish development.


Subjects
Blood Vessels/embryology, Body Patterning/genetics, Neovascularization, Physiologic/genetics, Thrombospondins/physiology, Zebrafish Proteins/physiology, Zebrafish/embryology, Amino Acid Sequence, Animals, Animals, Genetically Modified, Blood Vessels/metabolism, Central Nervous System/embryology, Central Nervous System/metabolism, Embryo, Nonmammalian, Molecular Sequence Data, Neovascularization, Physiologic/physiology, Nerve Tissue Proteins/genetics, Nerve Tissue Proteins/metabolism, Nerve Tissue Proteins/physiology, Phylogeny, Sequence Homology, Amino Acid, Thrombospondins/genetics, Thrombospondins/metabolism, Zebrafish/genetics, Zebrafish/metabolism, Zebrafish Proteins/genetics, Zebrafish Proteins/metabolism
5.
Pulm Circ ; 12(1): e12013, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35506114

ABSTRACT

Background: Pulmonary embolisms (PE) are life-threatening medical events, and early identification of patients experiencing a PE is essential to optimizing patient outcomes. Current tools for risk stratification of PE patients are limited and unable to predict PE events before their occurrence. Objective: We developed a machine learning algorithm (MLA) designed to identify patients at risk of PE before the clinical detection of onset in an inpatient population. Materials and Methods: Three machine learning (ML) models were developed on electronic health record data from 63,798 medical and surgical inpatients in a large US medical center. These models included logistic regression, neural network, and gradient boosted tree (XGBoost) models. All models used only routinely collected demographic, clinical, and laboratory information as inputs. All were evaluated for their ability to predict PE at the first time patient vital signs and lab measures required for the MLA to run were available. Performance was assessed with regard to the area under the receiver operating characteristic (AUROC), sensitivity, and specificity. Results: The model trained using XGBoost demonstrated the strongest performance for predicting PEs. The XGBoost model obtained an AUROC of 0.85, a sensitivity of 81%, and a specificity of 70%. The neural network and logistic regression models obtained AUROCs of 0.74 and 0.67, sensitivity of 81% and 81%, and specificity of 44% and 35%, respectively. Conclusions: This algorithm may improve patient outcomes through earlier recognition and prediction of PE, enabling earlier diagnosis and treatment of PE.
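Sensitivity and specificity figures like those above come from thresholding a model's continuous risk score: the threshold choice fixes the operating point on the ROC curve. A minimal sketch with invented scores (not the study's data or code):

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity (true positive rate) and specificity (true negative
    rate) of a score-based classifier at a given decision threshold."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    tn = sum(s < threshold and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Toy PE risk scores: two true PEs, two non-PEs.
sens, spec = sens_spec([0.9, 0.6, 0.4, 0.1], [1, 1, 0, 0], threshold=0.5)
print(sens, spec)  # -> 1.0 1.0
```

Lowering the threshold trades specificity for sensitivity, which is why a model and its operating threshold must be reported together.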

6.
Article in English | MEDLINE | ID: mdl-35046014

ABSTRACT

INTRODUCTION: Diabetic kidney disease (DKD) accounts for the majority of increased risk of mortality for patients with diabetes, and eventually manifests in approximately half of those patients diagnosed with type 2 diabetes mellitus (T2DM). Although increased screening frequency can avoid delayed diagnoses, this is not uniformly implemented. The purpose of this study was to develop and retrospectively validate a machine learning algorithm (MLA) that predicts stages of DKD within 5 years upon diagnosis of T2DM. RESEARCH DESIGN AND METHODS: Two MLAs were trained to predict stages of DKD severity, and compared with the Centers for Disease Control and Prevention (CDC) risk score to evaluate performance. The models were validated on a hold-out test set as well as an external dataset sourced from separate facilities. RESULTS: The MLAs outperformed the CDC risk score in both the hold-out test and external datasets. Our algorithms achieved an area under the receiver operating characteristic curve (AUROC) of 0.75 on the hold-out set for prediction of any-stage DKD and an AUROC of over 0.82 for more severe endpoints, compared with the CDC risk score with an AUROC <0.70 on all test sets and endpoints. CONCLUSION: This retrospective study shows that an MLA can provide timely predictions of DKD among patients with recently diagnosed T2DM.


Subjects
Diabetes Mellitus, Type 2, Diabetic Nephropathies, Algorithms, Diabetes Mellitus, Type 2/complications, Diabetes Mellitus, Type 2/diagnosis, Diabetic Nephropathies/diagnosis, Diabetic Nephropathies/etiology, Humans, Machine Learning, Retrospective Studies, United States
7.
Am J Infect Control ; 50(4): 440-445, 2022 04.
Article in English | MEDLINE | ID: mdl-34428529

ABSTRACT

BACKGROUND: Central line-associated bloodstream infections (CLABSIs) are associated with significant morbidity, mortality, and increased healthcare costs. Despite the high prevalence of CLABSIs in the U.S., there are currently no tools to stratify a patient's risk of developing an infection as the result of central line placement. To this end, we have developed and validated a machine learning algorithm (MLA) that can predict a patient's likelihood of developing CLABSI using only electronic health record data in order to provide clinical decision support. METHODS: We created three machine learning models to retrospectively analyze electronic health record data from 27,619 patient encounters. The models were trained and validated using an 80:20 split for the train and test data. Patients designated as having a central line procedure based on International Statistical Classification of Diseases and Related Health Problems 10 codes were included. RESULTS: XGBoost was the highest performing MLA out of the three models, obtaining an AUROC of 0.762 for CLABSI risk prediction at 48 hours after the recorded time for central line placement. CONCLUSIONS: Our results demonstrate that MLAs may be effective clinical decision support tools for assessment of CLABSI risk and should be explored further for this purpose.


Subjects
Catheter-Related Infections, Catheterization, Central Venous, Central Venous Catheters, Sepsis, Catheter-Related Infections/diagnosis, Catheter-Related Infections/epidemiology, Central Venous Catheters/adverse effects, Humans, Machine Learning, Retrospective Studies, Sepsis/diagnosis, Sepsis/epidemiology
8.
Am J Infect Control ; 50(3): 250-257, 2022 03.
Article in English | MEDLINE | ID: mdl-35067382

ABSTRACT

BACKGROUND: Interventions to better prevent or manage Clostridioides difficile infection (CDI) may significantly reduce morbidity, mortality, and healthcare spending. METHODS: We present a retrospective study using electronic health record data from over 700 United States hospitals. A subset of hospitals was used to develop machine learning algorithms (MLAs); the remaining hospitals served as an external test set. Three MLAs were evaluated: gradient-boosted decision trees (XGBoost), Deep Long Short Term Memory neural network, and one-dimensional convolutional neural network. MLA performance was evaluated with area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, diagnostic odds ratios and likelihood ratios. RESULTS: The development dataset contained 13,664,840 inpatient encounters with 80,046 CDI encounters; the external dataset contained 1,149,088 inpatient encounters with 7,107 CDI encounters. The highest AUROCs were achieved for XGB, Deep Long Short Term Memory neural network, and one-dimensional convolutional neural network via abstaining from use of specialized training techniques, resampling in isolation, and resampling and output bias in combination, respectively. XGBoost achieved the highest AUROC. CONCLUSIONS: MLAs can predict future CDI in hospitalized patients using just 6 hours of data. In clinical practice, a machine-learning based tool may support prophylactic measures, earlier diagnosis, and more timely implementation of infection control measures.
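The "resampling" named above is a standard remedy for class imbalance (CDI encounters are well under 1% of both datasets). One common variant is random oversampling of the minority class; a rough, generic sketch of the idea (not the paper's pipeline):

```python
import random

def oversample_minority(rows, labels, seed=0):
    """Duplicate randomly chosen minority-class rows until both
    classes are equally represented in the training set."""
    rng = random.Random(seed)
    grouped = {0: [], 1: []}
    for row, y in zip(rows, labels):
        grouped[y].append(row)
    minority = min(grouped, key=lambda y: len(grouped[y]))
    deficit = abs(len(grouped[0]) - len(grouped[1]))
    extra = [rng.choice(grouped[minority]) for _ in range(deficit)]
    out_rows = grouped[0] + grouped[1] + extra
    out_labels = [0] * len(grouped[0]) + [1] * len(grouped[1]) + [minority] * deficit
    return out_rows, out_labels

# 4 negatives and 1 positive are balanced to 4 and 4.
rows, labels = oversample_minority(["a", "b", "c", "d", "e"], [0, 0, 0, 0, 1])
print(len(rows), sum(labels))  # -> 8 4
```

Resampling is applied only to the training split, never to the test set, so that reported AUROCs reflect the true class prevalence.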


Subjects
Clostridioides difficile, Clostridium Infections, Clostridium Infections/diagnosis, Clostridium Infections/epidemiology, Humans, Machine Learning, ROC Curve, Retrospective Studies
9.
JMIR Aging ; 5(2): e35373, 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35363146

ABSTRACT

BACKGROUND: Short-term fall prediction models that use electronic health records (EHRs) may enable the implementation of dynamic care practices that specifically address changes in individualized fall risk within senior care facilities. OBJECTIVE: The aim of this study is to implement machine learning (ML) algorithms that use EHR data to predict a 3-month fall risk in residents from a variety of senior care facilities providing different levels of care. METHODS: This retrospective study obtained EHR data (2007-2021) from Juniper Communities' proprietary database of 2785 individuals primarily residing in skilled nursing facilities, independent living facilities, and assisted living facilities across the United States. We assessed the performance of 3 ML-based fall prediction models and the Juniper Communities' fall risk assessment. Additional analyses were conducted to examine how changes in the input features, training data sets, and prediction windows affected the performance of these models. RESULTS: The Extreme Gradient Boosting model exhibited the highest performance, with an area under the receiver operating characteristic curve of 0.846 (95% CI 0.794-0.894), specificity of 0.848, diagnostic odds ratio of 13.40, and sensitivity of 0.706, while achieving the best trade-off in balancing true positive and negative rates. The number of active medications was the most significant feature associated with fall risk, followed by a resident's number of active diseases and several variables associated with vital signs, including diastolic blood pressure and changes in weight and respiratory rates. The combination of vital signs with traditional risk factors as input features achieved higher prediction accuracy than using either group of features alone. 
CONCLUSIONS: This study shows that the Extreme Gradient Boosting technique can use a large number of features from EHR data to make short-term fall predictions with a better performance than that of conventional fall risk assessments and other ML models. The integration of routinely collected EHR data, particularly vital signs, into fall prediction models may generate more accurate fall risk surveillance than models without vital signs. Our data support the use of ML models for dynamic, cost-effective, and automated fall predictions in different types of senior care facilities.

10.
Am J Med Sci ; 364(1): 46-52, 2022 07.
Article in English | MEDLINE | ID: mdl-35081403

ABSTRACT

BACKGROUND: The aim of the study was to quantify the relationship between acute kidney injury (AKI) and alcohol use disorder (AUD). METHODS: We used a large academic medical center and the MIMIC-III databases to quantify AKI disease and mortality burden as well as AKI disease progression in the AUD and non-AUD subpopulations. We used the MIMIC-III dataset to compare two different methods of encoding AKI: ICD-9 codes, and the Kidney Disease: Improving Global Outcomes scheme (KDIGO) definition. In addition to the AUD subpopulation, we also present analyses for the hepatorenal syndrome (HRS) and alcohol-related cirrhosis subpopulations identified via ICD-9/ICD-10 coding. RESULTS: In both the ICD-9 and KDIGO encodings of AKI, the AUD subpopulation had a higher incidence of AKI (ICD-9: 43.3% vs. 37.92% AKI in the non-AUD subpopulations; KDIGO: 48.65% vs. 40.53%) in the MIMIC-III dataset. In the academic dataset, the AUD subpopulation also had a higher incidence of AKI than the non-AUD subpopulation (ICD-9/ICD-10: 12.76% vs. 10.71%). The mortality rate of the subpopulation with both AKI and AUD, HRS, or alcohol-related cirrhosis was consistently higher than that of the subpopulation with only AKI in both datasets, including after adjusting for disease severity using two methods of severity estimation in the MIMIC-III dataset. Disease progression rates were similar for AUD and non-AUD subpopulations. CONCLUSIONS: Our work shows that the AUD patient subpopulation had a higher number of AKI patients than the non-AUD subpopulation, and that patients with both AKI and AUD, HRS, or alcohol-related cirrhosis had higher rates of mortality than the non-AUD subpopulation with AKI.


Subjects
Acute Kidney Injury, Alcoholism, Hepatorenal Syndrome, Acute Kidney Injury/etiology, Alcoholism/complications, Cost of Illness, Disease Progression, Hospital Mortality, Humans, Liver Cirrhosis/complications, Retrospective Studies
11.
BioData Min ; 14(1): 23, 2021 Mar 31.
Article in English | MEDLINE | ID: mdl-33789700

ABSTRACT

BACKGROUND: Acute heart failure (AHF) is associated with significant morbidity and mortality. Effective patient risk stratification is essential to guiding hospitalization decisions and the clinical management of AHF. Clinical decision support systems can be used to improve predictions of mortality made in emergency care settings for the purpose of AHF risk stratification. In this study, several models for the prediction of seven-day mortality among AHF patients were developed by applying machine learning techniques to retrospective patient data from 236,275 total emergency department (ED) encounters, 1881 of which were considered positive for AHF and were used for model training and testing. The models used varying subsets of age, sex, vital signs, and laboratory values. Model performance was compared to the Emergency Heart Failure Mortality Risk Grade (EHMRG) model, a commonly used system for prediction of seven-day mortality in the ED with similar (or, in some cases, more extensive) inputs. Model performance was assessed in terms of area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity. RESULTS: When trained and tested on a large academic dataset, the best-performing model and EHMRG demonstrated test set AUROCs of 0.84 and 0.78, respectively, for prediction of seven-day mortality. Given only measurements of respiratory rate, temperature, mean arterial pressure, and FiO2, one model produced a test set AUROC of 0.83. Neither a logistic regression comparator nor a simple decision tree outperformed EHMRG. CONCLUSIONS: A model using only the measurements of four clinical variables outperforms EHMRG in the prediction of seven-day mortality in AHF. With these inputs, the model could not be replaced by logistic regression or reduced to a simple decision tree without significant performance loss. 
In ED settings, this minimal-input risk stratification tool may assist clinicians in making critical decisions about patient disposition by providing early and accurate insights into individual patient's risk profiles.

12.
Leuk Res ; 109: 106639, 2021 10.
Article in English | MEDLINE | ID: mdl-34171604

ABSTRACT

BACKGROUND: Early myelodysplastic syndrome (MDS) diagnosis can allow physicians to provide early treatment, which may delay advancement of MDS and improve quality of life. However, MDS often goes unrecognized and is difficult to distinguish from other disorders. We developed a machine learning algorithm for the prediction of MDS one year prior to clinical diagnosis of the disease. METHODS: Retrospective analysis was performed on 790,470 patients over the age of 45 seen in the United States between 2007 and 2020. A gradient boosted decision tree model (XGB) was built to predict MDS diagnosis using vital signs, lab results, and demographics from the prior two years of patient data. The XGB model was compared to logistic regression (LR) and artificial neural network (ANN) models. The models did not use blast percentage and cytogenetics information as inputs. Predictions were made one year prior to MDS diagnosis as determined by International Classification of Diseases (ICD) codes, 9th and 10th revisions. Performance was assessed with regard to area under the receiver operating characteristic curve (AUROC). RESULTS: On a hold-out test set, the XGB model achieved an AUROC value of 0.87 for prediction of MDS one year prior to diagnosis, with a sensitivity of 0.79 and specificity of 0.80. The XGB model was compared against LR and ANN models, which achieved AUROCs of 0.838 and 0.832, respectively. CONCLUSIONS: Machine learning may allow for early diagnosis of MDS and more appropriate treatment administration.


Subjects
Algorithms, Machine Learning, Myelodysplastic Syndromes/diagnosis, Neural Networks, Computer, Quality of Life, Risk Assessment/methods, Adult, Aged, Aged, 80 and over, Case-Control Studies, Female, Follow-Up Studies, Humans, Male, Middle Aged, Myelodysplastic Syndromes/epidemiology, Prognosis, ROC Curve, Retrospective Studies, United States/epidemiology
13.
Medicine (Baltimore) ; 100(23): e26246, 2021 Jun 11.
Article in English | MEDLINE | ID: mdl-34115013

ABSTRACT

ABSTRACT: Ventilator-associated pneumonia (VAP) is the most common and fatal nosocomial infection in intensive care units (ICUs). Existing methods for identifying VAP display low accuracy, and their use may delay antimicrobial therapy. VAP diagnostics derived from machine learning (ML) methods that utilize electronic health record (EHR) data have not yet been explored. The objective of this study is to compare the performance of a variety of ML models trained to predict whether VAP will be diagnosed during the patient stay. A retrospective study examined data from 6126 adult ICU encounters lasting at least 48 hours following the initiation of mechanical ventilation. The gold standard was the presence of a diagnostic code for VAP. Five different ML models were trained to predict VAP 48 hours after initiation of mechanical ventilation. Model performance was evaluated with regard to the area under the receiver operating characteristic (AUROC) curve on a 20% hold-out test set. Feature importance was measured in terms of Shapley values. The highest performing model achieved an AUROC value of 0.854. The most important features for the best-performing model were the length of time on mechanical ventilation, the presence of antibiotics, sputum test frequency, and the most recent Glasgow Coma Scale assessment. Supervised ML using patient EHR data is promising for VAP diagnosis and warrants further validation. This tool has the potential to aid the timely diagnosis of VAP.
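Shapley values attribute a prediction to individual input features by averaging each feature's marginal contribution over all coalitions of the other features. The study presumably used an approximate implementation (e.g., a SHAP-style tree explainer); the exact definition can be sketched directly for a toy model with very few features:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for one instance: features outside the
    coalition are replaced by their baseline value. Exponential in the
    number of features, so only usable for small toy models."""
    n = len(x)
    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(z)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for combo in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(set(combo) | {i}) - value(set(combo)))
        phis.append(phi)
    return phis

# Toy linear "risk model": the attributions recover each term's contribution.
model = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(model, x=[1, 1], baseline=[0, 0]))  # -> [2.0, 3.0]
```

The attributions always sum to the difference between the instance's prediction and the baseline prediction, which is what makes per-feature importance rankings like the one above interpretable.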


Subjects
Forecasting/methods, Machine Learning/standards, Pneumonia, Ventilator-Associated/diagnosis, Adult, Aged, Aged, 80 and over, Boston, Electronic Health Records/statistics & numerical data, Female, Humans, Intensive Care Units/organization & administration, Intensive Care Units/statistics & numerical data, Machine Learning/statistics & numerical data, Male, Middle Aged, Respiration, Artificial/adverse effects, Retrospective Studies, Sensitivity and Specificity
14.
Healthc Technol Lett ; 8(6): 139-147, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34938570

ABSTRACT

Diagnosis and appropriate intervention for myocardial infarction (MI) are time-sensitive but rely on clinical measures that can be progressive and initially inconclusive, underscoring the need for an accurate and early predictor of MI to support diagnostic and clinical management decisions. The objective of this study was to develop a machine learning algorithm (MLA) to predict MI diagnosis based on electronic health record data (EHR) readily available during Emergency Department assessment. An MLA was developed using retrospective patient data. The MLA used patient data as they became available in the first 3 h of care to predict MI diagnosis (defined by International Classification of Diseases, 10th revision code) at any time during the encounter. The MLA obtained an area under the receiver operating characteristic curve of 0.87, sensitivity of 87% and specificity of 70%, outperforming the comparator scoring systems TIMI and GRACE on all metrics. An MLA can synthesize complex EHR data to serve as a clinically relevant risk stratification tool for MI.

15.
Clin Imaging ; 80: 268-273, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34425544

ABSTRACT

INTRODUCTION: The objective of this study was to assess seven configurations of six convolutional deep neural network architectures for classification of chest X-rays (CXRs) as COVID-19 positive or negative. METHODS: The primary dataset consisted of 294 COVID-19 positive and 294 COVID-19 negative CXRs, the latter comprising roughly equally many pneumonia, emphysema, fibrosis, and healthy images. We used six common convolutional neural network architectures, VGG16, DenseNet121, DenseNet201, MobileNet, NasNetMobile and InceptionV3. We studied six models (one for each architecture) which were pre-trained on a vast repository of generic (non-CXR) images, as well as a seventh DenseNet121 model, which was pre-trained on a repository of CXR images. For each model, we replaced the output layers with custom fully connected layers for the task of binary classification of images as COVID-19 positive or negative. Performance metrics were calculated on a hold-out test set with CXRs from patients who were not included in the training/validation set. RESULTS: When pre-trained on generic images, the VGG16, DenseNet121, DenseNet201, MobileNet, NasNetMobile, and InceptionV3 architectures respectively produced hold-out test set areas under the receiver operating characteristic (AUROCs) of 0.98, 0.95, 0.97, 0.95, 0.99, and 0.96 for the COVID-19 classification of CXRs. The X-ray pre-trained DenseNet121 model, in comparison, had a test set AUROC of 0.87. DISCUSSION: Common convolutional neural network architectures with parameters pre-trained on generic images yield high-performance and well-calibrated COVID-19 CXR classification.
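The head-replacement step described above (keep a pre-trained backbone fixed, train only new fully connected output layers) reduces, in its simplest form, to fitting a logistic classifier on frozen embeddings. A dependency-free sketch of that idea, with made-up two-dimensional "embeddings" standing in for real DenseNet features (illustrative, not the study's Keras/TensorFlow code):

```python
import math

def train_head(features, labels, lr=0.5, steps=500):
    """Batch gradient descent for a single sigmoid output unit on frozen
    backbone features (the backbone itself is never updated)."""
    n_dim = len(features[0])
    w, b = [0.0] * n_dim, 0.0
    for _ in range(steps):
        grad_w, grad_b = [0.0] * n_dim, 0.0
        for x, y in zip(features, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            for i in range(n_dim):
                grad_w[i] += (p - y) * x[i]
            grad_b += p - y
        w = [wi - lr * g / len(labels) for wi, g in zip(w, grad_w)]
        b -= lr * grad_b / len(labels)
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Toy embeddings where the first coordinate carries the label signal.
w, b = train_head([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 1, 1])
print(predict(w, b, [1, 0]) > 0.5, predict(w, b, [0, 1]) < 0.5)  # -> True True
```

The point is only the training regime: the backbone's weights stay fixed while the new output layer is optimized, which is why pre-training on generic images can transfer to CXR classification.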


Subjects
COVID-19, Deep Learning, Humans, Neural Networks, Computer, SARS-CoV-2, X-Rays
16.
Health Policy Technol ; 10(3): 100554, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34367900

ABSTRACT

Objective: In the wake of COVID-19, the United States (U.S.) developed a three stage plan to outline the parameters to determine when states may reopen businesses and ease travel restrictions. The guidelines also identify subpopulations of Americans deemed to be at high risk for severe disease should they contract COVID-19. These guidelines were based on population level demographics, rather than individual-level risk factors. As such, they may misidentify individuals at high risk for severe illness, and may therefore be of limited use in decisions surrounding resource allocation to vulnerable populations. The objective of this study was to evaluate a machine learning algorithm for prediction of serious illness due to COVID-19 using inpatient data collected from electronic health records. Methods: The algorithm was trained to identify patients for whom a diagnosis of COVID-19 was likely to result in hospitalization, and compared against four U.S. policy-based criteria: age over 65; having a serious underlying health condition; age over 65 or having a serious underlying health condition; and age over 65 and having a serious underlying health condition. Results: This algorithm identified 80% of patients at risk for hospitalization due to COVID-19, versus 62% identified by government guidelines. The algorithm also achieved a high specificity of 95%, outperforming government guidelines. Conclusions: This algorithm may identify individuals likely to require hospitalization should they contract COVID-19. This information may be useful to guide vaccine distribution, anticipate hospital resource needs, and assist health care policymakers to make care decisions in a more principled manner.
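The policy comparators above are simple Boolean rules, so the reported sensitivity figures (62% vs. the algorithm's 80%) come from evaluating rule flags against observed hospitalizations. A toy sketch with an invented four-patient cohort (numbers are illustrative, not the study's):

```python
def evaluate_rule(rule, patients, hospitalized):
    """Sensitivity and specificity of a Boolean triage rule."""
    tp = fn = fp = tn = 0
    for patient, outcome in zip(patients, hospitalized):
        if rule(patient):
            tp, fp = tp + outcome, fp + (not outcome)
        else:
            fn, tn = fn + outcome, tn + (not outcome)
    return tp / (tp + fn), tn / (tn + fp)

or_rule = lambda p: p["age"] > 65 or p["serious_condition"]
and_rule = lambda p: p["age"] > 65 and p["serious_condition"]

cohort = [
    {"age": 70, "serious_condition": False},  # hospitalized: no
    {"age": 50, "serious_condition": True},   # hospitalized: yes
    {"age": 40, "serious_condition": False},  # hospitalized: no
    {"age": 80, "serious_condition": True},   # hospitalized: yes
]
outcomes = [False, True, False, True]
print(evaluate_rule(or_rule, cohort, outcomes))   # -> (1.0, 0.5)
print(evaluate_rule(and_rule, cohort, outcomes))  # -> (0.5, 1.0)
```

Even in this toy cohort, the OR rule is sensitive but unspecific and the AND rule is the reverse, which is the kind of trade-off the individual-level algorithm is meant to improve on.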

17.
JMIR Form Res ; 5(9): e28028, 2021 Sep 14.
Article in English | MEDLINE | ID: mdl-34398784

ABSTRACT

BACKGROUND: A high number of patients who are hospitalized with COVID-19 develop acute respiratory distress syndrome (ARDS). OBJECTIVE: In response to the need for clinical decision support tools to help manage the next pandemic during the early stages (ie, when limited labeled data are present), we developed machine learning algorithms that use semisupervised learning (SSL) techniques to predict ARDS development in general and COVID-19 populations based on limited labeled data. METHODS: SSL techniques were applied to 29,127 encounters with patients who were admitted to 7 US hospitals from May 1, 2019, to May 1, 2021. A recurrent neural network that used a time series of electronic health record data was applied to data that were collected when a patient's peripheral oxygen saturation level fell below the normal range (<97%) to predict the subsequent development of ARDS during the remaining duration of patients' hospital stay. Model performance was assessed with the area under the receiver operating characteristic curve and area under the precision recall curve of an external hold-out test set. RESULTS: For the whole data set, the median time between the first peripheral oxygen saturation measurement of <97% and subsequent respiratory failure was 21 hours. The area under the receiver operating characteristic curve for predicting subsequent ARDS development was 0.73 when the model was trained on a labeled data set of 6930 patients, 0.78 when the model was trained on the labeled data set that had been augmented with the unlabeled data set of 16,173 patients by using SSL techniques, and 0.84 when the model was trained on the entire training set of 23,103 labeled patients. CONCLUSIONS: In the context of using time-series inpatient data and a careful model training design, unlabeled data can be used to improve the performance of machine learning models when labeled data for predicting ARDS development are scarce or expensive.
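Self-training is one common SSL technique of the kind described above: fit on the labeled set, adopt confident predictions on the unlabeled pool as pseudo-labels, and refit. A deliberately tiny sketch with a one-dimensional nearest-centroid "model" standing in for the study's recurrent network (all data invented):

```python
def centroid_model(xs, ys):
    """Fit class means for a 1-D feature; return P(class 1) as the
    relative distance to the class-0 centroid."""
    c0 = sum(x for x, y in zip(xs, ys) if y == 0) / max(1, ys.count(0))
    c1 = sum(x for x, y in zip(xs, ys) if y == 1) / max(1, ys.count(1))
    def prob_pos(x):
        d0, d1 = abs(x - c0), abs(x - c1)
        return d0 / (d0 + d1) if d0 + d1 else 0.5
    return prob_pos

def self_train(xs, ys, unlabeled, threshold=0.8, rounds=3):
    """Pseudo-labeling loop: confidently scored unlabeled points are
    absorbed into the training set between refits."""
    xs, ys, pool = list(xs), list(ys), list(unlabeled)
    for _ in range(rounds):
        prob_pos = centroid_model(xs, ys)
        remaining = []
        for x in pool:
            p = prob_pos(x)
            if p >= threshold:
                xs.append(x); ys.append(1)
            elif p <= 1 - threshold:
                xs.append(x); ys.append(0)
            else:
                remaining.append(x)
        pool = remaining
    return centroid_model(xs, ys)

# Two labeled points plus four unlabeled ones near the class centers.
model = self_train([0.0, 1.0], [0, 1], [0.05, 0.1, 0.9, 0.95])
print(model(0.2) < 0.5, model(0.8) > 0.5)  # -> True True
```

The confidence threshold guards against compounding pseudo-label errors, the main failure mode of self-training when labeled data are as scarce as in the 6930-patient setting above.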

18.
Kidney Int Rep ; 6(5): 1289-1298, 2021 May.
Article in English | MEDLINE | ID: mdl-34013107

ABSTRACT

INTRODUCTION: Acute kidney injury (AKI) is common among hospitalized patients and has a significant impact on morbidity and mortality. Although early prediction of AKI has the potential to reduce adverse patient outcomes, it remains a difficult condition to predict and diagnose. The purpose of this study was to evaluate the ability of a machine learning algorithm to predict AKI as defined by Kidney Disease: Improving Global Outcomes (KDIGO) stage 2 or 3 up to 48 hours in advance of onset using convolutional neural networks (CNNs) and patient electronic health record (EHR) data. METHODS: A CNN prediction system was developed to use EHR data gathered during patients' stays to predict AKI up to 48 hours before onset. A total of 12,347 patient encounters were retrospectively analyzed from the Medical Information Mart for Intensive Care III (MIMIC-III) database. An XGBoost AKI prediction model and the sequential organ failure assessment (SOFA) scoring system were used as comparators. The outcome was AKI onset. The model was trained on routinely collected patient EHR data. Measurements included area under the receiver operating characteristic (AUROC) curve, positive predictive value (PPV), and a battery of additional performance metrics for advance prediction of AKI onset. RESULTS: On a hold-out test set, the algorithm attained an AUROC of 0.86 and PPV of 0.24, relative to a cohort AKI prevalence of 7.62%, for long-horizon AKI prediction at a 48-hour window before onset. CONCLUSION: A CNN machine learning-based AKI prediction model outperforms XGBoost and the SOFA scoring system, revealing superior performance in predicting AKI 48 hours before onset, without reliance on serum creatinine (SCr) measurements.
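The abstract above reports model quality as AUROC and PPV. Both metrics can be computed directly from predicted scores, as in the minimal sketch below; the toy labels and scores are invented for illustration (the AUROC function assumes no tied scores).

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the rank-sum (Mann-Whitney U) identity; assumes no tied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def ppv(y_true, scores, threshold=0.5):
    """Positive predictive value: TP / (TP + FP) at a decision threshold."""
    pred = scores >= threshold
    return y_true[pred].sum() / pred.sum()

# Toy example: 8 encounters, 4 of which developed the outcome.
y = np.array([0, 0, 1, 0, 1, 1, 0, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.2, 0.6])
print(auroc(y, s), ppv(y, s))
```

Reporting PPV alongside the 7.62% cohort prevalence, as the abstract does, is what makes the 0.24 figure interpretable: it is roughly a threefold enrichment over the base rate.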

19.
Clin Appl Thromb Hemost ; 27: 1076029621991185, 2021.
Article in English | MEDLINE | ID: mdl-33625875

ABSTRACT

Deep venous thrombosis (DVT) is associated with significant morbidity, mortality, and increased healthcare costs. Standard scoring systems for DVT risk stratification often provide insufficient stratification of hospitalized patients and are unable to accurately predict which inpatients are most likely to present with DVT. There is a continued need for tools that can predict DVT in hospitalized patients. We performed a retrospective study on a database collected from a large academic hospital, comprising 99,237 total general ward or ICU patients, 2,378 of whom experienced a DVT during their hospital stay. Gradient boosted machine learning algorithms were developed to predict a patient's risk of developing DVT at 12- and 24-hour windows prior to onset. The primary outcome of interest was diagnosis of in-hospital DVT. The machine learning predictors obtained AUROCs of 0.83 and 0.85 for DVT risk prediction on hospitalized patients at 12- and 24-hour windows, respectively. At both 12 and 24 hours before DVT onset, the most important features for prediction of DVT were cancer history, VTE history, and international normalized ratio (INR). Improved risk stratification may prevent unnecessary invasive testing in patients for whom DVT cannot be ruled out using existing methods. Improved risk stratification may also allow for more targeted use of prophylactic anticoagulants, as well as earlier diagnosis and treatment, preventing the development of pulmonary emboli and other sequelae of DVT.


Subjects
Machine Learning/standards , Venous Thrombosis/genetics , Adolescent , Adult , Aged , Hospitalization , Humans , Male , Middle Aged , Risk Assessment , Risk Factors , Venous Thrombosis/pathology , Young Adult
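The DVT study above uses gradient boosted models. The core mechanism, each round fitting a weak learner to the current residuals, can be sketched with depth-1 trees (stumps) under squared loss; the synthetic data, learner count, and learning rate below are invented for illustration, not the study's actual model.

```python
import numpy as np

def fit_stump(X, r):
    """Depth-1 regression tree: best (feature, threshold) split for residuals r."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:        # exclude max so both sides are non-empty
            left = X[:, j] <= t
            lm, rm = r[left].mean(), r[~left].mean()
            err = ((r[left] - lm) ** 2).sum() + ((r[~left] - rm) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, lm, rm)
    return best

def stump_predict(X, stump):
    j, t, lm, rm = stump
    return np.where(X[:, j] <= t, lm, rm)

def boost_fit(X, y, rounds=20, lr=0.3):
    """Each round fits a stump to the residuals (the gradient of squared loss)."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(rounds):
        s = fit_stump(X, y - pred)
        pred = pred + lr * stump_predict(X, s)
        stumps.append(s)
    return y.mean(), stumps

def boost_predict(X, base, stumps, lr=0.3):
    return base + lr * sum(stump_predict(X, s) for s in stumps)
```

Feature importances such as the cancer-history and INR rankings reported above typically come from how often, and with how much error reduction, each feature is chosen in splits like `fit_stump`'s.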
20.
Clin Ther ; 43(5): 871-885, 2021 05.
Article in English | MEDLINE | ID: mdl-33865643

ABSTRACT

PURPOSE: Coronavirus disease-2019 (COVID-19) continues to be a global threat and remains a significant cause of hospitalizations. Recent clinical guidelines have supported the use of corticosteroids or remdesivir in the treatment of COVID-19. However, uncertainty remains about which patients are most likely to benefit from treatment with either drug; such knowledge is crucial for avoiding preventable adverse effects, minimizing costs, and effectively allocating resources. This study presents a machine-learning system with the capacity to identify patients in whom treatment with a corticosteroid or remdesivir is associated with improved survival time. METHODS: Gradient-boosted decision-tree models used for predicting treatment benefit were trained and tested on data from electronic health records dated between December 18, 2019, and October 18, 2020, from adult patients (age ≥18 years) with COVID-19 in 10 US hospitals. Models were evaluated for performance in identifying patients with longer survival times when treated with a corticosteroid versus remdesivir. Fine and Gray proportional-hazards models were used for identifying significant findings in treated and nontreated patients, in a subset of patients who received supplemental oxygen, and in patients identified by the algorithm. Inverse probability-of-treatment weights were used to adjust for confounding. Models were trained and tested separately for each treatment. FINDINGS: Data from 2364 patients were included, with men comprising slightly more than 50% of the sample; 893 patients were treated with remdesivir, and 1471 were treated with a corticosteroid. After adjustment for confounding, neither corticosteroid nor remdesivir use was associated with increased survival time in the overall population or in the subpopulation that received supplemental oxygen. However, in the populations identified by the algorithms, both corticosteroids and remdesivir were significantly associated with an increase in survival time, with hazard ratios of 0.56 and 0.40, respectively (both, P = 0.04). IMPLICATIONS: Machine-learning methods have the capacity to identify hospitalized patients with COVID-19 in whom treatment with a corticosteroid or remdesivir is associated with an increase in survival time. These methods may help to improve patient outcomes and allocate resources during the COVID-19 crisis.


Subjects
Adenosine Monophosphate/analogs & derivatives , Corticosteroids , Alanine/analogs & derivatives , Antivirals , COVID-19 Drug Treatment , Machine Learning , Adenosine Monophosphate/therapeutic use , Adolescent , Corticosteroids/therapeutic use , Adult , Aged , Aged 80 and over , Alanine/therapeutic use , Antivirals/therapeutic use , Female , Humans , Male , Middle Aged , Young Adult
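The treatment-benefit study above adjusts for confounding with inverse probability-of-treatment weights (IPTW): each patient is weighted by the inverse of their estimated probability of receiving the treatment they actually received, so the weighted treated and untreated groups resemble the full population. A minimal sketch with a single binary confounder follows; all numbers are invented and the stratum-mean propensity model is a deliberate simplification of the study's approach.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=2000)            # binary confounder (e.g. needs oxygen)
p_treat = np.where(z == 1, 0.8, 0.3)         # sicker patients are treated more often
t = (rng.random(2000) < p_treat).astype(float)

# Propensity score: estimated P(treatment | confounder), here by stratum means.
e = np.where(z == 1, t[z == 1].mean(), t[z == 0].mean())

# IPTW weight: 1/e for treated patients, 1/(1-e) for untreated patients.
w = t / e + (1 - t) / (1 - e)

# After weighting, the confounder is balanced across treatment arms.
z_treated = np.average(z, weights=w * t)
z_control = np.average(z, weights=w * (1 - t))
```

Balance on observed confounders is exactly what the weights buy; as in the study, any confounder not captured in the propensity model remains unadjusted.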