Results 1 - 20 of 54
1.
J Clin Med ; 13(8)2024 Apr 20.
Article in English | MEDLINE | ID: mdl-38673682

ABSTRACT

Background/Objective: Autism spectrum disorder (ASD) is a neurodevelopmental condition characterized by lifelong impacts on social functioning and daily living skills and by restricted, repetitive behaviors (RRBs). Applied behavior analysis (ABA), the gold-standard treatment for ASD, has been extensively validated, but access to it is hindered by the limited availability of qualified professionals and by logistical and financial barriers. Scientifically validated, parent-led ABA can fill this accessibility gap by overcoming treatment barriers. This retrospective cohort study examines how our ABA treatment model, which utilizes parent behavior technicians (pBTs) to deliver ABA, impacts adaptive behaviors and interfering behaviors (IBs) in a cohort of children on the autism spectrum with varying ASD severity levels, with or without clinically significant IBs. Methods: Clinical outcomes of 36 patients aged 3-15 years were assessed using longitudinal changes in the Vineland-3 after 3+ months of pBT-delivered ABA treatment. Results: Within the pBT model, our patients demonstrated clinically significant improvements in Vineland-3 Composite, domain, and subdomain scores, and treatment utilization was higher among children with severe ASD. pBTs utilized more of the prescribed ABA when children initiated treatment with clinically significant IBs, and these children also showed greater gains in their Composite scores. Study limitations include sample size, inter-rater reliability, potential assessment metric bias and schedule variability, and confounding intrinsic or extrinsic factors. Conclusion: Overall, our pBT model facilitated high treatment utilization and showed robust effectiveness, achieving improved adaptive behaviors and reduced IBs compared with conventional ABA delivery. The pBT model is a strong contender to fill the widening treatment accessibility gap and represents a powerful tool for addressing systemic problems in ABA treatment delivery.

2.
Brain Inform ; 10(1): 7, 2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36862316

ABSTRACT

BACKGROUND: Applied behavioral analysis (ABA) is regarded as the gold standard treatment for autism spectrum disorder (ASD) and has the potential to improve outcomes for patients with ASD. It can be delivered at different intensities, which are classified as comprehensive or focused treatment approaches. Comprehensive ABA targets multiple developmental domains and involves 20-40 h/week of treatment. Focused ABA targets individual behaviors and typically involves 10-20 h/week of treatment. Determining the appropriate treatment intensity involves patient assessment by trained therapists; however, the final determination is highly subjective and lacks a standardized approach. In our study, we examined the ability of a machine learning (ML) prediction model to classify which treatment intensity is most suitable for individual patients with ASD undergoing ABA treatment. METHODS: Retrospective data from 359 patients diagnosed with ASD were analyzed and included in the training and testing of an ML model for predicting comprehensive or focused treatment for individuals undergoing ABA treatment. Data inputs included demographics, schooling, behavior, skills, and patient goals. A gradient-boosted tree ensemble method, XGBoost, was used to develop the prediction model, which was then compared against a standard-of-care comparator encompassing features specified by the Behavior Analyst Certification Board treatment guidelines. Prediction model performance was assessed via area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS: The prediction model achieved excellent performance for classifying patients into the comprehensive versus focused treatment groups (AUROC: 0.895; 95% CI 0.811-0.962) and outperformed the standard-of-care comparator (AUROC 0.767; 95% CI 0.629-0.891).
The prediction model also achieved sensitivity of 0.789, specificity of 0.808, PPV of 0.600, and NPV of 0.913. Of the 71 patients whose data were used to test the prediction model, only 14 were misclassified. A majority of misclassifications (n = 10) recommended comprehensive ABA treatment for patients whose ground truth was focused ABA treatment, and would therefore still provide a therapeutic benefit. The three most important features contributing to the model's predictions were bathing ability, age, and hours per week of past ABA treatment. CONCLUSION: This research demonstrates that the ML prediction model performs well in classifying appropriate ABA treatment plan intensity using readily available patient data. This may help standardize the process for determining appropriate ABA treatments, which can facilitate initiation of the most appropriate treatment intensity for patients with ASD and improve resource allocation.
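All of the metrics reported above derive from a confusion matrix plus a ranking of predicted risk scores. As a rough, self-contained sketch (not the authors' code), they can be computed like this; `auroc` uses the Mann-Whitney rank identity, i.e. the probability that a randomly chosen positive case is scored above a randomly chosen negative case:

```python
from typing import List, Tuple


def confusion(labels: List[int], preds: List[int]) -> Tuple[int, int, int, int]:
    """Return (tp, fp, tn, fn) for binary labels/predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return tp, fp, tn, fn


def report(labels: List[int], preds: List[int]) -> dict:
    """Sensitivity, specificity, PPV, and NPV from hard predictions."""
    tp, fp, tn, fn = confusion(labels, preds)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }


def auroc(labels: List[int], scores: List[float]) -> float:
    """AUROC via the rank identity: P(score of random positive > random negative),
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used, but the pairwise-comparison form above makes the definition explicit.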

3.
Cureus ; 15(3): e36727, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36998917

ABSTRACT

Objective: This study examines the implementation of a hybrid applied behavioral analysis (ABA) treatment model to determine its impact on autism spectrum disorder (ASD) patient outcomes. Methods: Retrospective data were collected for 25 pediatric patients to measure progress before and after the implementation of a hybrid ABA treatment model under which therapists consistently captured session notes electronically regarding goals and patient progress. ABA treatment was streamlined for consistent delivery, with improved software utilization for tracking scheduling and progress. Eleven goals within three domains (behavioral, social, and communication) were examined. Results: After the implementation of the hybrid model, the goal success rate improved by 9.7% compared to the baseline; 41.8% of goals showed improvement, 38.4% showed a flat trend, and 19.8% showed deterioration. Multiple goals trended upwards in 76% of the patients. Conclusion: This pilot study demonstrated that enhancing the consistency with which ABA treatment is monitored and delivered can improve patient outcomes, as seen through improved attainment of goals.

4.
Diagnostics (Basel) ; 14(1)2023 Dec 20.
Article in English | MEDLINE | ID: mdl-38201322

ABSTRACT

Mild cognitive impairment (MCI) is cognitive decline that can indicate future risk of Alzheimer's disease (AD). We developed and validated a machine learning algorithm (MLA), based on a gradient-boosted tree ensemble method, to analyze phenotypic data for individuals 55-88 years old (n = 493) diagnosed with MCI. Data were analyzed within multiple prediction windows and averaged to predict progression to AD within 24-48 months. The MLA outperformed the mini-mental state examination (MMSE) and three comparison models at all prediction windows on most metrics. Exceptions include sensitivity at 18 months (the MLA and MMSE each achieved 0.600) and sensitivity at 30 and 42 months (the MMSE was marginally better). For all prediction windows, the MLA achieved AUROC ≥ 0.857 and NPV ≥ 0.800. With averaged data for the 24-48-month lookahead timeframe, the MLA outperformed the MMSE on all metrics. This study demonstrates that machine learning may provide a more accurate risk assessment than the standard of care. This may facilitate care coordination, decrease healthcare expenditures, and maintain quality of life for patients at risk of progressing from MCI to AD.

5.
Pulm Circ ; 12(1): e12013, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35506114

ABSTRACT

Background: Pulmonary embolisms (PEs) are life-threatening medical events, and early identification of patients experiencing a PE is essential to optimizing patient outcomes. Current tools for risk stratification of PE patients are limited and unable to predict PE events before their occurrence. Objective: We developed a machine learning algorithm (MLA) designed to identify patients at risk of PE before clinical detection of onset in an inpatient population. Materials and Methods: Three machine learning (ML) models were developed on electronic health record data from 63,798 medical and surgical inpatients in a large US medical center: logistic regression, neural network, and gradient boosted tree (XGBoost) models. All models used only routinely collected demographic, clinical, and laboratory information as inputs. All were evaluated for their ability to predict PE at the first time that the vital signs and laboratory measures required to run the MLA were available. Performance was assessed with regard to the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity. Results: The model trained using XGBoost demonstrated the strongest performance for predicting PEs, obtaining an AUROC of 0.85, a sensitivity of 81%, and a specificity of 70%. The neural network and logistic regression models obtained AUROCs of 0.74 and 0.67, sensitivities of 81% and 81%, and specificities of 44% and 35%, respectively. Conclusions: This algorithm may improve patient outcomes through earlier recognition and prediction of PE, enabling earlier diagnosis and treatment.

6.
JMIR Aging ; 5(2): e35373, 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35363146

ABSTRACT

BACKGROUND: Short-term fall prediction models that use electronic health records (EHRs) may enable the implementation of dynamic care practices that specifically address changes in individualized fall risk within senior care facilities. OBJECTIVE: The aim of this study is to implement machine learning (ML) algorithms that use EHR data to predict a 3-month fall risk in residents from a variety of senior care facilities providing different levels of care. METHODS: This retrospective study obtained EHR data (2007-2021) from Juniper Communities' proprietary database of 2785 individuals primarily residing in skilled nursing facilities, independent living facilities, and assisted living facilities across the United States. We assessed the performance of 3 ML-based fall prediction models and the Juniper Communities' fall risk assessment. Additional analyses were conducted to examine how changes in the input features, training data sets, and prediction windows affected the performance of these models. RESULTS: The Extreme Gradient Boosting model exhibited the highest performance, with an area under the receiver operating characteristic curve of 0.846 (95% CI 0.794-0.894), specificity of 0.848, diagnostic odds ratio of 13.40, and sensitivity of 0.706, while achieving the best trade-off in balancing true positive and negative rates. The number of active medications was the most significant feature associated with fall risk, followed by a resident's number of active diseases and several variables associated with vital signs, including diastolic blood pressure and changes in weight and respiratory rates. The combination of vital signs with traditional risk factors as input features achieved higher prediction accuracy than using either group of features alone. 
CONCLUSIONS: This study shows that the Extreme Gradient Boosting technique can use a large number of features from EHR data to make short-term fall predictions with a better performance than that of conventional fall risk assessments and other ML models. The integration of routinely collected EHR data, particularly vital signs, into fall prediction models may generate more accurate fall risk surveillance than models without vital signs. Our data support the use of ML models for dynamic, cost-effective, and automated fall predictions in different types of senior care facilities.

7.
JGH Open ; 6(3): 196-204, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35355667

ABSTRACT

Background: Non-alcoholic fatty liver (NAFL) can progress to the severe subtype non-alcoholic steatohepatitis (NASH) and/or fibrosis, which are associated with increased morbidity, mortality, and healthcare costs. Current machine learning studies detect NASH; however, this study is unique in predicting the progression of NAFL patients to NASH or fibrosis. Aim: To utilize clinical information from NAFL-diagnosed patients to predict the likelihood of progression to NASH or fibrosis. Methods: Data were collected from electronic health records of patients receiving a first-time NAFL diagnosis. A gradient boosted machine learning algorithm (XGBoost) as well as logistic regression (LR) and multi-layer perceptron (MLP) models were developed. A five-fold cross-validation grid search was utilized for hyperparameter optimization of variables, including maximum tree depth, learning rate, and number of estimators. Predictions of patients likely to progress to NASH or fibrosis within 4 years of initial NAFL diagnosis were made using demographic features, vital signs, and laboratory measurements. Results: The XGBoost algorithm achieved area under the receiver operating characteristic (AUROC) values of 0.79 for prediction of progression to NASH and 0.87 for fibrosis on both hold-out and external validation test sets. The XGBoost algorithm outperformed the LR and MLP models for both NASH and fibrosis prediction on all metrics. Conclusion: It is possible to accurately identify newly diagnosed NAFL patients at high risk of progression to NASH or fibrosis. Early identification of these patients may allow for increased clinical monitoring, more aggressive preventative measures to slow the progression of NAFL and fibrosis, and efficient clinical trial enrollment.
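A five-fold cross-validation grid search of the kind described above (over maximum tree depth, learning rate, and number of estimators) can be sketched in plain Python. The grid values and `score_fn` below are hypothetical stand-ins, not the study's settings; in the real pipeline `score_fn` would fit XGBoost with the given parameters on the training fold and return AUROC on the validation fold:

```python
import itertools
import random

# Hypothetical grid mirroring the hyperparameters named in the abstract.
PARAM_GRID = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1, 0.3],
    "n_estimators": [100, 300],
}


def grid_points(grid: dict) -> list:
    """Expand a dict of candidate lists into parameter dicts (Cartesian product)."""
    keys = sorted(grid)
    return [dict(zip(keys, values))
            for values in itertools.product(*(grid[k] for k in keys))]


def kfold_indices(n: int, k: int = 5, seed: int = 0):
    """Shuffle indices 0..n-1 and yield (train, validation) index splits, k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val


def grid_search(score_fn, n_samples: int, grid: dict = PARAM_GRID, k: int = 5):
    """Return the parameter dict with the highest mean validation score across folds.
    `score_fn(params, train_idx, val_idx)` stands in for fit-then-score."""
    best, best_score = None, float("-inf")
    for params in grid_points(grid):
        scores = [score_fn(params, tr, va) for tr, va in kfold_indices(n_samples, k)]
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best, best_score = params, mean
    return best, best_score
```

With the 3 × 3 × 2 grid above, 18 parameter combinations are each scored on 5 folds, i.e. 90 fit-and-score runs.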

8.
Article in English | MEDLINE | ID: mdl-35046014

ABSTRACT

INTRODUCTION: Diabetic kidney disease (DKD) accounts for the majority of increased risk of mortality for patients with diabetes, and eventually manifests in approximately half of those patients diagnosed with type 2 diabetes mellitus (T2DM). Although increased screening frequency can avoid delayed diagnoses, this is not uniformly implemented. The purpose of this study was to develop and retrospectively validate a machine learning algorithm (MLA) that predicts stages of DKD within 5 years upon diagnosis of T2DM. RESEARCH DESIGN AND METHODS: Two MLAs were trained to predict stages of DKD severity, and compared with the Centers for Disease Control and Prevention (CDC) risk score to evaluate performance. The models were validated on a hold-out test set as well as an external dataset sourced from separate facilities. RESULTS: The MLAs outperformed the CDC risk score in both the hold-out test and external datasets. Our algorithms achieved an area under the receiver operating characteristic curve (AUROC) of 0.75 on the hold-out set for prediction of any-stage DKD and an AUROC of over 0.82 for more severe endpoints, compared with the CDC risk score with an AUROC <0.70 on all test sets and endpoints. CONCLUSION: This retrospective study shows that an MLA can provide timely predictions of DKD among patients with recently diagnosed T2DM.


Subjects
Diabetes Mellitus, Type 2, Diabetic Nephropathies, Algorithms, Diabetes Mellitus, Type 2/complications, Diabetes Mellitus, Type 2/diagnosis, Diabetic Nephropathies/diagnosis, Diabetic Nephropathies/etiology, Humans, Machine Learning, Retrospective Studies, United States
9.
Am J Infect Control ; 50(3): 250-257, 2022 03.
Article in English | MEDLINE | ID: mdl-35067382

ABSTRACT

BACKGROUND: Interventions to better prevent or manage Clostridioides difficile infection (CDI) may significantly reduce morbidity, mortality, and healthcare spending. METHODS: We present a retrospective study using electronic health record data from over 700 United States hospitals. A subset of hospitals was used to develop machine learning algorithms (MLAs); the remaining hospitals served as an external test set. Three MLAs were evaluated: gradient-boosted decision trees (XGBoost), a Deep Long Short-Term Memory (LSTM) neural network, and a one-dimensional convolutional neural network (CNN). MLA performance was evaluated with area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, diagnostic odds ratios, and likelihood ratios. RESULTS: The development dataset contained 13,664,840 inpatient encounters with 80,046 CDI encounters; the external dataset contained 1,149,088 inpatient encounters with 7,107 CDI encounters. The best AUROCs for the XGBoost, LSTM, and one-dimensional CNN models were achieved, respectively, without specialized training techniques, with resampling alone, and with resampling and output bias in combination. XGBoost achieved the highest overall AUROC. CONCLUSIONS: MLAs can predict future CDI in hospitalized patients using just 6 hours of data. In clinical practice, a machine-learning based tool may support prophylactic measures, earlier diagnosis, and more timely implementation of infection control measures.


Subjects
Clostridioides difficile, Clostridium Infections, Clostridium Infections/diagnosis, Clostridium Infections/epidemiology, Humans, Machine Learning, ROC Curve, Retrospective Studies
10.
Am J Med Sci ; 364(1): 46-52, 2022 07.
Article in English | MEDLINE | ID: mdl-35081403

ABSTRACT

BACKGROUND: The aim of the study was to quantify the relationship between acute kidney injury (AKI) and alcohol use disorder (AUD). METHODS: We used a large academic medical center and the MIMIC-III databases to quantify AKI disease and mortality burden as well as AKI disease progression in the AUD and non-AUD subpopulations. We used the MIMIC-III dataset to compare two different methods of encoding AKI: ICD-9 codes and the Kidney Disease: Improving Global Outcomes (KDIGO) definition. In addition to the AUD subpopulation, we also present analyses for the hepatorenal syndrome (HRS) and alcohol-related cirrhosis subpopulations identified via ICD-9/ICD-10 coding. RESULTS: In both the ICD-9 and KDIGO encodings of AKI, the AUD subpopulation had a higher incidence of AKI (ICD-9: 43.3% vs. 37.92% AKI in the non-AUD subpopulations; KDIGO: 48.65% vs. 40.53%) in the MIMIC-III dataset. In the academic dataset, the AUD subpopulation also had a higher incidence of AKI than the non-AUD subpopulation (ICD-9/ICD-10: 12.76% vs. 10.71%). The mortality rate of the subpopulation with both AKI and AUD, HRS, or alcohol-related cirrhosis was consistently higher than that of the subpopulation with only AKI in both datasets, including after adjusting for disease severity using two methods of severity estimation in the MIMIC-III dataset. Disease progression rates were similar for the AUD and non-AUD subpopulations. CONCLUSIONS: Our work shows that the AUD subpopulation had a higher incidence of AKI than the non-AUD subpopulation, and that patients with both AKI and AUD, HRS, or alcohol-related cirrhosis had higher mortality rates than the non-AUD subpopulation with AKI.
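For readers unfamiliar with the KDIGO encoding compared above, a simplified sketch of its serum-creatinine criteria is shown below. This is illustrative only, not a clinical implementation: the urine-output criteria and the exact 48-hour/7-day timing windows are omitted, and the 48-hour absolute rise is passed in as a precomputed value:

```python
def kdigo_stage(baseline_scr: float, current_scr: float,
                abs_rise_48h: float = 0.0, on_rrt: bool = False) -> int:
    """Simplified KDIGO AKI stage from serum creatinine (SCr, mg/dL).

    Returns 0 (no AKI by these criteria) through 3:
      stage 1: SCr 1.5-1.9x baseline, or an absolute rise >= 0.3 mg/dL in 48 h
      stage 2: SCr 2.0-2.9x baseline
      stage 3: SCr >= 3.0x baseline, SCr >= 4.0 mg/dL, or renal replacement therapy
    """
    ratio = current_scr / baseline_scr
    if on_rrt or ratio >= 3.0 or current_scr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or abs_rise_48h >= 0.3:
        return 1
    return 0
```

Encoding AKI this way from raw creatinine values typically flags more cases than ICD-9 coding, consistent with the higher KDIGO incidence figures reported above.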


Subjects
Acute Kidney Injury, Alcoholism, Hepatorenal Syndrome, Acute Kidney Injury/etiology, Alcoholism/complications, Cost of Illness, Disease Progression, Hospital Mortality, Humans, Liver Cirrhosis/complications, Retrospective Studies
11.
Am J Infect Control ; 50(4): 440-445, 2022 04.
Article in English | MEDLINE | ID: mdl-34428529

ABSTRACT

BACKGROUND: Central line-associated bloodstream infections (CLABSIs) are associated with significant morbidity, mortality, and increased healthcare costs. Despite the high prevalence of CLABSIs in the U.S., there are currently no tools to stratify a patient's risk of developing an infection as a result of central line placement. To this end, we have developed and validated a machine learning algorithm (MLA) that can predict a patient's likelihood of developing CLABSI using only electronic health record data, in order to provide clinical decision support. METHODS: We created three machine learning models to retrospectively analyze electronic health record data from 27,619 patient encounters. The models were trained and validated using an 80:20 train-test split. Patients designated as having a central line procedure based on International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) codes were included. RESULTS: XGBoost was the highest-performing MLA of the three models, obtaining an AUROC of 0.762 for CLABSI risk prediction at 48 hours after the recorded time of central line placement. CONCLUSIONS: Our results demonstrate that MLAs may be effective clinical decision support tools for assessment of CLABSI risk and should be explored further for this purpose.


Subjects
Catheter-Related Infections, Central Venous Catheterization, Central Venous Catheters, Sepsis, Catheter-Related Infections/diagnosis, Catheter-Related Infections/epidemiology, Central Venous Catheters/adverse effects, Humans, Machine Learning, Retrospective Studies, Sepsis/diagnosis, Sepsis/epidemiology
12.
Pancreatology ; 22(1): 43-50, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34690046

ABSTRACT

BACKGROUND: Acute pancreatitis (AP) is one of the most common causes of gastrointestinal-related hospitalizations in the United States. Severe AP (SAP) is distinguished from milder forms of AP and is associated with a mortality rate of nearly 30%. Risk stratification to identify SAP cases needing inpatient treatment is an important aspect of AP diagnosis. METHODS: We developed machine learning algorithms to predict which patients presenting with AP would require treatment for SAP. Three models were developed using logistic regression, neural networks, and XGBoost. Models were assessed in terms of the area under the receiver operating characteristic curve (AUROC) and compared to the Harmless Acute Pancreatitis Score (HAPS) and Bedside Index for Severity in Acute Pancreatitis (BISAP) for AP risk stratification. RESULTS: Data from 61,894 patients were used to train and test the machine learning models. With an AUROC value of 0.921, the model developed using XGBoost outperformed the logistic regression and neural network-based models. The XGBoost model also achieved a higher AUROC than both HAPS and BISAP for identifying patients who would be diagnosed with SAP. CONCLUSIONS: Machine learning may be able to improve the accuracy of AP risk stratification methods and allow for more timely treatment and initiation of interventions.


Subjects
Machine Learning, Pancreatitis/diagnosis, Acute Disease, Adolescent, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Prognosis, ROC Curve, Retrospective Studies, Severity of Illness Index
13.
Healthc Technol Lett ; 8(6): 139-147, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34938570

ABSTRACT

Diagnosis and appropriate intervention for myocardial infarction (MI) are time-sensitive but rely on clinical measures that can be progressive and initially inconclusive, underscoring the need for an accurate and early predictor of MI to support diagnostic and clinical management decisions. The objective of this study was to develop a machine learning algorithm (MLA) to predict MI diagnosis based on electronic health record (EHR) data readily available during Emergency Department assessment. An MLA was developed using retrospective patient data. The MLA used patient data as they became available in the first 3 h of care to predict MI diagnosis (defined by International Classification of Diseases, 10th revision code) at any time during the encounter. The MLA obtained an area under the receiver operating characteristic curve of 0.87, sensitivity of 87%, and specificity of 70%, outperforming the comparator scoring systems TIMI and GRACE on all metrics. An MLA can synthesize complex EHR data to serve as a clinically relevant risk stratification tool for MI.

14.
Health Policy Technol ; 10(3): 100554, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34367900

ABSTRACT

Objective: In the wake of COVID-19, the United States (U.S.) developed a three-stage plan outlining the parameters for determining when states may reopen businesses and ease travel restrictions. The guidelines also identify subpopulations of Americans deemed to be at high risk for severe disease should they contract COVID-19. These guidelines were based on population-level demographics rather than individual-level risk factors. As such, they may misidentify individuals at high risk for severe illness and may therefore be of limited use in decisions surrounding resource allocation to vulnerable populations. The objective of this study was to evaluate a machine learning algorithm for prediction of serious illness due to COVID-19 using inpatient data collected from electronic health records. Methods: The algorithm was trained to identify patients for whom a diagnosis of COVID-19 was likely to result in hospitalization, and was compared against four U.S. policy-based criteria: age over 65; having a serious underlying health condition; age over 65 or having a serious underlying health condition; and age over 65 and having a serious underlying health condition. Results: The algorithm identified 80% of patients at risk for hospitalization due to COVID-19, versus 62% identified by government guidelines. The algorithm also achieved a high specificity of 95%, outperforming government guidelines. Conclusions: This algorithm may identify individuals likely to require hospitalization should they contract COVID-19. This information may be useful to guide vaccine distribution, anticipate hospital resource needs, and assist health care policymakers in making care decisions in a more principled manner.

15.
JMIR Form Res ; 5(9): e28028, 2021 Sep 14.
Article in English | MEDLINE | ID: mdl-34398784

ABSTRACT

BACKGROUND: A high number of patients who are hospitalized with COVID-19 develop acute respiratory distress syndrome (ARDS). OBJECTIVE: In response to the need for clinical decision support tools to help manage the next pandemic during the early stages (ie, when limited labeled data are present), we developed machine learning algorithms that use semisupervised learning (SSL) techniques to predict ARDS development in general and COVID-19 populations based on limited labeled data. METHODS: SSL techniques were applied to 29,127 encounters with patients who were admitted to 7 US hospitals from May 1, 2019, to May 1, 2021. A recurrent neural network that used a time series of electronic health record data was applied to data that were collected when a patient's peripheral oxygen saturation level fell below the normal range (<97%) to predict the subsequent development of ARDS during the remaining duration of patients' hospital stay. Model performance was assessed with the area under the receiver operating characteristic curve and area under the precision recall curve of an external hold-out test set. RESULTS: For the whole data set, the median time between the first peripheral oxygen saturation measurement of <97% and subsequent respiratory failure was 21 hours. The area under the receiver operating characteristic curve for predicting subsequent ARDS development was 0.73 when the model was trained on a labeled data set of 6930 patients, 0.78 when the model was trained on the labeled data set that had been augmented with the unlabeled data set of 16,173 patients by using SSL techniques, and 0.84 when the model was trained on the entire training set of 23,103 labeled patients. CONCLUSIONS: In the context of using time-series inpatient data and a careful model training design, unlabeled data can be used to improve the performance of machine learning models when labeled data for predicting ARDS development are scarce or expensive.
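The study above applied SSL to a recurrent neural network over time-series EHR data. As a generic illustration of the underlying self-training idea only (not the study's method), the toy example below pseudo-labels high-confidence unlabeled points and refits a deliberately simple nearest-centroid model on 1-D features; the data, threshold, and model are all invented for illustration:

```python
def fit(xs, ys):
    """Tiny stand-in model: per-class means for 1-D features (labels 0/1)."""
    class0 = [x for x, y in zip(xs, ys) if y == 0]
    class1 = [x for x, y in zip(xs, ys) if y == 1]
    return sum(class0) / len(class0), sum(class1) / len(class1)


def predict_with_margin(model, x):
    """Nearest-centroid label plus a confidence margin (distance gap)."""
    c0, c1 = model
    d0, d1 = abs(x - c0), abs(x - c1)
    return (0 if d0 <= d1 else 1), abs(d0 - d1)


def self_train(labeled, unlabeled, threshold=1.0, max_rounds=10):
    """Self-training SSL: repeatedly pseudo-label unlabeled points whose
    confidence margin clears `threshold`, add them to the labeled set, refit.
    `labeled` is a list of (x, y) pairs; `unlabeled` is a list of x values."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(max_rounds):
        model = fit([x for x, _ in labeled], [y for _, y in labeled])
        scored = [(x, *predict_with_margin(model, x)) for x in pool]
        adopted = [(x, y) for x, y, margin in scored if margin >= threshold]
        if not adopted:
            break
        labeled.extend(adopted)
        pool = [x for x, _, margin in scored if margin < threshold]
    return fit([x for x, _ in labeled], [y for _, y in labeled])
```

The same loop structure carries over to the neural-network setting: the margin test is replaced by a predicted-probability threshold, and each round's pseudo-labeled examples augment the scarce labeled set.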

16.
Clin Imaging ; 80: 268-273, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34425544

ABSTRACT

INTRODUCTION: The objective of this study was to assess seven configurations of six convolutional deep neural network architectures for classification of chest X-rays (CXRs) as COVID-19 positive or negative. METHODS: The primary dataset consisted of 294 COVID-19 positive and 294 COVID-19 negative CXRs, the latter comprising roughly equal numbers of pneumonia, emphysema, fibrosis, and healthy images. We used six common convolutional neural network architectures: VGG16, DenseNet121, DenseNet201, MobileNet, NasNetMobile, and InceptionV3. We studied six models (one for each architecture) that were pre-trained on a vast repository of generic (non-CXR) images, as well as a seventh DenseNet121 model, which was pre-trained on a repository of CXR images. For each model, we replaced the output layers with custom fully connected layers for the task of binary classification of images as COVID-19 positive or negative. Performance metrics were calculated on a hold-out test set with CXRs from patients who were not included in the training/validation set. RESULTS: When pre-trained on generic images, the VGG16, DenseNet121, DenseNet201, MobileNet, NasNetMobile, and InceptionV3 architectures respectively produced hold-out test set areas under the receiver operating characteristic curve (AUROCs) of 0.98, 0.95, 0.97, 0.95, 0.99, and 0.96 for the COVID-19 classification of CXRs. The X-ray pre-trained DenseNet121 model, in comparison, had a test set AUROC of 0.87. DISCUSSION: Common convolutional neural network architectures with parameters pre-trained on generic images yield high-performance and well-calibrated COVID-19 CXR classification.


Subjects
COVID-19, Deep Learning, Humans, Neural Networks, Computer, SARS-CoV-2, X-Rays
17.
Leuk Res ; 109: 106639, 2021 10.
Article in English | MEDLINE | ID: mdl-34171604

ABSTRACT

BACKGROUND: Early myelodysplastic syndrome (MDS) diagnosis can allow physicians to provide early treatment, which may delay advancement of MDS and improve quality of life. However, MDS often goes unrecognized and is difficult to distinguish from other disorders. We developed a machine learning algorithm for the prediction of MDS one year prior to clinical diagnosis of the disease. METHODS: Retrospective analysis was performed on 790,470 patients over the age of 45 seen in the United States between 2007 and 2020. A gradient boosted decision tree model (XGB) was built to predict MDS diagnosis using vital signs, lab results, and demographics from the prior two years of patient data. The XGB model was compared to logistic regression (LR) and artificial neural network (ANN) models. The models did not use blast percentage and cytogenetics information as inputs. Predictions were made one year prior to MDS diagnosis as determined by International Classification of Diseases (ICD) codes, 9th and 10th revisions. Performance was assessed with regard to the area under the receiver operating characteristic curve (AUROC). RESULTS: On a hold-out test set, the XGB model achieved an AUROC value of 0.87 for prediction of MDS one year prior to diagnosis, with a sensitivity of 0.79 and specificity of 0.80. The XGB model was compared against the LR and ANN models, which achieved AUROCs of 0.838 and 0.832, respectively. CONCLUSIONS: Machine learning may allow for early MDS diagnosis and more appropriate treatment administration.


Subjects
Algorithms, Machine Learning, Myelodysplastic Syndromes/diagnosis, Neural Networks, Computer, Quality of Life, Risk Assessment/methods, Adult, Aged, Aged, 80 and over, Case-Control Studies, Female, Follow-Up Studies, Humans, Male, Middle Aged, Myelodysplastic Syndromes/epidemiology, Prognosis, ROC Curve, Retrospective Studies, United States/epidemiology
18.
Medicine (Baltimore) ; 100(23): e26246, 2021 Jun 11.
Article in English | MEDLINE | ID: mdl-34115013

ABSTRACT

ABSTRACT: Ventilator-associated pneumonia (VAP) is the most common and fatal nosocomial infection in intensive care units (ICUs). Existing methods for identifying VAP display low accuracy, and their use may delay antimicrobial therapy. VAP diagnostics derived from machine learning (ML) methods that utilize electronic health record (EHR) data have not yet been explored. The objective of this study is to compare the performance of a variety of ML models trained to predict whether VAP will be diagnosed during the patient stay. A retrospective study examined data from 6126 adult ICU encounters lasting at least 48 hours following the initiation of mechanical ventilation. The gold standard was the presence of a diagnostic code for VAP. Five different ML models were trained to predict VAP 48 hours after initiation of mechanical ventilation. Model performance was evaluated with regard to the area under the receiver operating characteristic (AUROC) curve on a 20% hold-out test set. Feature importance was measured in terms of Shapley values. The highest performing model achieved an AUROC value of 0.854. The most important features for the best-performing model were the length of time on mechanical ventilation, the presence of antibiotics, sputum test frequency, and the most recent Glasgow Coma Scale assessment. Supervised ML using patient EHR data is promising for VAP diagnosis and warrants further validation. This tool has the potential to aid the timely diagnosis of VAP.
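Shapley values, used for feature importance above, attribute a model's output to each input feature as a weighted average of that feature's marginal contribution over all feature subsets. The sketch below computes exact Shapley values by brute-force subset enumeration on a toy additive scoring function; the feature names and weights are hypothetical, and real work on many features would use an approximation library such as SHAP rather than the exponential enumeration shown here:

```python
from itertools import combinations
from math import factorial


def shapley_values(features, value):
    """Exact Shapley values by enumerating all feature subsets.

    `value(subset)` maps a frozenset of feature names to a model score.
    Feasible only for a handful of features, since there are 2^n subsets.
    """
    n = len(features)
    phis = {}
    for f in features:
        others = [g for g in features if g != f]
        phi = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = frozenset(subset)
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi += weight * (value(s | {f}) - value(s))
        phis[f] = phi
    return phis
```

For a purely additive value function, each feature's Shapley value equals its weight, which makes the toy case easy to check; for a real model, the values capture interaction effects as well.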


Subjects
Forecasting/methods , Machine Learning/standards , Pneumonia, Ventilator-Associated/diagnosis , Adult , Aged , Aged, 80 and over , Boston , Electronic Health Records/statistics & numerical data , Female , Humans , Intensive Care Units/organization & administration , Intensive Care Units/statistics & numerical data , Machine Learning/statistics & numerical data , Male , Middle Aged , Respiration, Artificial/adverse effects , Retrospective Studies , Sensitivity and Specificity
19.
Kidney Int Rep ; 6(5): 1289-1298, 2021 May.
Article in English | MEDLINE | ID: mdl-34013107

ABSTRACT

INTRODUCTION: Acute kidney injury (AKI) is common among hospitalized patients and has a significant impact on morbidity and mortality. Although early prediction of AKI has the potential to reduce adverse patient outcomes, it remains a difficult condition to predict and diagnose. The purpose of this study was to evaluate the ability of a machine learning algorithm to predict AKI, as defined by Kidney Disease: Improving Global Outcomes (KDIGO) stage 2 or 3, up to 48 hours in advance of onset using convolutional neural networks (CNNs) and patient electronic health record (EHR) data. METHODS: A CNN prediction system was developed to use EHR data gathered during patients' stays to predict AKI up to 48 hours before onset. A total of 12,347 patient encounters were retrospectively analyzed from the Medical Information Mart for Intensive Care III (MIMIC-III) database. An XGBoost AKI prediction model and the sequential organ failure assessment (SOFA) scoring system were used as comparators. The outcome was AKI onset. The model was trained on routinely collected patient EHR data. Measurements included the area under the receiver operating characteristic curve (AUROC), positive predictive value (PPV), and a battery of additional performance metrics for advance prediction of AKI onset. RESULTS: On a hold-out test set, the algorithm attained an AUROC of 0.86 and a PPV of 0.24, relative to a cohort AKI prevalence of 7.62%, for long-horizon AKI prediction at a 48-hour window before onset. CONCLUSION: The CNN-based AKI prediction model outperformed XGBoost and the SOFA scoring system, demonstrating superior performance in predicting AKI 48 hours before onset without reliance on serum creatinine (SCr) measurements.
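The pairing of a 0.24 PPV with a 7.62% prevalence illustrates a general point: PPV is driven by prevalence as much as by classifier quality. A minimal sketch of the standard Bayes'-rule relation follows; the sensitivity and specificity values used below are hypothetical and are not reported by the study.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value, P(disease | positive test), from
    Bayes' rule: true positives over all positive calls."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Even a good hypothetical classifier (80% sensitive, 90% specific)
# yields a modest PPV at the cohort's 7.62% AKI prevalence...
low_prev = ppv(0.8, 0.9, 0.0762)
# ...while the same classifier at 50% prevalence does much better.
high_prev = ppv(0.8, 0.9, 0.5)
print(round(low_prev, 3), round(high_prev, 3))
```

This is why a PPV several times the baseline prevalence, as reported here, can still represent meaningful enrichment over unscreened alerting.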

20.
JMIR Public Health Surveill ; 7(6): e28265, 2021 06 03.
Article in English | MEDLINE | ID: mdl-33999831

ABSTRACT

BACKGROUND: Despite the limitations of cycle threshold (CT) values for individual patient care, population distributions of CT values may be useful indicators of local outbreaks. OBJECTIVE: We aimed to conduct an exploratory analysis of potential correlations between the population distribution of CT values and COVID-19 dynamics, which were operationalized as percent positivity, transmission rate (Rt), and COVID-19 hospitalization count. METHODS: In total, 148,410 specimens collected between September 15, 2020, and January 11, 2021, from the greater El Paso area were processed in the Dascena COVID-19 Laboratory. The daily median CT value, daily Rt, daily count of COVID-19 hospitalizations, daily change in percent positivity, and rolling averages of these features were plotted over time. Two-way scatterplots and linear regression were used to evaluate possible associations between daily median CT values and outbreak measures. Cross-correlation plots were used to determine whether a time delay existed between changes in daily median CT values and measures of community disease dynamics. RESULTS: Daily median CT values correlated negatively with daily Rt values (P<.001), daily COVID-19 hospitalization counts (with a 33-day time delay; P<.001), and daily changes in percent positivity among testing samples (P<.001). Despite visual trends suggesting time delays in the plots of median CT values and outbreak measures, a statistically significant delay was detected only between changes in median CT values and COVID-19 hospitalization counts (P<.001). CONCLUSIONS: This study adds to the literature by analyzing samples collected from an entire geographical area and contextualizing the results with other research investigating population CT values.
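The cross-correlation idea behind the reported 33-day delay (correlate one series against lagged copies of the other and look for the strongest relationship) can be sketched in pure Python. The series below are synthetic, constructed so one lags the other by 3 steps with inverted sign; they stand in for the study's daily median CT and hospitalization counts, which are not reproduced here.

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def best_lag(x, y, max_lag):
    """Correlate x[t] against y[t + lag] for each candidate lag and return
    the (lag, r) with the strongest absolute correlation -- a bare-bones
    stand-in for reading a peak off a cross-correlation plot."""
    best = (0, 0.0)
    for lag in range(max_lag + 1):
        a = x[:-lag] if lag else x
        r = pearson(a, y[lag:])
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best

# Synthetic series: y lags x by 3 steps and is sign-inverted, mimicking a
# negative correlation with a time delay (all numbers hypothetical).
x = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4]
y = [9, 9, 9] + [-v for v in x[:17]]
lag, r = best_lag(x, y, 5)
print(lag, round(r, 3))  # → 3 -1.0
```

Real analyses would also assess the statistical significance of the peak, as the study does, rather than taking the argmax at face value.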


Subjects
COVID-19 Nucleic Acid Testing/statistics & numerical data , COVID-19/epidemiology , Hospitalization/statistics & numerical data , Adult , COVID-19/transmission , Female , Humans , Male , Middle Aged , Pandemics , SARS-CoV-2 , Texas , Time Factors