Results 1 - 3 of 3
1.
Crit Care Med ; 49(8): 1312-1321, 2021 08 01.
Article in English | MEDLINE | ID: mdl-33711001

ABSTRACT

OBJECTIVES: The National Early Warning Score, Modified Early Warning Score, and quick Sepsis-related Organ Failure Assessment can predict clinical deterioration. These scores exhibit only moderate performance and are often evaluated using aggregated measures over time. A simulated prospective validation strategy that assesses multiple predictions per patient-day would provide the best pragmatic evaluation. We developed a deep recurrent neural network deterioration model and conducted a simulated prospective evaluation. DESIGN: Retrospective cohort study. SETTING: Four hospitals in Pennsylvania. PATIENTS: Inpatient adults discharged between July 1, 2017, and June 30, 2019. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: We trained a deep recurrent neural network and a logistic regression model using data from electronic health records to predict hourly the 24-hour composite outcome of transfer to ICU or death. We analyzed 146,446 hospitalizations with 16.75 million patient-hours. The hourly event rate was 1.6% (12,842 transfers or deaths, corresponding to 260,295 patient-hours within the predictive horizon). On a hold-out dataset, the deep recurrent neural network achieved an area under the precision-recall curve of 0.042 (95% CI, 0.040-0.043), comparable with the logistic regression model (0.043; 95% CI, 0.041-0.045), and outperformed the National Early Warning Score (0.034; 95% CI, 0.032-0.035), Modified Early Warning Score (0.028; 95% CI, 0.027-0.030), and quick Sepsis-related Organ Failure Assessment (0.021; 95% CI, 0.021-0.022). For a fixed sensitivity of 50%, the deep recurrent neural network achieved a positive predictive value of 3.4% (95% CI, 3.4-3.5) and outperformed the logistic regression model (3.1%; 95% CI, 3.1-3.2), National Early Warning Score (2.0%; 95% CI, 2.0-2.0), Modified Early Warning Score (1.5%; 95% CI, 1.5-1.5), and quick Sepsis-related Organ Failure Assessment (1.5%; 95% CI, 1.5-1.5).
CONCLUSIONS: Commonly used early warning scores for clinical decompensation, along with a logistic regression model and a deep recurrent neural network model, show very poor performance characteristics when assessed using a simulated prospective validation. None of these models may be suitable for real-time deployment.


Subject(s)
Clinical Deterioration , Critical Care/standards , Deep Learning/standards , Organ Dysfunction Scores , Sepsis/therapy , Adult , Humans , Male , Middle Aged , Pennsylvania , Retrospective Studies , Risk Assessment
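The head-to-head comparison in this abstract fixes sensitivity at 50% and reports the resulting positive predictive value for each model. That operating-point evaluation can be reproduced for any scored cohort with a short threshold sweep; the sketch below is illustrative only (the function name and toy data are ours, not the authors' code).

```python
def ppv_at_sensitivity(labels, scores, target_sens=0.5):
    """Sweep thresholds from the highest score downward and return the
    precision (PPV) at the first operating point whose sensitivity
    reaches target_sens. labels are 0/1; scores are model outputs."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    total_pos = sum(labels)
    tp = fp = 0
    for score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        if tp / total_pos >= target_sens:
            return tp / (tp + fp)
    return 0.0

# Toy example: two positives, two negatives.
ppv_at_sensitivity([1, 1, 0, 0], [0.9, 0.2, 0.8, 0.1])  # → 1.0
```

At a 1.6% hourly event rate, even a well-calibrated model yields a low PPV at moderate sensitivity, which is exactly the pattern the abstract reports (PPVs of 1.5-3.4%).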
2.
Ann Surg ; 273(5): 900-908, 2021 05 01.
Article in English | MEDLINE | ID: mdl-33074901

ABSTRACT

OBJECTIVE: The aim of this study was to systematically assess the application and potential benefits of natural language processing (NLP) in surgical outcomes research. SUMMARY BACKGROUND DATA: Widespread implementation of electronic health records (EHRs) has generated a massive patient data source. Traditional methods of data capture, such as billing codes and/or manual review of free-text narratives in EHRs, are highly labor-intensive, costly, subjective, and potentially prone to bias. METHODS: A literature search of PubMed, MEDLINE, Web of Science, and Embase identified all articles published starting in 2000 that used NLP models to assess perioperative surgical outcomes. Evaluation metrics of NLP systems were assessed by means of pooled analysis and meta-analysis. Qualitative synthesis was carried out to assess the results and risk of bias on outcomes. RESULTS: The present study included 29 articles, with over half (n = 15) published after 2018. The most common outcome identified using NLP was postoperative complications (n = 14). Compared to traditional non-NLP models, NLP models identified postoperative complications with higher sensitivity [0.92 (0.87-0.95) vs 0.58 (0.33-0.79), P < 0.001]. The specificities were comparable at 0.99 (0.96-1.00) and 0.98 (0.95-0.99), respectively. Using summary of likelihood ratio matrices, traditional non-NLP models have clinical utility for confirming documentation of outcomes/diagnoses, whereas NLP models may be reliably utilized for both confirming and ruling out documentation of outcomes/diagnoses. CONCLUSIONS: NLP usage to extract a range of surgical outcomes, particularly postoperative complications, is accelerating across disciplines and areas of clinical outcomes research. NLP and traditional non-NLP approaches demonstrate similar performance measures, but NLP is superior in ruling out documentation of surgical outcomes.


Subject(s)
Algorithms , Electronic Health Records/statistics & numerical data , Narration , Natural Language Processing , Surgical Procedures, Operative , Humans
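The "likelihood ratio matrices" conclusion above follows directly from the pooled sensitivity and specificity estimates. As a quick check, the standard formulas LR+ = sens / (1 − spec) and LR− = (1 − sens) / spec can be applied to the reported numbers; this is our own arithmetic sketch, not part of the meta-analysis.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from a pooled
    sensitivity/specificity pair: LR+ = sens / (1 - spec),
    LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Pooled estimates reported above: NLP 0.92/0.99, non-NLP 0.58/0.98.
nlp_pos, nlp_neg = likelihood_ratios(0.92, 0.99)      # ≈ 92, ≈ 0.081
trad_pos, trad_neg = likelihood_ratios(0.58, 0.98)    # ≈ 29, ≈ 0.43
```

By the conventional rule of thumb (LR+ ≥ 10 confirms, LR− ≤ 0.1 rules out), both model classes confirm documented outcomes, but only the NLP models' LR− falls below 0.1, consistent with the conclusion that NLP can also rule out documentation.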
3.
Crit Care Med ; 46(7): 1125-1132, 2018 07.
Article in English | MEDLINE | ID: mdl-29629986

ABSTRACT

OBJECTIVES: Early prediction of undesired outcomes among newly hospitalized patients could improve patient triage and prompt conversations about patients' goals of care. We evaluated the performance of logistic regression, gradient boosting machine, random forest, and elastic net regression models, with and without unstructured clinical text data, to predict a binary composite outcome of in-hospital death or ICU length of stay greater than or equal to 7 days using data from the first 48 hours of hospitalization. DESIGN: Retrospective cohort study with split sampling for model training and testing. SETTING: A single urban academic hospital. PATIENTS: All hospitalized patients who required ICU care at the Beth Israel Deaconess Medical Center in Boston, MA, from 2001 to 2012. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Among 25,947 eligible hospital admissions, we observed 5,504 (21.2%) in which patients died or had an ICU length of stay greater than or equal to 7 days. The gradient boosting machine model had the highest discrimination without (area under the receiver operating characteristic curve, 0.83; 95% CI, 0.81-0.84) and with (area under the receiver operating characteristic curve, 0.89; 95% CI, 0.88-0.90) text-derived variables. Both gradient boosting machines and random forests outperformed logistic regression without text data (p < 0.001), whereas all models outperformed logistic regression with text data (p < 0.02). The inclusion of text data increased the discrimination of all four model types (p < 0.001). Among the models using text data, the increasing presence of the terms "intubated" and "poor prognosis" was positively associated with mortality and ICU length of stay, whereas the term "extubated" was inversely associated with them.
CONCLUSIONS: Variables extracted from unstructured clinical text from the first 48 hours of hospital admission using natural language processing techniques significantly improved the abilities of logistic regression and other machine learning models to predict which patients died or had long ICU stays. Learning health systems may adapt such models using open-source approaches to capture local variation in care patterns.


Subject(s)
Decision Support Techniques , Hospital Mortality , Intensive Care Units , Length of Stay/statistics & numerical data , Natural Language Processing , Aged , Female , Humans , Intensive Care Units/statistics & numerical data , Machine Learning , Male , Middle Aged , Patient Care Planning/statistics & numerical data , Retrospective Studies
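The discrimination figures in this study are areas under the receiver operating characteristic curve (AUROC). The AUROC has a useful probabilistic reading: the chance that a randomly chosen positive case is scored above a randomly chosen negative case. A minimal pure-Python sketch of that rank-based computation (ours, for illustration; real analyses would use a statistics library):

```python
from itertools import product

def auroc(labels, scores):
    """AUROC as the probability that a randomly chosen positive
    outranks a randomly chosen negative; ties count as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Toy example: 3 of the 4 positive/negative pairs are ranked correctly.
auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # → 0.75
```

On this scale the reported gain from adding text-derived variables (0.83 to 0.89) means the model correctly ranks a positive above a negative in roughly 6 more of every 100 random case pairs.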