ABSTRACT
OBJECTIVES: Machine learning algorithms can outperform older methods in predicting clinical deterioration, but rigorous prospective data on their real-world efficacy are limited. We hypothesized that real-time machine learning-generated alerts sent directly to front-line providers would reduce escalations. DESIGN: Single-center prospective pragmatic nonrandomized clustered clinical trial. SETTING: Academic tertiary care medical center. PATIENTS: Adult patients admitted to four medical-surgical units. Assignment to intervention or control arms was determined by initial unit admission. INTERVENTIONS: Real-time alerts, stratified according to predicted likelihood of deterioration, were sent either to the primary team or directly to the rapid response team (RRT). Clinical care and interventions were at the providers' discretion. For the control units, alerts were generated but not sent, and standard RRT activation criteria were used. MEASUREMENTS AND MAIN RESULTS: The primary outcome was the rate of escalation per 1000 patient bed days. Secondary outcomes included the frequency of orders for fluids, medications, and diagnostic tests, and combined in-hospital and 30-day mortality. Propensity score modeling with stabilized inverse probability of treatment weighting (IPTW) was used to account for differences between groups. Data from 2740 patients enrolled between July 2019 and March 2020 were analyzed (1488 intervention, 1252 control). Average age was 66.3 years, and 1428 participants (52%) were female. The rate of escalation was 12.3 vs. 11.3 per 1000 patient bed days (difference, 1.0; 95% CI, -2.8 to 4.7), with an IPTW-adjusted incidence rate ratio of 1.43 (95% CI, 1.16-1.78; p < 0.001). Patients in the intervention group were more likely to receive cardiovascular medication orders (16.1% vs. 11.3%; difference, 4.7%; 95% CI, 2.1-7.4%), with an IPTW-adjusted relative risk (RR) of 1.74 (95% CI, 1.39-2.18; p < 0.001). Combined in-hospital and 30-day mortality was lower in the intervention group (7% vs. 9.3%; difference, -2.4%; 95% CI, -4.5% to -0.2%), with an IPTW-adjusted RR of 0.76 (95% CI, 0.58-0.99; p = 0.045). CONCLUSIONS: Real-time machine learning alerts do not reduce the rate of escalation but may reduce mortality.
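As a rough illustration of the stabilized IPTW adjustment described above, the sketch below fits a propensity model and a weighted Poisson outcome model; the file, column names (age, sex, comorbidity_score, intervention, escalations, bed_days), and model form are assumptions, not the trial's actual analysis.

```python
# Hypothetical sketch of stabilized inverse probability of treatment weighting (IPTW);
# file and column names are placeholders, not the trial's actual variables.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cohort.csv")
X = sm.add_constant(df[["age", "sex", "comorbidity_score"]])

# Propensity score: probability of assignment to the intervention arm given covariates
ps = sm.Logit(df["intervention"], X).fit(disp=0).predict(X)

# Stabilized weights use the marginal treatment probability in the numerator
p_treated = df["intervention"].mean()
df["sw"] = df["intervention"] * p_treated / ps + (1 - df["intervention"]) * (1 - p_treated) / (1 - ps)

# Weighted Poisson model for escalation counts with patient bed days as exposure;
# exp(coefficient) approximates an IPTW-adjusted incidence rate ratio. In practice a
# robust variance estimator would be used for the confidence interval.
irr_model = sm.GLM(
    df["escalations"],
    sm.add_constant(df["intervention"]),
    family=sm.families.Poisson(),
    exposure=df["bed_days"],
    freq_weights=df["sw"],
).fit()
print(irr_model.summary())
```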
Subjects
Clinical Deterioration, Machine Learning, Humans, Female, Male, Prospective Studies, Middle Aged, Aged, Hospital Rapid Response Team/organization & administration, Hospital Rapid Response Team/statistics & numerical data, Hospital Mortality
ABSTRACT
BACKGROUND: Malnutrition is associated with increased morbidity, mortality, and healthcare costs. Early detection is important for timely intervention. This paper assesses the ability of a machine learning screening tool (MUST-Plus), implemented in the registered dietitian (RD) workflow, to identify malnourished patients early in the hospital stay and to improve the diagnosis and documentation rate of malnutrition. METHODS: This retrospective cohort study was conducted in a large, urban health system in New York City comprising six hospitals serving a diverse patient population. The study included all patients aged ≥ 18 years who were not admitted for COVID-19 and had a length of stay of ≤ 30 days. RESULTS: Of the 7736 hospitalisations that met the inclusion criteria, 1947 (25.2%) were identified as malnourished by MUST-Plus-assisted RD evaluations. The lag between admission and diagnosis improved with MUST-Plus implementation. The usability of the tool output by RDs exceeded 90%, showing good acceptance by users. Comparing the pre- and post-implementation periods, the rates of both diagnosis and documentation of malnutrition improved. CONCLUSION: MUST-Plus, a machine learning-based screening tool, shows great promise for malnutrition screening in hospitalised patients when used in conjunction with adequate RD staffing and training on the tool. It performed well across multiple measures and settings. Other health systems can use their electronic health record data to develop, test and implement similar machine learning-based processes to improve malnutrition screening and facilitate timely intervention.
Subjects
Machine Learning, Malnutrition, Mass Screening, Nutrition Assessment, Humans, Retrospective Studies, Malnutrition/diagnosis, Middle Aged, Male, Female, New York City, Aged, Risk Assessment/methods, Mass Screening/methods, Adult, Hospitalization, Aged, 80 and over
ABSTRACT
BACKGROUND: Early reports indicate that AKI is common among patients with coronavirus disease 2019 (COVID-19) and associated with worse outcomes. However, AKI among hospitalized patients with COVID-19 in the United States is not well described. METHODS: This retrospective, observational study involved a review of data from electronic health records of patients aged ≥18 years with laboratory-confirmed COVID-19 admitted to the Mount Sinai Health System from February 27 to May 30, 2020. We describe the frequency of AKI and dialysis requirement, AKI recovery, and adjusted odds ratios (aORs) for mortality. RESULTS: Of 3993 hospitalized patients with COVID-19, AKI occurred in 1835 (46%) patients; 347 (19%) of the patients with AKI required dialysis. The proportions with stages 1, 2, or 3 AKI were 39%, 19%, and 42%, respectively. A total of 976 (24%) patients were admitted to intensive care, of whom 745 (76%) experienced AKI. Of the 435 patients with AKI and urine studies, 84% had proteinuria, 81% had hematuria, and 60% had leukocyturia. Independent predictors of severe AKI were CKD, male sex, and higher serum potassium at admission. In-hospital mortality was 50% among patients with AKI versus 8% among those without AKI (aOR, 9.2; 95% confidence interval, 7.5 to 11.3). Of survivors with AKI who were discharged, 35% had not recovered to baseline kidney function by the time of discharge. An additional 28 of 77 (36%) patients who had not recovered kidney function at discharge did so on posthospital follow-up. CONCLUSIONS: AKI is common among patients hospitalized with COVID-19 and is associated with high mortality. Of all patients with AKI, only 30% survived with recovery of kidney function by the time of discharge.
Subjects
Acute Kidney Injury/etiology, COVID-19/complications, SARS-CoV-2, Acute Kidney Injury/epidemiology, Acute Kidney Injury/therapy, Acute Kidney Injury/urine, Aged, Aged, 80 and over, COVID-19/mortality, Female, Hematuria/etiology, Hospital Mortality, Hospitals, Private/statistics & numerical data, Hospitals, Urban/statistics & numerical data, Humans, Incidence, Inpatients, Leukocytes, Male, Middle Aged, New York City/epidemiology, Proteinuria/etiology, Renal Dialysis, Retrospective Studies, Treatment Outcome, Urine/cytology
ABSTRACT
Cybercrime is estimated to have cost the global economy just under USD 1 trillion in 2020, indicating an increase of more than 50% since 2018. With the average cyber insurance claim rising from USD 145,000 in 2019 to USD 359,000 in 2020, there is a growing necessity for better cyber information sources, standardised databases, mandatory reporting and public awareness. This research analyses the extant academic and industry literature on cybersecurity and cyber risk management with a particular focus on data availability. From a preliminary search resulting in 5219 peer-reviewed cyber studies, the application of the systematic methodology resulted in 79 unique datasets. We posit that the lack of available data on cyber risk poses a serious problem for stakeholders seeking to tackle this issue. In particular, we identify a lacuna in open databases that undermines collective endeavours to better manage this set of risks. The resulting data evaluation and categorisation will support cybersecurity researchers and the insurance industry in their efforts to comprehend, metricise and manage cyber risks. Supplementary Information: The online version contains supplementary material available at 10.1057/s41288-022-00266-6.
ABSTRACT
[Figure: see text].
Subjects
COVID-19/complications, Intracranial Hemorrhages/complications, Ischemic Stroke/complications, Sinus Thrombosis, Intracranial/complications, Venous Thrombosis/complications, Adult, Aged, COVID-19/epidemiology, Female, Geography, Health Expenditures, Humans, International Cooperation, Intracranial Hemorrhages/epidemiology, Ischemic Stroke/epidemiology, Male, Middle Aged, Prospective Studies, Risk, Sinus Thrombosis, Intracranial/epidemiology, Treatment Outcome, Venous Thrombosis/epidemiology, Young Adult
ABSTRACT
OBJECTIVE: Malnutrition among hospital patients, a frequent yet under-diagnosed problem, is associated with adverse impacts on patient outcomes and health care costs. Development of highly accurate malnutrition screening tools is, therefore, essential for its timely detection, for providing nutritional care, and for addressing the concerns related to the suboptimal predictive value of conventional screening tools, such as the Malnutrition Universal Screening Tool (MUST). We aimed to develop a machine learning (ML) based classifier (MUST-Plus) for more accurate prediction of malnutrition. METHOD: A retrospective cohort of inpatient data, consisting of anthropometric, laboratory biochemistry, clinical, and demographic variables from adult (≥ 18 years) admissions at a large tertiary health care system between January 2017 and July 2018, was used. The registered dietitian (RD) nutritional assessments were used as the gold standard outcome label. The cohort was randomly split (70:30) into training and test sets. A random forest model was trained using 10-fold cross-validation on the training set, and its predictive performance on the test set was compared to that of MUST. RESULTS: In all, 13.3% of admissions were associated with malnutrition in the test cohort. MUST-Plus provided 73.07% (95% confidence interval [CI]: 69.61%-76.33%) sensitivity, 76.89% (95% CI: 75.64%-78.11%) specificity, and 83.5% (95% CI: 82.0%-85.0%) area under the receiver operating characteristic curve (AUC). Compared to classic MUST, MUST-Plus demonstrated 30% higher sensitivity, 6% higher specificity, and 17% increased AUC. CONCLUSIONS: ML-based MUST-Plus provided superior performance in identifying malnutrition compared to the classic MUST. The tool can be used to improve the operational efficiency of RDs by enabling timely referrals of high-risk patients.
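For readers unfamiliar with the setup, a minimal scikit-learn sketch of this kind of pipeline (70:30 split, random forest with 10-fold cross-validation, AUC comparison) follows; the file, feature, and label names are hypothetical, and the actual MUST-Plus feature engineering is not reproduced.

```python
# Hypothetical sketch: random forest with 10-fold CV on a 70:30 split, compared by AUC.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import roc_auc_score

df = pd.read_csv("admissions.csv")  # anthropometrics, labs, demographics (placeholder file)
X, y = df.drop(columns="malnourished"), df["malnourished"]  # RD assessment as the gold-standard label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, stratify=y, random_state=42)

rf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=42)
cv_auc = cross_val_score(rf, X_train, y_train, cv=10, scoring="roc_auc")  # 10-fold cross-validation
rf.fit(X_train, y_train)

print(f"CV AUC {cv_auc.mean():.3f}, held-out test AUC "
      f"{roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1]):.3f}")
```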
Subjects
Malnutrition, Nutrition Assessment, Adult, Humans, Machine Learning, Malnutrition/diagnosis, Mass Screening, Retrospective Studies
ABSTRACT
BACKGROUND: COVID-19 has infected millions of people worldwide and is responsible for several hundred thousand fatalities. The COVID-19 pandemic has necessitated thoughtful resource allocation and early identification of high-risk patients. However, effective methods to meet these needs are lacking. OBJECTIVE: The aims of this study were to analyze the electronic health records (EHRs) of patients who tested positive for COVID-19 and were admitted to hospitals in the Mount Sinai Health System in New York City; to develop machine learning models for making predictions about the hospital course of the patients over clinically meaningful time horizons based on patient characteristics at admission; and to assess the performance of these models at multiple hospitals and time points. METHODS: We used Extreme Gradient Boosting (XGBoost) and baseline comparator models to predict in-hospital mortality and critical events at time windows of 3, 5, 7, and 10 days from admission. Our study population included harmonized EHR data from five hospitals in New York City for 4098 COVID-19-positive patients admitted from March 15 to May 22, 2020. The models were first trained on patients from a single hospital (n=1514) before or on May 1, externally validated on patients from four other hospitals (n=2201) before or on May 1, and prospectively validated on all patients after May 1 (n=383). Finally, we established model interpretability to identify and rank variables that drive model predictions. RESULTS: Upon cross-validation, the XGBoost classifier outperformed baseline models, with an area under the receiver operating characteristic curve (AUC-ROC) for mortality of 0.89 at 3 days, 0.85 at 5 and 7 days, and 0.84 at 10 days. XGBoost also performed well for critical event prediction, with an AUC-ROC of 0.80 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. In external validation, XGBoost achieved an AUC-ROC of 0.88 at 3 days, 0.86 at 5 days, 0.86 at 7 days, and 0.84 at 10 days for mortality prediction. Similarly, the unimputed XGBoost model achieved an AUC-ROC of 0.78 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. Trends in performance on prospective validation sets were similar. At 7 days, acute kidney injury on admission, elevated LDH, tachypnea, and hyperglycemia were the strongest drivers of critical event prediction, while higher age, anion gap, and C-reactive protein were the strongest drivers of mortality prediction. CONCLUSIONS: We trained machine learning models for mortality and critical events in patients with COVID-19 at different time horizons and validated them externally and prospectively. These models identified at-risk patients and uncovered underlying relationships that predicted outcomes.
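A hedged sketch of the per-horizon modelling approach is given below; the file names, label columns (e.g., death_within_3d), and hyperparameters are illustrative assumptions rather than the study's actual configuration.

```python
# Hypothetical sketch of per-horizon XGBoost classifiers with external validation by AUC-ROC.
import pandas as pd
import xgboost as xgb
from sklearn.metrics import roc_auc_score

train = pd.read_csv("development_hospital.csv")   # placeholder files with harmonized EHR features
external = pd.read_csv("external_hospitals.csv")

features = [c for c in train.columns if not c.startswith("death_within_")]
for horizon in (3, 5, 7, 10):
    label = f"death_within_{horizon}d"            # assumed label naming convention
    clf = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="auc")
    clf.fit(train[features], train[label])
    auc = roc_auc_score(external[label], clf.predict_proba(external[features])[:, 1])
    print(f"{horizon}-day mortality, external AUC-ROC: {auc:.2f}")
```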
Subjects
Coronavirus Infections/diagnosis, Coronavirus Infections/mortality, Machine Learning/standards, Pneumonia, Viral/diagnosis, Pneumonia, Viral/mortality, Acute Kidney Injury/epidemiology, Adolescent, Adult, Aged, Aged, 80 and over, Betacoronavirus, COVID-19, Cohort Studies, Electronic Health Records, Female, Hospital Mortality, Hospitalization/statistics & numerical data, Hospitals, Humans, Male, Middle Aged, New York City/epidemiology, Pandemics, Prognosis, ROC Curve, Risk Assessment/methods, Risk Assessment/standards, SARS-CoV-2, Young Adult
ABSTRACT
Importance: Machine learning has the potential to transform cancer care by helping clinicians prioritize patients for serious illness conversations. However, models need to be evaluated for unequal performance across racial groups (ie, racial bias) so that existing racial disparities are not exacerbated. Objective: To evaluate whether racial bias exists in a predictive machine learning model that identifies 180-day cancer mortality risk among patients with solid malignant tumors. Design, Setting, and Participants: In this cohort study, a machine learning model to predict cancer mortality for patients aged 21 years or older diagnosed with cancer between January 2016 and December 2021 was developed with a random forest algorithm using retrospective data from the Mount Sinai Health System cancer registry, Social Security Death Index, and electronic health records up to the date when databases were accessed for cohort extraction (February 2022). Exposure: Race category. Main Outcomes and Measures: The primary outcomes were model discriminatory performance (area under the receiver operating characteristic curve [AUROC], F1 score) among each race category (Asian, Black, Native American, White, and other or unknown) and fairness metrics (equal opportunity, equalized odds, and disparate impact) among each pairwise comparison of race categories. True-positive rate ratios represented equal opportunity; both true-positive and false-positive rate ratios, equalized odds; and ratios of the predicted positive percentage, disparate impact. All metrics were estimated as a proportion or ratio, with variability captured through 95% CIs. The prespecified criterion for the model's clinical use was a threshold of at least 80% for fairness metrics across different racial groups to ensure the model's prediction would not be biased against any specific race. Results: The test validation dataset included 43,274 patients with balanced demographics. Mean (SD) age was 64.09 (14.26) years, with 49.6% older than 65 years. A total of 53.3% were female; 9.5%, Asian; 18.9%, Black; 0.1%, Native American; 52.2%, White; and 19.2%, other or unknown race; 0.1% had missing race data. A total of 88.9% of patients were alive, and 11.1% were dead. The AUROCs, F1 scores, and fairness metrics maintained reasonable concordance among the racial subgroups: the AUROCs ranged from 0.75 (95% CI, 0.72-0.78) for Asian patients and 0.75 (95% CI, 0.73-0.77) for Black patients to 0.77 (95% CI, 0.75-0.79) for patients with other or unknown race; F1 scores, from 0.32 (95% CI, 0.32-0.33) for White patients to 0.40 (95% CI, 0.39-0.42) for Black patients; equal opportunity ratios, from 0.96 (95% CI, 0.95-0.98) for Black patients compared with White patients to 1.02 (95% CI, 1.00-1.04) for Black patients compared with patients with other or unknown race; equalized odds ratios, from 0.87 (95% CI, 0.85-0.92) for Black patients compared with White patients to 1.16 (95% CI, 1.10-1.21) for Black patients compared with patients with other or unknown race; and disparate impact ratios, from 0.86 (95% CI, 0.82-0.89) for Black patients compared with White patients to 1.17 (95% CI, 1.12-1.22) for Black patients compared with patients with other or unknown race. Conclusions and Relevance: In this cohort study, the lack of significant variation in performance or fairness metrics indicated an absence of racial bias, suggesting that the model fairly identified cancer mortality risk across racial groups. It remains essential to consistently review the model's application in clinical settings to ensure equitable patient care.
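The pairwise fairness ratios reported here can be computed from thresholded predictions as in the sketch below; the toy arrays are purely illustrative, and the bootstrap confidence intervals used in the study are omitted.

```python
# Sketch of pairwise fairness ratios: equal opportunity = TPR ratio, equalized odds = TPR
# and FPR ratios, disparate impact = predicted-positive-rate ratio. Inputs are toy data.
import numpy as np

def group_rates(y_true, y_pred, mask):
    y_t, y_p = y_true[mask], y_pred[mask]
    tpr = (y_p[y_t == 1] == 1).mean()   # true-positive rate
    fpr = (y_p[y_t == 0] == 1).mean()   # false-positive rate
    ppr = (y_p == 1).mean()             # predicted-positive rate
    return tpr, fpr, ppr

def pairwise_fairness(y_true, y_pred, group, a, b):
    tpr_a, fpr_a, ppr_a = group_rates(y_true, y_pred, group == a)
    tpr_b, fpr_b, ppr_b = group_rates(y_true, y_pred, group == b)
    return {
        "equal_opportunity": tpr_a / tpr_b,
        "equalized_odds": (tpr_a / tpr_b, fpr_a / fpr_b),
        "disparate_impact": ppr_a / ppr_b,
    }

# Toy example with made-up labels, predictions, and race categories
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["Black", "Black", "Black", "White", "White", "White", "White", "Black"])
print(pairwise_fairness(y_true, y_pred, group, "Black", "White"))
```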
Subjects
Machine Learning, Neoplasms, Humans, Neoplasms/mortality, Neoplasms/ethnology, Female, Male, Middle Aged, Aged, Retrospective Studies, Adult, Racial Groups/statistics & numerical data, Cohort Studies, Racism/statistics & numerical data
ABSTRACT
Malnutrition is a frequently underdiagnosed condition leading to increased morbidity, mortality, and healthcare costs. The Mount Sinai Health System (MSHS) deployed a machine learning model (MUST-Plus) to detect malnutrition upon hospital admission. However, in diverse patient groups, a poorly calibrated model may lead to misdiagnosis, exacerbating health care disparities. We explored the model's calibration across different variables and evaluated methods to improve calibration. Data from adult patients admitted to five MSHS hospitals from January 1, 2021, to December 31, 2022, were analyzed. We compared the MUST-Plus prediction to the registered dietitian's formal assessment. Hierarchical calibration was assessed and compared between the recalibration sample (N = 49,562) of patients admitted between January 1, 2021, and December 31, 2022, and the hold-out sample (N = 17,278) of patients admitted between January 1, 2023, and September 30, 2023. Statistical differences in calibration metrics were tested using bootstrapping with replacement. Before recalibration, the overall model calibration intercept was -1.17 (95% CI: -1.20, -1.14), the slope was 1.37 (95% CI: 1.34, 1.40), and the Brier score was 0.26 (95% CI: 0.25, 0.26). Both weak and moderate measures of calibration were significantly different between White and Black patients and between male and female patients. Logistic recalibration significantly improved calibration of the model across race and gender in the hold-out sample. The original MUST-Plus model showed significant differences in calibration between White and Black patients. It also overestimated malnutrition in females compared to males. Logistic recalibration effectively reduced miscalibration across all patient subgroups. Continual monitoring and timely recalibration can improve model accuracy.
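A compact sketch of the weak-calibration measures (intercept, slope, Brier score) and of logistic recalibration is shown below; the inputs are placeholders, and the study's hierarchical assessment and bootstrap testing are not reproduced.

```python
# Sketch of calibration intercept/slope/Brier score and logistic recalibration on the
# logit of a model's predicted probability; all inputs are placeholder arrays.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import brier_score_loss

def logit(p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def calibration_metrics(y, p_hat):
    lp = logit(p_hat)
    # Calibration intercept (calibration-in-the-large): logistic model with logit(p_hat) as offset
    intercept = sm.GLM(y, np.ones((len(y), 1)), family=sm.families.Binomial(), offset=lp).fit().params[0]
    # Calibration slope: logistic regression of the outcome on logit(p_hat)
    slope = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit().params[1]
    return intercept, slope, brier_score_loss(y, p_hat)

def recalibrate(y_recal, p_recal, p_new):
    # Logistic recalibration: refit intercept and slope on the recalibration sample,
    # then push new predictions through the fitted mapping.
    fit = sm.GLM(y_recal, sm.add_constant(logit(p_recal)), family=sm.families.Binomial()).fit()
    return fit.predict(sm.add_constant(logit(p_new)))
```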
ABSTRACT
The decision to extubate patients on invasive mechanical ventilation is critical; however, clinician performance in identifying patients to liberate from the ventilator is poor. Machine Learning-based predictors using tabular data have been developed; however, these fail to capture the wide spectrum of data available. Here, we develop and validate a deep learning-based model using routinely collected chest X-rays to predict the outcome of attempted extubation. We included 2288 serial patients admitted to the Medical ICU at an urban academic medical center, who underwent invasive mechanical ventilation, with at least one intubated CXR, and a documented extubation attempt. The last CXR before extubation for each patient was taken and split 79/21 for training/testing sets, then transfer learning with k-fold cross-validation was used on a pre-trained ResNet50 deep learning architecture. The top three models were ensembled to form a final classifier. The Grad-CAM technique was used to visualize image regions driving predictions. The model achieved an AUC of 0.66, AUPRC of 0.94, sensitivity of 0.62, and specificity of 0.60. The model performance was improved compared to the Rapid Shallow Breathing Index (AUC 0.61) and the only identified previous study in this domain (AUC 0.55), but significant room for improvement and experimentation remains.
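Below is an illustrative PyTorch setup for the transfer-learning step (a pre-trained ResNet50 with a single-logit head); data loading, k-fold cross-validation, ensembling, and Grad-CAM are omitted, and all names and hyperparameters are placeholders rather than the study's actual settings.

```python
# Illustrative transfer learning with a pre-trained ResNet50 for binary outcome prediction.
import torch
import torch.nn as nn
from torchvision import models

def build_model(freeze_backbone: bool = True) -> nn.Module:
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 1)  # single logit: extubation success vs. failure
    return model

model = build_model()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 images
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4, 1)).float()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```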
ABSTRACT
BACKGROUND: Predicting hospitalization from nurse triage notes has the potential to augment care. However, careful consideration is needed when choosing models for this goal, because health systems will have varying degrees of computational infrastructure available and budget constraints. OBJECTIVE: To this end, we compared the performance of a deep learning model based on Bidirectional Encoder Representations from Transformers (BERT), Bio-Clinical-BERT, with that of a bag-of-words (BOW) logistic regression (LR) model incorporating term frequency-inverse document frequency (TF-IDF). These choices represent different levels of computational requirements. METHODS: A retrospective analysis was conducted using data from 1,391,988 patients who visited emergency departments in the Mount Sinai Health System spanning from 2017 to 2022. The models were trained on 4 hospitals' data and externally validated on a fifth hospital's data. RESULTS: The Bio-Clinical-BERT model achieved higher areas under the receiver operating characteristic curve (0.82, 0.84, and 0.85) compared to the BOW-LR-TF-IDF model (0.81, 0.83, and 0.84) across training sets of 10,000; 100,000; and ~1,000,000 patients, respectively. Notably, both models proved effective at using triage notes for prediction, despite the modest performance gap. CONCLUSIONS: Our findings suggest that simpler machine learning models such as BOW-LR-TF-IDF could serve adequately in resource-limited settings. Given the potential implications for patient care and hospital resource management, further exploration of alternative models and techniques is warranted to enhance predictive performance in this critical domain. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.1101/2023.08.07.23293699.
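A minimal sketch of the lighter-weight BOW-LR-TF-IDF baseline follows; the file and column names are assumptions, not the study's actual data layout.

```python
# Sketch of a bag-of-words TF-IDF + logistic regression baseline for triage notes.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

train = pd.read_csv("triage_train.csv")        # hypothetical columns: "note", "admitted"
external = pd.read_csv("triage_external.csv")  # held-out fifth-hospital data

bow_lr = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5, max_features=50_000),
    LogisticRegression(max_iter=1000, C=1.0),
)
bow_lr.fit(train["note"], train["admitted"])

auc = roc_auc_score(external["admitted"], bow_lr.predict_proba(external["note"])[:, 1])
print(f"External-validation AUROC: {auc:.2f}")
```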
ABSTRACT
Introduction: Depression and its components significantly impact dementia prediction and severity, necessitating reliable objective measures for quantification. Methods: We investigated associations between emotion-based speech measures (valence, arousal, and dominance) during picture descriptions and depression dimensions derived from the geriatric depression scale (GDS): dysphoria, withdrawal-apathy-vigor (WAV), anxiety, hopelessness, and subjective memory complaint. Results: Higher WAV was associated with more negative valence (estimate = -0.133, p = 0.030). While interactions of apolipoprotein E (APOE) 4 status with depression dimensions on emotional valence did not reach significance, there was a trend for more negative valence with higher dysphoria in those with at least one APOE4 allele (estimate = -0.404, p = 0.0846). Associations were similar irrespective of dementia severity. Discussion: Our study underscores the potential utility of speech biomarkers in characterizing depression dimensions. In future research, using emotionally charged stimuli may enhance emotional measure elicitation. The role of APOE in the interaction of speech markers and depression dimensions warrants further exploration with greater sample sizes. Highlights: Participants reporting higher apathy used more negative words to describe a neutral picture. Those with higher dysphoria and at least one APOE4 allele also tended to use more negative words. Our results suggest the potential use of speech biomarkers in characterizing depression dimensions.
ABSTRACT
BACKGROUND: Machine learning (ML)-based clinical decision support systems (CDSS) are popular in clinical practice settings but are often criticized for being limited in usability, interpretability, and effectiveness. Evaluating the implementation of ML-based CDSS is critical to ensure CDSS is acceptable and useful to clinicians and helps them deliver high-quality health care. Malnutrition is a common and underdiagnosed condition among hospital patients, which can have serious adverse impacts. Early identification and treatment of malnutrition are important. OBJECTIVE: This study aims to evaluate the implementation of an ML tool, Malnutrition Universal Screening Tool (MUST)-Plus, that predicts which hospital patients are at high risk for malnutrition, and to identify best implementation practices applicable to this and other ML-based CDSS. METHODS: We conducted a qualitative postimplementation evaluation using in-depth interviews with registered dietitians (RDs) who use MUST-Plus output in their everyday work. After coding the data, we mapped emergent themes onto select domains of the nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) framework. RESULTS: We interviewed 17 of the 24 RDs approached (71%), representing 37% of those who use MUST-Plus output. Several themes emerged: (1) enhancements to the tool were made to improve accuracy and usability; (2) MUST-Plus helped identify patients who would not otherwise have been seen; perceived usefulness was highest at the original site; (3) perceived accuracy varied by respondent and site; (4) RDs valued autonomy in prioritizing patients; (5) depth of tool understanding varied by hospital and level; (6) MUST-Plus was integrated into workflows and electronic health records; and (7) RDs expressed a desire to eventually have 1 automated screener. CONCLUSIONS: Our findings suggest that continuous involvement of stakeholders at new sites, given staff turnover, is vital to ensure buy-in. Qualitative research can help identify the potential bias of ML tools and should be widely used to ensure health equity. Ongoing collaboration among CDSS developers, data scientists, and clinical providers may help refine CDSS for optimal use and improve the acceptability of CDSS in the clinical context.
ABSTRACT
BACKGROUND: Infusion failure may have severe consequences for patients receiving critical, short-half-life infusions. Continued interruptions to infusions can lead to subtherapeutic therapy. OBJECTIVE: This study aims to identify and rank determinants of the longevity of continuous infusions administered through syringe drivers, using nonlinear predictive models. Additionally, this study aims to evaluate key factors influencing infusion longevity and develop and test a model for predicting the likelihood of achieving successful infusion longevity. METHODS: Data were extracted from the event logs of smart pumps containing information on care profiles, medication types and concentrations, occlusion alarm settings, and the final infusion cessation cause. These data were then used to fit 5 nonlinear models and evaluate the best explanatory model. RESULTS: Random forest was the best-fit predictor, with an F1-score of 80.42, compared to 5 other models (mean F1-score 75.06; range 67.48-79.63). When applied to infusion data in an individual syringe driver data set, the predictor model found that the final medication concentration and medication type were of less significance to infusion longevity compared to the rate and care unit. For low-rate infusions, rates ranging from 2 to 2.8 mL/hr performed best for achieving a balance between infusion longevity and fluid load per infusion, with an occlusion versus no-occlusion ratio of 0.553. Rates between 0.8 and 1.2 mL/hr exhibited the poorest performance with a ratio of 1.604. Higher rates, up to 4 mL/hr, performed better in terms of occlusion versus no-occlusion ratios. CONCLUSIONS: This study provides clinicians with insights into the specific types of infusion that warrant more intense observation or proactive management of intravenous access; additionally, it can offer valuable information regarding the average duration of uninterrupted infusions that can be expected in these care areas. Optimizing rate settings to improve infusion longevity for continuous infusions, achieved through compounding to create customized concentrations for individual patients, may be possible in light of the study's outcomes. The study also highlights the potential of machine learning nonlinear models in predicting outcomes and life spans of specific therapies delivered via medical devices.
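As a sketch of the general approach (comparing nonlinear classifiers by F1-score and ranking determinants by random forest importance), the example below uses hypothetical event-log columns; it is not the study's actual pipeline.

```python
# Hypothetical comparison of nonlinear models by F1 and ranking of infusion-longevity determinants.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("pump_event_log.csv")   # care profile, medication, rate, occlusion settings, etc.
X = pd.get_dummies(df.drop(columns="occluded"))
y = df["occluded"]                       # illustrative label: infusion ended by occlusion

models = {
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "mlp": MLPClassifier(max_iter=500, random_state=0),
}
for name, model in models.items():
    f1 = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: F1 = {f1:.3f}")

rf = models["random_forest"].fit(X, y)
ranking = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(ranking.head(10))                  # e.g. rate and care unit vs. concentration and drug type
```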
ABSTRACT
BACKGROUND: Early prediction of the need for invasive mechanical ventilation (IMV) in patients hospitalized with COVID-19 symptoms can help in the allocation of resources appropriately and improve patient outcomes by appropriately monitoring and treating patients at the greatest risk of respiratory failure. To help with the complexity of deciding whether a patient needs IMV, machine learning algorithms may help bring more prognostic value in a timely and systematic manner. Chest radiographs (CXRs) and electronic medical records (EMRs), typically obtained early in patients admitted with COVID-19, are the keys to deciding whether they need IMV. OBJECTIVE: We aimed to evaluate the use of a machine learning model to predict the need for intubation within 24 hours by using a combination of CXR and EMR data in an end-to-end automated pipeline. We included historical data from 2481 hospitalizations at The Mount Sinai Hospital in New York City. METHODS: CXRs were first resized, rescaled, and normalized. Then lungs were segmented from the CXRs by using a U-Net algorithm. After splitting them into a training and a test set, the training set images were augmented. The augmented images were used to train an image classifier to predict the probability of intubation with a prediction window of 24 hours by retraining a pretrained DenseNet model by using transfer learning, 10-fold cross-validation, and grid search. Then, in the final fusion model, we trained a random forest algorithm via 10-fold cross-validation by combining the probability score from the image classifier with 41 longitudinal variables in the EMR. Variables in the EMR included clinical and laboratory data routinely collected in the inpatient setting. The final fusion model gave a prediction likelihood for the need of intubation within 24 hours as well. RESULTS: At a prediction probability threshold of 0.5, the fusion model provided 78.9% (95% CI 59%-96%) sensitivity, 83% (95% CI 76%-89%) specificity, 0.509 (95% CI 0.34-0.67) F1-score, 0.874 (95% CI 0.80-0.94) area under the receiver operating characteristic curve (AUROC), and 0.497 (95% CI 0.32-0.65) area under the precision recall curve (AUPRC) on the holdout set. Compared to the image classifier alone, which had an AUROC of 0.577 (95% CI 0.44-0.73) and an AUPRC of 0.206 (95% CI 0.08-0.38), the fusion model showed significant improvement (P<.001). The most important predictor variables were respiratory rate, C-reactive protein, oxygen saturation, and lactate dehydrogenase. The imaging probability score ranked 15th in overall feature importance. CONCLUSIONS: We show that, when linked with EMR data, an automated deep learning image classifier improved performance in identifying hospitalized patients with severe COVID-19 at risk for intubation. With additional prospective and external validation, such a model may assist risk assessment and optimize clinical decision-making in choosing the best care plan during the critical stages of COVID-19.
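The late-fusion step can be sketched as below: the image classifier's output probability is appended to the EMR features and a random forest is trained on the combined matrix. File names, the label column, and hyperparameters are assumptions, not the study's actual pipeline.

```python
# Minimal sketch of late fusion: image-classifier probability + EMR features -> random forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

emr = pd.read_csv("emr_features.csv")            # longitudinal clinical/laboratory variables (placeholder)
img_prob = np.load("cxr_intubation_prob.npy")    # per-patient probability from the image classifier
y = pd.read_csv("labels.csv")["intubated_24h"]   # intubation within 24 hours (placeholder label)

X = emr.assign(image_probability=img_prob)
fusion = RandomForestClassifier(n_estimators=500, random_state=0)
p = cross_val_predict(fusion, X, y, cv=10, method="predict_proba")[:, 1]  # 10-fold cross-validation
print(f"Fusion AUROC: {roc_auc_score(y, p):.3f}")
```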
ABSTRACT
Nanotechnology governance, particularly in relation to human and environmental concerns, remains a contested domain. In recent years, the creation of both a risk governance framework and a risk governance council has been actively pursued. Part of the function of a governance framework is communication to external stakeholders. Existing descriptions of public perceptions of nanotechnology are generally positive, with the attendant economic and societal benefits at the forefront of that thinking. Debates on nanomaterials' risk tend to be dominated by expert groupings, while the general public is largely unaware of the potential hazards. Communicating via social media has become an integral part of everyday life, facilitating public connectedness around specific topics in a way that was not feasible in the pre-digital age. When passive civilian stakeholders become active, their frustration can quickly coalesce into a campaign of resistance, and once an issue starts to develop into a campaign it is difficult to ease the momentum. Simmering discussions with moderate local attention can gain international exposure, resulting in pressure, and can in some cases quickly precipitate legislative action and/or economic consequences. This paper highlights the potential of such a runaway "twitterstorm". We conducted a sentiment analysis of tweets since 2006 focusing on silver, titanium and carbon-based nanomaterials. We further examined the sentiment expressed following the decision by the European Food Safety Authority (EFSA) to phase out the food additive titanium dioxide (E 171). Our analysis shows an engaged, attentive public, alert to announcements from industry and regulatory bodies. We demonstrate that risk governance frameworks, and particularly the communication aspect of those structures, must include a social media blueprint to counter misinformation and alleviate the potential impact of a social media-induced regulatory and economic reaction.
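As a rough illustration of tweet-level sentiment scoring, the sketch below uses NLTK's VADER analyzer; the input file, column names, and choice of sentiment tool are assumptions and may differ from the study's actual method.

```python
# Illustrative sentiment scoring of nanomaterial-related tweets with NLTK's VADER analyzer.
import pandas as pd
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = pd.read_csv("nano_tweets.csv")   # hypothetical columns: "text", "material", "date"
tweets["compound"] = tweets["text"].apply(lambda t: sia.polarity_scores(t)["compound"])
tweets["sentiment"] = pd.cut(tweets["compound"], [-1, -0.05, 0.05, 1],
                             labels=["negative", "neutral", "positive"], include_lowest=True)

# Sentiment mix per material (e.g. silver, titanium, carbon-based), which could then be
# compared before and after the EFSA E 171 decision.
print(tweets.groupby(["material", "sentiment"], observed=True).size().unstack(fill_value=0))
```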
ABSTRACT
BACKGROUND AND AIM: We analyzed an inclusive gradient boosting model to predict hospital admission from the emergency department (ED) at different time points. We compared its results to multiple models built exclusively at each time point. METHODS: This retrospective multisite study utilized ED data from the Mount Sinai Health System, NY, during 2015-2019. Data included tabular clinical features and free-text triage notes represented using bag-of-words. A full gradient boosting model, trained on data available at different time points (30, 60, 90, 120, and 150 min), was compared to single models trained exclusively on data available at each time point. This was done by concatenating the rows of data available at each time point into one data matrix for the full model, where each row is considered a separate case. RESULTS: The cohort included 1,043,345 ED visits. The full model showed results comparable to those of the single models at all time points (AUCs 0.84-0.88 at the different time points for both the full and single models). CONCLUSION: A full model trained on data concatenated from different time points showed results similar to those of single models trained at each time point. An ML-based prediction model can be used for identifying hospital admission.
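The "full model" construction can be sketched as follows: snapshots from each time cutoff are stacked into one matrix, each row treated as a separate case, and a single gradient boosting classifier is trained. File names and columns are hypothetical, and the bag-of-words triage-note features are assumed to be already encoded.

```python
# Sketch of stacking per-time-point snapshots into one matrix for a single "full" model.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

frames = []
for minutes in (30, 60, 90, 120, 150):
    snap = pd.read_csv(f"ed_features_{minutes}min.csv")  # features available by this cutoff (placeholder)
    snap["minutes_since_arrival"] = minutes
    frames.append(snap)

full = pd.concat(frames, ignore_index=True)              # each time-point row treated as its own case
X, y = full.drop(columns="admitted"), full["admitted"]

full_model = HistGradientBoostingClassifier()
print(cross_val_score(full_model, X, y, cv=5, scoring="roc_auc").mean())
```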
ABSTRACT
We have previously shown that after kindling (a model of temporal lobe epilepsy), the neuroactive steroid tetrahydrodeoxycorticosterone (THDOC) was unable to augment GABA type A receptor (GABA(A))-mediated synaptic currents occurring on pyramidal cells of the piriform cortex. Phosphorylation of GABA(A) receptors has been shown previously to alter the activity of THDOC, so we tested the hypothesis that kindling induces changes in the phosphorylation of GABA(A) receptors and that this accounts for the loss in efficacy. To assay whether GABA(A) receptors are more phosphorylated after kindling, we examined the phosphorylation state of the β3 subunit and found that it was increased. Incubation of brain slices with the protein kinase C (PKC) activator phorbol 12-myristate 13-acetate (PMA) (100 nM) also increased phosphorylation in the same assay. In patch-clamp recordings from non-kindled rat brain slices, PMA also reduced the activity of THDOC in a manner that was identical to what is observed after kindling. We also found that the tonic current was no longer augmented by THDOC after kindling and PMA treatment. The PKC antagonist bisindolylmaleimide I blocked the effects of PMA on the synaptic but not the tonic currents. However, the broad-spectrum PKC antagonist staurosporine blocked the effects of PMA on the tonic currents, implying that different PKC isoforms phosphorylate the GABA(A) receptors responsible for phasic and tonic currents. The phosphatase activator Li(+) palmitate restored the 'normal' activity of THDOC on synaptic currents in kindled brain slices but not the tonic currents. These data demonstrate that kindling enhances the phosphorylation state of GABA(A) receptors expressed in pyramidal neurons, reducing THDOC efficacy.
Subjects
Desoxycorticosterone/analogs & derivatives, Inhibitory Postsynaptic Potentials/drug effects, Kindling, Neurologic/pathology, Neurotransmitter Agents/pharmacology, Pyramidal Cells/drug effects, Receptors, GABA/metabolism, Animals, Cerebral Cortex/pathology, Cerebral Cortex/physiopathology, Desoxycorticosterone/pharmacology, Enzyme Inhibitors/pharmacology, Gene Expression Regulation/drug effects, In Vitro Techniques, Indoles/pharmacology, Male, Maleimides/pharmacology, Patch-Clamp Techniques/methods, Phorbol Esters/pharmacology, Phosphorylation/drug effects, Phosphorylation/physiology, Rats, Rats, Sprague-Dawley, Receptors, GABA/genetics
ABSTRACT
BACKGROUND: From February 2020, both urban and rural Ireland witnessed the rapid proliferation of the COVID-19 disease throughout its counties. During this period, the national COVID-19 responses included stay-at-home directives issued by the state, subject to varying levels of enforcement. METHODS: In this paper, we present a new method to assess and rank the causes of Ireland's COVID-19 deaths as they relate to mobility activities within each county, as provided by Google, while taking into consideration the epidemiological confirmed positive cases reported per county. We used a network structure and a rank propagation modelling approach based on Personalised PageRank to reveal the importance of each mobility category linked to cases and deaths. A novel feature-selection method using relative prominent factors then identifies important features related to each county's deaths. Finally, we clustered the counties based on the selected features together with the network results, using a clustering algorithm customised for the research problem. FINDINGS: Our analysis reveals that the mobility trend categories exhibiting the strongest association with COVID-19 cases and deaths are retail and recreation, and workplaces. This is the first time a network structure and rank propagation modelling approach has been used to link COVID-19 data to mobility patterns. The infection-determinants landscape illustrated by the network results aligns soundly with county socio-economic and demographic features. The novel feature-selection and clustering method presented yields clusters useful to policymakers, managers of the health sector, politicians and even sociologists. Finally, each county has a different impact on the national total.
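A toy networkx sketch of rank propagation with Personalised PageRank is shown below; the graph, edge weights, and node names are illustrative only and do not reflect the study's data or exact graph construction.

```python
# Toy illustration of Personalised PageRank for rank propagation; graph and weights are invented.
import networkx as nx

G = nx.Graph()
# Hypothetical weighted edges: counties connect to Google mobility categories (by activity change),
# and mobility categories connect to an outcome node weighted by reported deaths.
G.add_weighted_edges_from([
    ("Dublin", "retail_and_recreation", 0.8), ("Dublin", "workplaces", 0.6),
    ("Galway", "retail_and_recreation", 0.4), ("Galway", "parks", 0.7),
    ("retail_and_recreation", "deaths", 0.9), ("workplaces", "deaths", 0.7),
    ("parks", "deaths", 0.2),
])

# Restart probability concentrated on the outcome node, so stationary scores rank nodes
# by their weighted connectivity to COVID-19 deaths.
personalization = {n: (1.0 if n == "deaths" else 0.0) for n in G.nodes}
scores = nx.pagerank(G, alpha=0.85, personalization=personalization, weight="weight")
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```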
ABSTRACT
BACKGROUND AND OBJECTIVES: AKI treated with dialysis initiation is a common complication of coronavirus disease 2019 (COVID-19) among hospitalized patients. However, dialysis supplies and personnel are often limited. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS: Using data from adult patients hospitalized with COVID-19 from five hospitals from the Mount Sinai Health System who were admitted between March 10 and December 26, 2020, we developed and validated several models (logistic regression, Least Absolute Shrinkage and Selection Operator [LASSO], random forest, and eXtreme Gradient Boosting [XGBoost], with and without imputation) for predicting treatment with dialysis or death at various time horizons (1, 3, 5, and 7 days) after hospital admission. Patients admitted to the Mount Sinai Hospital were used for internal validation, whereas the other hospitals formed part of the external validation cohort. Features included demographics, comorbidities, and laboratory values and vital signs within 12 hours of hospital admission. RESULTS: A total of 6093 patients (2442 in training and 3651 in external validation) were included in the final cohort. Of the different modeling approaches used, XGBoost without imputation had the highest area under the receiver operating characteristic (AUROC) curve on internal validation (range of 0.93-0.98) and area under the precision-recall curve (AUPRC; range of 0.78-0.82) for all time points. XGBoost without imputation also had the highest test performance on external validation (AUROC range of 0.85-0.87, and AUPRC range of 0.27-0.54) across all time windows. XGBoost without imputation outperformed all models with higher precision and recall (mean difference in AUROC of 0.04; mean difference in AUPRC of 0.15). Creatinine, BUN, and red cell distribution width were major drivers of the model's predictions. CONCLUSIONS: An XGBoost model without imputation for prediction of a composite outcome of either death or dialysis in patients positive for COVID-19 had the best performance, as compared with standard and other machine learning models. PODCAST: This article contains a podcast at https://www.asn-online.org/media/podcast/CJASN/2021_07_09_CJN17311120.mp3.
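The "without imputation" variant relies on XGBoost's native handling of missing values (learned default split directions), as in the brief sketch below; file, feature, and label names are placeholders, not the study's actual variables.

```python
# Sketch of passing NaNs directly to XGBoost instead of imputing admission labs/vitals.
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.metrics import roc_auc_score

train = pd.read_csv("internal_hospital.csv")     # internal (development) hospital (placeholder)
external = pd.read_csv("other_hospitals.csv")    # external validation hospitals (placeholder)
label = "dialysis_or_death_7d"                   # assumed composite-outcome column
features = [c for c in train.columns if c != label]

clf = xgb.XGBClassifier(n_estimators=400, max_depth=4, learning_rate=0.05,
                        missing=np.nan, eval_metric="aucpr")
clf.fit(train[features], train[label])           # NaNs handled natively, no imputation step

p = clf.predict_proba(external[features])[:, 1]
print(f"External AUROC: {roc_auc_score(external[label], p):.2f}")
```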