Results 1 - 20 of 25
1.
Crit Care Med ; 52(7): 1007-1020, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38380992

ABSTRACT

OBJECTIVES: Machine learning algorithms can outperform older methods in predicting clinical deterioration, but rigorous prospective data on their real-world efficacy are limited. We hypothesized that real-time, machine learning-generated alerts sent directly to front-line providers would reduce escalations. DESIGN: Single-center prospective pragmatic nonrandomized clustered clinical trial. SETTING: Academic tertiary care medical center. PATIENTS: Adult patients admitted to four medical-surgical units. Assignment to intervention or control arms was determined by initial unit admission. INTERVENTIONS: Real-time alerts stratified according to predicted likelihood of deterioration sent either to the primary team or directly to the rapid response team (RRT). Clinical care and interventions were at the providers' discretion. For the control units, alerts were generated but not sent, and standard RRT activation criteria were used. MEASUREMENTS AND MAIN RESULTS: The primary outcome was the rate of escalation per 1000 patient bed days. Secondary outcomes included the frequency of orders for fluids, medications, and diagnostic tests, and combined in-hospital and 30-day mortality. Propensity score modeling with stabilized inverse probability of treatment weighting (IPTW) was used to account for differences between groups. Data from 2740 patients enrolled between July 2019 and March 2020 were analyzed (1488 intervention, 1252 control). Average age was 66.3 years, and 1428 participants (52%) were female. The rate of escalation was 12.3 vs. 11.3 per 1000 patient bed days (difference, 1.0; 95% CI, -2.8 to 4.7), with an IPTW-adjusted incidence rate ratio of 1.43 (95% CI, 1.16-1.78; p < 0.001). Patients in the intervention group were more likely to receive cardiovascular medication orders (16.1% vs. 11.3%; difference, 4.7%; 95% CI, 2.1-7.4%), with an IPTW-adjusted relative risk (RR) of 1.74 (95% CI, 1.39-2.18; p < 0.001). Combined in-hospital and 30-day mortality was lower in the intervention group (7% vs. 
9.3%; -2.4%; 95% CI, -4.5% to -0.2%) and IPTW adjusted RR (0.76; 95% CI, 0.58-0.99; p = 0.045). CONCLUSIONS: Real-time machine learning alerts do not reduce the rate of escalation but may reduce mortality.


Subjects
Clinical Deterioration, Machine Learning, Humans, Female, Male, Prospective Studies, Middle Aged, Aged, Hospital Rapid Response Team/organization & administration, Hospital Rapid Response Team/statistics & numerical data, Hospital Mortality
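The stabilized IPTW adjustment named in this trial's abstract can be sketched as follows. The cohort data and covariates below are synthetic stand-ins, not the study's variables; only the weighting formula reflects the stated method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2740  # cohort size from the study

# Hypothetical baseline covariates (e.g., age, a severity score)
X = rng.normal(size=(n, 2))
# Unit-based assignment loosely correlated with the first covariate
treated = (rng.random(n) < 1 / (1 + np.exp(-0.5 * X[:, 0]))).astype(int)

# Propensity model: P(intervention arm | baseline covariates)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Stabilized inverse probability of treatment weights: the marginal
# treatment probability in the numerator keeps the weighted
# pseudo-population roughly the same size as the original cohort
p_marg = treated.mean()
weights = np.where(treated == 1, p_marg / ps, (1 - p_marg) / (1 - ps))
```

Outcome models (here, escalation rates) are then fit with these weights, which balances measured covariates between arms without discarding patients.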
2.
J Hum Nutr Diet ; 37(3): 622-632, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38348579

ABSTRACT

BACKGROUND: Malnutrition is associated with increased morbidity, mortality, and healthcare costs. Early detection is important for timely intervention. This paper assesses the ability of a machine learning screening tool (MUST-Plus) implemented in registered dietitian (RD) workflow to identify malnourished patients early in the hospital stay and to improve the diagnosis and documentation rate of malnutrition. METHODS: This retrospective cohort study was conducted in a large, urban health system in New York City comprising six hospitals serving a diverse patient population. The study included all patients aged ≥ 18 years, who were not admitted for COVID-19 and had a length of stay of ≤ 30 days. RESULTS: Of the 7736 hospitalisations that met the inclusion criteria, 1947 (25.2%) were identified as being malnourished by MUST-Plus-assisted RD evaluations. The lag between admission and diagnosis improved with MUST-Plus implementation. The usability of the tool output by RDs exceeded 90%, showing good acceptance by users. When compared pre-/post-implementation, the rate of both diagnoses and documentation of malnutrition showed improvement. CONCLUSION: MUST-Plus, a machine learning-based screening tool, shows great promise as a malnutrition screening tool for hospitalised patients when used in conjunction with adequate RD staffing and training about the tool. It performed well across multiple measures and settings. Other health systems can use their electronic health record data to develop, test and implement similar machine learning-based processes to improve malnutrition screening and facilitate timely intervention.


Subjects
Machine Learning, Malnutrition, Mass Screening, Nutrition Assessment, Humans, Retrospective Studies, Malnutrition/diagnosis, Middle Aged, Male, Female, New York City, Aged, Risk Assessment/methods, Mass Screening/methods, Adult, Hospitalization, Aged, 80 and over
3.
J Am Soc Nephrol ; 32(1): 151-160, 2021 01.
Article in English | MEDLINE | ID: mdl-32883700

ABSTRACT

BACKGROUND: Early reports indicate that AKI is common among patients with coronavirus disease 2019 (COVID-19) and associated with worse outcomes. However, AKI among hospitalized patients with COVID-19 in the United States is not well described. METHODS: This retrospective, observational study involved a review of data from electronic health records of patients aged ≥18 years with laboratory-confirmed COVID-19 admitted to the Mount Sinai Health System from February 27 to May 30, 2020. We describe the frequency of AKI and dialysis requirement, AKI recovery, and adjusted odds ratios (aORs) with mortality. RESULTS: Of 3993 hospitalized patients with COVID-19, AKI occurred in 1835 (46%) patients; 347 (19%) of the patients with AKI required dialysis. The proportions with stages 1, 2, or 3 AKI were 39%, 19%, and 42%, respectively. A total of 976 (24%) patients were admitted to intensive care, and 745 (76%) of them experienced AKI. Of the 435 patients with AKI and urine studies, 84% had proteinuria, 81% had hematuria, and 60% had leukocyturia. Independent predictors of severe AKI were CKD, male sex, and higher serum potassium at admission. In-hospital mortality was 50% among patients with AKI versus 8% among those without AKI (aOR, 9.2; 95% confidence interval, 7.5 to 11.3). Of survivors with AKI who were discharged, 35% had not recovered to baseline kidney function by the time of discharge. An additional 28 of 77 (36%) patients who had not recovered kidney function at discharge did so on posthospital follow-up. CONCLUSIONS: AKI is common among patients hospitalized with COVID-19 and is associated with high mortality. Of all patients with AKI, only 30% survived with recovery of kidney function by the time of discharge.


Subjects
Acute Kidney Injury/etiology, COVID-19/complications, SARS-CoV-2, Acute Kidney Injury/epidemiology, Acute Kidney Injury/therapy, Acute Kidney Injury/urine, Aged, Aged, 80 and over, COVID-19/mortality, Female, Hematuria/etiology, Hospital Mortality, Private Hospitals/statistics & numerical data, Urban Hospitals/statistics & numerical data, Humans, Incidence, Inpatients, Leukocytes, Male, Middle Aged, New York City/epidemiology, Proteinuria/etiology, Renal Dialysis, Retrospective Studies, Treatment Outcome, Urine/cytology
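As a rough consistency check on the mortality contrast reported above, the crude (unadjusted) odds ratio implied by 50% vs. 8% in-hospital mortality can be computed directly; the study's adjusted aOR of 9.2 is lower because it conditions on covariates such as age and comorbidity.

```python
# Crude odds ratio implied by the reported mortality proportions
p_aki = 0.50      # in-hospital mortality with AKI
p_no_aki = 0.08   # in-hospital mortality without AKI

odds_aki = p_aki / (1 - p_aki)           # odds of death with AKI
odds_no_aki = p_no_aki / (1 - p_no_aki)  # odds of death without AKI
crude_or = odds_aki / odds_no_aki        # approximately 11.5
```

The crude value (~11.5) exceeding the adjusted 9.2 is the expected pattern when part of the association is explained by measured confounders.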
4.
Geneva Pap Risk Insur Issues Pract ; 47(3): 698-736, 2022.
Article in English | MEDLINE | ID: mdl-35194352

ABSTRACT

Cybercrime is estimated to have cost the global economy just under USD 1 trillion in 2020, indicating an increase of more than 50% since 2018. With the average cyber insurance claim rising from USD 145,000 in 2019 to USD 359,000 in 2020, there is a growing necessity for better cyber information sources, standardised databases, mandatory reporting and public awareness. This research analyses the extant academic and industry literature on cybersecurity and cyber risk management with a particular focus on data availability. From a preliminary search resulting in 5219 cyber peer-reviewed studies, the application of the systematic methodology resulted in 79 unique datasets. We posit that the lack of available data on cyber risk poses a serious problem for stakeholders seeking to tackle this issue. In particular, we identify a lacuna in open databases that undermine collective endeavours to better manage this set of risks. The resulting data evaluation and categorisation will support cybersecurity researchers and the insurance industry in their efforts to comprehend, metricise and manage cyber risks. Supplementary Information: The online version contains supplementary material available at 10.1057/s41288-022-00266-6.

5.
Stroke ; 52(5): e117-e130, 2021 05.
Article in English | MEDLINE | ID: mdl-33878892
6.
J Am Coll Nutr ; 40(1): 3-12, 2021 01.
Article in English | MEDLINE | ID: mdl-32701397

ABSTRACT

OBJECTIVE: Malnutrition among hospital patients, a frequent yet under-diagnosed problem, is associated with adverse impacts on patient outcomes and health care costs. Development of highly accurate malnutrition screening tools is, therefore, essential for its timely detection, for providing nutritional care, and for addressing the concerns related to the suboptimal predictive value of conventional screening tools, such as the Malnutrition Universal Screening Tool (MUST). We aimed to develop a machine learning (ML)-based classifier (MUST-Plus) for more accurate prediction of malnutrition. METHOD: A retrospective cohort with inpatient data consisting of anthropometric, laboratory biochemistry, clinical data, and demographics from adult (≥ 18 years) admissions at a large tertiary health care system between January 2017 and July 2018 was used. The registered dietitian (RD) nutritional assessments were used as the gold-standard outcome label. The cohort was randomly split (70:30) into training and test sets. A random forest model was trained using 10-fold cross-validation on the training set, and its predictive performance on the test set was compared to MUST. RESULTS: In all, 13.3% of admissions were associated with malnutrition in the test cohort. MUST-Plus provided 73.07% (95% confidence interval [CI]: 69.61%-76.33%) sensitivity, 76.89% (95% CI: 75.64%-78.11%) specificity, and 83.5% (95% CI: 82.0%-85.0%) area under the receiver operating characteristic curve (AUC). Compared to the classic MUST, MUST-Plus demonstrated 30% higher sensitivity, 6% higher specificity, and a 17% increase in AUC. CONCLUSIONS: The ML-based MUST-Plus provided superior performance in identifying malnutrition compared to the classic MUST. The tool can improve the operational efficiency of RDs through timely referrals of high-risk patients.


Subjects
Malnutrition, Nutrition Assessment, Adult, Humans, Machine Learning, Malnutrition/diagnosis, Mass Screening, Retrospective Studies
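A minimal sketch of the training protocol described above (70:30 split, random forest with 10-fold cross-validation, AUC on the held-out set), using synthetic data in place of the anthropometric and laboratory features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for the cohort; ~13% positives mirrors the
# malnutrition prevalence reported in the test cohort
X, y = make_classification(n_samples=3000, n_features=20,
                           weights=[0.87], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
# 10-fold cross-validated AUC on the training portion
cv_auc = cross_val_score(rf, X_tr, y_tr, cv=10, scoring="roc_auc")
# Final fit and held-out evaluation
rf.fit(X_tr, y_tr)
test_auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
```

Sensitivity and specificity at a chosen probability threshold would then be read off the test-set confusion matrix, as the abstract reports.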
7.
J Med Internet Res ; 22(11): e24018, 2020 11 06.
Article in English | MEDLINE | ID: mdl-33027032

ABSTRACT

BACKGROUND: COVID-19 has infected millions of people worldwide and is responsible for several hundred thousand fatalities. The COVID-19 pandemic has necessitated thoughtful resource allocation and early identification of high-risk patients. However, effective methods to meet these needs are lacking. OBJECTIVE: The aims of this study were to analyze the electronic health records (EHRs) of patients who tested positive for COVID-19 and were admitted to hospitals in the Mount Sinai Health System in New York City; to develop machine learning models for making predictions about the hospital course of the patients over clinically meaningful time horizons based on patient characteristics at admission; and to assess the performance of these models at multiple hospitals and time points. METHODS: We used Extreme Gradient Boosting (XGBoost) and baseline comparator models to predict in-hospital mortality and critical events at time windows of 3, 5, 7, and 10 days from admission. Our study population included harmonized EHR data from five hospitals in New York City for 4098 COVID-19-positive patients admitted from March 15 to May 22, 2020. The models were first trained on patients from a single hospital (n=1514) before or on May 1, externally validated on patients from four other hospitals (n=2201) before or on May 1, and prospectively validated on all patients after May 1 (n=383). Finally, we established model interpretability to identify and rank variables that drive model predictions. RESULTS: Upon cross-validation, the XGBoost classifier outperformed baseline models, with an area under the receiver operating characteristic curve (AUC-ROC) for mortality of 0.89 at 3 days, 0.85 at 5 and 7 days, and 0.84 at 10 days. XGBoost also performed well for critical event prediction, with an AUC-ROC of 0.80 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. 
In external validation, XGBoost achieved an AUC-ROC of 0.88 at 3 days, 0.86 at 5 days, 0.86 at 7 days, and 0.84 at 10 days for mortality prediction. Similarly, the unimputed XGBoost model achieved an AUC-ROC of 0.78 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. Trends in performance on prospective validation sets were similar. At 7 days, acute kidney injury on admission, elevated LDH, tachypnea, and hyperglycemia were the strongest drivers of critical event prediction, while higher age, anion gap, and C-reactive protein were the strongest drivers of mortality prediction. CONCLUSIONS: We externally and prospectively trained and validated machine learning models for mortality and critical events for patients with COVID-19 at different time horizons. These models identified at-risk patients and uncovered underlying relationships that predicted outcomes.


Subjects
Coronavirus Infections/diagnosis, Coronavirus Infections/mortality, Machine Learning/standards, Pneumonia, Viral/diagnosis, Pneumonia, Viral/mortality, Acute Kidney Injury/epidemiology, Adolescent, Adult, Aged, Aged, 80 and over, Betacoronavirus, COVID-19, Cohort Studies, Electronic Health Records, Female, Hospital Mortality, Hospitalization/statistics & numerical data, Hospitals, Humans, Male, Middle Aged, New York City/epidemiology, Pandemics, Prognosis, ROC Curve, Risk Assessment/methods, Risk Assessment/standards, SARS-CoV-2, Young Adult
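The one-model-per-time-horizon setup described above can be sketched as follows. scikit-learn's GradientBoostingClassifier stands in for XGBoost so the example has no external dependency, and the admission features and event times are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 10))            # admission features (synthetic)
event_day = rng.exponential(8, size=n)  # days from admission to death (synthetic)
died = rng.random(n) < 0.25             # whether death occurred at all

# One classifier per clinically meaningful horizon, as in the study design
models = {}
for horizon in (3, 5, 7, 10):
    # Label: death within `horizon` days of admission
    y = (died & (event_day <= horizon)).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    models[horizon] = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
```

Each horizon's model is then evaluated with AUC-ROC on internal, external, and prospective splits, mirroring the validation scheme in the abstract.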
8.
NPJ Digit Med ; 7(1): 149, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844546

ABSTRACT

Malnutrition is a frequently underdiagnosed condition leading to increased morbidity, mortality, and healthcare costs. The Mount Sinai Health System (MSHS) deployed a machine learning model (MUST-Plus) to detect malnutrition upon hospital admission. However, in diverse patient groups, a poorly calibrated model may lead to misdiagnosis, exacerbating health care disparities. We explored the model's calibration across different variables and methods to improve calibration. Data from adult patients admitted to five MSHS hospitals from January 1, 2021 - December 31, 2022, were analyzed. We compared MUST-Plus prediction to the registered dietitian's formal assessment. Hierarchical calibration was assessed and compared between the recalibration sample (N = 49,562) of patients admitted between January 1, 2021 - December 31, 2022, and the hold-out sample (N = 17,278) of patients admitted between January 1, 2023 - September 30, 2023. Statistical differences in calibration metrics were tested using bootstrapping with replacement. Before recalibration, the overall model calibration intercept was -1.17 (95% CI: -1.20, -1.14), slope was 1.37 (95% CI: 1.34, 1.40), and Brier score was 0.26 (95% CI: 0.25, 0.26). Both weak and moderate measures of calibration were significantly different between White and Black patients and between male and female patients. Logistic recalibration significantly improved calibration of the model across race and gender in the hold-out sample. The original MUST-Plus model showed significant differences in calibration between White vs. Black patients. It also overestimated malnutrition in females compared to males. Logistic recalibration effectively reduced miscalibration across all patient subgroups. Continual monitoring and timely recalibration can improve model accuracy.
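A hedged sketch of the logistic recalibration described above: regress the observed outcome on the logit of the model's prediction (an intercept near 0 and slope near 1 indicate good calibration), then map predictions through the fitted calibration model. The miscalibration below is simulated, not the MUST-Plus model's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def logit(p):
    return np.log(p / (1 - p))

rng = np.random.default_rng(7)
n = 5000
p_true = rng.uniform(0.05, 0.95, n)       # true event probabilities
y = (rng.random(n) < p_true).astype(int)  # observed outcomes
# Simulated miscalibration: predictions compressed and shifted on the logit scale
p_model = 1 / (1 + np.exp(-(0.6 * logit(p_true) - 0.8)))

# Weak calibration check: regress the outcome on logit(prediction)
lp = logit(p_model).reshape(-1, 1)
cal = LogisticRegression().fit(lp, y)
slope, intercept = cal.coef_[0, 0], cal.intercept_[0]

# Logistic recalibration: pass predictions through the fitted calibration model
p_recal = cal.predict_proba(lp)[:, 1]
```

In practice the calibration model is fit per subgroup (or with subgroup terms) on a recalibration sample and checked on a hold-out sample, as the study does.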

9.
Bioengineering (Basel) ; 11(6)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38927862

ABSTRACT

The decision to extubate patients on invasive mechanical ventilation is critical; however, clinician performance in identifying patients to liberate from the ventilator is poor. Machine learning-based predictors using tabular data have been developed; however, these fail to capture the wide spectrum of data available. Here, we develop and validate a deep learning-based model using routinely collected chest X-rays to predict the outcome of attempted extubation. We included 2288 consecutive patients admitted to the medical ICU at an urban academic medical center who underwent invasive mechanical ventilation, had at least one intubated CXR, and had a documented extubation attempt. The last CXR before extubation for each patient was taken, and the patients were split 79/21 into training and testing sets; transfer learning with k-fold cross-validation was then applied to a pre-trained ResNet50 deep learning architecture. The top three models were ensembled to form a final classifier. The Grad-CAM technique was used to visualize the image regions driving predictions. The model achieved an AUC of 0.66, AUPRC of 0.94, sensitivity of 0.62, and specificity of 0.60. The model's performance was improved compared with the Rapid Shallow Breathing Index (AUC 0.61) and the only previous study identified in this domain (AUC 0.55), but significant room for improvement and experimentation remains.
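The fine-tuning stages need a deep learning framework, but the fold-level model selection and top-three ensembling step generalizes. Below is a sketch with scikit-learn classifiers standing in for the fine-tuned ResNet50 folds, on synthetic data; only the select-and-average pattern reflects the abstract's method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=1200, n_features=30, random_state=3)
X_tr, y_tr, X_te, y_te = X[:1000], y[:1000], X[1000:], y[1000:]

# Train one model per fold and score each on its validation split
fold_models = []
for tr_idx, va_idx in KFold(n_splits=5, shuffle=True, random_state=3).split(X_tr):
    m = RandomForestClassifier(n_estimators=50, random_state=0)
    m.fit(X_tr[tr_idx], y_tr[tr_idx])
    auc = roc_auc_score(y_tr[va_idx], m.predict_proba(X_tr[va_idx])[:, 1])
    fold_models.append((auc, m))

# Ensemble the top three folds by averaging predicted probabilities
top3 = [m for _, m in sorted(fold_models, key=lambda t: t[0], reverse=True)[:3]]
p_ens = np.mean([m.predict_proba(X_te)[:, 1] for m in top3], axis=0)
```

Averaging probabilities from the strongest folds typically yields a more stable classifier than any single fold.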

10.
JMIR Form Res ; 7: e42262, 2023 Jul 13.
Article in English | MEDLINE | ID: mdl-37440303

ABSTRACT

BACKGROUND: Machine learning (ML)-based clinical decision support systems (CDSS) are popular in clinical practice settings but are often criticized for being limited in usability, interpretability, and effectiveness. Evaluating the implementation of ML-based CDSS is critical to ensure CDSS is acceptable and useful to clinicians and helps them deliver high-quality health care. Malnutrition is a common and underdiagnosed condition among hospital patients, which can have serious adverse impacts. Early identification and treatment of malnutrition are important. OBJECTIVE: This study aims to evaluate the implementation of an ML tool, Malnutrition Universal Screening Tool (MUST)-Plus, that predicts hospital patients at high risk for malnutrition and identify best implementation practices applicable to this and other ML-based CDSS. METHODS: We conducted a qualitative postimplementation evaluation using in-depth interviews with registered dietitians (RDs) who use MUST-Plus output in their everyday work. After coding the data, we mapped emergent themes onto select domains of the nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) framework. RESULTS: We interviewed 17 of the 24 RDs approached (71%), representing 37% of those who use MUST-Plus output. Several themes emerged: (1) enhancements to the tool were made to improve accuracy and usability; (2) MUST-Plus helped identify patients that would not otherwise be seen; perceived usefulness was highest in the original site; (3) perceived accuracy varied by respondent and site; (4) RDs valued autonomy in prioritizing patients; (5) depth of tool understanding varied by hospital and level; (6) MUST-Plus was integrated into workflows and electronic health records; and (7) RDs expressed a desire to eventually have 1 automated screener. CONCLUSIONS: Our findings suggest that continuous involvement of stakeholders at new sites given staff turnover is vital to ensure buy-in. 
Qualitative research can help identify the potential bias of ML tools and should be widely used to ensure health equity. Ongoing collaboration among CDSS developers, data scientists, and clinical providers may help refine CDSS for optimal use and improve the acceptability of CDSS in the clinical context.

11.
JMIR AI ; 2: e48628, 2023 Sep 13.
Article in English | MEDLINE | ID: mdl-38875535

ABSTRACT

BACKGROUND: Infusion failure may have severe consequences for patients receiving critical, short-half-life infusions. Continued interruptions to infusions can lead to subtherapeutic therapy. OBJECTIVE: This study aims to identify and rank determinants of the longevity of continuous infusions administered through syringe drivers, using nonlinear predictive models. Additionally, this study aims to evaluate key factors influencing infusion longevity and develop and test a model for predicting the likelihood of achieving successful infusion longevity. METHODS: Data were extracted from the event logs of smart pumps containing information on care profiles, medication types and concentrations, occlusion alarm settings, and the final infusion cessation cause. These data were then used to fit 5 nonlinear models and evaluate the best explanatory model. RESULTS: Random forest was the best-fit predictor, with an F1-score of 80.42, compared to 5 other models (mean F1-score 75.06; range 67.48-79.63). When applied to infusion data in an individual syringe driver data set, the predictor model found that the final medication concentration and medication type were of less significance to infusion longevity compared to the rate and care unit. For low-rate infusions, rates ranging from 2 to 2.8 mL/hr performed best for achieving a balance between infusion longevity and fluid load per infusion, with an occlusion versus no-occlusion ratio of 0.553. Rates between 0.8 and 1.2 mL/hr exhibited the poorest performance with a ratio of 1.604. Higher rates, up to 4 mL/hr, performed better in terms of occlusion versus no-occlusion ratios. CONCLUSIONS: This study provides clinicians with insights into the specific types of infusion that warrant more intense observation or proactive management of intravenous access; additionally, it can offer valuable information regarding the average duration of uninterrupted infusions that can be expected in these care areas. 
Optimizing rate settings to improve infusion longevity for continuous infusions, achieved through compounding to create customized concentrations for individual patients, may be possible in light of the study's outcomes. The study also highlights the potential of machine learning nonlinear models in predicting outcomes and life spans of specific therapies delivered via medical devices.
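For reference, the F1-score used above to compare the candidate models combines precision and recall; the paper reports it on a 0-100 scale. The confusion counts below are illustrative, not the study's.

```python
def f1_score_pct(tp: int, fp: int, fn: int) -> float:
    """F1 on a 0-100 scale: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 100 * 2 * precision * recall / (precision + recall)

# Illustrative confusion counts (not from the study)
f1 = f1_score_pct(tp=800, fp=200, fn=190)
```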

12.
JMIR Form Res ; 7: e46905, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37883177

ABSTRACT

BACKGROUND: Early prediction of the need for invasive mechanical ventilation (IMV) in patients hospitalized with COVID-19 symptoms can help allocate resources appropriately and improve patient outcomes by monitoring and treating the patients at greatest risk of respiratory failure. Given the complexity of deciding whether a patient needs IMV, machine learning algorithms may bring additional prognostic value in a timely and systematic manner. Chest radiographs (CXRs) and electronic medical records (EMRs), typically obtained early in patients admitted with COVID-19, are key to this decision. OBJECTIVE: We aimed to evaluate the use of a machine learning model to predict the need for intubation within 24 hours by using a combination of CXR and EMR data in an end-to-end automated pipeline. We included historical data from 2481 hospitalizations at The Mount Sinai Hospital in New York City. METHODS: CXRs were first resized, rescaled, and normalized. The lungs were then segmented from the CXRs by using a U-Net algorithm. After splitting the images into a training and a test set, the training set was augmented. The augmented images were used to train an image classifier to predict the probability of intubation within a 24-hour window by retraining a pretrained DenseNet model by using transfer learning, 10-fold cross-validation, and grid search. In the final fusion model, we then trained a random forest algorithm via 10-fold cross-validation by combining the probability score from the image classifier with 41 longitudinal variables in the EMR. Variables in the EMR included clinical and laboratory data routinely collected in the inpatient setting. The final fusion model likewise gave a prediction likelihood for the need for intubation within 24 hours.
RESULTS: At a prediction probability threshold of 0.5, the fusion model provided 78.9% (95% CI 59%-96%) sensitivity, 83% (95% CI 76%-89%) specificity, 0.509 (95% CI 0.34-0.67) F1-score, 0.874 (95% CI 0.80-0.94) area under the receiver operating characteristic curve (AUROC), and 0.497 (95% CI 0.32-0.65) area under the precision recall curve (AUPRC) on the holdout set. Compared to the image classifier alone, which had an AUROC of 0.577 (95% CI 0.44-0.73) and an AUPRC of 0.206 (95% CI 0.08-0.38), the fusion model showed significant improvement (P<.001). The most important predictor variables were respiratory rate, C-reactive protein, oxygen saturation, and lactate dehydrogenase. The imaging probability score ranked 15th in overall feature importance. CONCLUSIONS: We show that, when linked with EMR data, an automated deep learning image classifier improved performance in identifying hospitalized patients with severe COVID-19 at risk for intubation. With additional prospective and external validation, such a model may assist risk assessment and optimize clinical decision-making in choosing the best care plan during the critical stages of COVID-19.
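The late-fusion step can be sketched as follows: the image classifier's probability is appended to the tabular EMR features and a random forest is trained on the combined matrix. All data below are synthetic, and the feature-ranking line merely mirrors the kind of importance analysis reported (where the imaging score ranked 15th).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n = 2481  # hospitalizations in the study
ehr = rng.normal(size=(n, 41))        # 41 longitudinal EMR variables (synthetic)
img_prob = rng.uniform(0, 1, size=n)  # image classifier's intubation probability
y = (rng.random(n) < 0.1).astype(int) # intubation within 24 h (synthetic)

# Late fusion: append the image probability as a 42nd feature
X_fused = np.column_stack([ehr, img_prob])
fusion = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_fused, y)

# Rank the imaging score against the EMR variables by impurity importance
rank_of_img = 1 + int((fusion.feature_importances_ >
                       fusion.feature_importances_[-1]).sum())
```

Fusing at the probability level keeps the image and tabular pipelines independent, so either component can be retrained without touching the other.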

13.
RSC Adv ; 12(18): 11021-11031, 2022 Apr 07.
Article in English | MEDLINE | ID: mdl-35425030

ABSTRACT

Nanotechnology governance, particularly in relation to human and environmental concerns, remains a contested domain. In recent years, the creation of both a risk governance framework and a council has been actively pursued. Part of the function of a governance framework is communication to external stakeholders. Existing descriptions of public perceptions of nanotechnology are generally positive, with the attendant economic and societal benefits being forefront in that thinking. Debates on nanomaterials' risk tend to be dominated by expert groupings, while the general public is largely unaware of the potential hazards. Communicating via social media has become an integral part of everyday life, facilitating public connectedness around specific topics in a way that was not feasible in the pre-digital age. When passive civilian stakeholders become active, their frustration can quickly coalesce into a campaign of resistance, and once an issue starts to develop into a campaign it is difficult to ease the momentum. Simmering discussions with moderate local attention can gain international exposure, resulting in pressure, and can, in some cases, quickly precipitate legislative action and/or economic consequences. This paper highlights the potential of such a runaway 'twitterstorm'. We conducted a sentiment analysis of tweets since 2006 focusing on silver, titanium, and carbon-based nanomaterials. We further examined the sentiment expressed following the decision by the European Food Safety Authority (EFSA) to phase out the food additive titanium dioxide (E 171). Our analysis shows an engaged, attentive public, alert to announcements from industry and regulatory bodies. We demonstrate that risk governance frameworks, and particularly the communication aspect of those structures, must include a social media blueprint to counter misinformation and alleviate the potential impact of a social media-induced regulatory and economic reaction.
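Lexicon-based scoring, of the kind commonly used for tweet-level sentiment, can be sketched as below; the lexicon and example tweets are toy stand-ins, not the study's instrument.

```python
# Toy sentiment lexicon: word -> valence score (illustrative values)
LEXICON = {"safe": 1.0, "benefit": 1.5, "promising": 1.2,
           "toxic": -2.0, "ban": -1.5, "risk": -1.0, "harmful": -1.8}

def sentiment(tweet: str) -> float:
    """Mean valence of lexicon words in the tweet; 0.0 if none match."""
    words = tweet.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

s_pos = sentiment("Nanotech offers a promising benefit for medicine")
s_neg = sentiment("These nanoparticles are toxic and harmful")
```

Aggregating such scores over time, and around events like the EFSA E 171 decision, gives the kind of sentiment trend the paper analyzes.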

14.
J Clin Med ; 11(23)2022 Nov 22.
Article in English | MEDLINE | ID: mdl-36498463

ABSTRACT

BACKGROUND AND AIM: We analyzed an inclusive gradient boosting model to predict hospital admission from the emergency department (ED) at different time points. We compared its results to those of multiple models built exclusively at each time point. METHODS: This retrospective multisite study utilized ED data from the Mount Sinai Health System, NY, during 2015-2019. Data included tabular clinical features and free-text triage notes represented using bag-of-words. A full gradient boosting model, trained on data available at different time points (30, 60, 90, 120, and 150 min), was compared to single models trained exclusively on data available at each time point. This was done by concatenating the rows of data available at each time point into one data matrix for the full model, where each row is considered a separate case. RESULTS: The cohort included 1,043,345 ED visits. The full model showed results comparable to those of the single models at all time points (AUCs 0.84-0.88 for both the full and single models). CONCLUSION: A full model trained on data concatenated from different time points showed results similar to those of single models trained at each time point. Such an ML-based prediction model can be used for predicting hospital admission.
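The row-concatenation scheme for the full model can be sketched as follows, with synthetic snapshots standing in for the tabular and bag-of-words features:

```python
import numpy as np

rng = np.random.default_rng(2)
n_visits, n_features = 500, 12
time_points = (30, 60, 90, 120, 150)  # minutes from ED arrival

# One feature snapshot per visit per time point (synthetic)
snapshots = [rng.normal(size=(n_visits, n_features)) for _ in time_points]
y = (rng.random(n_visits) < 0.3).astype(int)  # admitted or not

# Full-model design matrix: stack the rows from every time point,
# treating each (visit, time point) pair as a separate training case
X_full = np.vstack(snapshots)
y_full = np.tile(y, len(time_points))
```

A single gradient boosting model fit on `(X_full, y_full)` can then be applied at any of the time points, instead of maintaining one model per time point.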

15.
J Neurochem ; 116(6): 1043-56, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21175618

ABSTRACT

We have previously shown that after kindling (a model of temporal lobe epilepsy), the neuroactive steroid tetrahydrodeoxycorticosterone (THDOC) was unable to augment GABA type A receptor (GABA(A))-mediated synaptic currents occurring on pyramidal cells of the piriform cortex. Phosphorylation of GABA(A) receptors has been shown previously to alter the activity of THDOC, so we tested the hypothesis that kindling induces changes in the phosphorylation of GABA(A) receptors and that this accounts for the loss in efficacy. To assay whether GABA(A) receptors are more phosphorylated after kindling, we examined the phosphorylation state of the ß3 subunit and found that it was increased. Incubation of brain slices with the protein kinase C activator phorbol 12-myristate 13-acetate (PMA) (100 nM) also increased phosphorylation in the same assay. In patch-clamp recordings from non-kindled rat brain slices, PMA also reduced the activity of THDOC in a manner identical to what is observed after kindling. We also found that the tonic current was no longer augmented by THDOC after kindling and PMA treatment. The protein kinase C (PKC) antagonist bisindolylmaleimide I blocked the effects of PMA on the synaptic but not the tonic currents. However, the broad-spectrum PKC antagonist staurosporine blocked the effects of PMA on the tonic currents, implying that different PKC isoforms phosphorylate the GABA(A) receptors responsible for phasic and tonic currents. The phosphatase activator Li(+) palmitate restored the 'normal' activity of THDOC on synaptic currents in kindled brain slices but not on the tonic currents. These data demonstrate that kindling enhances the phosphorylation state of GABA(A) receptors expressed in pyramidal neurons, reducing THDOC efficacy.


Subjects
Desoxycorticosterone/analogs & derivatives, Inhibitory Postsynaptic Potentials/drug effects, Kindling, Neurologic/pathology, Neurotransmitter Agents/pharmacology, Pyramidal Cells/drug effects, Receptors, GABA/metabolism, Animals, Cerebral Cortex/pathology, Cerebral Cortex/physiopathology, Desoxycorticosterone/pharmacology, Enzyme Inhibitors/pharmacology, Gene Expression Regulation/drug effects, In Vitro Techniques, Indoles/pharmacology, Male, Maleimides/pharmacology, Patch-Clamp Techniques/methods, Phorbol Esters/pharmacology, Phosphorylation/drug effects, Phosphorylation/physiology, Rats, Rats, Sprague-Dawley, Receptors, GABA/genetics
16.
Array (N Y) ; 11: 100075, 2021 Sep.
Article in English | MEDLINE | ID: mdl-35083428

ABSTRACT

BACKGROUND: From February 2020, both urban and rural Ireland witnessed the rapid proliferation of the COVID-19 disease throughout its counties. During this period, the national COVID-19 responses included stay-at-home directives issued by the state, subject to varying levels of enforcement. METHODS: In this paper, we present a new method to assess and rank the determinants of Ireland's COVID-19 deaths as they relate to Google-provided mobility activities within each county, while taking into consideration the epidemiologically confirmed positive cases reported per county. We used a network structure and a rank propagation modelling approach based on Personalised PageRank to reveal the importance of each mobility category linked to cases and deaths. A novel feature-selection method using relative prominent factors then finds the important features related to each county's deaths. Finally, we clustered the counties based on the selected features and the network results, using a clustering algorithm customised for the research problem. FINDINGS: Our analysis reveals that the mobility trend categories exhibiting the strongest association with COVID-19 cases and deaths include retail and recreation and workplaces. This is the first time a network structure and rank propagation modelling approach has been used to link COVID-19 data to mobility patterns. The infection determinants landscape illustrated by the network results aligns soundly with county socio-economic and demographic features. The novel feature-selection and clustering method produced clusters useful to policymakers, managers of the health sector, politicians, and even sociologists. Finally, each county has a different impact on the national total.
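Personalised PageRank on a small toy mobility graph can be sketched with a plain power iteration; the nodes, edge weights, and restart vector below are illustrative, not the study's actual network.

```python
import numpy as np

# Toy bipartite-style graph: county nodes and mobility-category nodes,
# with edge weights reflecting mobility activity (illustrative numbers)
nodes = ["Dublin", "Cork", "retail_recreation", "workplaces", "parks"]
A = np.array([[0, 0, 3, 2, 1],
              [0, 0, 2, 2, 1],
              [3, 2, 0, 0, 0],
              [2, 2, 0, 0, 0],
              [1, 1, 0, 0, 0]], dtype=float)

# Column-stochastic transition matrix
P = A / A.sum(axis=0)

# Personalisation: restart only at the county nodes, as when ranking
# mobility categories by their importance to county-level outcomes
v = np.array([0.5, 0.5, 0.0, 0.0, 0.0])
alpha = 0.85  # damping factor
r = np.full(len(nodes), 1 / len(nodes))
for _ in range(100):  # power iteration to the stationary distribution
    r = alpha * P @ r + (1 - alpha) * v

ranking = sorted(zip(nodes, r), key=lambda t: -t[1])
```

Heavily weighted categories (here `retail_recreation`) accumulate more rank than lightly weighted ones (`parks`), which is how the method surfaces the most influential mobility categories.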

17.
Clin J Am Soc Nephrol ; 16(8): 1158-1168, 2021 08.
Article in English | MEDLINE | ID: mdl-34031183

ABSTRACT

BACKGROUND AND OBJECTIVES: AKI treated with dialysis initiation is a common complication of coronavirus disease 2019 (COVID-19) among hospitalized patients. However, dialysis supplies and personnel are often limited. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS: Using data from adult patients hospitalized with COVID-19 at five hospitals of the Mount Sinai Health System who were admitted between March 10 and December 26, 2020, we developed and validated several models (logistic regression, Least Absolute Shrinkage and Selection Operator [LASSO], random forest, and eXtreme Gradient Boosting [XGBoost; with and without imputation]) for predicting treatment with dialysis or death at various time horizons (1, 3, 5, and 7 days) after hospital admission. Patients admitted to the Mount Sinai Hospital were used for internal validation, whereas the other hospitals formed the external validation cohort. Features included demographics, comorbidities, and laboratory values and vital signs within 12 hours of hospital admission. RESULTS: A total of 6093 patients (2442 in training and 3651 in external validation) were included in the final cohort. Of the modeling approaches used, XGBoost without imputation had the highest area under the receiver operating characteristic curve (AUROC; range, 0.93-0.98) and area under the precision-recall curve (AUPRC; range, 0.78-0.82) on internal validation at all time points. XGBoost without imputation also performed best on external validation (AUROC range, 0.85-0.87; AUPRC range, 0.27-0.54) across all time windows, outperforming all other models in precision and recall (mean difference in AUROC, 0.04; mean difference in AUPRC, 0.15). Creatinine, BUN, and red cell distribution width were major drivers of the model's predictions.
CONCLUSIONS: An XGBoost model without imputation for prediction of a composite outcome of either death or dialysis in patients positive for COVID-19 had the best performance, as compared with standard and other machine learning models. PODCAST: This article contains a podcast at https://www.asn-online.org/media/podcast/CJASN/2021_07_09_CJN17311120.mp3.


Subjects
Acute Kidney Injury/therapy, COVID-19/complications, Machine Learning, Renal Dialysis, SARS-CoV-2, Acute Kidney Injury/mortality, COVID-19/mortality, Hospitalization, Humans
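The AUROC figures reported in record 17 have a useful probabilistic reading: the chance that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. That quantity can be computed directly from score ranks via the Mann-Whitney U statistic; a stdlib-only sketch (the example scores are made up):

```python
# AUROC from the Mann-Whitney U statistic, with midranks for tied scores.

def auroc(scores, labels):
    """AUROC from predicted risk scores and binary labels (1 = event)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):  # assign the average (mid) rank to each tie group
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # 1-based average rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    u = sum(pos) - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

print(auroc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # perfect separation -> prints 1.0
```

Library implementations (e.g. scikit-learn's `roc_auc_score`) compute the same quantity; the rank form makes the probabilistic interpretation explicit.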
18.
Article in English | MEDLINE | ID: mdl-32963059

ABSTRACT

OBJECTIVES: To develop and validate a model for prediction of near-term in-hospital mortality among patients with COVID-19 by applying a machine learning (ML) algorithm to time-series inpatient data from electronic health records. METHODS: A cohort of 567 patients with COVID-19 at a large acute care healthcare system between 10 February 2020 and 7 April 2020 was observed until either death or discharge. A random forest (RF) model was developed on a randomly drawn 70% of the cohort (training set) and its performance was evaluated on the remaining 30% (test set). The outcome variable was in-hospital mortality within 20-84 hours from the time of prediction. Input features included patients' vital signs, laboratory data and ECG results. RESULTS: Patients had a median age of 60.2 years (IQR 26.2 years); 54.1% were men. The in-hospital mortality rate was 17.0% and the overall median time to death was 6.5 days (range 1.3-23.0 days). In the test set, the RF classifier yielded a sensitivity of 87.8% (95% CI: 78.2% to 94.3%), specificity of 60.6% (95% CI: 55.2% to 65.8%), accuracy of 65.5% (95% CI: 60.7% to 70.0%), area under the receiver operating characteristic curve of 85.5% (95% CI: 80.8% to 90.2%) and area under the precision-recall curve of 64.4% (95% CI: 53.5% to 75.3%). CONCLUSIONS: Our ML-based approach can be used to analyse electronic health record data and reliably predict near-term mortality. Using such a model in hospitals could help improve care, better aligning clinical decisions with prognosis in critically ill patients with COVID-19.
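The sensitivity, specificity, and accuracy quoted above all derive from the 2x2 confusion matrix of predicted versus observed outcomes. A minimal sketch; the counts below are hypothetical, not the study's data:

```python
# Classification metrics from confusion-matrix counts.

def classification_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),          # recall: fraction of events flagged
        "specificity": tn / (tn + fp),          # fraction of non-events not flagged
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for illustration:
m = classification_metrics(tp=36, fp=52, tn=80, fn=5)
```

The pattern in the abstract (high sensitivity, moderate specificity) is a common operating-point choice for deterioration alerts, where missed events are costlier than false alarms.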

19.
J Clin Med ; 9(6)2020 Jun 01.
Article in English | MEDLINE | ID: mdl-32492874

ABSTRACT

OBJECTIVES: Approximately 20-30% of patients with COVID-19 require hospitalization, and 5-12% may require critical care in an intensive care unit (ICU). A rapid surge in cases of severe COVID-19 will lead to a corresponding surge in demand for ICU care. Because of constraints on resources, frontline healthcare workers may be unable to provide the frequent monitoring and assessment required for all patients at high risk of clinical deterioration. We developed a machine learning-based risk prioritization tool that predicts ICU transfer within 24 h, seeking to facilitate efficient use of care providers' efforts and help hospitals plan their flow of operations. METHODS: The retrospective cohort comprised non-ICU COVID-19 admissions at a large acute care health system between 26 February and 18 April 2020. Time-series data, including vital signs, nursing assessments, laboratory data, and electrocardiograms, were used as input variables for training a random forest (RF) model. The cohort was randomly split (70:30) into training and test sets. The RF model was trained using 10-fold cross-validation on the training set, and its predictive performance was then evaluated on the test set. RESULTS: The cohort consisted of 1987 unique patients diagnosed with COVID-19 and admitted to non-ICU units of the hospital. The median time to ICU transfer was 2.45 days from the time of admission. Evaluated against actual ICU admissions, the tool had 72.8% (95% CI: 63.2-81.1%) sensitivity, 76.3% (95% CI: 74.7-77.9%) specificity, 76.2% (95% CI: 74.6-77.7%) accuracy, and 79.9% (95% CI: 75.2-84.6%) area under the receiver operating characteristic curve. CONCLUSIONS: An ML-based prediction model can be used as a screening tool to identify patients at risk of imminent ICU transfer within 24 h. It could improve the management of hospital resources and patient-throughput planning, thus delivering more effective care to patients hospitalized with COVID-19.
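The evaluation protocol above (a random 70:30 split, then 10-fold cross-validation within the training portion) can be sketched with the standard library alone; the fold-assignment scheme and seed below are illustrative choices, not the study's:

```python
# Random 70:30 train/test split, then k disjoint cross-validation folds
# carved out of the training indices.

import random

def split_and_folds(n, train_frac=0.7, k=10, seed=42):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # seeded for reproducibility
    cut = int(n * train_frac)
    train, test = idx[:cut], idx[cut:]
    folds = [train[i::k] for i in range(k)]   # round-robin assignment to k folds
    return train, test, folds

train, test, folds = split_and_folds(1987)    # cohort size from the abstract
```

Each cross-validation round holds out one fold for validation and trains on the remaining nine; the 30% test partition is touched only once, for the final performance estimate.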

20.
J Clin Med ; 9(2)2020 Jan 27.
Article in English | MEDLINE | ID: mdl-32012659

ABSTRACT

Early detection of patients at risk of clinical deterioration is crucial for timely intervention. Traditional detection systems rely on a limited set of variables and cannot predict the time of decline. We describe a machine learning model called MEWS++ that identifies patients at risk of escalation of care or death six hours before the event. A retrospective single-center cohort study was conducted from July 2011 to July 2017 of adult (age > 18) inpatients, excluding psychiatric, parturient, and hospice patients. Three machine learning models were trained and tested: random forest (RF), linear support vector machine, and logistic regression. We compared the models' performance to the traditional Modified Early Warning Score (MEWS) using sensitivity, specificity, and area under the receiver operating characteristic (AUC-ROC) and precision-recall (AUC-PR) curves. The primary outcome was escalation of care from a floor bed to an intensive care or step-down unit, or death, within 6 h. A total of 96,645 patients with 157,984 hospital encounters and 244,343 bed movements were included. The overall rate of escalation or death was 3.4%. The RF model had the best performance, with sensitivity of 81.6%, specificity of 75.5%, AUC-ROC of 0.85, and AUC-PR of 0.37. Compared to the traditional MEWS, sensitivity increased 37%, specificity increased 11%, and AUC-ROC increased 14%. This study found that, using machine learning and readily available clinical data, clinical deterioration or death can be predicted 6 h before the event. The model can warn of patient deterioration hours in advance, helping clinicians make timely decisions.
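The baseline in the comparison above is the traditional Modified Early Warning Score: a small lookup table that sums points for abnormal vital signs. A sketch using commonly published MEWS thresholds; exact cut-offs vary by institution, so treat these as illustrative rather than the study's scoring table:

```python
# Illustrative MEWS calculator using commonly published thresholds
# (institutional variants differ; this is not the study's exact table).

def mews(sbp, hr, rr, temp_c, avpu):
    """avpu: 'alert', 'voice', 'pain', or 'unresponsive'."""
    score = 0
    # Systolic blood pressure (mmHg)
    if sbp <= 70: score += 3
    elif sbp <= 80: score += 2
    elif sbp <= 100: score += 1
    elif sbp >= 200: score += 2
    # Heart rate (beats/min)
    if hr < 40: score += 2
    elif hr <= 50: score += 1
    elif hr <= 100: score += 0
    elif hr <= 110: score += 1
    elif hr <= 129: score += 2
    else: score += 3
    # Respiratory rate (breaths/min)
    if rr < 9: score += 2
    elif rr <= 14: score += 0
    elif rr <= 20: score += 1
    elif rr <= 29: score += 2
    else: score += 3
    # Temperature (deg C)
    if temp_c < 35 or temp_c >= 38.5: score += 2
    # Level of consciousness (AVPU scale)
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

print(mews(sbp=118, hr=82, rr=12, temp_c=36.8, avpu="alert"))  # prints 0
```

The contrast with MEWS++ is that this score uses a handful of hand-set thresholds at a single point in time, whereas the RF model in the abstract learns from a much richer, time-stamped feature set.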
