Results 1 - 20 of 34
1.
Open Forum Infect Dis ; 11(3): ofae060, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38464488

ABSTRACT

Background: Reducing the burden of multidrug-resistant organism (MDRO) colonization and infection among renal transplant recipients (RTRs) may improve patient outcomes. We aimed to assess whether the detection of an MDRO or a comparable antibiotic-susceptible organism (CSO) during the early post-transplant (EPT) period was associated with graft loss and mortality among RTRs. Methods: We conducted a retrospective cohort study of RTRs transplanted between 2005 and 2021. EPT positivity was defined as a positive bacterial culture within 30 days of transplant. The incidence and prevalence of EPT MDRO detection were calculated. The primary outcome was a composite of 1-year allograft loss or mortality following transplant. Multivariable Cox hazard regression, competing risk, propensity score-weighted sensitivity, and subgroup analyses were performed. Results: Among 3507 RTRs, the prevalence of EPT MDRO detection was 1.3% (95% CI, 0.91%-1.69%) with an incidence rate per 1000 EPT-days at risk of 0.42 (95% CI, 0.31-0.57). Among RTRs who met survival analysis inclusion criteria (n = 3432), 91% (3138/3432) had no positive EPT cultures and were designated as negative controls, 8% (263/3432) had a CSO detected, and 1% (31/3432) had an MDRO detected in the EPT period. EPT MDRO detection was associated with the composite outcome (adjusted hazard ratio [aHR], 3.29; 95% CI, 1.21-8.92) and death-censored allograft loss (cause-specific aHR, 7.15; 95% CI, 0.92-55.5; subdistribution aHR, 7.15; 95% CI, 0.95-53.7). A similar trend was seen in the subgroup and sensitivity analyses. Conclusions: MDRO detection during the EPT period was associated with allograft loss, suggesting the need for increased strategies to optimize prevention of MDRO colonization and infection.
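The prevalence and incidence-rate figures above reduce to simple ratios. The sketch below shows the arithmetic with hypothetical counts; the study's exact cell counts and days at risk are not reported in the abstract.

```python
# Illustrative calculation of period prevalence and incidence rate,
# as reported in the abstract above. Counts are hypothetical.

def prevalence(cases: int, population: int) -> float:
    """Period prevalence as a percentage."""
    return 100.0 * cases / population

def incidence_rate_per_1000(new_cases: int, person_days_at_risk: float) -> float:
    """Incidence rate per 1,000 person-days at risk."""
    return 1000.0 * new_cases / person_days_at_risk

print(round(prevalence(45, 3507), 2))                  # percent
print(round(incidence_rate_per_1000(45, 105000), 2))   # per 1,000 days
```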

2.
Chest ; 165(3): 529-539, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37748574

ABSTRACT

BACKGROUND: Trajectories of bedside vital signs have been used to identify sepsis subphenotypes with distinct outcomes and treatment responses. The objective of this study was to validate the vitals trajectory model in a multicenter cohort of patients hospitalized with COVID-19 and to evaluate the clinical characteristics and outcomes of the resulting subphenotypes. RESEARCH QUESTION: Can the trajectory of routine bedside vital signs identify COVID-19 subphenotypes with distinct clinical characteristics and outcomes? STUDY DESIGN AND METHODS: The study included adult patients admitted with COVID-19 to four academic hospitals in the Emory Healthcare system between March 1, 2020, and May 31, 2022. Using a validated group-based trajectory model, we classified patients into previously defined vital sign trajectories using oral temperature, heart rate, respiratory rate, and systolic and diastolic BP measured in the first 8 h of hospitalization. Clinical characteristics, biomarkers, and outcomes were compared between subphenotypes. Heterogeneity of treatment effect to tocilizumab was evaluated. RESULTS: The 7,065 patients hospitalized with COVID-19 were classified into four subphenotypes: group A (n = 1,429, 20%)-high temperature, heart rate, respiratory rate, and hypotensive; group B (1,454, 21%)-high temperature, heart rate, respiratory rate, and hypertensive; group C (2,996, 42%)-low temperature, heart rate, respiratory rate, and normotensive; and group D (1,186, 17%)-low temperature, heart rate, respiratory rate, and hypotensive. Groups A and D had higher ORs of mechanical ventilation, vasopressors, and 30-day inpatient mortality (P < .001). On comparing patients receiving tocilizumab (n = 55) with those who met criteria for tocilizumab but were admitted before its use (n = 461), there was significant heterogeneity of treatment effect across subphenotypes in the association of tocilizumab with 30-day mortality (P = .001).
INTERPRETATION: By using bedside vital signs available in even low-resource settings, we found novel subphenotypes associated with distinct manifestations of COVID-19, which could lead to preemptive and targeted treatments.
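The study assigned patients with a previously validated group-based trajectory model. As a much-simplified illustration of the underlying idea, the sketch below matches an 8-hour, z-scored vital-sign trajectory to the nearest of four invented centroid curves; both the centroid values and the nearest-centroid rule are assumptions for illustration, not the paper's method.

```python
import math

# Illustrative only: the study used a validated group-based trajectory model,
# not nearest-centroid matching. The centroids below are invented z-scored
# temperature curves over the first 8 hours of hospitalization.
CENTROIDS = {
    "A": [1.0, 1.1, 1.2, 1.2, 1.1, 1.0, 0.9, 0.8],          # high, slow decline
    "B": [1.0, 0.9, 0.7, 0.5, 0.3, 0.2, 0.1, 0.0],          # high, fast resolution
    "C": [0.0] * 8,                                          # normothermic
    "D": [-0.8, -0.8, -0.7, -0.7, -0.6, -0.6, -0.5, -0.5],  # low
}

def assign_subphenotype(trajectory: list[float]) -> str:
    """Assign a z-scored 8-point vital-sign trajectory to the nearest centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda g: dist(CENTROIDS[g], trajectory))

patient = [0.9, 1.0, 1.1, 1.2, 1.2, 1.1, 1.0, 0.9]
print(assign_subphenotype(patient))
```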


Subjects
COVID-19, Adult, Humans, COVID-19/diagnosis, COVID-19/therapy, Biomarkers, Artificial Respiration, Heart Rate, Vital Signs
3.
JAMA Netw Open ; 6(7): e2322299, 2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37418261

ABSTRACT

Importance: Natural language processing (NLP) has the potential to enable faster treatment access by reducing clinician response time and improving electronic health record (EHR) efficiency. Objective: To develop an NLP model that can accurately classify patient-initiated EHR messages and triage COVID-19 cases to reduce clinician response time and improve access to antiviral treatment. Design, Setting, and Participants: This retrospective cohort study assessed development of a novel NLP framework to classify patient-initiated EHR messages and subsequently evaluate the model's accuracy. Included patients sent messages via the EHR patient portal from 5 Atlanta, Georgia, hospitals between March 30 and September 1, 2022. Assessment of the model's accuracy consisted of manual review of message contents to confirm the classification label by a team of physicians, nurses, and medical students, followed by retrospective propensity score-matched clinical outcomes analysis. Exposure: Prescription of antiviral treatment for COVID-19. Main Outcomes and Measures: The 2 primary outcomes were (1) physician-validated evaluation of the NLP model's message classification accuracy and (2) analysis of the model's potential clinical effect via increased patient access to treatment. The model classified messages into COVID-19-other (pertaining to COVID-19 but not reporting a positive test), COVID-19-positive (reporting a positive at-home COVID-19 test result), and non-COVID-19 (not pertaining to COVID-19). Results: Among 10 172 patients whose messages were included in analyses, the mean (SD) age was 58 (17) years; 6509 patients (64.0%) were women and 3663 (36.0%) were men. 
In terms of race and ethnicity, 2544 patients (25.0%) were African American or Black, 20 (0.2%) were American Indian or Alaska Native, 1508 (14.8%) were Asian, 28 (0.3%) were Native Hawaiian or other Pacific Islander, 5980 (58.8%) were White, 91 (0.9%) were more than 1 race or ethnicity, and 1 (0.01%) chose not to answer. The NLP model had high accuracy and sensitivity, with a macro F1 score of 94% and sensitivity of 85% for COVID-19-other, 96% for COVID-19-positive, and 100% for non-COVID-19 messages. Among the 3048 patient-generated messages reporting positive SARS-CoV-2 test results, 2982 (97.8%) were not documented in structured EHR data. Mean (SD) message response time for COVID-19-positive patients who received treatment (364.10 [784.47] minutes) was faster than for those who did not (490.38 [1132.14] minutes; P = .03). Likelihood of antiviral prescription was inversely correlated with message response time (odds ratio, 0.99 [95% CI, 0.98-1.00]; P = .003). Conclusions and Relevance: In this cohort study of 2982 COVID-19-positive patients, a novel NLP model classified patient-initiated EHR messages reporting positive COVID-19 test results with high sensitivity. Furthermore, when responses to patient messages occurred faster, patients were more likely to receive antiviral medical prescription within the 5-day treatment window. Although additional analysis on the effect on clinical outcomes is needed, these findings represent a possible use case for integration of NLP algorithms into clinical care.
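The macro F1 score reported above is the unweighted mean of per-class F1, so each of the three message classes counts equally regardless of prevalence. A minimal sketch with invented toy labels:

```python
def macro_f1(y_true: list[str], y_pred: list[str]) -> float:
    """Unweighted mean of per-class F1 scores (macro-averaged F1)."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Invented toy labels for the three message classes in the abstract.
true = ["covid_pos", "covid_other", "non_covid", "covid_pos", "non_covid"]
pred = ["covid_pos", "covid_other", "non_covid", "covid_other", "non_covid"]
print(round(macro_f1(true, pred), 3))
```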


Subjects
COVID-19, Male, Humans, Female, Middle Aged, COVID-19/diagnosis, COVID-19/epidemiology, SARS-CoV-2, Retrospective Studies, Cohort Studies, Electronic Health Records, Natural Language Processing
4.
Open Forum Infect Dis ; 10(7): ofad360, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37469618

ABSTRACT

Background: Food insecurity has been linked to suboptimal antiretroviral therapy (ART) adherence in persons with HIV (PWH). This association has not been evaluated using tenofovir diphosphate (TFV-DP) in dried blood spots (DBSs), a biomarker of cumulative ART adherence and exposure. Methods: Within a prospective South African cohort of treatment-naive PWH initiating ART, a subset of participants with measured TFV-DP in DBS values was assessed for food insecurity status. Bivariate and multivariate median-based regression analysis compared the association between food insecurity and TFV-DP concentrations in DBSs adjusting for age, sex, ethnicity, medication possession ratio (MPR), and estimated glomerular filtration rate. Results: Drug concentrations were available for 285 study participants. Overall, 62 (22%) PWH reported worrying about food insecurity and 44 (15%) reported not having enough food to eat in the last month. The crude median concentrations of TFV-DP in DBSs differed significantly between those who expressed food insecurity worry versus those who did not (599 [interquartile range {IQR}, 417-783] fmol/punch vs 716 [IQR, 453-957] fmol/punch; P = .032). In adjusted median-based regression, those with food insecurity worry had concentrations of TFV-DP that were 155 (95% confidence interval, -275 to -35; P = .012) fmol/punch lower than those who did not report food insecurity worry. Age and MPR remained significantly associated with TFV-DP. Conclusions: In this study, food insecurity worry is associated with lower TFV-DP concentrations in South African PWH. This highlights the role of food insecurity as a social determinant of HIV outcomes including ART failure and resistance.

5.
J Am Heart Assoc ; 12(13): e030046, 2023 07 04.
Article in English | MEDLINE | ID: mdl-37345821

ABSTRACT

Background The Fontan operation is associated with significant morbidity and premature mortality. Fontan cases cannot always be identified by International Classification of Diseases (ICD) codes, making it challenging to create large Fontan patient cohorts. We sought to develop natural language processing-based machine learning models to automatically detect Fontan cases from free texts in electronic health records, and compare their performances with ICD code-based classification. Methods and Results We included free-text notes of 10 935 manually validated patients, 778 (7.1%) Fontan and 10 157 (92.9%) non-Fontan, from 2 health care systems. Using 80% of the patient data, we trained and optimized multiple machine learning models, support vector machines and 2 versions of RoBERTa (a robustly optimized transformer-based model for language understanding), for automatically identifying Fontan cases based on notes. For RoBERTa, we implemented a novel sliding window strategy to overcome its length limit. We evaluated the machine learning models and ICD code-based classification on 20% of the held-out patient data using the F1 score metric. The ICD classification model, support vector machine, and RoBERTa achieved F1 scores of 0.81 (95% CI, 0.79-0.83), 0.95 (95% CI, 0.92-0.97), and 0.89 (95% CI, 0.88-0.85) for the positive (Fontan) class, respectively. Support vector machines obtained the best performance (P<0.05), and both natural language processing models outperformed ICD code-based classification (P<0.05). The sliding window strategy improved performance over the base model (P<0.05) but did not outperform support vector machines. ICD code-based classification produced more false positives. Conclusions Natural language processing models can automatically detect Fontan patients based on clinical notes with higher accuracy than ICD codes, and the former demonstrated the possibility of further improvement.
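RoBERTa-style encoders accept a bounded number of input tokens (typically 512), which is the length limit the sliding-window strategy works around: long notes are split into overlapping chunks that are scored separately and then aggregated. The window size, stride, and max-aggregation below are illustrative assumptions; the abstract does not give the paper's exact parameters.

```python
def sliding_windows(tokens: list[str], window: int = 512,
                    stride: int = 256) -> list[list[str]]:
    """Split a long token sequence into overlapping windows so each chunk
    fits a transformer's input limit. Window/stride values are illustrative."""
    if len(tokens) <= window:
        return [tokens]
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
        start += stride
    return chunks

def aggregate(chunk_probs: list[float]) -> float:
    """One simple way to combine per-chunk positive-class probabilities:
    take the maximum (a note is flagged if any window looks positive)."""
    return max(chunk_probs)

note = [f"tok{i}" for i in range(1200)]
windows = sliding_windows(note, window=512, stride=256)
print(len(windows), [len(w) for w in windows])
```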


Assuntos
Classificação Internacional de Doenças , Processamento de Linguagem Natural , Humanos , Aprendizado de Máquina , Registros Eletrônicos de Saúde , Eletrônica
6.
J Am Med Inform Assoc ; 30(6): 1158-1166, 2023 05 19.
Article in English | MEDLINE | ID: mdl-37043759

ABSTRACT

OBJECTIVE: Severe infection can lead to organ dysfunction and sepsis. Identifying subphenotypes of infected patients is essential for personalized management. It is unknown how different time series clustering algorithms compare in identifying these subphenotypes. MATERIALS AND METHODS: Patients with suspected infection admitted between 2014 and 2019 to 4 hospitals in Emory healthcare were included, split into separate training and validation cohorts. Dynamic time warping (DTW) was applied to vital signs from the first 8 h of hospitalization, and hierarchical clustering (DTW-HC) and partition around medoids (DTW-PAM) were used to cluster patients into subphenotypes. DTW-HC, DTW-PAM, and a previously published group-based trajectory model (GBTM) were evaluated for agreement in subphenotype clusters, trajectory patterns, and subphenotype associations with clinical outcomes and treatment responses. RESULTS: There were 12 473 patients in training and 8256 patients in validation cohorts. DTW-HC, DTW-PAM, and GBTM models resulted in 4 consistent vitals trajectory patterns with significant agreement in clustering (71-80% agreement, P < .001): group A was hyperthermic, tachycardic, tachypneic, and hypotensive. Group B was hyperthermic, tachycardic, tachypneic, and hypertensive. Groups C and D had lower temperatures, heart rates, and respiratory rates, with group C normotensive and group D hypotensive. Group A had higher odds ratio of 30-day inpatient mortality (P < .01) and group D had significant mortality benefit from balanced crystalloids compared to saline (P < .01) in all 3 models. DISCUSSION: DTW- and GBTM-based clustering algorithms applied to vital signs in infected patients identified consistent subphenotypes with distinct clinical outcomes and treatment responses. CONCLUSION: Time series clustering with distinct computational approaches demonstrate similar performance and significant agreement in the resulting subphenotypes.
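Dynamic time warping, the distance underlying both DTW-HC and DTW-PAM above, can be computed with the classic quadratic-time dynamic program. A minimal sketch of the distance alone (the clustering built on top of it is omitted):

```python
import math

def dtw_distance(a: list[float], b: list[float]) -> float:
    """Dynamic time warping distance between two vital-sign series,
    using the classic O(n*m) dynamic program with |x - y| as local cost."""
    n, m = len(a), len(b)
    # dp[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    dp = [[math.inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# Two series with the same shape but shifted in time align perfectly.
print(dtw_distance([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0, 0]))  # 0.0
```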


Assuntos
Algoritmos , Febre , Humanos , Fatores de Tempo , Análise por Conglomerados , Pacientes
7.
Infect Control Hosp Epidemiol ; 44(7): 1085-1092, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36102331

ABSTRACT

OBJECTIVE: We evaluated the impact of test-order frequency per diarrheal episode on Clostridioides difficile infection (CDI) incidence estimates in a sample of hospitals at 2 CDC Emerging Infections Program (EIP) sites. DESIGN: Observational survey. SETTING: Inpatients at 5 acute-care hospitals in Rochester, New York, and Atlanta, Georgia, during two 10-workday periods in 2020 and 2021. OUTCOMES: We calculated diarrhea incidence, testing frequency, and CDI positivity (defined as any positive NAAT result) across strata. Predictors of CDI testing and positivity were assessed using modified Poisson regression. Population estimates of incidence using modified Emerging Infections Program methodology were compared between sites using the Mantel-Haenszel summary rate ratio. RESULTS: Surveillance of 38,365 patient days identified 860 diarrhea cases from 107 patient-care units mapped to 26 unique NHSN-defined location types. Incidence of diarrhea was 22.4 per 1,000 patient-days (medians, 25.8 for Rochester and 16.2 for Atlanta; P < .01). Similar proportions of diarrhea cases were hospital onset (66%) at both sites. Overall, 35% of patients with diarrhea were tested for CDI, but this differed by site: 21% in Rochester and 49% in Atlanta (P < .01). Regression models identified location type (ie, oncology or critical care) and laxative use as predictive of CDI test ordering. Adjusting for these factors, CDI testing was 49% less likely in Rochester than Atlanta (adjusted rate ratio, 0.51; 95% confidence interval [CI], 0.40-0.63). Population estimates in Rochester had a 38% lower incidence of CDI than Atlanta (summary rate ratio, 0.62; 95% CI, 0.54-0.71). CONCLUSION: Accounting for patient-specific factors that influence CDI test ordering, differences in testing practices between sites remain and likely contribute to regional differences in surveillance estimates.


Assuntos
Clostridioides difficile , Infecções por Clostridium , Infecção Hospitalar , Humanos , Pacientes Internados , Georgia/epidemiologia , New York/epidemiologia , Hospitais , Diarreia/diagnóstico , Diarreia/epidemiologia , Infecções por Clostridium/diagnóstico , Infecções por Clostridium/epidemiologia , Infecção Hospitalar/diagnóstico , Infecção Hospitalar/epidemiologia , Inquéritos e Questionários
8.
Intensive Care Med ; 48(11): 1582-1592, 2022 11.
Article in English | MEDLINE | ID: mdl-36152041

ABSTRACT

PURPOSE: Sepsis is a heterogeneous syndrome and identification of sub-phenotypes is essential. This study used trajectories of vital signs to develop and validate sub-phenotypes and investigated the interaction of sub-phenotypes with treatment using randomized controlled trial data. METHODS: All patients with suspected infection admitted to four academic hospitals in Emory Healthcare between 2014-2017 (training cohort) and 2018-2019 (validation cohort) were included. Group-based trajectory modeling was applied to vital signs from the first 8 h of hospitalization to develop and validate vitals trajectory sub-phenotypes. The associations between sub-phenotypes and outcomes were evaluated in patients with sepsis. The interaction between sub-phenotype and treatment with balanced crystalloids versus saline was tested in a secondary analysis of SMART (Isotonic Solutions and Major Adverse Renal Events Trial). RESULTS: There were 12,473 patients with suspected infection in training and 8256 patients in validation cohorts, and 4 vitals trajectory sub-phenotypes were found. Group A (N = 3483, 28%) were hyperthermic, tachycardic, tachypneic, and hypotensive. Group B (N = 1578, 13%) were hyperthermic, tachycardic, tachypneic (not as pronounced as Group A) and hypertensive. Groups C (N = 4044, 32%) and D (N = 3368, 27%) had lower temperatures, heart rates, and respiratory rates, with Group C normotensive and Group D hypotensive. In the 6,919 patients with sepsis, Groups A and B were younger while Groups C and D were older. Group A had the lowest prevalence of congestive heart failure, hypertension, diabetes mellitus, and chronic kidney disease, while Group B had the highest prevalence. Groups A and D had the highest vasopressor use (p < 0.001 for all analyses above). In logistic regression, 30-day mortality was significantly higher in Groups A and D (p < 0.001 and p = 0.03, respectively). In the SMART trial, sub-phenotype significantly modified treatment effect (p = 0.03). 
Group D had significantly lower odds of mortality with balanced crystalloids compared to saline (odds ratio (OR) 0.39, 95% confidence interval (CI) 0.23-0.67, p < 0.001). CONCLUSION: Sepsis sub-phenotypes based on vital sign trajectory were consistent across cohorts, had distinct outcomes, and different responses to treatment with balanced crystalloids versus saline.


Assuntos
Sepse , Humanos , Mortalidade Hospitalar , Soluções Cristaloides , Soluções Isotônicas , Sepse/diagnóstico , Sepse/terapia , Sinais Vitais
9.
Physiol Meas ; 43(8)2022 08 26.
Article in English | MEDLINE | ID: mdl-35815673

ABSTRACT

Objective. The standard twelve-lead electrocardiogram (ECG) is a widely used tool for monitoring cardiac function and diagnosing cardiac disorders. The development of smaller, lower-cost, and easier-to-use ECG devices may improve access to cardiac care in lower-resource environments, but the diagnostic potential of these devices is unclear. This work explores these issues through a public competition: the 2021 PhysioNet Challenge. In addition, we explore the potential for performance boosting through a meta-learning approach. Approach. We sourced 131,149 twelve-lead ECG recordings from ten international sources. We posted 88,253 annotated recordings as public training data and withheld the remaining recordings as hidden validation and test data. We challenged teams to submit containerized, open-source algorithms for diagnosing cardiac abnormalities using various ECG lead combinations, including the code for training their algorithms. We designed and scored the algorithms using an evaluation metric that captures the risks of different misdiagnoses for 30 conditions. After the Challenge, we implemented a semi-consensus voting model on all working algorithms. Main results. A total of 68 teams submitted 1,056 algorithms during the Challenge, providing a variety of automated approaches from both academia and industry. The performance differences across the different lead combinations were smaller than the performance differences across the different test databases, showing that generalizability posed a larger challenge to the algorithms than the choice of ECG leads. A voting model improved performance by 3.5%. Significance. The use of different ECG lead combinations allowed us to assess the diagnostic potential of reduced-lead ECG recordings, and the use of different data sources allowed us to assess the generalizability of the algorithms to diverse institutions and populations.
The submission of working, open-source code for both training and testing and the use of a novel evaluation metric improved the reproducibility, generalizability, and applicability of the research conducted during the Challenge.
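The abstract does not detail the "semi-consensus voting model"; the simplest related construction is a per-recording majority vote across algorithms, sketched below with hypothetical ECG labels.

```python
from collections import Counter

def majority_vote(predictions: list[list[str]]) -> list[str]:
    """Combine the label outputs of several classifiers by per-sample
    majority vote (ties broken by first-seen label). This is a plain
    majority vote, not the Challenge's exact semi-consensus scheme."""
    combined = []
    for labels in zip(*predictions):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three hypothetical algorithms labeling four ECG recordings.
algo1 = ["AF", "NSR", "AF", "PVC"]
algo2 = ["AF", "AF", "NSR", "PVC"]
algo3 = ["NSR", "NSR", "AF", "NSR"]
print(majority_vote([algo1, algo2, algo3]))
```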


Assuntos
Eletrocardiografia , Processamento de Sinais Assistido por Computador , Algoritmos , Bases de Dados Factuais , Eletrocardiografia/métodos , Reprodutibilidade dos Testes
10.
J Infect Dis ; 226(9): 1577-1587, 2022 11 01.
Article in English | MEDLINE | ID: mdl-35877413

ABSTRACT

Detecting severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is essential for diagnosis, treatment, and infection control. Polymerase chain reaction (PCR) fails to distinguish acute from resolved infections, as RNA is frequently detected after infectiousness. We hypothesized that nucleocapsid in blood marks acute infection with the potential to enhance isolation and treatment strategies. In a retrospective serosurvey of inpatient and outpatient encounters, we categorized samples along an infection timeline using timing of SARS-CoV-2 testing and symptomatology. Among 1860 specimens from 1607 patients, the highest levels and frequency of antigenemia were observed in samples from acute SARS-CoV-2 infection. Antigenemia was higher in seronegative individuals and in those with severe disease. In our analysis, antigenemia exhibited 85.8% sensitivity and 98.6% specificity as a biomarker for acute coronavirus disease 2019 (COVID-19). Thus, antigenemia sensitively and specifically marks acute SARS-CoV-2 infection. Further study is warranted to determine whether antigenemia may aid individualized assessment of active COVID-19.
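Sensitivity and specificity follow directly from confusion-matrix counts. The counts below are hypothetical values chosen to land near the reported 85.8% and 98.6%; the abstract gives only the proportions, not the underlying cells.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts, not the study's actual cell counts.
sens, spec = sensitivity_specificity(tp=127, fn=21, tn=1421, fp=20)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```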


Assuntos
COVID-19 , Humanos , SARS-CoV-2 , Teste para COVID-19 , Estudos Retrospectivos , Sensibilidade e Especificidade , Nucleocapsídeo , Biomarcadores
11.
J Am Coll Emerg Physicians Open ; 3(2): e12695, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35434709

ABSTRACT

Background: Prior data has demonstrated increased mortality in hospitalized patients with acute heart failure (AHF) and troponin elevation. No data has specifically examined the prognostic significance of troponin elevation in patients with AHF discharged after emergency department (ED) management. Objective: Evaluate the relationship between troponin elevation and outcomes in patients with AHF who are treated and released from the ED. Methods: This was a secondary analysis of the Get with the Guidelines to Reduce Disparities in AHF Patients Discharged from the ED (GUIDED-HF) trial, a randomized, controlled trial of ED patients with AHF who were discharged. Patients with elevated conventional troponin not due to acute coronary syndrome (ACS) were included. Our primary outcome was a composite endpoint: time to 30-day cardiovascular death and/or heart failure-related events. Results: Of the 491 subjects included in the GUIDED-HF trial, 418 had troponin measured during the ED evaluation and 66 (16%) had troponin values above the 99th percentile. Median age was 63 years (interquartile range, 54-70), 62% (n = 261) were male, 63% (n = 265) were Black, and 16% (n = 67) experienced our primary outcome. There were no differences in our primary outcome between those with and without troponin elevation (12/66, 18.1% vs 55/352, 15.6%; P = 0.60). This effect was maintained regardless of assignment to usual care or the intervention arm. In multivariable regression analysis, there was no association between our primary outcome and elevated troponin (hazard ratio, 1.00; 95% confidence interval,  0.49-2.01, P = 0.994). Conclusion: If confirmed in a larger cohort, these findings may facilitate safe ED discharge for a group of patients with AHF without ACS when an elevated troponin is the primary reason for admission.

12.
Crit Care Med ; 50(2): 212-223, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35100194

ABSTRACT

OBJECTIVES: Body temperature trajectories of infected patients are associated with specific immune profiles and survival. We determined the association between temperature trajectories and distinct manifestations of coronavirus disease 2019. DESIGN: Retrospective observational study. SETTING: Four hospitals within an academic healthcare system from March 2020 to February 2021. PATIENTS: All adult patients hospitalized with coronavirus disease 2019. INTERVENTIONS: Using a validated group-based trajectory model, we classified patients into four previously defined temperature trajectory subphenotypes using oral temperature measurements from the first 72 hours of hospitalization. Clinical characteristics, biomarkers, and outcomes were compared between subphenotypes. MEASUREMENTS AND MAIN RESULTS: The 5,903 hospitalized coronavirus disease 2019 patients were classified into four subphenotypes: hyperthermic slow resolvers (n = 1,452, 25%), hyperthermic fast resolvers (1,469, 25%), normothermics (2,126, 36%), and hypothermics (856, 15%). Hypothermics had abnormal coagulation markers, with the highest d-dimer and fibrin monomers (p < 0.001) and the highest prevalence of cerebrovascular accidents (10%, p = 0.001). The prevalence of venous thromboembolism was significantly different between subphenotypes (p = 0.005), with the highest rate in hypothermics (8.5%) and lowest in hyperthermic slow resolvers (5.1%). Hyperthermic slow resolvers had abnormal inflammatory markers, with the highest C-reactive protein, ferritin, and interleukin-6 (p < 0.001). Hyperthermic slow resolvers had increased odds of mechanical ventilation, vasopressors, and 30-day inpatient mortality (odds ratio, 1.58; 95% CI, 1.13-2.19) compared with hyperthermic fast resolvers. Over the course of the pandemic, we observed a drastic decrease in the prevalence of hyperthermic slow resolvers, from representing 53% of admissions in March 2020 to less than 15% by 2021. 
We found that dexamethasone use was associated with significant reduction in probability of hyperthermic slow resolvers membership (27% reduction; 95% CI, 23-31%; p < 0.001). CONCLUSIONS: Hypothermics had abnormal coagulation markers, suggesting a hypercoagulable subphenotype. Hyperthermic slow resolvers had elevated inflammatory markers and the highest odds of mortality, suggesting a hyperinflammatory subphenotype. Future work should investigate whether temperature subphenotypes benefit from targeted antithrombotic and anti-inflammatory strategies.


Assuntos
Temperatura Corporal , COVID-19/patologia , Hipertermia/patologia , Hipotermia/patologia , Fenótipo , Centros Médicos Acadêmicos , Idoso , Anti-Inflamatórios/uso terapêutico , Biomarcadores/sangue , Coagulação Sanguínea , Estudos de Coortes , Dexametasona/uso terapêutico , Feminino , Humanos , Inflamação , Masculino , Pessoa de Meia-Idade , Escores de Disfunção Orgânica , Estudos Retrospectivos , SARS-CoV-2
13.
Anesth Analg ; 134(2): 380-388, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34673658

ABSTRACT

BACKGROUND: The retrospective analysis of electroencephalogram (EEG) signals acquired from patients under general anesthesia is crucial in understanding the patient's unconscious brain's state. However, the creation of such database is often tedious and cumbersome and involves human labor. Hence, we developed a Raspberry Pi-based system for archiving EEG signals recorded from patients under anesthesia in operating rooms (ORs) with minimal human involvement. METHODS: Using this system, we archived patient EEG signals from over 500 unique surgeries at the Emory University Orthopaedics and Spine Hospital, Atlanta, for about 18 months. For this, we developed a software package that runs on a Raspberry Pi and archives patient EEG signals from a SedLine Root EEG Monitor (Masimo) to a secure Health Insurance Portability and Accountability Act (HIPAA) compliant cloud storage. The OR number corresponding to each surgery was archived along with the EEG signal to facilitate retrospective EEG analysis. We retrospectively processed the archived EEG signals and performed signal quality checks. We also proposed a formula to compute the proportion of true EEG signal and calculated the corresponding statistics. Further, we curated and interleaved patient medical record information with the corresponding EEG signals. RESULTS: We retrospectively processed the EEG signals to demonstrate a statistically significant negative correlation between the relative alpha power (8-12 Hz) of the EEG signal captured under anesthesia and the patient's age. CONCLUSIONS: Our system is a standalone EEG archiver developed using low cost and readily available hardware. We demonstrated that one could create a large-scale EEG database with minimal human involvement. Moreover, we showed that the captured EEG signal is of good quality for retrospective analysis and combined the EEG signal with the patient medical records. 
This project's software has been released under an open-source license to enable others to use and contribute.
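The relative alpha power used in the age-correlation analysis is the share of spectral power in the 8-12 Hz band. A self-contained sketch using a direct DFT on a synthetic signal; the 1-30 Hz normalization band is an assumption, since the abstract does not specify one.

```python
import cmath, math

def relative_alpha_power(signal: list[float], fs: float) -> float:
    """Fraction of spectral power in the alpha band (8-12 Hz) relative to
    total power in 1-30 Hz, computed with a direct DFT. The alpha band
    follows the abstract; the 1-30 Hz total band is an illustrative choice."""
    n = len(signal)
    power = {}
    for k in range(1, n // 2):  # skip DC and Nyquist
        freq = k * fs / n
        x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power[freq] = abs(x) ** 2
    total = sum(p for f, p in power.items() if 1 <= f <= 30)
    alpha = sum(p for f, p in power.items() if 8 <= f <= 12)
    return alpha / total

fs = 128.0
t = [i / fs for i in range(128)]
alpha_wave = [math.sin(2 * math.pi * 10 * ti) for ti in t]  # pure 10 Hz tone
print(round(relative_alpha_power(alpha_wave, fs), 3))
```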


Assuntos
Curadoria de Dados/métodos , Eletroencefalografia/instrumentação , Eletroencefalografia/métodos , Monitorização Intraoperatória/instrumentação , Monitorização Intraoperatória/métodos , Adulto , Idoso , Idoso de 80 Anos ou mais , Gerenciamento de Dados/instrumentação , Gerenciamento de Dados/métodos , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Estudos Retrospectivos , Adulto Jovem
14.
Crit Care Med ; 50(2): 245-255, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34259667

ABSTRACT

OBJECTIVES: To determine the association between time period of hospitalization and hospital mortality among critically ill adults with coronavirus disease 2019. DESIGN: Observational cohort study from March 6, 2020, to January 31, 2021. SETTING: ICUs at four hospitals within an academic health center network in Atlanta, GA. PATIENTS: Adults greater than or equal to 18 years with coronavirus disease 2019 admitted to an ICU during the study period (i.e., Surge 1: March to April, Lull 1: May to June, Surge 2: July to August, Lull 2: September to November, Surge 3: December to January). MEASUREMENTS AND MAIN RESULTS: Among 1,686 patients with coronavirus disease 2019 admitted to an ICU during the study period, all-cause hospital mortality was 29.7%. Mortality differed significantly over time: 28.7% in Surge 1, 21.3% in Lull 1, 25.2% in Surge 2, 30.2% in Lull 2, 34.7% in Surge 3 (p = 0.007). Mortality was significantly associated with 1) preexisting risk factors (older age, race, ethnicity, lower body mass index, higher Elixhauser Comorbidity Index, admission from a nursing home); 2) clinical status at ICU admission (higher Sequential Organ Failure Assessment score, higher d-dimer, higher C-reactive protein); and 3) ICU interventions (receipt of mechanical ventilation, vasopressors, renal replacement therapy, inhaled vasodilators). After adjusting for baseline and clinical variables, there was a significantly increased risk of mortality associated with admission during Lull 2 (relative risk, 1.37 [95% CI = 1.03-1.81]) and Surge 3 (relative risk, 1.35 [95% CI = 1.04-1.77]) as compared to Surge 1. CONCLUSIONS: Despite increased experience and evidence-based treatments, the risk of death for patients admitted to the ICU with coronavirus disease 2019 was highest during the fall and winter of 2020. Reasons for this increased mortality are not clear.


Subjects
COVID-19/mortality, Hospital Mortality/trends, Hospitalization/trends, Intensive Care Units/trends, SARS-CoV-2, Academic Medical Centers, Aged, Cohort Studies, Critical Illness, Female, Humans, Male, Middle Aged, Time Factors
16.
Crit Care Explor ; 3(5): e0402, 2021 May.
Article in English | MEDLINE | ID: mdl-34079945

ABSTRACT

BACKGROUND: Acute respiratory failure occurs frequently in hospitalized patients and often begins outside the ICU, associated with increased length of stay, cost, and mortality. Delays in decompensation recognition are associated with worse outcomes. OBJECTIVES: The objective of this study is to predict acute respiratory failure requiring any advanced respiratory support (including noninvasive ventilation). With the advent of the coronavirus disease pandemic, concern regarding acute respiratory failure has increased. DERIVATION COHORT: All admission encounters from January 2014 to June 2017 from three hospitals in the Emory Healthcare network (82,699). VALIDATION COHORT: External validation cohort: all admission encounters from January 2014 to June 2017 from a fourth hospital in the Emory Healthcare network (40,143). Temporal validation cohort: all admission encounters from February to April 2020 from four hospitals in the Emory Healthcare network coronavirus disease tested (2,564) and coronavirus disease positive (389). PREDICTION MODEL: All admission encounters had vital signs, laboratory, and demographic data extracted. Exclusion criteria included invasive mechanical ventilation started within the operating room or advanced respiratory support within the first 8 hours of admission. Encounters were discretized into hour intervals from 8 hours after admission to discharge or advanced respiratory support initiation and binary labeled for advanced respiratory support. Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment, our eXtreme Gradient Boosting-based algorithm, was compared against Modified Early Warning Score. 
RESULTS: Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment had significantly better discrimination than Modified Early Warning Score (area under the receiver operating characteristic curve 0.85 vs 0.57 [test], 0.84 vs 0.61 [external validation]). Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment maintained a positive predictive value (0.31-0.21) similar to that of Modified Early Warning Score greater than 4 (0.29-0.25) while identifying 6.62 (validation) to 9.58 (test) times more true positives. Furthermore, Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment performed more effectively in temporal validation (area under the receiver operating characteristic curve 0.86 [coronavirus disease tested], 0.93 [coronavirus disease positive]), while identifying 4.25-4.51 times more true positives. CONCLUSIONS: Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment is more effective than Modified Early Warning Score in predicting respiratory failure requiring advanced respiratory support at external validation and in coronavirus disease 2019 patients. Silent prospective validation is necessary before local deployment.
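The discrimination comparison in this abstract rests on the area under the ROC curve, which equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation). A minimal stdlib Python sketch with invented toy scores (not the study's data; both score vectors are illustrative only):

```python
def auc(scores, labels):
    # AUC via Mann-Whitney: P(positive outranks negative), ties count as 0.5
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: a continuous model score vs a coarse integer early-warning score
labels      = [1, 1, 1, 0, 0, 0, 0, 1]
model_score = [0.9, 0.8, 0.6, 0.3, 0.2, 0.4, 0.1, 0.7]
mews_like   = [4, 2, 3, 3, 1, 2, 0, 5]

print(auc(model_score, labels))  # 1.0 (perfect separation on this toy data)
print(auc(mews_like, labels))    # 0.875
```

On these toy vectors the continuous score separates the classes perfectly while the coarse integer score does not, mirroring the kind of discrimination gap the abstract reports between the gradient-boosting model and the Modified Early Warning Score.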

17.
J Occup Environ Med ; 63(10): 839-846, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-34091579

ABSTRACT

OBJECTIVE: To evaluate the associations between frequency of business travel and health behaviors and adiposity. METHODS: Retrospective cross-sectional analysis of de-identified electronic medical records from 795 corporate physical exams. RESULTS: Business travel frequency demonstrates a curvilinear relationship with body mass index and body composition in men and women, with domestic and international travel. Linear and quadratic term beta coefficients indicate stronger associations between the sum of domestic and international travel and BMI, body fat percentage, and visceral adipose tissue in women than men, after accounting for age, exercise, and sleep. Based on our male sample population, international travel frequency has a greater influence on adiposity than summed (mostly domestic) travel. CONCLUSIONS: Frequent business travel adversely affects body composition, with differences by gender and type of travel.


Subjects
Adiposity, Obesity, Body Mass Index, Cross-Sectional Studies, Female, Health Behavior, Humans, Male, Retrospective Studies
18.
Am J Health Syst Pharm ; 78(18): 1681-1690, 2021 Sep 07.
Article in English | MEDLINE | ID: mdl-33954428

ABSTRACT

PURPOSE: We evaluated a previously published risk model (Novant model) to identify patients at risk for healthcare facility-onset Clostridioides difficile infection (HCFO-CDI) at 2 hospitals within a large health system and compared its predictive value to that of a new model developed based on local findings. METHODS: We conducted a retrospective case-control study including adult patients admitted from July 1, 2016, to July 1, 2018. Patients with HCFO-CDI who received systemic antibiotics were included as cases and were matched 1 to 1 with controls (who received systemic antibiotics without developing HCFO-CDI). We extracted chart data on patient risk factors for CDI, including those identified in prior studies and those included in the Novant model. We applied the Novant model to our patient population to assess the model's utility and generated a local model using logistic regression-based prediction scores. A receiver operating characteristic area under the curve (ROC-AUC) score was determined for each model. RESULTS: We included 362 patients, with 161 controls and 161 cases. The Novant model had a ROC-AUC of 0.62 in our population. Our local model using risk factors identifiable at hospital admission included hospitalization within 90 days of admission (adjusted odds ratio [OR], 3.52; 95% confidence interval [CI], 2.06-6.04), hematologic malignancy (adjusted OR, 12.87; 95% CI, 3.70-44.80), and solid tumor malignancy (adjusted OR, 4.76; 95% CI, 1.27-17.80) as HCFO-CDI predictors and had a ROC-AUC score of 0.74. CONCLUSION: The Novant model evaluating risk factors identifiable at admission poorly predicted HCFO-CDI in our population, while our local model was a fair predictor. These findings highlight the need for institutions to review local risk factors to adjust modeling for their patient population.


Subjects
Clostridioides difficile, Clostridium Infections, Cross Infection, Adult, Case-Control Studies, Clostridioides, Clostridium Infections/diagnosis, Clostridium Infections/epidemiology, Cross Infection/diagnosis, Cross Infection/epidemiology, Delivery of Health Care, Humans, Retrospective Studies, Risk Assessment
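The local model in the entry above turns logistic-regression coefficients into prediction scores: each adjusted odds ratio contributes its log to a patient's log-odds, which a sigmoid maps to a probability. A hedged sketch of that arithmetic (only the three odds ratios come from the abstract; the intercept and the `cdi_risk` helper are hypothetical placeholders, since the abstract does not report them):

```python
import math

# Adjusted odds ratios reported for the local HCFO-CDI model
ODDS_RATIOS = {
    "recent_hospitalization": 3.52,   # hospitalization within 90 days of admission
    "hematologic_malignancy": 12.87,
    "solid_tumor_malignancy": 4.76,
}
INTERCEPT = -2.0  # hypothetical baseline log-odds, not reported in the abstract

def cdi_risk(patient):
    """Predicted probability of HCFO-CDI from a dict of 0/1 risk factors."""
    log_odds = INTERCEPT + sum(
        math.log(or_) * patient.get(factor, 0)
        for factor, or_ in ODDS_RATIOS.items()
    )
    return 1.0 / (1.0 + math.exp(-log_odds))

print(round(cdi_risk({"recent_hospitalization": 1}), 3))  # 0.323
```

With the assumed intercept, a patient with no listed risk factors scores sigmoid(-2.0) ≈ 0.12, and each present factor multiplies the odds by its reported ratio; calibrating the intercept against local incidence would be required before any real use.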
20.
JAMA Cardiol ; 6(2): 200-208, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33206126

ABSTRACT

Importance: Up to 20% of patients who present to the emergency department (ED) with acute heart failure (AHF) are discharged without hospitalization. Compared with rates in hospitalized patients, readmission and mortality are worse for ED patients. Objective: To assess the impact of a self-care intervention on 90-day outcomes in patients with AHF who are discharged from the ED. Design, Setting, and Participants: Get With the Guidelines in Emergency Department Patients With Heart Failure was an unblinded, parallel-group, multicenter randomized trial. Patients were randomized 1:1 to usual care vs a tailored self-care intervention. Patients with AHF discharged after ED-based management at 15 geographically diverse EDs were included. The trial was conducted from October 28, 2015, to September 5, 2019. Interventions: Home visit within 7 days of discharge and twice-monthly telephone-based self-care coaching for 3 months. Main Outcomes and Measures: The primary outcome was a global rank of cardiovascular death, HF-related events (unscheduled clinic visit due to HF, ED revisit, or hospitalization), and changes in the Kansas City Cardiomyopathy Questionnaire-12 (KCCQ-12) summary score (SS) at 90 days. Key secondary outcomes included the global rank outcome at 30 days and changes in the KCCQ-12 SS score at 30 and 90 days. Intention-to-treat analysis was performed for the primary, secondary, and safety outcomes. Per-protocol analysis was conducted including patients who completed a home visit and had scheduled outpatient follow-up in the intervention arm. Results: Owing to slow enrollment, 479 of a planned 700 patients were randomized: 235 to the intervention arm and 244 to the usual care arm. The median age was 63.0 years (interquartile range, 54.7-70.2), 302 patients (63%) were African American, 305 patients (64%) were men, and 178 patients (37%) had a previous ejection fraction greater than 50%. 
There was no significant difference in the primary outcome between patients in the intervention vs usual care arm (hazard ratio [HR], 0.89; 95% CI, 0.73-1.10; P = .28). At day 30, patients in the intervention arm had significantly better global rank (HR, 0.80; 95% CI, 0.64-0.99; P = .04) and a 5.5-point higher KCCQ-12 SS (95% CI, 1.3-9.7; P = .01), while at day 90, the KCCQ-12 SS was 2.7 points higher (95% CI, -1.9 to 7.2; P = .25). Conclusions and Relevance: The self-care intervention did not improve the primary global rank outcome at 90 days in this trial. However, benefit was observed in the global rank and KCCQ-12 SS at 30 days, suggesting that an early benefit of a tailored self-care program initiated at an ED visit for AHF was not sustained through 90 days. Trial Registration: ClinicalTrials.gov Identifier: NCT02519283.


Subjects
Ambulatory Care, Cardiovascular Diseases/mortality, Hospital Emergency Service, Heart Failure/therapy, Patient Discharge, Quality of Life, Self Care/methods, Acute Disease, Aged, Female, Heart Failure/physiopathology, Hospitalization/statistics & numerical data, House Calls, Humans, Male, Middle Aged, Telemedicine