Results 1 - 20 of 113
1.
Crit Care Explor ; 6(8): e1131, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39132980

ABSTRACT

BACKGROUND: Surrogates, proxies, and clinicians making shared treatment decisions for patients who have lost decision-making capacity often fail to honor patients' wishes, due to stress, time pressures, misunderstanding patient values, and projecting personal biases. Advance directives intend to align care with patient values but are limited by low completion rates and application to only a subset of medical decisions. Here, we investigate the potential of large language models (LLMs) to incorporate patient values in supporting critical care clinical decision-making for incapacitated patients in a proof-of-concept study. METHODS: We simulated text-based scenarios for 50 decisionally incapacitated patients for whom a medical condition required imminent clinical decisions regarding specific interventions. For each patient, we also simulated five unique value profiles captured using alternative formats: numeric ranking questionnaires, text-based questionnaires, and free-text narratives. We used pre-trained generative LLMs for two tasks: 1) text extraction of the treatments under consideration and 2) prompt-based question-answering to generate a recommendation in response to the scenario information, extracted treatment, and patient value profiles. Model outputs were compared with adjudications by three domain experts who independently evaluated each scenario and decision. RESULTS AND CONCLUSIONS: Automated extractions of the treatment in question were accurate for 88% (n = 44/50) of scenarios. LLM treatment recommendations received an average Likert score by the adjudicators of 3.92 of 5.00 (five being best) across all patients for being medically plausible and reasonable treatment recommendations, and 3.58 of 5.00 for reflecting the documented values of the patient. Scores were highest when patient values were captured as short, unstructured, and free-text narratives based on simulated patient profiles. 
This proof-of-concept study demonstrates the potential for LLMs to function as support tools for surrogates, proxies, and clinicians aiming to honor the wishes and values of decisionally incapacitated patients.
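The abstract describes two LLM tasks (extracting the treatment in question, then generating a value-conditioned recommendation) but does not give the prompts used. A minimal sketch of that two-task prompt structure, assuming a generic text-in/text-out model; all function names and wording here are hypothetical, not the study's actual prompts:

```python
def build_extraction_prompt(scenario: str) -> str:
    """Task 1: ask the model to name the treatment under consideration."""
    return (
        "You are assisting with critical care decision-making.\n"
        f"Clinical scenario:\n{scenario}\n\n"
        "Identify the specific treatment or intervention under consideration. "
        "Answer with the treatment name only."
    )

def build_recommendation_prompt(scenario: str, treatment: str, value_profile: str) -> str:
    """Task 2: ask for a recommendation grounded in the patient's documented values."""
    return (
        f"Clinical scenario:\n{scenario}\n\n"
        f"Treatment under consideration: {treatment}\n\n"
        f"Patient value profile:\n{value_profile}\n\n"
        "Recommend for or against the treatment, citing which of the "
        "patient's stated values support the recommendation."
    )
```

The second prompt takes the output of the first as input, mirroring the extraction-then-recommendation pipeline the abstract describes.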


Asunto(s)
Apoderado , Humanos , Directivas Anticipadas , Toma de Decisiones , Toma de Decisiones Clínicas/métodos , Prueba de Estudio Conceptual , Encuestas y Cuestionarios , Lenguaje , Cuidados Críticos/métodos
2.
Implement Sci ; 19(1): 57, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103955

ABSTRACT

BACKGROUND: Venous thromboembolism (VTE) is a preventable medical condition which has a substantial impact on patient morbidity, mortality, and disability. Unfortunately, adherence to the published best practices for VTE prevention, based on patient-centered outcomes research (PCOR), is highly variable across U.S. hospitals, which represents a gap between current evidence and clinical practice leading to adverse patient outcomes. This gap is especially large in the case of traumatic brain injury (TBI), where reluctance to initiate VTE prevention due to concerns about potentially increasing the rates of intracranial bleeding drives poor rates of VTE prophylaxis. This is despite research which has shown early initiation of VTE prophylaxis to be safe in TBI without increased risk of delayed neurosurgical intervention or death. Clinical decision support (CDS) is an indispensable solution to close this practice gap; however, design and implementation barriers hinder CDS adoption and successful scaling across health systems. Clinical practice guidelines (CPGs) informed by PCOR evidence can be deployed using CDS systems to narrow the evidence-to-practice gap. In the Scaling AcceptabLE cDs (SCALED) study, we will implement a VTE prevention CPG within an interoperable CDS system and evaluate both CPG effectiveness (improved clinical outcomes) and CDS implementation. METHODS: The SCALED trial is a hybrid type 2 randomized stepped wedge effectiveness-implementation trial to scale the CDS across 4 heterogeneous healthcare systems. Trial outcomes will be assessed using the RE-AIM planning and evaluation framework. Efforts will be made to ensure implementation consistency. Nonetheless, it is expected that CDS adoption will vary across each site. To assess these differences, we will evaluate implementation processes across trial sites using the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework (a determinant framework) and mixed methods.
Finally, it is critical that PCOR CPGs are maintained as evidence evolves. To date, an accepted process for evidence maintenance does not exist. We will pilot a "Living Guideline" process model for the VTE prevention CDS system. DISCUSSION: The stepped wedge hybrid type 2 trial will provide evidence regarding the effectiveness of CDS based on the Berne-Norwood criteria for VTE prevention in patients with TBI. Additionally, it will provide evidence regarding a successful strategy to scale interoperable CDS systems across U.S. healthcare systems, advancing both the fields of implementation science and health informatics. TRIAL REGISTRATION: ClinicalTrials.gov - NCT05628207. Prospectively registered 11/28/2022, https://classic.clinicaltrials.gov/ct2/show/NCT05628207
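In a stepped wedge design like the one described above, every site starts in the control condition and crosses over to the intervention one step at a time, so all sites eventually receive the CDS. A minimal sketch of such an assignment schedule; site names and the fixed crossover order are illustrative (in the actual trial the order would be randomized):

```python
def stepped_wedge_schedule(sites, n_periods):
    """Build a site -> per-period condition schedule for a stepped wedge trial.
    Period 0 is an all-control baseline; one site crosses over to the
    intervention at each subsequent step and stays on it thereafter."""
    if n_periods < len(sites) + 1:
        raise ValueError("need a baseline period plus one step per site")
    schedule = {}
    for step, site in enumerate(sites, start=1):
        schedule[site] = ["control" if p < step else "intervention"
                          for p in range(n_periods)]
    return schedule
```

With 4 sites and 5 periods, the first site listed is exposed to the intervention for 4 periods and the last for 1, which is what lets the design separate intervention effects from secular time trends.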


Subject(s)
Brain Injuries, Traumatic; Decision Support Systems, Clinical; Venous Thromboembolism; Humans; Venous Thromboembolism/prevention & control; Venous Thromboembolism/etiology; Brain Injuries, Traumatic/complications; Practice Guidelines as Topic; Implementation Science; Guideline Adherence
3.
J Surg Res ; 301: 269-279, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38986192

ABSTRACT

INTRODUCTION: The Traumatic Brain Injury - Patient Reported Outcome (TBI-PRO) model was previously derived to predict long-term patient satisfaction as assessed by the Quality of Life After Brain Injury (QOLIBRI) score. The aim of this study was to externally and prospectively validate the TBI-PRO model for predicting long-term patient-reported outcomes and to derive a new model using a larger dataset of older adults with TBI. METHODS: Patients admitted to a Level I trauma center with TBI were prospectively followed for 1 y after injury. Outcomes predicted by the TBI-PRO model based on admission findings were compared to actual QOLIBRI scores reported by patients at 3, 6, and 12 mo. When deriving a new model, the Collaborative European NeuroTrauma Effectiveness Research in TBI and the Transforming Research and Clinical Knowledge in Traumatic Brain Injury databases were used to identify older adults (≥50 y) with TBI from 2014 to 2018. Bayesian additive regression trees were used to identify predictive admission covariates. The coefficient of determination was used to assess the fitness of the model. RESULTS: For prospective validation, a total of 140 patients were assessed at 3 mo, with follow-up from 69 patients at 6 mo and 13 patients at 12 mo postinjury. The areas under the receiver operating characteristic curve of the TBI-PRO model for predicting favorable outcomes at 3, 6, and 12 mo were 0.65, 0.57, and 0.62, respectively. When deriving a novel predictive model, a total of 1521 patients (80%) were used in the derivation dataset while 384 (20%) were used in the validation dataset. A past medical history of heart conditions, initial hospital length of stay, admission systolic blood pressure, age, number of reactive pupils on admission, and the need for craniectomy were most predictive of the long-term QOLIBRI-Overall Scale.
The coefficients of determination for the validation model including only the most predictive variables were 0.28, 0.19, and 0.27 at 3, 6, and 12 mo, respectively. CONCLUSIONS: In the present study, prospective validation of the previously derived TBI-PRO model failed to accurately predict long-term patient-reported outcome measures in TBI. Additionally, the derivation of a novel model in older adults using a larger database showed poor accuracy in predicting long-term health-related quality of life. This study demonstrates limitations of current targeted approaches in TBI care. It provides a framework for future studies, and more targeted datasets, looking to assess long-term quality of life based upon early hospital variables, and can serve as a starting point for future predictive analysis.
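The model-fitness statistic reported above, the coefficient of determination (R²), is defined as 1 - SS_res/SS_tot. A minimal sketch of that computation (the study's actual values come from its own fitted models, not this code):

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    observed and predicted are parallel sequences of outcome values."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)   # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # residual variance
    return 1 - ss_res / ss_tot
```

Predicting the mean for every patient yields R² = 0, so values like 0.19-0.28 indicate the model explains only a small fraction of outcome variance, consistent with the conclusion of poor accuracy.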

5.
Crit Care Med ; 52(9): e439-e449, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38832836

ABSTRACT

OBJECTIVES: To develop an electronic descriptor of clinical deterioration for hospitalized patients that predicts short-term mortality and identifies patient deterioration earlier than current standard definitions. DESIGN: A retrospective study using exploratory record review, quantitative analysis, and regression analyses. SETTING: Twelve-hospital community-academic health system. PATIENTS: All adult patients with an acute hospital encounter between January 1, 2018, and December 31, 2022. INTERVENTIONS: Not applicable. MEASUREMENTS AND MAIN RESULTS: Clinical trigger events were selected and used to create a revised electronic definition of deterioration, encompassing signals of respiratory failure, bleeding, and hypotension occurring in proximity to ICU transfer. Patients meeting the revised definition were 12.5 times more likely to die within 7 days (adjusted odds ratio 12.5; 95% CI, 8.9-17.4) and had a 95.3% longer length of stay (95% CI, 88.6-102.3%) compared with those who were transferred to the ICU or died regardless of meeting the revised definition. Among the 1812 patients who met the revised definition of deterioration before ICU transfer (52.4%), the median detection time was 157.0 min earlier (interquartile range 64.0-363.5 min). CONCLUSIONS: The revised definition of deterioration establishes an electronic descriptor of clinical deterioration that is strongly associated with short-term mortality and length of stay and identifies deterioration over 2.5 hours earlier than ICU transfer. Incorporating the revised definition of deterioration into the training and validation of early warning system algorithms may enhance their timeliness and clinical accuracy.


Subject(s)
Clinical Deterioration; Intensive Care Units; Patient Transfer; Humans; Male; Patient Transfer/statistics & numerical data; Retrospective Studies; Female; Middle Aged; Aged; Hospital Mortality; Length of Stay/statistics & numerical data; Adult
6.
Trauma Surg Acute Care Open ; 9(1): e001280, 2024.
Article in English | MEDLINE | ID: mdl-38737811

ABSTRACT

Background: Tiered trauma team activation (TTA) allows systems to optimally allocate resources to an injured patient. Target undertriage and overtriage rates of <5% and <35% are difficult for centers to achieve, and performance variability exists. The objective of this study was to optimize and externally validate a previously developed hospital trauma triage prediction model to predict the need for emergent intervention in 6 hours (NEI-6), an indicator of need for a full TTA. Methods: The model was previously developed and internally validated using data from 31 US trauma centers. Data were collected prospectively at five sites using a mobile application which hosted the NEI-6 model. A weighted multiple logistic regression model was used to retrain and optimize the model using the original data set and a portion of data from one of the prospective sites. The remaining data from the five sites were designated for external validation. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) were used to assess the validation cohort. Subanalyses were performed for age, race, and mechanism of injury. Results: 14 421 patients were included in the training data set and 2476 patients in the external validation data set across five sites. On validation, the model had an overall undertriage rate of 9.1% and overtriage rate of 53.7%, with an AUROC of 0.80 and an AUPRC of 0.63. Blunt injury had an undertriage rate of 8.8%, whereas penetrating injury had 31.2%. For those aged ≥65, the undertriage rate was 8.4%, and for Black or African American patients the undertriage rate was 7.7%. Conclusion: The optimized and externally validated NEI-6 model approaches the recommended undertriage and overtriage rates while significantly reducing variability of TTA across centers for blunt trauma patients. The model performs well for populations that traditionally have high rates of undertriage. Level of evidence: 2.
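The undertriage and overtriage rates reported above can be computed from two per-patient flags: whether the patient needed a full trauma team activation and whether one occurred. Denominator conventions vary between centers (e.g. the Cribari matrix); this sketch uses the patients who needed activation as the undertriage denominator and the patients who received activation as the overtriage denominator, which is one common convention but an assumption here:

```python
def triage_rates(needed_full_tta, got_full_tta):
    """Undertriage: needed a full activation but did not receive one.
    Overtriage: received a full activation but did not need one.
    Inputs are parallel lists of booleans, one element per patient."""
    pairs = list(zip(needed_full_tta, got_full_tta))
    under = sum(1 for need, got in pairs if need and not got)
    over = sum(1 for need, got in pairs if got and not need)
    undertriage_rate = under / sum(needed_full_tta)
    overtriage_rate = over / sum(got_full_tta)
    return undertriage_rate, overtriage_rate
```

The asymmetric targets (<5% undertriage vs. <35% overtriage) reflect that missing a patient who needed full activation is far more dangerous than over-mobilizing resources.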

7.
Digit Health ; 10: 20552076241249925, 2024.
Article in English | MEDLINE | ID: mdl-38708184

ABSTRACT

Objective: Patients and clinicians rarely experience healthcare decisions as snapshots in time, but clinical decision support (CDS) systems often represent decisions as snapshots. This scoping review systematically maps challenges and facilitators to longitudinal CDS that are applied at two or more timepoints for the same decision made by the same patient or clinician. Methods: We searched Embase, PubMed, and Medline databases for articles describing development, validation, or implementation of patient- or clinician-facing longitudinal CDS. Validated quality assessment tools were used for article selection. Challenges and facilitators to longitudinal CDS are reported according to PRISMA-ScR guidelines. Results: Eight articles met inclusion criteria; each article described a unique CDS. None used entirely automated data entry, none used living guidelines for updating the evidence base or knowledge engine as new evidence emerged during the longitudinal study, and one included formal readiness for change assessments. Seven of eight CDS were implemented and evaluated prospectively. Challenges were primarily related to suboptimal study design (with unique challenges for each study) or user interface. Facilitators included use of randomized trial designs for prospective enrollment, increased CDS uptake during longitudinal exposure, and machine-learning applications that are tailored to the CDS use case. Conclusions: Despite the intuitive advantages of representing healthcare decisions longitudinally, peer-reviewed literature on longitudinal CDS is sparse. Existing reports suggest opportunities to incorporate longitudinal CDS frameworks, automated data entry, living guidelines, and user readiness assessments. Generating best practice guidelines for longitudinal CDS would require a greater depth and breadth of published work and expert opinion.

8.
Am Surg ; : 31348241256070, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38770751

ABSTRACT

BACKGROUND: Optimization of antibiotic stewardship requires determining appropriate antibiotic treatment and duration of use. Our current method of identifying infectious complications alone does not attempt to measure the resources actually utilized to treat infections in patients. We sought to develop a method accounting for treatment of infections and length of antibiotic administration to allow benchmarking of trauma hospitals with regard to days of antibiotic use. METHODS: Using trauma quality collaborative data from 35 American College of Surgeons (ACS)-verified level I and level II trauma centers between November 1, 2020, and January 31, 2023, a two-part model was created to account for (1) the odds of any antibiotic use, using logistic regression; and (2) the duration of usage, using a negative binomial distribution. We adjusted for injury severity, presence/type of infection (eg, ventilator-acquired pneumonia), infectious complications, and comorbid conditions. We performed observed-to-expected adjustments to calculate each center's risk-adjusted antibiotic days, bootstrapped observed/expected (O/E) ratios to create confidence intervals, and flagged potential high or low outliers as hospitals whose confidence intervals lay above or below the overall mean. RESULTS: The mean antibiotic treatment duration was 1.98 days, with a total of 88,403 treatment days. Wide variation existed in risk-adjusted antibiotic treatment days (0.76 days to 2.69 days). Several hospitals were identified as low (9 centers) or high (6 centers) outliers. CONCLUSION: There is wide variation in the duration of risk-adjusted antibiotic use among trauma centers. Further study is needed to address the underlying cause of variation and to improve antibiotic stewardship.
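The outlier-flagging step described above rests on an observed-to-expected (O/E) ratio per center with a bootstrapped confidence interval. A minimal sketch of that mechanic, assuming per-patient observed antibiotic days and model-predicted expected days; the resampling scheme and comparison threshold here are simplified stand-ins for the study's risk-adjustment pipeline:

```python
import random

def oe_ratio(observed_days, expected_days):
    """Observed-to-expected ratio of antibiotic days for one center."""
    return sum(observed_days) / sum(expected_days)

def bootstrap_oe_ci(observed_days, expected_days, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the O/E ratio, resampling patients with
    replacement. A center would be flagged as a high (low) outlier when the
    whole interval lies above (below) the benchmark value."""
    rng = random.Random(seed)
    n = len(observed_days)
    ratios = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        ratios.append(sum(observed_days[i] for i in sample) /
                      sum(expected_days[i] for i in sample))
    ratios.sort()
    lo = ratios[int(alpha / 2 * n_boot)]
    hi = ratios[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

In the study the benchmark is the overall mean across centers rather than a fixed 1.0, but the flagging logic (interval entirely above or below the benchmark) is the same.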

10.
Clin Infect Dis ; 79(2): 354-363, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-38690892

ABSTRACT

BACKGROUND: Metformin has antiviral activity against RNA viruses including severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The mechanism appears to be suppression of protein translation via targeting the host mechanistic target of rapamycin pathway. In the COVID-OUT randomized trial for outpatient coronavirus disease 2019 (COVID-19), metformin reduced the odds of hospitalizations/death through 28 days by 58%, of emergency department visits/hospitalizations/death through 14 days by 42%, and of long COVID through 10 months by 42%. METHODS: COVID-OUT was a 2 × 3 randomized, placebo-controlled, double-blind trial that assessed metformin, fluvoxamine, and ivermectin; 999 participants self-collected anterior nasal swabs on day 1 (n = 945), day 5 (n = 871), and day 10 (n = 775). Viral load was quantified using reverse-transcription quantitative polymerase chain reaction. RESULTS: The mean SARS-CoV-2 viral load was reduced 3.6-fold with metformin relative to placebo (-0.56 log10 copies/mL; 95% confidence interval [CI], -1.05 to -.06; P = .027). Those who received metformin were less likely to have a detectable viral load than placebo at day 5 or day 10 (odds ratio [OR], 0.72; 95% CI, .55 to .94). Viral rebound, defined as a higher viral load at day 10 than day 5, was less frequent with metformin (3.28%) than placebo (5.95%; OR, 0.68; 95% CI, .36 to 1.29). The metformin effect was consistent across subgroups and increased over time. Neither ivermectin nor fluvoxamine showed effect over placebo. CONCLUSIONS: In this randomized, placebo-controlled trial of outpatient treatment of SARS-CoV-2, metformin significantly reduced SARS-CoV-2 viral load, which may explain the clinical benefits in this trial. Metformin is pleiotropic with other actions that are relevant to COVID-19 pathophysiology. CLINICAL TRIALS REGISTRATION: NCT04510194.
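The "3.6-fold" reduction and the "-0.56 log10 copies/mL" difference reported above are the same quantity on two scales: a difference of d on the log10 scale corresponds to a 10^|d|-fold change. A one-line sketch of that conversion:

```python
def log10_diff_to_fold_change(delta_log10):
    """Convert a log10-scale difference (e.g. -0.56 log10 copies/mL between
    treatment and placebo) to a fold change: 10 raised to its magnitude."""
    return 10 ** abs(delta_log10)
```

So 10^0.56 ≈ 3.6, matching the abstract; likewise the CI bounds -1.05 and -0.06 log10 correspond to roughly 11-fold and 1.1-fold reductions.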


Subject(s)
Antiviral Agents; COVID-19 Drug Treatment; COVID-19; Metformin; SARS-CoV-2; Viral Load; Humans; Metformin/therapeutic use; Metformin/pharmacology; Viral Load/drug effects; Male; SARS-CoV-2/drug effects; Female; Middle Aged; Double-Blind Method; Antiviral Agents/therapeutic use; Antiviral Agents/pharmacology; Adult; COVID-19/virology; Ivermectin/therapeutic use; Ivermectin/pharmacology; Fluvoxamine/therapeutic use; Fluvoxamine/pharmacology; Aged
11.
J Surg Res ; 296: 209-216, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38281356

ABSTRACT

INTRODUCTION: Functional decline is associated with critical illness, though this relationship in surgical patients is unclear. This study aims to characterize functional decline after intensive care unit (ICU) admission among surgical patients. METHODS: We performed a retrospective analysis of surgical patients admitted to the ICU in the Cerner Acute Physiology and Chronic Health Evaluation database, which includes 236 hospitals, from 2007 to 2017. Patients with and without functional decline were compared. Predictors of decline were modeled. RESULTS: A total of 52,838 patients were included; 19,310 (36.5%) experienced a functional decline. Median ages of the decline and nondecline groups were 69 (interquartile range 59-78) and 63 (interquartile range 52-72) years, respectively (P < 0.01). The nondecline group had a larger proportion of males (59.1% versus 55.3% in the decline group, P < 0.01). After controlling for sociodemographic covariates, comorbidities, and disease severity upon ICU admission, patients undergoing pulmonary (odds ratio [OR] 6.54, 95% confidence interval [CI] 2.67-16.02), musculoskeletal (OR 4.13, CI 3.51-4.87), neurological (OR 2.67, CI 2.39-2.98), gastrointestinal (OR 1.61, CI 1.38-1.88), and skin and soft tissue (OR 1.35, CI 1.08-1.68) compared to cardiovascular surgeries had increased odds of decline. CONCLUSIONS: More than one in three critically ill surgical patients experienced a functional decline. Pulmonary, musculoskeletal, and neurological procedures conferred the greatest risk. Additional resources should be targeted toward the rehabilitation of these patients.


Subject(s)
Critical Illness; Intensive Care Units; Male; Humans; Middle Aged; Aged; Retrospective Studies; Odds Ratio; Hospitalization
12.
Surg Infect (Larchmt) ; 25(1): 56-62, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38285892

ABSTRACT

Background: Trials have shown non-inferiority of non-operative management (NOM) for appendicitis, although critically ill patients have been often excluded. The purpose of this study is to evaluate surgical versus NOM outcomes in critically ill patients with appendicitis by measuring mortality and hospital length of stay (LOS). Patients and Methods: The Healthcare Cost and Utilization Project's (HCUP) Database was utilized to analyze data from 10 states between 2008 and 2015. All patients with acute appendicitis by International Classification of Diseases, Ninth Revision (ICD-9) codes over the age of 18 were included. Negative binomial and logistic regression were used to determine the association of acute renal failure (ARF), cardiovascular failure (CVF), pulmonary failure (PF), and sepsis by treatment strategy (laparoscopic, open, both, or no surgery) on mortality and hospital LOS. Results: Among 464,123 patients, 67.5%, 23.3%, 8.2%, and 0.8% underwent laparoscopic, open, NOM, or both laparoscopic and open surgery, respectively. Patients who underwent surgery had 58% lower odds of mortality and 34% shorter hospital LOS compared with NOM patients. Patients with ARF, CVF, PF, and sepsis had 102%, 383%, 475%, and 666% higher odds of mortality and a 47%, 46%, 71%, and 163% longer hospital LOS, respectively, compared with patients without these diagnoses on admission. Conclusions: Critical illness on admission increases mortality and hospital LOS. Patients who underwent laparoscopic, and to a lesser extent, open appendectomy had improved mortality compared with those who did not undergo surgery regardless of critical illness status.
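Results phrased as "58% lower odds" or "102% higher odds", as in the abstract above, are direct restatements of odds ratios: an OR of 0.42 means the odds are (0.42 - 1) x 100 = -58%, i.e. 58% lower. A one-line sketch of the conversion (the specific ORs are implied by the abstract's percentages, not reported in it):

```python
def odds_ratio_to_pct_change(or_value):
    """Express an odds ratio as a percent change in odds relative to the
    reference group: OR 0.42 -> -58 (58% lower), OR 2.02 -> +102 (102% higher)."""
    return (or_value - 1) * 100
```

Note this describes changes in odds, not in probability; for common outcomes the two can diverge substantially.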


Subject(s)
Appendicitis; Laparoscopy; Sepsis; Humans; Adult; Middle Aged; Critical Illness; Appendicitis/surgery; Appendicitis/diagnosis; Length of Stay; Acute Disease; Appendectomy/adverse effects; Sepsis/etiology; Retrospective Studies; Treatment Outcome
13.
Stud Health Technol Inform ; 310: 860-864, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269931

ABSTRACT

Post-acute sequelae of SARS CoV-2 (PASC) are a group of conditions in which patients previously infected with COVID-19 experience symptoms weeks/months post-infection. PASC has substantial societal burden, including increased healthcare costs and disabilities. This study presents a natural language processing (NLP) based pipeline for identification of PASC symptoms and demonstrates its ability to estimate the proportion of suspected PASC cases. A manual case review to obtain this estimate indicated our sample incidence of PASC (13%) was representative of the estimated population proportion (95% CI: 19±6.22%). However, the high number of cases classified as indeterminate demonstrates the challenges in classifying PASC even among experienced clinicians. Lastly, this study developed a dashboard to display views of aggregated PASC symptoms and measured its utility using the System Usability Scale. Overall comments related to the dashboard's potential were positive. This pipeline is crucial for monitoring post-COVID-19 patients with potential for use in clinical settings.


Subject(s)
COVID-19; Humans; COVID-19/epidemiology; Natural Language Processing; SARS-CoV-2; Disease Progression; Health Care Costs
14.
Sci Rep ; 13(1): 20315, 2023 11 20.
Article in English | MEDLINE | ID: mdl-37985892

ABSTRACT

Significant progress has been made in preventing severe COVID-19 disease through the development of vaccines. However, we still lack a validated baseline predictive biologic signature for the development of more severe disease in both outpatients and inpatients infected with SARS-CoV-2. The objective of this study was to develop and externally validate, via 5 international outpatient and inpatient trials and/or prospective cohort studies, a novel baseline proteomic signature that predicts the development of moderate or severe (vs mild) disease in patients with COVID-19 from a proteomic analysis of 7000+ proteins. The secondary objective was exploratory: to identify (1) individual baseline protein levels and/or (2) protein level changes within the first 2 weeks of acute infection that are associated with the development of moderate/severe (vs mild) disease. For model development, samples collected from 2 randomized controlled trials were used. Plasma was isolated, and the SomaLogic SomaScan platform was used to characterize protein levels for 7301 proteins of interest for all studies. We dichotomized 113 patients as having mild or moderate/severe COVID-19 disease. An elastic net approach was used to develop a predictive proteomic signature. For validation, we applied our signature to data from three independent prospective biomarker studies. We found 4110 proteins measured at baseline that significantly differed between patients with mild COVID-19 and those with moderate/severe COVID-19 after adjusting for multiple hypothesis testing. Baseline protein expression was associated with predicted disease severity with an error rate of 4.7% (AUC = 0.964). We also found that five proteins (Afamin, I-309, NKG2A, PRS57, LIPK) and patient age serve as a signature that separates patients with mild COVID-19 from patients with moderate/severe COVID-19 with an error rate of 1.77% (AUC = 0.9804).
This panel was validated using data from 3 external studies with AUCs of 0.764 (Harvard University), 0.696 (University of Colorado), and 0.893 (Karolinska Institutet). In this study we developed and externally validated a baseline COVID-19 proteomic signature associated with disease severity for potential use in both outpatients and inpatients with COVID-19.


Subject(s)
COVID-19; Humans; Prospective Studies; SARS-CoV-2; Proteomics; Biomarkers
15.
Metabolites ; 13(11)2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37999202

ABSTRACT

Metabolic disease is a significant risk factor for severe COVID-19 infection, but the contributing pathways are not yet fully elucidated. Using data from two randomized controlled trials across 13 U.S. academic centers, our goal was to characterize metabolic features that predict severe COVID-19 and define a novel baseline metabolomic signature. Individuals (n = 133) were dichotomized as having mild or moderate/severe COVID-19 disease based on the WHO ordinal scale. Blood samples were analyzed using the Biocrates platform, providing 630 targeted metabolites for analysis. Resampling techniques and machine learning models were used to determine metabolomic features associated with severe disease. Ingenuity Pathway Analysis (IPA) was used for functional enrichment analysis. To aid in clinical decision making, we created baseline metabolomics signatures of low-correlated molecules. Multivariable logistic regression models were fit to associate these signatures with severe disease on training data. A three-metabolite signature, lysophosphatidylcholine a C17:0, dihydroceramide (d18:0/24:1), and triacylglyceride (20:4_36:4), resulted in the best discrimination performance with an average test AUROC of 0.978 and F1 score of 0.942. Pathways related to amino acids were significantly enriched from the IPA analyses, and the mitogen-activated protein kinase kinase 5 (MAP2K5) was differentially activated between groups. In conclusion, metabolites related to lipid metabolism efficiently discriminated between mild vs. moderate/severe disease. SDMA and GABA demonstrated the potential to discriminate between these two groups as well. The mitogen-activated protein kinase kinase 5 (MAP2K5) regulator is differentially activated between groups, suggesting further investigation as a potential therapeutic pathway.
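The discrimination metrics quoted above (AUROC and F1) summarize classifier performance differently: F1 is the harmonic mean of precision and recall on the predicted labels. A minimal sketch of the F1 computation from a confusion-matrix count (illustrative only; the study's values come from its own fitted models):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from confusion-matrix
    counts: true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike AUROC, which is threshold-free, F1 depends on the chosen decision threshold, which is why both are commonly reported together as here.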

16.
J Clin Transl Sci ; 7(1): e242, 2023.
Article in English | MEDLINE | ID: mdl-38033705

ABSTRACT

The COVID-19 pandemic accelerated the development of decentralized clinical trials (DCTs). DCTs are an important and pragmatic method for assessing health outcomes, yet they comprise only a minority of clinical trials, and few published methodologies exist. In this report, we detail the operational components of COVID-OUT, a decentralized, multicenter, quadruple-blinded, randomized trial that rapidly delivered study drugs nationwide. The trial examined three medications (metformin, ivermectin, and fluvoxamine) as outpatient treatment of SARS-CoV-2 for their effectiveness in preventing severe or long COVID-19. Decentralized strategies included HIPAA-compliant electronic screening and consenting, prepacking investigational product to accelerate delivery after randomization, and remotely confirming participant-reported outcomes. Of the 1417 individuals in the intention-to-treat sample, the remote nature of the study caused an additional 94 participants to not take any doses of study drug. Therefore, 1323 participants were in the modified intention-to-treat sample, which was the a priori primary study sample. Only 1.4% of participants were lost to follow-up. Decentralized strategies facilitated the successful completion of the COVID-OUT trial without any in-person contact by expediting intervention delivery, expanding trial access geographically, limiting contagion exposure, and making it easy for participants to complete follow-up visits. Remotely completed consent and follow-up facilitated enrollment.

17.
Learn Health Syst ; 7(4): e10368, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37860063

ABSTRACT

Inputs and Outputs: The Strike-a-Match Function, written in JavaScript version ES6+, accepts the input of two datasets (one dataset defining eligibility criteria for research studies or clinical decision support, and one dataset defining characteristics for an individual patient). It returns an output signaling whether the patient characteristics are a match for the eligibility criteria. Purpose: Ultimately, such a system will play a "matchmaker" role in facilitating point-of-care recognition of patient-specific clinical decision support. Specifications: The eligibility criteria are defined in HL7 FHIR (version R5) EvidenceVariable Resource JSON structure. The patient characteristics are provided in an FHIR Bundle Resource JSON including one Patient Resource and one or more Observation and Condition Resources which could be obtained from the patient's electronic health record. Application: The Strike-a-Match Function determines whether or not the patient is a match to the eligibility criteria and an Eligibility Criteria Matching Software Demonstration interface provides a human-readable display of matching results by criteria for the clinician or patient to consider. This is the first software application, serving as proof of principle, that compares patient characteristics and eligibility criteria with all data exchanged using HL7 FHIR JSON. An Eligibility Criteria Matching Software Library at https://fevir.net/110192 provides a method for sharing functions using the same information model.
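The Strike-a-Match Function itself is written in JavaScript against FHIR R5 resources; as a hedged, language-neutral illustration of the core matchmaking logic (every criterion must be satisfied by the patient's characteristics), here is a simplified Python sketch. The dict/tuple data shapes below are stand-ins for the FHIR EvidenceVariable and Bundle JSON structures the abstract describes, not the actual resource schemas:

```python
def matches_criteria(patient, criteria):
    """Return True when every eligibility criterion is satisfied.
    `patient` maps characteristic names to values (as might be pulled from
    an EHR); each criterion is a (name, operator, threshold) triple."""
    ops = {
        "eq": lambda a, b: a == b,
        "ge": lambda a, b: a >= b,
        "le": lambda a, b: a <= b,
    }
    for name, op, threshold in criteria:
        # A missing characteristic counts as a non-match; a production system
        # would distinguish "not recorded" from "criterion failed".
        if name not in patient or not ops[op](patient[name], threshold):
            return False
    return True
```

For example, a patient record `{"age": 67, "diagnosis": "TBI"}` matches the criteria `[("age", "ge", 50), ("diagnosis", "eq", "TBI")]` but fails once `("age", "le", 60)` is added.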

18.
Ann Surg Open ; 4(3): e324, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37746607

ABSTRACT

Background: Beta-adrenergic receptor blocker (BB) administration has been shown to improve survival after traumatic brain injury (TBI). However, studies to date that observe a benefit have not distinguished between continuation of preinjury BB and de novo initiation of BB. Objectives: To determine the effect of continuation of preinjury BB and de novo initiation of BB on risk-adjusted mortality and complications in patients with TBI. Methods: Trauma quality collaborative data (2016-2021) were analyzed. Patients with hospitalization <48 hours, direct admission, or penetrating injury were excluded. Severe TBI was identified as a head abbreviated injury scale (AIS) value of 3 to 5. Patients were placed into 4 groups based on preinjury BB use and administration of BB during hospitalization. Propensity score matching was used to create 1:1 matched cohorts of patients for comparisons. Odds ratios of mortality accounting for hospital clustering were calculated. A sensitivity analysis was performed excluding patients with AIS >2 injuries in all other body regions to create a cohort of isolated TBI patients. Results: A total of 15,153 patients treated at 35 trauma centers were available for analysis. Patients were divided into 4 cohorts based on preinjury BB use and postinjury receipt of BB. The odds of mortality were significantly reduced for patients with a TBI on a preinjury BB who had the medication continued in the acute setting, compared with patients on preinjury BB who did not (odds ratio [OR], 0.73; 95% confidence interval [CI], 0.54-0.98; P = 0.04). Patients with a TBI who were not on preinjury BB did not benefit from de novo initiation of BB with regard to mortality (OR, 0.83; 95% CI, 0.64-1.08; P = 0.2). In the sensitivity analysis excluding polytrauma patients, patients on preinjury BB who had BB continued had a reduction in mortality compared with patients in whom BB was stopped following a TBI (OR, 0.65; 95% CI, 0.47-0.91; P = 0.01).
Conclusions: Continuing BB is associated with reduced odds of mortality in patients with a TBI on preinjury BB. We were unable to demonstrate benefit from instituting beta blockade in patients who are not on a BB preinjury.
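As a hedged illustration of how estimates like "OR 0.73 (95% CI, 0.54-0.98)" are read, the sketch below computes an odds ratio and a Woolf (log-based) 95% confidence interval from a 2x2 table. The counts are invented; the study's own ORs came from propensity-matched cohorts with adjustment for hospital clustering, which this bare arithmetic does not reproduce:

```javascript
// Odds ratio and Woolf 95% CI from a 2x2 table. Illustrative arithmetic
// only; NOT the study's propensity-matched, cluster-adjusted analysis.
function oddsRatio(a, b, c, d) {
  // 2x2 table: a = exposed with event,   b = exposed without event,
  //            c = unexposed with event, d = unexposed without event
  const or = (a * d) / (b * c);
  const se = Math.sqrt(1 / a + 1 / b + 1 / c + 1 / d); // SE of log(OR)
  return {
    or,
    lower: Math.exp(Math.log(or) - 1.96 * se),
    upper: Math.exp(Math.log(or) + 1.96 * se),
  };
}

// Hypothetical counts (not from the study): deaths among matched patients
// with BB continued (40/500) versus BB stopped (60/500).
const { or, lower, upper } = oddsRatio(40, 460, 60, 440);
console.log(or.toFixed(2)); // "0.64"
```

Because the entire interval lies below 1 in this toy example, the association would be read as a statistically significant reduction in the odds of death, which is the same logic applied to the study's OR of 0.73 with CI 0.54-0.98.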

19.
World J Surg ; 47(11): 2668-2675, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37524957

ABSTRACT

BACKGROUND: Arrhythmias are common in critically ill patients, though their impact on surgical patients is not well delineated. We aimed to characterize mortality following arrhythmias in critically ill patients. METHODS: We performed a propensity-matched retrospective analysis of intensive care unit (ICU) patients from 2007 to 2017 in the Cerner Acute Physiology and Chronic Health Evaluation database. We compared outcomes between patients with and without arrhythmias and between those with and without surgical indications for ICU admission. We also modeled predictors of arrhythmias in surgical patients. RESULTS: 467,951 patients were included; 97,958 (20.9%) were surgical patients. Arrhythmias occurred in 1.4% of the study cohort. Predictors of arrhythmias in surgical patients included a history of cardiovascular disease (odds ratio [OR] 1.35, 95% confidence interval [CI95] 1.11-1.63), respiratory failure (OR 1.48, CI95 1.12-1.96), pneumonia (OR 3.17, CI95 1.98-5.10), higher bicarbonate level (OR 1.03, CI95 1.01-1.05), lower albumin level (OR 0.79, CI95 0.68-0.91), and vasopressor requirement (OR 27.2, CI95 22.0-33.7). After propensity matching, surgical patients with arrhythmias had a 42% lower mortality risk than non-surgical patients (risk ratio [RR] 0.58, CI95 0.43-0.79). Predicted probabilities of mortality for surgical patients were lower at all ages. CONCLUSIONS: Surgical patients with arrhythmias are at lower risk of mortality than non-surgical patients. In this propensity-matched analysis, predictors of arrhythmias in critically ill surgical patients included a history of cardiovascular disease, respiratory complications, increased bicarbonate levels, decreased albumin levels, and vasopressor requirement. These findings highlight the differential effect of arrhythmias on different cohorts of critically ill patients.
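The propensity-matching step used by analyses like this one can be sketched as greedy 1:1 nearest-neighbor matching without replacement. This is a simplified, hypothetical sketch: real analyses first estimate each patient's propensity score with a regression model and choose the caliper deliberately, both of which are assumed away here (scores are taken as given and the 0.05 caliper is arbitrary):

```javascript
// Greedy 1:1 nearest-neighbor propensity matching without replacement.
// A minimal sketch, assuming precomputed propensity scores and an
// arbitrary caliper; NOT the study's actual matching procedure.
function greedyMatch(treatedScores, controlScores, caliper = 0.05) {
  const available = controlScores.map((score, idx) => ({ score, idx }));
  const pairs = [];
  for (let t = 0; t < treatedScores.length; t++) {
    let best = -1;
    let bestDist = Infinity;
    for (let c = 0; c < available.length; c++) {
      const dist = Math.abs(treatedScores[t] - available[c].score);
      if (dist < bestDist) { bestDist = dist; best = c; }
    }
    if (best >= 0 && bestDist <= caliper) {
      pairs.push([t, available[best].idx]); // matched within the caliper
      available.splice(best, 1);            // match without replacement
    }
  }
  return pairs;
}

// Two treated patients matched to the nearest of three controls.
console.log(greedyMatch([0.30, 0.70], [0.28, 0.90, 0.72])); // [[0, 0], [1, 2]]
```

Outcomes (here, mortality) are then compared within the matched pairs, which is how the RR of 0.58 above is produced from otherwise dissimilar surgical and non-surgical cohorts.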


Subject(s)
Cardiovascular Diseases, Critical Illness, Humans, Retrospective Studies, Bicarbonates, Intensive Care Units, Arrhythmias, Cardiac/etiology, Vasoconstrictor Agents, Albumins
20.
JAMA Netw Open ; 6(7): e2324176, 2023 07 03.
Article in English | MEDLINE | ID: mdl-37486632

ABSTRACT

Importance: The Deterioration Index (DTI), used by hospitals for predicting patient deterioration, has not been extensively validated externally, raising concerns about its performance and the equity of its predictions. Objective: To locally validate DTI performance and assess its potential for bias in predicting patient clinical deterioration. Design, Setting, and Participants: This retrospective prognostic study included 13 737 patients admitted to 8 heterogeneous Midwestern US hospitals varying in size and type, including academic, community, urban, and rural hospitals. Patients were 18 years or older and admitted between January 1 and May 31, 2021. Exposure: DTI predictions made every 15 minutes. Main Outcomes and Measures: Deterioration, defined as the occurrence of any of the following while hospitalized: mechanical ventilation, intensive care unit transfer, or death. Performance of the DTI was evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Bias measures were calculated across demographic subgroups. Results: A total of 5 143 513 DTI predictions were made for 13 737 patients across 14 834 hospitalizations. Among 13 918 encounters, the mean (SD) age of patients was 60.3 (19.2) years; 7636 (54.9%) were female, 11 345 (81.5%) were White, and 12 392 (89.0%) were of an ethnicity other than Hispanic or Latino. The prevalence of deterioration was 10.3% (n = 1436). The DTI produced AUROCs of 0.759 (95% CI, 0.756-0.762) at the observation level and 0.685 (95% CI, 0.671-0.700) at the encounter level. Corresponding AUPRCs were 0.039 (95% CI, 0.037-0.040) at the observation level and 0.248 (95% CI, 0.227-0.273) at the encounter level. Bias measures varied across demographic subgroups and were 14.0% worse for patients identifying as American Indian or Alaska Native and 19.0% worse for those who chose not to disclose their ethnicity.
Conclusions and Relevance: In this prognostic study, the DTI had modest ability to predict patient deterioration, with varying degrees of performance at the observation and encounter levels and across different demographic groups. Disparate performance across subgroups suggests the need for more transparency in model training data and reinforces the need to locally validate externally developed prediction models.
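A discrimination statistic like the AUROC reported above has a simple rank-based reading: it is the probability that a randomly chosen deteriorating case was scored higher than a randomly chosen non-deteriorating one, with ties counting half. The toy sketch below uses made-up scores and has nothing to do with the DTI model itself:

```javascript
// Pairwise (Mann-Whitney) computation of AUROC: fraction of
// positive/negative pairs ranked correctly, ties counting 0.5.
// Toy illustration only; NOT the DTI model or its validation data.
function auroc(scores, labels) {
  let pairs = 0;
  let wins = 0;
  for (let i = 0; i < scores.length; i++) {
    for (let j = 0; j < scores.length; j++) {
      if (labels[i] === 1 && labels[j] === 0) {
        pairs += 1;
        if (scores[i] > scores[j]) wins += 1;          // correct ranking
        else if (scores[i] === scores[j]) wins += 0.5; // tie
      }
    }
  }
  return wins / pairs;
}

// Made-up scores; label 1 marks a deteriorating encounter.
const scores = [0.9, 0.6, 0.6, 0.2];
const labels = [1,   1,   0,   0];
console.log(auroc(scores, labels)); // 0.875
```

The same probabilistic reading explains why an AUROC of 0.759 is only "modest," and why the very low observation-level AUPRC of 0.039 matters: with deterioration rare, precision-recall behavior can be poor even when ranking is reasonable.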


Subject(s)
Ethnicity, Hospitalization, Humans, Adult, Female, Middle Aged, Male, Retrospective Studies, Prognosis, Hospitals