ABSTRACT
Background: Central venous lines (CVLs) are frequently utilized in critically ill patients and confer a risk of central line-associated bloodstream infections (CLABSIs). CLABSIs are associated with increased mortality, extended hospitalization, and increased costs. Unnecessary CVL utilization contributes to CLABSIs. This initiative sought to implement a clinical decision support system (CDSS) within an electronic health record (EHR) to quantify the prevalence of potentially unnecessary CVLs and improve their timely removal in six adult intensive care units (ICUs). Methods: Intervention components included: (1) evaluating the effectiveness of the existing CDSS, (2) clinician education, (3) developing/implementing an EHR-based CDSS to identify potentially unnecessary CVLs, (4) audit/feedback, and (5) reviewing EHR/institutional data to compare rates of removal of potentially unnecessary CVLs, device utilization, and CLABSIs pre- and postimplementation. Data were evaluated with statistical process control charts, chi-square analyses, and incidence rate ratios. Results: Preimplementation, 25.2% of CVLs were potentially removable, and the mean weekly proportion of these CVLs that were removed within 24 hours was 20.0%. Postimplementation, a greater proportion of potentially unnecessary CVLs were removed within 24 hours (29%, p < 0.0001), CVL utilization decreased, and days between CLABSIs increased. The intervention was most effective in ICUs staffed by pulmonary/critical care physicians, who received monthly audit/feedback, where timely CVL removal increased from a mean of 18.0% to 30.5% (p < 0.0001) and days between CLABSIs increased from 17.3 to 25.7. Conclusions: A significant proportion of active CVLs were potentially unnecessary. CDSS implementation, in conjunction with audit and feedback, correlated with a sustained increase in timely CVL removal and an increase in days between CLABSIs.
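The pre/post comparisons described above reduce to a chi-square test on a 2 × 2 table of removal counts and an incidence rate ratio for CLABSIs per line-day. A minimal sketch with hypothetical counts (not the study's data):

```python
# Minimal sketch of the pre/post analyses named above; all counts are
# hypothetical placeholders, not the study's data.
from scipy.stats import chi2_contingency

# Rows: pre/post implementation; columns: removed within 24 h / not removed
table = [[200, 800],    # pre-implementation (hypothetical)
         [290, 710]]    # post-implementation (hypothetical)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4g}")

# Incidence rate ratio: CLABSIs per central-line day, post vs pre
pre_events, pre_line_days = 30, 10_000      # hypothetical
post_events, post_line_days = 22, 11_000    # hypothetical
irr = (post_events / post_line_days) / (pre_events / pre_line_days)
print(f"IRR (post vs pre) = {irr:.2f}")
```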
ABSTRACT
Introduction: Health care workers (HCWs) are at heightened risk of adverse mental health events (AMHEs) and burnout with resultant impact on health care staffing, outcomes, and costs. We piloted a telehealth-enabled mental health screening and support platform among HCWs in the intensive care unit (ICU) setting at a tertiary care center. Methods: A survey consisting of validated screening tools was electronically disseminated to a potential cohort of 178 ICU HCWs. Participants were given real-time feedback on their results and those at risk were provided invitations to meet with resiliency clinicians. Participants were further invited to engage in a 3-month longitudinal assessment of their well-being through repeat surveys and a weekly text-based check-in coupled with self-help tips. Programmatic engagement was evaluated and associations between at-risk scores and engagement were assessed. Qualitative input regarding programmatic uptake and acceptance was gathered through key informant interviews. Results: Fifty (28%) HCWs participated in the program. Half of the participants identified as female, and most participants were white (74%) and under the age of 50 years (93%). Nurses (38%), physicians-in-training (24%), and faculty-level physicians (20%) engaged most frequently. There were 19 (38%) requests for an appointment with a resiliency clinician. The incidence of clinically significant symptoms of AMHEs and burnout was high but not clearly associated with engagement. Additional programmatic tailoring was encouraged by key informants while time was identified as a barrier to program engagement. Discussion: A telehealth-enabled platform is a feasible approach to screening at-risk HCWs for AMHEs and can facilitate engagement with support services.
ABSTRACT
BACKGROUND: Recruitment and retention of Pulmonary and Critical Care Medicine (PCCM) trainees into academic research positions remain difficult. Factors influencing graduates, like salary and personal circumstances, remain unchangeable. However, some program-level factors, like research skill acquisition and mentorship, may be modifiable to encourage matriculation into academic research positions. OBJECTIVE: We aim to identify proficiency in research-specific skills in PCCM trainees and barriers to careers as research-focused academic faculty. METHODS: We surveyed PCCM fellows in a nationwide cross-sectional analysis including demographics, research intent, research skills self-assessment, and academic career barriers. The Association of Pulmonary and Critical Care Medicine Program Directors approved and disseminated the survey. Data were collected and stored using the REDCap database. Descriptive statistics were used to assess survey items. RESULTS: 612 fellows received the survey, and 112 completed it, for a response rate of 18.3%. A majority were male (56.2%) and training at university-based medical centers (89.2%). Early fellowship trainees (first-/second-year fellows) comprised 66.9% of respondents, with 33.1% being late fellowship trainees (third-/fourth-year fellows). Most early trainees (63.2%) indicated they intended to incorporate research into their careers. A chi-square test of independence was performed to examine the relationship between training level and perceived proficiency. Significant differences in perceived proficiency were identified between early and late fellowship trainees, with absolute differences of 25.3% (manuscript writing), 18.7% (grant writing), 21.6% (study design), and 19.5% (quantitative/qualitative methodology). The most prevalent barriers were unfamiliarity with grant writing (59.5%) and research funding uncertainty (56.8%). CONCLUSION: With an ongoing need for academic research faculty, this study identifies self-perceived gaps in research skills including grant writing, data analytics, and study conception and design. These skills map to fellow-identified barriers to careers in academics. Mentorship and innovative curricula focusing on the development of key research skills may enhance academic research faculty recruitment.
ABSTRACT
Objective. The ability to synchronize continuous electroencephalogram (cEEG) signals with physiological waveforms such as electrocardiogram (ECG), invasive pressures, photoplethysmography and other signals can provide meaningful insights regarding coupling between brain activity and other physiological subsystems. Aligning these datasets is a particularly challenging problem because device clocks handle time differently and synchronization protocols may be undocumented or proprietary. Approach. We used an ensemble-based model to detect the timestamps of heartbeat artefacts from ECG waveforms recorded from inpatient bedside monitors and from cEEG signals acquired using a different device. Vectors of inter-beat intervals were matched between both datasets and robust linear regression was applied to measure the relative time offset between the two datasets as a function of time. Main Results. The timing error between the two unsynchronized datasets ranged between -84 s and +33 s (mean 0.77 s, median 4.31 s, IQR25 -4.79 s, IQR75 11.38 s). Application of our method improved the relative alignment to within ±5 ms for more than 61% of the dataset. The mean clock drift between the two datasets was 418.3 parts per million (ppm) (median 414.6 ppm, IQR25 411.0 ppm, IQR75 425.6 ppm). A signal quality index was generated that described the quality of alignment for each cEEG study as a function of time. Significance. We developed and tested a method to retrospectively time-align two clinical waveform datasets acquired from different devices using a common signal. The method was applied to 33,911 h of signals collected in a paediatric critical care unit over six years, demonstrating that the method can be applied to long-term recordings collected under clinical conditions. The method can account for unknown clock drift rates and the presence of discontinuities caused by clock resynchronization events.
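A simplified sketch of the alignment idea follows: derive inter-beat-interval (IBI) sequences from each device, find the lag at which they best match, then robustly regress one device's beat times on the other's so that the slope captures relative clock drift. This is an illustration under stated assumptions (a one-directional brute-force lag search, with scipy's Theil-Sen estimator standing in for the authors' robust regression), not the published ensemble method.

```python
# Illustrative sketch, not the authors' implementation: match inter-beat-
# interval (IBI) sequences between two devices, then fit a robust line to
# paired beat timestamps. slope - 1 gives the relative clock drift.
import numpy as np
from scipy.stats import theilslopes

def best_lag(ibi_a, ibi_b, max_lag=500):
    """Lag of ibi_a that minimizes mean absolute IBI mismatch (brute force)."""
    n = min(len(ibi_a), len(ibi_b)) // 2
    lags = range(min(max_lag, len(ibi_a) - n))
    return min(lags, key=lambda k: np.mean(np.abs(ibi_a[k:k + n] - ibi_b[:n])))

def align(beats_monitor, beats_eeg):
    """beats_*: sorted beat times (s), each on its own device clock."""
    lag = best_lag(np.diff(beats_monitor), np.diff(beats_eeg))
    n = min(len(beats_monitor) - lag, len(beats_eeg))
    x, y = beats_eeg[:n], beats_monitor[lag:lag + n]
    slope, intercept, _, _ = theilslopes(y, x)  # robust to missed/false beats
    drift_ppm = (slope - 1.0) * 1e6
    offset = lambda t: (slope - 1.0) * t + intercept  # offset as f(time)
    return slope, intercept, drift_ppm, offset
```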
ABSTRACT
BACKGROUND: There is a clinical need for therapeutics for COVID-19 patients with acute hypoxaemic respiratory failure whose 60-day mortality remains at 30-50%. Aviptadil, a lung-protective neuropeptide, and remdesivir, a nucleotide prodrug of an adenosine analog, were compared with placebo among patients with COVID-19 acute hypoxaemic respiratory failure. METHODS: TESICO was a randomised trial of aviptadil and remdesivir versus placebo at 28 sites in the USA. Hospitalised adult patients were eligible for the study if they had acute hypoxaemic respiratory failure due to confirmed SARS-CoV-2 infection and were within 4 days of the onset of respiratory failure. Participants could be randomly assigned to both study treatments in a 2 × 2 factorial design or to just one of the agents. Participants were randomly assigned with a web-based application. For each site, randomisation was stratified by disease severity (high-flow nasal oxygen or non-invasive ventilation vs invasive mechanical ventilation or extracorporeal membrane oxygenation [ECMO]), and four strata were defined by remdesivir and aviptadil eligibility, as follows: (1) eligible for randomisation to aviptadil and remdesivir in the 2 × 2 factorial design; participants were equally randomly assigned (1:1:1:1) to intravenous aviptadil plus remdesivir, aviptadil plus remdesivir matched placebo, aviptadil matched placebo plus remdesivir, or aviptadil placebo plus remdesivir placebo; (2) eligible for randomisation to aviptadil only because remdesivir was started before randomisation; (3) eligible for randomisation to aviptadil only because remdesivir was contraindicated; and (4) eligible for randomisation to remdesivir only because aviptadil was contraindicated. For participants in strata 2-4, randomisation was 1:1 to the active agent or matched placebo. Aviptadil was administered as a daily 12-h infusion for 3 days, targeting 600 pmol/kg on infusion day 1, 1200 pmol/kg on day 2, and 1800 pmol/kg on day 3. Remdesivir was administered as a 200 mg loading dose, followed by 100 mg daily maintenance doses for up to a 10-day total course. For participants assigned to placebo for either agent, matched saline placebo was administered in identical volumes. For both treatment comparisons, the primary outcome, assessed at day 90, was a six-category ordinal outcome: (1) at home (defined as the type of residence before hospitalisation) and off oxygen (recovered) for at least 77 days, (2) at home and off oxygen for 49-76 days, (3) at home and off oxygen for 1-48 days, (4) not hospitalised but either on supplemental oxygen or not at home, (5) hospitalised or in hospice care, or (6) dead. Mortality up to day 90 was a key secondary outcome. The independent data and safety monitoring board recommended stopping the aviptadil trial on May 25, 2022, for futility. On June 9, 2022, the sponsor stopped the trial of remdesivir due to slow enrolment. The trial is registered with ClinicalTrials.gov, NCT04843761. FINDINGS: Between April 21, 2021, and May 24, 2022, we enrolled 473 participants in the study. For the aviptadil comparison, 471 participants were randomly assigned to aviptadil or matched placebo. The modified intention-to-treat population comprised 461 participants who received at least a partial infusion of aviptadil (231 participants) or aviptadil matched placebo (230 participants).
For the remdesivir comparison, 87 participants were randomly assigned to remdesivir or matched placebo and all received some infusion of remdesivir (44 participants) or remdesivir matched placebo (43 participants). 85 participants were included in the modified intention-to-treat analyses for both agents (ie, those enrolled in the 2 × 2 factorial). For the aviptadil versus placebo comparison, the median age was 57 years (IQR 46-66), 178 (39%) of 461 participants were female, and 246 (53%) were Black, Hispanic, Asian or other (vs 215 [47%] White participants). 431 (94%) of 461 participants were in an intensive care unit at baseline, with 271 (59%) receiving high-flow nasal oxygen or non-invasive ventilation, 185 (40%) receiving invasive mechanical ventilation, and five (1%) receiving ECMO. The odds ratio (OR) for being in a better category of the primary efficacy endpoint for aviptadil versus placebo at day 90, from a model stratified by baseline disease severity, was 1·11 (95% CI 0·80-1·55; p=0·54). Up to day 90, 86 participants in the aviptadil group and 83 in the placebo group died. The cumulative percentage who died up to day 90 was 38% in the aviptadil group and 36% in the placebo group (hazard ratio 1·04, 95% CI 0·77-1·41; p=0·78). The primary safety outcome of death, serious adverse events, organ failure, serious infection, or grade 3 or 4 adverse events up to day 5 occurred in 146 (63%) of 231 patients in the aviptadil group compared with 129 (56%) of 230 participants in the placebo group (OR 1·40, 95% CI 0·94-2·08; p=0·10). INTERPRETATION: Among patients with COVID-19-associated acute hypoxaemic respiratory failure, aviptadil did not significantly improve clinical outcomes up to day 90 when compared with placebo. The smaller than planned sample size for the remdesivir trial did not permit definitive conclusions regarding safety or efficacy. FUNDING: National Institutes of Health.
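As a concrete illustration of the stratum-1 design (the 2 × 2 factorial with 1:1:1:1 allocation), the sketch below shows permuted-block randomisation stratified by site and disease severity. It is a hypothetical stand-in for the trial's web-based application, whose actual algorithm is not described here.

```python
# Hypothetical permuted-block randomisation for stratum 1 (2 x 2 factorial,
# 1:1:1:1), stratified by site and disease severity; illustration only.
import random

ARMS = [("aviptadil", "remdesivir"),
        ("aviptadil", "remdesivir placebo"),
        ("aviptadil placebo", "remdesivir"),
        ("aviptadil placebo", "remdesivir placebo")]

_blocks = {}  # one shuffled block per (site, severity) stratum

def assign(site, severity, rng=random):
    """Return the next (aviptadil arm, remdesivir arm) assignment."""
    key = (site, severity)  # severity: "HFNO/NIV" or "IMV/ECMO"
    if not _blocks.get(key):
        _blocks[key] = ARMS.copy()
        rng.shuffle(_blocks[key])  # new permuted block of size 4
    return _blocks[key].pop()

print(assign("site-01", "HFNO/NIV"))
```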
ABSTRACT
Introduction: Sepsis is associated with endothelial cell (EC) dysfunction, increased vascular permeability and organ injury, which may lead to mortality, acute respiratory distress syndrome (ARDS) and acute renal failure (ARF). There are no reliable biomarkers to predict these sepsis complications at present. Recent evidence suggests that circulating extracellular vesicles (EVs) and their content caspase-1 and miR-126 may play a critical role in modulating vascular injury in sepsis; however, the association between circulating EVs and sepsis outcomes remains largely unknown. Methods: We obtained plasma samples from septic patients (n=96) within 24 hours of hospital admission and from healthy controls (n=45). Total, monocyte- or EC-derived EVs were isolated from the plasma samples. Transendothelial electrical resistance (TEER) was used as an indicator of EC dysfunction. Caspase-1 activity in EVs was detected and its association with sepsis outcomes including mortality, ARDS and ARF was analyzed. In another set of experiments, total EVs were isolated from plasma samples of 12 septic patients and 12 non-septic critical illness controls on days 1 and 3 after hospital admission. RNAs were isolated from these EVs and next-generation sequencing was performed. The association between miR-126 levels and sepsis outcomes such as mortality, ARDS and ARF was analyzed. Results: Septic patients with circulating EVs that induced EC injury (lower TEER) were more likely to experience ARDS (p<0.05). Higher caspase-1 activity in total EVs, monocyte- or EC-derived EVs was significantly associated with the development of ARDS (p<0.05). MiR-126-3p levels in EC EVs were significantly decreased in ARDS patients compared with healthy controls (p<0.05). Moreover, a decline in miR-126-5p levels from day 1 to day 3 was associated with increased mortality, ARDS and ARF, while a decline in miR-126-3p levels from day 1 to day 3 was associated with ARDS development. Conclusions: Enhanced caspase-1 activity and declining miR-126 levels in circulating EVs are associated with sepsis-related organ failure and mortality. Extracellular vesicular contents may serve as novel prognostic biomarkers and/or targets for future therapeutic approaches in sepsis.
ABSTRACT
Emerging evidence suggests the potential influence of inspiratory driving pressure (DP) and respiratory system elastance (ERS) on outcomes among patients with the acute respiratory distress syndrome. Their association with outcomes among heterogeneous populations outside of a controlled clinical trial is underexplored. We used electronic health record (EHR) data to characterize the associations of DP and ERS with clinical outcomes in a real-world heterogeneous population. DESIGN: Observational cohort study. SETTING: Fourteen ICUs in two quaternary academic medical centers. PATIENTS: Adult patients who received mechanical ventilation for more than 48 hours and less than 30 days. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: EHR data from 4,233 ventilated patients from 2016 to 2018 were extracted, harmonized, and merged. A minority of the analytic cohort (37%) experienced a PaO2/FiO2 of less than 300. A time-weighted mean exposure was calculated for ventilatory variables including tidal volume (VT), plateau pressure (PPLAT), DP, and ERS. Lung-protective ventilation adherence was high (94% with VT < 8.5 mL/kg, time-weighted mean VT = 6.8 mL/kg, 88% with PPLAT ≤ 30 cm H2O). Although time-weighted mean DP (12.2 cm H2O) and ERS (1.9 cm H2O/[mL/kg]) were modest, 29% and 39% of the cohort experienced a DP greater than 15 cm H2O or an ERS greater than 2 cm H2O/(mL/kg), respectively. Regression modeling with adjustment for relevant covariates determined that exposure to time-weighted mean DP (> 15 cm H2O) was associated with increased adjusted risk of mortality and reduced adjusted ventilator-free days independent of adherence to lung-protective ventilation. Similarly, exposure to time-weighted mean ERS greater than 2 cm H2O/(mL/kg) was associated with increased adjusted risk of mortality. CONCLUSIONS: Elevated DP and ERS are associated with increased risk of mortality among ventilated patients independent of severity of illness or oxygenation impairment. EHR data can enable assessment of time-weighted ventilator variables and their association with clinical outcomes in a multicenter real-world setting.
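The time-weighted mean exposure used above weights each charted value by how long it remained in effect before the next charting. A minimal sketch, with illustrative times and values rather than study data:

```python
# Time-weighted mean of an irregularly charted ventilator variable
# (e.g., driving pressure). Times and values are illustrative.
import numpy as np

def time_weighted_mean(times_h, values):
    """times_h: sorted charting times (h); values: value charted at each time."""
    times_h, values = np.asarray(times_h, float), np.asarray(values, float)
    durations = np.diff(times_h)      # how long each value remained in effect
    return float(np.sum(values[:-1] * durations) / np.sum(durations))

dp_times = [0, 4, 8, 16, 24, 48]      # hours on the ventilator
dp_vals = [14, 15, 12, 11, 13, 13]    # driving pressure (cm H2O) at each time
print(time_weighted_mean(dp_times, dp_vals))  # 12.75 cm H2O
```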
ABSTRACT
Existing recommendations for mechanical ventilation are based on studies that under-sampled or excluded obese and severely obese individuals. Objective: To determine if driving pressure (DP) and total respiratory system elastance (Ers) differ among normal/overweight (body mass index [BMI] < 30 kg/m2), obese, and severely obese ventilator-dependent respiratory failure (VDRF) patients, and if there are any associations with clinical outcomes. Design: Retrospective observational cohort study during 2016-2018 at two tertiary care academic medical centers using electronic health record data from the first 2 full days of mechanical ventilation. The cohort was stratified by BMI class to measure median DP, time-weighted mean tidal volume, plateau pressure, and Ers for each BMI class. Setting and Participants: Mechanically ventilated patients in medical and surgical ICUs. Main Outcomes and Measures: Primary outcome and effect measures included relative risk of in-hospital mortality, ventilator-free days, ICU length of stay, and hospital length of stay with multivariable adjustment. Results: The cohort included 3,204 patients, with 976 (30.4%) obese and 382 (11.9%) severely obese patients. Severe obesity was associated with a DP greater than or equal to 15 cm H2O (relative risk [RR], 1.51 [95% CI, 1.26-1.82]) and Ers greater than or equal to 2 cm H2O/(mL/kg) (RR, 1.31 [95% CI, 1.14-1.49]). Despite elevated DP and Ers, there were no differences in in-hospital mortality, ventilator-free days, or ICU length of stay among all three groups. Conclusions and Relevance: Despite higher DP and Ers among obese and severely obese VDRF patients, there were no differences in in-hospital mortality or duration of mechanical ventilation, suggesting that DP has less prognostic value in obese and severely obese VDRF patients.
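For concreteness, the two exposures reduce to simple formulas: DP = Pplat - PEEP, and the normalised elastance consistent with the cm H2O/(mL/kg) units above is DP divided by tidal volume per kg of predicted body weight. A small sketch; the inputs are illustrative assumptions, not study data:

```python
# Driving pressure and tidal-volume-normalised elastance, with the
# thresholds used in this study; example inputs are hypothetical.
def driving_pressure(p_plat, peep):
    return p_plat - peep                                       # cm H2O

def elastance_norm(p_plat, peep, vt_ml, pbw_kg):
    return driving_pressure(p_plat, peep) / (vt_ml / pbw_kg)   # cm H2O/(mL/kg)

dp = driving_pressure(p_plat=26, peep=10)                      # 16 cm H2O
ers = elastance_norm(26, 10, vt_ml=420, pbw_kg=70)             # 16 / 6.0 = 2.67
flagged = dp >= 15 or ers >= 2                                 # study thresholds
print(dp, round(ers, 2), flagged)
```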
ABSTRACT
A firm concept of time is essential for establishing causality in a clinical setting. Review of critical incidents and generation of study hypotheses require a robust understanding of the sequence of events but conducting such work can be problematic when timestamps are recorded by independent and unsynchronized clocks. Most clinical models implicitly assume that timestamps have been measured accurately and precisely, but this custom will need to be re-evaluated if our algorithms and models are to make meaningful use of higher frequency physiological data sources. In this narrative review we explore factors that can result in timestamps being erroneously recorded in a clinical setting, with particular focus on systems that may be present in a critical care unit. We discuss how clocks, medical devices, data storage systems, algorithmic effects, human factors, and other external systems may affect the accuracy and precision of recorded timestamps. The concept of temporal uncertainty is introduced, and a holistic approach to timing accuracy, precision, and uncertainty is proposed. This quantitative approach to modeling temporal uncertainty provides a basis to achieve enhanced model generalizability and improved analytical outcomes.
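To make the proposed quantitative treatment concrete, one simple bound (our illustration, not a formula from this review) models a recorded timestamp's error as a fixed synchronisation offset plus linear clock drift since the last resynchronisation:

```python
# Toy model of temporal uncertainty: fixed offset plus linear drift.
# Parameter values are illustrative assumptions only.
def timestamp_uncertainty_s(seconds_since_sync, offset_s=0.5, drift_ppm=50):
    """Worst-case timestamp error bound for a free-running device clock."""
    return offset_s + drift_ppm * 1e-6 * seconds_since_sync

# A clock last synchronised 24 h ago, drifting at 50 ppm:
print(timestamp_uncertainty_s(24 * 3600))  # 0.5 + 4.32 = 4.82 s
```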
ABSTRACT
Background and Objectives: Machine Learning offers opportunities to improve patient outcomes and team performance and to reduce healthcare costs. Yet only a small fraction of all Machine Learning models for health care have been successfully integrated into the clinical space. There are no current guidelines for clinical model integration, leading to waste, unnecessary costs, patient harm, and decreases in efficiency when models are improperly implemented. Systems engineering is widely used in industry to achieve an integrated system of systems through an interprofessional collaborative approach to system design, development, and integration. We propose a framework based on systems engineering to guide the development and integration of Machine Learning models in healthcare. Methods: Applied systems engineering, software engineering and health care Machine Learning software development practices were reviewed and critically appraised to establish an understanding of limitations and challenges within these domains. Principles of systems engineering were used to develop solutions to address the identified problems. The framework was then harmonized with the Machine Learning software development process to create a systems engineering-based Machine Learning software development approach in the healthcare domain. Results: We present an integration framework for healthcare Artificial Intelligence that considers the entirety of this system of systems. Our proposed framework utilizes a combined software and integration engineering approach and consists of four phases: (1) Inception, (2) Preparation, (3) Development, and (4) Integration. During each phase, we present specific elements for consideration in each of the three domains of integration: The Human, The Technical System, and The Environment. There are also elements that are considered in the interactions between these domains. Conclusion: Clinical models are technical systems that need to be integrated into the existing system of systems in health care. A systems engineering approach to integration ensures appropriate elements are considered at each stage of model design to facilitate model integration. Our proposed framework is based on principles of systems engineering and can serve as a guide for model development, increasing the likelihood of successful Machine Learning translation and integration.
ABSTRACT
BACKGROUND: Recent studies suggest that balanced fluids improve inpatient outcomes compared to normal saline. The objective of this study was to obtain insights into clinicians' knowledge, attitudes and perceived prescribing practices concerning IV isotonic fluids and to analyze perceived prescribing in the context of actual prescribing. METHODS: This study, conducted at a single center (Medical University of South Carolina), included 1) a cross-sectional survey of physicians and advanced practice providers (APPs) (7/2019-8/2019) and 2) a review of electronic health record (EHR) claims data (2/2018-1/2019) to quantify the prescribing patterns of isotonic fluids. RESULTS: Clinicians perceived ordering equivalent amounts of normal saline and balanced fluids, although normal saline ordering predominated (59.7%). There was significant variation in perceived and actual ordering across specialties, with internal medicine/subspecialty and emergency medicine clinicians reporting preferential use of normal saline and surgical/subspecialty and anesthesia clinicians reporting preferential use of balanced fluids (p < 0.0001). Clinicians who self-reported providing care in an intensive care unit (ICU) reported more frequent use of balanced fluids than non-ICU clinicians (p = 0.03). Actual prescribing data mirrored these differences. Clinicians' self-reported use of continuous infusions (p = 0.0006) and beliefs regarding the volume of fluid required to cause harm (p = 0.003) were also associated with self-reported differences in fluid prescribing. Clinician experience, most clinical considerations (e.g., indications, contraindications, barriers to using a specific fluid), and fluid cost were not associated with differential prescribing. CONCLUSIONS: Persistent normal saline utilization is associated with certain specialties, care locations, and the rate and volume of fluid administered, but not with other clinical considerations or cost. These findings can guide interventions to improve evidence-based fluid prescribing.
ABSTRACT
Approximately one in 30 patients with acute respiratory failure (ARF) undergoes an inter-ICU transfer. Our objectives were to describe inter-ICU transfer patterns and evaluate the impact of timing of transfer on patient-centered outcomes. DESIGN: Retrospective, quasi-experimental study. SETTING: We used the Healthcare Cost and Utilization Project State Inpatient Databases in five states (Florida, Maryland, Mississippi, New York, and Washington) during 2015-2017. PARTICIPANTS: We selected patients with International Classification of Diseases, 9th and 10th Revision codes of respiratory failure and mechanical ventilation who underwent an inter-ICU transfer (n = 6,718), grouped as early (≤ 2 d) and later transfers (3+ d). To control for potential selection bias, we propensity score matched patients (1:1), modeling the propensity for early transfer using a priori defined patient demographic, clinical, and hospital variables. MAIN OUTCOMES: In-hospital mortality, hospital length of stay (HLOS), and cumulative charges related to inter-ICU transfer. RESULTS: Six thousand seven hundred eighteen patients with ARF underwent inter-ICU transfer, 68% of whom (n = 4,552) were transferred early (≤ 2 d). Propensity score matching yielded 3,774 well-matched patients for this study. Unadjusted outcomes were all superior in the early versus later transfer cohort: in-hospital mortality (24.4% vs 36.1%; p < 0.0001), length of stay (8 vs 22 d; p < 0.0001), and cumulative charges ($118,686 vs $308,977; p < 0.0001). Through doubly robust multivariable modeling with random effects at the state level, we found that patients who were transferred early had a 55.8% lower risk of in-hospital mortality than those whose transfer was later (relative risk, 0.442; 95% CI, 0.403-0.497). Additionally, the early transfer cohort had lower HLOS (20.7 fewer days [13.0 vs 33.7; p < 0.0001]) and lower cumulative charges ($66,201 less [$192,182 vs $258,383; p < 0.0001]). CONCLUSIONS AND RELEVANCE: Our study is the first to use a large, multistate sample to evaluate the practice of inter-ICU transfers in ARF and also to define early and later transfers. Our findings of favorable outcomes with early transfer are vital in designing future prospective studies evaluating evidence-based transfer procedures and policies.
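A simplified sketch of 1:1 propensity score matching of the kind described above: a logistic model estimates each patient's propensity for early transfer, and each early-transfer patient is greedily matched to the nearest unmatched control within a caliper. The covariates, caliper value, and greedy strategy are illustrative assumptions; the study's exact matching specification may differ.

```python
# Illustrative 1:1 nearest-neighbour propensity score matching, not the
# study's actual code. X: covariate matrix; early: 1 if early transfer.
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_1to1(X, early, caliper=0.05):
    ps = LogisticRegression(max_iter=1000).fit(X, early).predict_proba(X)[:, 1]
    treated = np.where(early == 1)[0]
    controls = list(np.where(early == 0)[0])
    pairs = []
    for t in treated:
        if not controls:
            break
        j = min(controls, key=lambda c: abs(ps[t] - ps[c]))  # nearest control
        if abs(ps[t] - ps[j]) <= caliper:   # accept only close matches
            pairs.append((t, j))
            controls.remove(j)              # match without replacement
    return pairs, ps
```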
ABSTRACT
Sepsis-associated encephalopathy (SAE) is characterized by acute and diffuse brain dysfunction, correlates with long-term cognitive impairments, and has no targeted therapy. We used a mouse model of sepsis-related cognitive impairment to examine the role of lncRNA nuclear enriched abundant transcript 1 (Neat1) in SAE. We observed that Neat1 expression was increased in neuronal cells from septic mice and that it directly interacts with hemoglobin subunit beta (Hbb), preventing its degradation. The Neat1/Hbb axis suppressed postsynaptic density protein 95 (PSD-95) levels and decreased dendritic spine density. Neat1 knockout mice exhibited decreased Hbb levels, which resulted in increased PSD-95 levels, increased neuronal dendritic spine density, and decreased anxiety and memory impairment. Neat1 silencing via the antisense oligonucleotide GapmeR ameliorated anxiety-like behavior and cognitive impairment post-sepsis. In conclusion, we uncovered a previously unknown mechanism of the Neat1/Hbb axis in regulating neuronal dysfunction, which may lead to a novel treatment strategy for SAE.
ABSTRACT
Brain pericytes regulate cerebral blood flow, maintain the integrity of the blood-brain barrier (BBB), and facilitate the removal of amyloid β (Aβ), which is critical to healthy brain activity. Pericyte loss has been observed in brains from patients with Alzheimer's disease (AD) and animal models. Our previous data demonstrated that Friend leukemia virus integration 1 (Fli-1), an erythroblast transformation-specific (ETS) transcription factor, governs pericyte viability in murine sepsis; however, the role of Fli-1 and its impact on pericyte loss in AD remain unknown. Here, we demonstrated that Fli-1 expression was up-regulated in postmortem brains from a cohort of human AD donors and in 5xFAD mice, which corresponded with a decreased pericyte number, elevated inflammatory mediators, and increased Aβ accumulation compared with cognitively normal individuals and wild-type (WT) mice. Antisense oligonucleotide Fli-1 Gapmer administered via intrahippocampal injection decelerated pericyte loss, decreased inflammatory response, ameliorated cognitive deficits, improved BBB dysfunction, and reduced Aβ deposition in 5xFAD mice. Fli-1 Gapmer-mediated inhibition of Fli-1 protected against Aβ accumulation-induced human brain pericyte apoptosis in vitro. Overall, these studies indicate that Fli-1 contributes to pericyte loss, inflammatory response, Aβ deposition, vascular dysfunction, and cognitive decline, and suggest that inhibition of Fli-1 may represent novel therapeutic strategies for AD.
ABSTRACT
CONTEXT: Assessing direct oral anticoagulant (DOAC) drug levels by reliable laboratory assays is necessary in a number of clinical scenarios. OBJECTIVE: To evaluate the performance of DOAC-specific assays for various concentrations of dabigatran and rivaroxaban, assess the interlaboratory variability in measurement of these DOACs, and investigate the responsiveness of the routine clotting assays to various concentrations of these oral anticoagulants. DESIGN: College of American Pathologists proficiency testing survey data from 2013 to 2016 were summarized and analyzed. RESULTS: For dabigatran, the interlaboratory coefficient of variation (CV) of the ecarin chromogenic assay was broad (ranging from 7.5% to 29.1%, 6.3% to 15.5%, and 6.8% to 9.0% for 100-ng/mL, 200-ng/mL, and 400-ng/mL targeted drug concentrations, respectively). The CV for diluted thrombin time for dabigatran was better overall (ranging from 11.6% to 17.2%, 9.3% to 12.3%, and 7.1% to 11.2% for 100 ng/mL, 200 ng/mL, and 400 ng/mL, respectively). The rivaroxaban-calibrated anti-Xa assay CVs also showed variability (ranging from 11.5% to 22.2%, 7.2% to 10.9%, and 6.4% to 8.1% for 50-ng/mL, 200-ng/mL, and 400-ng/mL targeted drug concentrations, respectively). The prothrombin time (PT) and activated partial thromboplastin time (aPTT) showed variable dose- and reagent-dependent responsiveness to DOACs: PT was more responsive to rivaroxaban and aPTT to dabigatran. The undiluted thrombin time showed maximum prolongation across all 3 dabigatran concentrations, making it too sensitive for drug-level monitoring but supporting its use as a qualitative screening assay. CONCLUSIONS: DOAC-specific assays performed reasonably well. While PT and aPTT cannot be used safely to determine the degree of DOAC anticoagulation, a normal thrombin time excludes the presence of dabigatran.
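The interlaboratory CV reported throughout is simply the standard deviation of participating laboratories' results divided by their mean. A worked example with hypothetical results for one proficiency-testing specimen:

```python
# Interlaboratory CV = SD / mean x 100%; results below are hypothetical.
import numpy as np

lab_results_ng_ml = np.array([182, 205, 197, 224, 190, 211])
cv_pct = 100 * lab_results_ng_ml.std(ddof=1) / lab_results_ng_ml.mean()
print(f"CV = {cv_pct:.1f}%")  # 7.5% for these illustrative values
```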
ABSTRACT
Aim: Missing data cause problems through decreasing sample size and the potential for introducing bias. We tested four missing data methods on the Sequential Organ Failure Assessment (SOFA) score, an intensive care research severity adjuster. Methods: Simulation study using 2015-2017 electronic health record data, in which the complete dataset was sampled, missing SOFA score elements were imposed, and the performance of four missing data methods - complete case analysis, median imputation, zero imputation (recommended by the SOFA score creators) and multiple imputation (MI) - was examined on the outcome of in-hospital mortality. Results: MI performed well, whereas the other methods introduced varying amounts of bias or decreased sample size. Conclusion: We recommend using MI in analyses where SOFA score component values are missing in administrative data research.
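A hedged sketch of the four strategies on a toy matrix of SOFA components (rows are patients; NaN marks a missing element). scikit-learn's IterativeImputer stands in for multiple imputation here; true MI repeats the imputation with different seeds and pools estimates across the completed datasets.

```python
# Four missing-data strategies on toy SOFA component data (illustrative).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

sofa = np.array([[2.0, 1.0, np.nan],
                 [3.0, np.nan, 4.0],
                 [1.0, 0.0, 2.0],
                 [4.0, 2.0, 3.0]])

complete_case = sofa[~np.isnan(sofa).any(axis=1)]   # drops rows with any NaN
median_imp = np.where(np.isnan(sofa), np.nanmedian(sofa, axis=0), sofa)
zero_imp = np.nan_to_num(sofa, nan=0.0)             # SOFA creators' rule
mi_single = IterativeImputer(random_state=0).fit_transform(sofa)
# For proper MI: repeat with several random_state values and pool estimates.
```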
ABSTRACT
BACKGROUND: Understanding COVID-19 epidemiology is crucial to clinical care and to clinical trial design and interpretation. OBJECTIVE: To describe characteristics, treatment, and outcomes among patients hospitalized with COVID-19 early in the pandemic. METHODS: A retrospective cohort study of consecutive adult patients with laboratory-confirmed, symptomatic SARS-CoV-2 infection admitted to 57 US hospitals from March 1 to April 1, 2020. RESULTS: Of 1480 inpatients with COVID-19, median (IQR) age was 62.0 (49.4-72.9) years, 649 (43.9%) were female, and 822 of 1338 (61.4%) were non-White or Hispanic/Latino. Intensive care unit admission occurred in 575 patients (38.9%), mostly within 4 days of hospital presentation. Respiratory failure affected 583 patients (39.4%), including 284 (19.2%) within 24 hours of hospital presentation and 413 (27.9%) who received invasive mechanical ventilation. Median (IQR) hospital stay was 8 (5-15) days overall and 15 (9-24) days among intensive care unit patients. Hospital mortality was 17.7% (n = 262). Risk factors for hospital death identified by penalized multivariable regression included older age; male sex; comorbidity burden; symptoms-to-admission interval; hypotension; hypoxemia; and higher white blood cell count, creatinine level, respiratory rate, and heart rate. Of 1218 survivors, 221 (18.1%) required new respiratory support at discharge and 259 of 1153 (22.5%) admitted from home required new health care services. CONCLUSIONS: In a geographically diverse early-pandemic COVID-19 cohort with complete hospital follow-up, hospital mortality was associated with older age, comorbidity burden, and male sex. Intensive care unit admissions occurred early and were associated with protracted hospital stays. Survivors often required new health care services or respiratory support at discharge.
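As an illustration of the penalized multivariable regression mentioned above, the sketch below fits an L1-penalized logistic model in which weak predictors shrink to exactly zero. The features, data, and penalty strength are placeholders, not the study's specification.

```python
# Illustrative L1-penalized logistic regression for in-hospital mortality.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))      # stand-ins for age, sex, labs, vitals...
y = rng.integers(0, 2, size=500)   # toy outcome: in-hospital death

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = np.flatnonzero(model.coef_)   # predictors surviving the penalty
print(selected, model.coef_.ravel().round(2))
```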
ABSTRACT
Scientists and laypeople have been captivated by the potential role of vitamin C in the treatment of infection since the publication of Linus Pauling's famed book, Vitamin C and the Common Cold in 1970.1 In a contemporaneous article, Pauling described the methodology he used to pool the results of four double-blind controlled trials to conclude that ascorbic acid prevented upper respiratory infections.2 Although these publications led to widespread use of vitamin C by the public, they were considered highly controversial by the scientific community and sparked debate over the veracity of Pauling's findings and the rigor of his meta-analysis methodology.
ABSTRACT
OBJECTIVE: Studies suggest superior outcomes with use of intravenous (IV) balanced fluids compared to normal saline (NS). However, significant fluid prescribing variability persists, highlighting the knowledge-to-practice gap. We sought to identify contributors to prescribing variation and utilize a clinical decision support system (CDSS) to increase institutional balanced fluid prescribing. MATERIALS AND METHODS: This single-center informatics-enabled quality improvement initiative for patients hospitalized or treated in the emergency department included stepwise interventions of 1) identification of design factors within the computerized provider order entry (CPOE) system of our electronic health record (EHR) that contribute to preferential NS ordering, 2) clinician education, 3) fluid stocking modifications, 4) re-design and implementation of a CDSS-integrated IV fluid ordering panel, and 5) comparison of fluid prescribing before and after the intervention. EHR-derived prescribing data were analyzed via a single interrupted time series analysis. RESULTS: Pre-intervention (3/2019-9/2019), balanced fluids comprised 33% of isotonic fluid orders, with gradual uptake (1.4%/month) of balanced fluid prescribing. Clinician education (10/2019-2/2020) yielded a modest (4.4%/month, 95% CI 1.6-7.2, p = 0.01) proportional increase in balanced fluid prescribing, while CPOE redesign (3/2020) yielded an immediate (20.7%, 95% CI 17.7-23.6, p < 0.0001) and sustained increase (72% of fluid orders in 12/2020). The intervention proved most effective among those with lower baseline balanced fluid utilization, including emergency medicine (57% increase, 95% CI 0.7-1.8, p < 0.0001) and internal medicine/subspecialty (18% increase, 95% CI 14.4-21.3, p < 0.0001) clinicians, and substantially reduced institutional prescribing variation. CONCLUSION: Integration of CDSS into an EHR yielded a robust and sustained increase in balanced fluid prescribing. This impact far exceeded that of clinician education, highlighting the importance of CDSS.
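A minimal sketch of the segmented (interrupted time series) regression implied above: a baseline trend plus a level change and slope change at the CPOE redesign. The monthly series is simulated to echo the reported pattern; the model form and statsmodels usage are our assumptions, not the study's code.

```python
# Segmented regression for a single interrupted time series (illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.arange(22)                # 3/2019 (t=0) .. 12/2020 (t=21)
post = (months >= 12).astype(int)     # CPOE redesign in 3/2020
y = 33 + 1.4 * months + 20 * post + rng.normal(0, 2, 22)  # simulated % balanced

df = pd.DataFrame({"t": months, "post": post,
                   "t_post": np.maximum(0, months - 12), "y": y})
fit = smf.ols("y ~ t + post + t_post", data=df).fit()
print(fit.params)  # pre-trend, immediate level change, post slope change
```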
ABSTRACT
BACKGROUND: Family members of patients admitted to the ICU experience a constellation of sequelae described as postintensive care syndrome-family. The influence that an inter-ICU transfer has on psychological outcomes is unknown. RESEARCH QUESTION: Is inter-ICU transfer associated with poor psychological outcomes in families of patients with acute respiratory failure? STUDY DESIGN AND METHODS: Cross-sectional observational study of 82 families of patients admitted to adult ICUs (tertiary hospital). Data included demographics, admission source, and outcomes. Admission source was classified as inter-ICU transfer (n = 39) for patients admitted to the ICU from other hospitals and direct admit (n = 43) for patients admitted from the ED or the operating room of the same hospital. We used quantitative surveys to evaluate psychological distress (Hospital Anxiety and Depression Scale [HADS]) and posttraumatic stress (Post-Traumatic Stress Scale [PTSS]) and examined clinical, family, and satisfaction factors associated with psychological outcomes. RESULTS: Families of transferred patients travelled longer distances (mean ± SD, 109 ± 106 miles) compared with those of patients directly admitted (mean ± SD, 65 ± 156 miles; P ≤ .0001). Transferred patients predominantly were admitted to the neuro-ICU (64%), had a longer length of stay (direct admits: mean ± SD, 12.7 ± 9.3 days; transferred patients: mean ± SD, 17.6 ± 9.3 days; P < .01), and a higher number of ventilator days (direct admits: mean ± SD, 6.9 ± 8.6 days; transferred: mean ± SD, 10.6 ± 9.0 days; P < .01). Additionally, they were less likely to be discharged home (direct admits, 63%; transferred, 33%; P = .08). In a fully adjusted model of psychological distress and posttraumatic stress, family members of transferred patients were found to have a 1.74-point (95% CI, -1.08 to 5.29; P = .30) higher HADS score and a 5.19-point (95% CI, 0.35-10.03; P = .03) higher PTSS score than those of directly admitted family members. INTERPRETATION: In this exploratory study, posttraumatic stress measured by the PTSS was higher in the transferred families, but these findings will need to be replicated to infer clinical significance.