ABSTRACT
Data capture systems that acquire continuous hospital-based electrocardiographic (ECG) and physiologic (vital signs) data can foster robust research (i.e., large sample sizes from consecutive patients). However, these systems and the data they generate are complex and require careful human oversight to ensure that accurate, high-quality data are procured. This technical article describes two data capture systems created by our research group to examine the false alarms that underlie alarm fatigue in nurses. The following aspects of these data capture systems are discussed: (1) history of development; (2) summary of advantages, challenges, and important considerations; (3) their use in research; (4) their use in clinical care; and (5) future developments.
Subject(s)
Electrocardiography, Humans, Clinical Alarms, Electrocardiography/methods, Physiologic Monitoring/methods
ABSTRACT
OBJECTIVES: To evaluate the relationship between early IV fluid volume and hospital outcomes, including death in-hospital or discharge to hospice, in septic patients with and without heart failure (HF). DESIGN: A retrospective cohort study using logistic regression with restricted cubic splines to assess for nonlinear relationships between fluid volume and outcomes, stratified by HF status and adjusted for propensity to receive a given fluid volume in the first 6 hours. An ICU subgroup analysis was performed. Secondary outcomes of vasopressor use, mechanical ventilation, and length of stay in survivors were assessed. SETTING: An urban university-based hospital. PATIENTS: A total of 9613 adult patients admitted from the emergency department from 2012 to 2021 who met electronic health record-based Sepsis-3 criteria. Preexisting HF diagnosis was identified by International Classification of Diseases codes. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: There were 1449 admissions from patients with HF. The relationship between fluid volume and death or discharge to hospice was nonlinear in patients without HF and approximately linear in patients with HF. Receiving 0-15 mL/kg in the first 6 hours was associated with a lower likelihood of death or discharge to hospice compared with 30-45 mL/kg (odds ratio = 0.61; 95% CI, 0.41-0.90; p = 0.01) in HF patients, but there was no significant difference for non-HF patients. A similar pattern was identified in ICU admissions and some secondary outcomes. Volumes larger than 15-30 mL/kg for non-HF patients and 30-45 mL/kg for ICU-admitted non-HF patients were not associated with improved outcomes. CONCLUSIONS: Early fluid resuscitation showed distinct patterns of potential harm and benefit between patients with and without HF who met Sepsis-3 criteria.
Restricted cubic spline analysis highlighted the importance of considering nonlinear fluid-outcome relationships and identified potential points of diminishing returns (15-30 mL/kg across all patients without HF and 30-45 mL/kg when admitted to the ICU). Receiving less than 15 mL/kg was associated with better outcomes in HF patients, suggesting small volumes may be appropriate in select patients. Future studies may benefit from investigating nonlinear fluid-outcome associations and from focusing on other conditions such as HF.
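As a concrete illustration of the restricted cubic spline approach used in this analysis, the sketch below builds Harrell-style spline basis terms in plain Python. The knot placements (in mL/kg) are hypothetical, chosen only to mirror the fluid-volume bands discussed above, not the knots actually used in the study.

```python
def rcs_basis(x, knots):
    """Restricted cubic spline basis terms (Harrell's parameterization).

    Returns the k-2 nonlinear basis terms for value x given k sorted knots;
    the full design row also includes an intercept and the linear term x.
    By construction the fitted curve is linear beyond the boundary knots.
    """
    k = len(knots)
    t1, tk, tk1 = knots[0], knots[-1], knots[-2]
    norm = (tk - t1) ** 2  # standard scaling so terms stay on x's scale

    def pos3(u):
        # truncated cubic: max(u, 0) ** 3
        return max(u, 0.0) ** 3

    terms = []
    for j in range(k - 2):
        tj = knots[j]
        term = (pos3(x - tj)
                - pos3(x - tk1) * (tk - tj) / (tk - tk1)
                + pos3(x - tk) * (tk1 - tj) / (tk - tk1))
        terms.append(term / norm)
    return terms

# Illustrative 4-knot placement over fluid volume in mL/kg
knots = [5.0, 15.0, 30.0, 45.0]
# A design row for a patient who received 20 mL/kg:
# intercept, linear term, and two nonlinear spline terms
row = [1.0, 20.0] + rcs_basis(20.0, knots)
```

These basis columns would then feed an ordinary logistic regression; the spline simply lets the log-odds of the outcome bend at the interior knots while remaining linear in the tails.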
Subject(s)
Fluid Therapy, Heart Failure, Sepsis, Humans, Retrospective Studies, Male, Female, Heart Failure/therapy, Heart Failure/mortality, Aged, Middle Aged, Fluid Therapy/methods, Sepsis/mortality, Sepsis/therapy, Cohort Studies, Hospital Mortality, Intensive Care Units, Length of Stay
ABSTRACT
BACKGROUND: Closing the gap between evidence-supported antibiotic use and real-world prescribing among clinicians is vital for curbing excessive antibiotic use, which fosters antimicrobial resistance and exposes patients to antimicrobial side effects. Providing prescribing information via scorecard improves clinician adherence to quality metrics. OBJECTIVE: We aimed to delineate actionable, relevant antimicrobial prescribing metrics extractable from the electronic health record in an automated way. DESIGN: We used a modified Delphi consensus-building approach. SETTINGS AND PARTICIPANTS: Our study entailed two iterations of an electronic survey disseminated to hospital medicine physicians at 10 academic medical centers nationwide. MAIN OUTCOMES AND MEASURES: Main outcomes comprised consensus metrics describing the quality of antibiotic prescribing to hospital medicine physicians. RESULTS: Twenty-eight participants from 10 United States institutions completed the first survey version containing 38 measures. Sixteen respondents completed the second survey, which contained 37 metrics. Sixteen metrics, which were modified based on qualitative survey feedback, met criteria for inclusion in the final scorecard. Metrics considered most relevant by hospitalists focused on the appropriate de-escalation of antimicrobial therapy, selection of guideline-concordant antibiotics, and appropriate duration of treatment for common infectious syndromes. Next steps involve prioritization and implementation of these metrics based on quality gaps at our institution, focus groups exploring impressions of clinicians who receive a scorecard, and analysis of antibiotic prescribing patterns before and after metric implementation. Other institutions may be able to implement metrics from this scorecard based on their own quality gaps to provide hospitalists with automated feedback related to antibiotic prescribing.
Subject(s)
Anti-Bacterial Agents, Delphi Technique, Hospitalists, Humans, Anti-Bacterial Agents/therapeutic use, Surveys and Questionnaires, United States, Practice Patterns, Physicians'/statistics & numerical data, Practice Patterns, Physicians'/standards, Antimicrobial Stewardship, Electronic Health Records
ABSTRACT
Objective: Recent clinical guidelines for sepsis management emphasize immediate antibiotic initiation for suspected septic shock. Though hypotension is a high-risk marker of sepsis severity, prior studies have not considered the precise timing of hypotension in relation to antibiotic initiation and how clinical characteristics and outcomes may differ. Our objective was to evaluate antibiotic initiation in relation to hypotension to characterize differences in sepsis presentation and outcomes in patients with suspected septic shock. Methods: Adults presenting to the emergency department (ED) June 2012-December 2018 diagnosed with sepsis (Sepsis-III electronic health record [EHR] criteria) and hypotension (non-resolving for ≥30 min, systolic blood pressure <90 mmHg) within 24 h. We categorized patients who received antibiotics before hypotension ("early"), 0-60 min after ("immediate"), and >60 min after ("late") treatment. Results: Among 2219 patients, 55% received early treatment, 13% immediate, and 32% late. The late subgroup often presented to the ED with hypotension (median 0 min) but received antibiotics a median of 191 min post-ED presentation. Clinical characteristics notable for this subgroup included higher prevalence of heart failure and liver disease (p < 0.05) and later onset of systemic inflammatory response syndrome (SIRS) criteria compared to early/immediate treatment subgroups (median 87 vs. 35 vs. 20 min, p < 0.0001). After adjustment, there was no difference in clinical outcomes among treatment subgroups. Conclusions: There was significant heterogeneity in presentation and timing of antibiotic initiation for suspected septic shock. Patients with later treatment commonly had hypotension on presentation, had more hypotension-associated comorbidities, and developed overt markers of infection (eg, SIRS) later. While these factors likely contribute to delays in clinician recognition of suspected septic shock, it may not impact sepsis outcomes.
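The early/immediate/late categorization described in the Methods can be sketched as a small helper; the function name, the minute-based inputs (time from ED presentation), and the example values are illustrative, while the thresholds follow the definitions stated above.

```python
def timing_category(abx_minutes, hypotension_minutes):
    """Classify antibiotic initiation relative to hypotension onset.

    Both times are minutes from ED presentation. Per the study's
    definitions: antibiotics before hypotension onset -> "early",
    0-60 min after onset -> "immediate", >60 min after -> "late".
    """
    delta = abx_minutes - hypotension_minutes
    if delta < 0:
        return "early"
    if delta <= 60:
        return "immediate"
    return "late"

# e.g., a patient hypotensive on arrival (0 min) treated at 191 min
category = timing_category(191, 0)  # -> "late"
```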
ABSTRACT
The objectives of this study were to provide insights on how injury risk is influenced by occupant demographics such as sex, age, and size, and to quantify differences within the context of commonly occurring real-world crashes. The analyses were confined to either single-event collisions or collisions that were judged to be well-defined based on the absence of any significant secondary impacts. These analyses, including both logistic regression and descriptive statistics, were conducted using the Crash Investigation Sampling System for calendar years 2017 to 2021. In the case of occupant sex, the findings agree with those of many recent investigations that have attempted to quantify the circumstances in which females show elevated rates of injury relative to their male counterparts given the same level of bodily insult. This study, like others, provides evidence of certain female-specific injuries. The most problematic of these are AIS 2+ and AIS 3+ upper-extremity and lower-extremity injuries. These are among the most frequently observed injuries for females, and their incidence is consistently greater than for males. Overall, the odds of females sustaining MAIS 3+ injury (or fatality) are 4.5% higher than the odds for males, while the odds of females sustaining MAIS 2+ injury (or fatality) are 33.9% higher than those for males. The analyses highlight the need to carefully control for both the vehicle occupied and the other involved vehicle when calculating risk ratios by occupant sex. Female driver preferences in terms of vehicle class/size differ significantly from those of males, with females favoring smaller, lighter vehicles.
Subject(s)
Accidents, Traffic, Wounds and Injuries, Humans, Accidents, Traffic/statistics & numerical data, Female, Male, Adult, Middle Aged, Wounds and Injuries/epidemiology, Young Adult, Adolescent, Aged, Sex Factors, Child, Risk Factors, Child, Preschool, Infant, Abbreviated Injury Scale, Age Factors, Incidence
ABSTRACT
With the current trend of including the evaluation of the risk of brain injuries in vehicle crashes due to rotational kinematics of the head, two injury criteria have been introduced since 2013: BrIC and DAMAGE. BrIC was developed by NHTSA in 2013 and was suggested for inclusion in the US NCAP for frontal and side crashes. DAMAGE has been developed by UVa under the sponsorship of JAMA and JARI and has been accepted tentatively by the EuroNCAP. Although BrIC in US crash testing is known and reported, DAMAGE in tests of the US fleet is relatively unknown. The current paper reports on DAMAGE in NCAP-like tests and in potential future frontal crash tests involving substantial rotation about all three axes of occupant heads. The distribution of DAMAGE for three-point-belted occupants without airbags is also discussed. Predictions of brain injury risk from the tests have been compared to risks in the real world. Although DAMAGE correlates well with MPS in the human brain model across several test scenarios, the predicted risk of AIS2+ brain injuries is too high compared to real-world experience. The prediction of AIS4+ brain injury risk in lower velocity crashes is good, but too high in NCAP-like and high-speed angular frontal crashes.
Subject(s)
Accidents, Traffic, Algorithms, Humans, Biomechanical Phenomena, Brain Injuries, Risk Assessment, Seat Belts
ABSTRACT
BACKGROUND: Patients with limited English proficiency (LEP) may have worse health outcomes and differences in processes of care. Language status may particularly affect situations that depend on communication, such as symptom management or end-of-life (EOL) care. OBJECTIVE: The objective of this study was to assess whether opioid prescribing and administration differs by English proficiency (EP) status among hospitalized patients receiving EOL care. METHODS: This single-center retrospective study identified all adult patients receiving "comfort care" on the general medicine service from January 2013 to September 2021. We assessed for differences in the quantity of opioids administered (measured by oral morphine equivalents [OME]) by patient LEP status using multivariable linear regression, controlling for other patient and medical factors. RESULTS: We identified 2652 patients receiving comfort care at our institution during the time period, of whom 1813 (68%) died during the hospitalization. There were no significant differences by LEP status in terms of mean OME per day (LEP received 30.8 fewer OME compared to EP, p = .91) or in the final 24 h before discharge (LEP received 61.7 more OME compared to EP, p = .80). CONCLUSION: LEP was not associated with differences in the amount of opioids received for patients whose EOL management involved standardized order sets for symptom management at our hospital.
Subject(s)
Analgesics, Opioid, Limited English Proficiency, Terminal Care, Humans, Analgesics, Opioid/therapeutic use, Analgesics, Opioid/administration & dosage, Retrospective Studies, Male, Female, Aged, Middle Aged, Inpatients, Hospitalization, Palliative Care
ABSTRACT
OBJECTIVE: The objective of this study was to estimate strains in the human brain in regulatory, research, and due-care frontal crashes by simulating those impacts. In addition, brain strains were estimated for belted human volunteer tests and for impacts between two players in the National Football League (NFL), some with no injury and some with mild traumatic brain injuries (mTBI). METHODS: The brain strain responses were determined using version 5 of the Global Human Body Models Consortium (GHBMC) 50th percentile human brain model. One hundred sixty simulations with the brain model were conducted using rotational velocities and accelerations of Anthropomorphic Test Devices (ATDs) or human volunteers in sled or crash tests as inputs to the model; strain-related responses, such as Maximum Principal Strain (MPS) and Cumulative Strain Damage Measure (CSDM) in various regions of the brain, were monitored. The simulated vehicle tests ranged from sled tests at 24 and 32 kph delta-V with three-point belts without airbags to full-scale crash and sled tests at 56 kph, along with a series of Research Mobile Deformable Barrier (RMDB) tests described in Prasad et al. RESULTS: The severity of rotational input into the model, as represented by BrIC, averaged between 0.5 and 1.2 across the various test conditions and was as high as 1.5 in an individual case. The MPS responses for the various test conditions averaged between 0.28 and 0.86 and were as high as 1.3 in one test condition. The MPS responses in the brain for volunteers, low-velocity sled, and NCAP tests were similar to those in the no-mTBI group in the NFL cases and consistent with real-world accident data. The MPS responses of the brain in angular crash and sled tests were similar to those in the mTBI group. CONCLUSIONS: The brain strain estimations do not indicate the likelihood of severe-to-fatal brain injuries in the crash environments studied in this paper.
However, using the risk functions associated with BrIC, severe-to-fatal brain injuries (AIS4+) are predicted in several environments in which they are not observed or expected.
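The BrIC metric referenced above is conventionally computed from the peak angular velocities of the head about its three axes. A minimal sketch follows; the critical angular velocities are the values commonly cited from the 2013 NHTSA formulation (Takhounts et al.) and should be verified against the original publication before any real use.

```python
import math

# Critical peak angular velocities (rad/s) for the x, y, z head axes,
# as commonly cited from the NHTSA BrIC formulation (Takhounts et al.,
# 2013). Treat these constants as reference values to be verified.
OMEGA_CRIT = (66.25, 56.45, 42.87)

def bric(omega_x, omega_y, omega_z):
    """BrIC from peak angular velocities (rad/s) about the head's axes."""
    return math.sqrt(sum((w / wc) ** 2
                         for w, wc in zip((omega_x, omega_y, omega_z),
                                          OMEGA_CRIT)))

# e.g., a pure-roll event at exactly the x-axis critical velocity
value = bric(66.25, 0.0, 0.0)  # BrIC = 1.0
```

Injury risk is then read off the risk functions associated with BrIC, which is where the abstract notes the over-prediction of severe-to-fatal injury relative to field data.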
Subject(s)
Air Bags, Brain Injuries, Humans, Accidents, Traffic, Acceleration, Brain, Biomechanical Phenomena
ABSTRACT
Background: Continuous electrocardiographic (ECG) monitoring is used to identify ventricular tachycardia (VT), but false alarms occur frequently. Objective: The purpose of this study was to compare the rate of 30-day in-hospital mortality associated with VT alerts generated by bedside ECG monitors with the rate associated with alerts from a new algorithm among intensive care unit (ICU) patients. Methods: We conducted a retrospective cohort study in consecutive adult ICU patients at an urban academic medical center and compared current bedside monitor VT alerts, VT alerts from a new-unannotated algorithm, and true-annotated VT. We used survival analysis to explore the association between VT alerts and mortality. Results: We included 5679 ICU admissions (mean age 58 ± 17 years; 48% women), of whom 503 (8.9%) experienced 30-day in-hospital mortality. A total of 30.1% had at least 1 current bedside monitor VT alert, 14.3% had a new-unannotated algorithm VT alert, and 11.6% had true-annotated VT. A bedside monitor VT alert was not associated with an increased rate of 30-day mortality (adjusted hazard ratio [aHR] 1.06; 95% confidence interval [CI] 0.88-1.27), but there was an association for VT alerts from our new-unannotated algorithm (aHR 1.38; 95% CI 1.12-1.69) and for true-annotated VT (aHR 1.39; 95% CI 1.12-1.73). Conclusion: New-unannotated and true-annotated VT were associated with an increased rate of 30-day in-hospital mortality, whereas current bedside monitor VT alerts were not. Our new algorithm may accurately identify high-risk VT; however, prospective validation is needed.
ABSTRACT
Introduction: Opioid administration is extremely common in the inpatient setting, yet we do not know how the administration of opioids varies across different medical conditions and patient characteristics on internal medicine services. Our goal was to assess racial, ethnic, and language-based inequities in opioid prescribing practices for patients admitted to internal medicine services. Methods: We conducted a retrospective cohort study of all adult patients admitted to internal medicine services from 2013 to 2021 and identified subcohorts of patients treated for the six most frequent primary hospital conditions (pneumonia, sepsis, cellulitis, gastrointestinal bleed, pyelonephritis/urinary tract infection, and respiratory disease) and three select conditions typically associated with pain (abdominal pain, acute back pain, and pancreatitis). We conducted a negative binomial regression analysis to determine how average administered daily opioids, measured as morphine milligram equivalents (MMEs), were associated with race, ethnicity, and language, while adjusting for additional patient demographics, hospitalization characteristics, medical comorbidities, prior opioid therapy, and substance use disorders. Results: The study cohort included 61,831 patient hospitalizations. In adjusted models, we found that patients with limited English proficiency received significantly fewer opioids (66 MMEs, 95% CI: 52, 80) compared to English-speaking patients (101 MMEs, 95% CI: 91, 111). Asian (59 MMEs, 95% CI: 51, 66), Latinx (89 MMEs, 95% CI: 79, 100), and multi-race/ethnicity patients (81 MMEs, 95% CI: 65, 97) received significantly fewer opioids compared to white patients (103 MMEs, 95% CI: 94, 112). American Indian/Alaska Native (227 MMEs, 95% CI: 110, 344) patients received significantly more opioids. Significant inequities were also identified across race, ethnicity, and language groups when analyses were conducted within the subcohorts. 
Most notably, Asian and Latinx patients received significantly fewer MMEs and American Indian/Alaska Native patients received significantly more MMEs compared to white patients for the top six most frequent conditions. Most patients from minority groups also received fewer MMEs compared to white patients for the three select pain conditions. Discussion: There are notable inequities in opioid prescribing based on patient race, ethnicity, and language status for those admitted to inpatient internal medicine services across all conditions and in the subcohorts of the six most frequent hospital conditions and three pain-associated conditions. This represents an institutional and societal opportunity for quality improvement initiatives to promote equitable pain management.
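The morphine milligram equivalent (MME) outcome used in this analysis can be illustrated with a small sketch. The conversion factors below are commonly cited oral equivalents (e.g., from CDC opioid guideline materials) and are shown for illustration only; the study's own conversion table may differ.

```python
# Commonly cited oral MME conversion factors; illustrative values,
# not necessarily those used by the study's institution.
MME_FACTOR = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "hydromorphone": 4.0,
    "codeine": 0.15,
    "tramadol": 0.1,
}

def daily_mme(doses):
    """Sum one day's opioid doses, given (drug, oral mg) pairs, as MMEs."""
    return sum(mg * MME_FACTOR[drug] for drug, mg in doses)

# e.g., oxycodone 10 mg twice plus hydromorphone 2 mg in one day
example = daily_mme([("oxycodone", 10), ("oxycodone", 10),
                     ("hydromorphone", 2)])  # 15 + 15 + 8 = 38 MME
```

Averaging this daily total over the hospitalization gives the exposure modeled in the negative binomial regression described above.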
Subject(s)
Analgesics, Opioid, Inpatients, Adult, Humans, Analgesics, Opioid/therapeutic use, Retrospective Studies, Drug Prescriptions, Practice Patterns, Physicians', Abdominal Pain, Pain, Postoperative/drug therapy
ABSTRACT
Importance: Extending the duration of oral anticoagulation for venous thromboembolism (VTE) beyond the initial 3 to 6 months of treatment is often recommended, but it is not clear whether clinical outcomes differ when using direct oral anticoagulants (DOACs) or warfarin. Objective: To compare rates of recurrent VTE, hospitalizations for hemorrhage, and all-cause death among adults prescribed DOACs or warfarin whose anticoagulant treatment was extended beyond 6 months after acute VTE. Design, Setting, and Participants: This cohort study was conducted in 2 integrated health care delivery systems in California with adults aged 18 years or older who received a diagnosis of incident VTE between 2010 and 2018 and completed at least 6 months of oral anticoagulant treatment with DOACs or warfarin. Patients were followed from the end of the initial 6-month treatment period until discontinuation of anticoagulation, occurrence of an outcome event, health plan disenrollment, or end of the study follow-up period (December 31, 2019). Data were obtained from the Kaiser Permanente Virtual Data Warehouse and electronic health records. Data analysis was conducted from March 2022 to January 2023. Exposure: Dispensed prescriptions of DOACs or warfarin after a 6-month initial treatment for VTE. Main Outcomes and Measures: The primary outcomes were rates per 100 person-years of recurrent VTE, hospitalizations for hemorrhage, and all-cause death. Comparisons of DOAC and warfarin outcomes were performed using multivariable Cox proportional hazards regression. Results: A total of 18,495 patients (5477 [29.6%] aged ≥75 years; 8973 women [48.5%]) with VTE who were treated with at least 6 months of anticoagulation were identified, of whom 2134 (11.5%) were receiving DOAC therapy and 16,361 (88.5%) were receiving warfarin therapy.
Unadjusted event rates were lower for patients receiving DOAC therapy than warfarin therapy for recurrent VTE (event rate per 100 person-years, 2.92 [95% CI, 2.29-3.54] vs 4.14 [95% CI, 3.90-4.38]), hospitalizations for hemorrhage (event rate per 100 person-years, 1.02 [95% CI, 0.66-1.39] vs 1.81 [95% CI, 1.66-1.97]), and all-cause death (event rate per 100 person-years, 3.79 [95% CI, 3.09-4.49] vs 5.40 [95% CI, 5.13-5.66]). After multivariable adjustment, DOAC treatment was associated with a lower risk of recurrent VTE (adjusted hazard ratio [aHR], 0.66; 95% CI, 0.52-0.82). For patients prescribed DOAC treatment, the risks of hospitalization for hemorrhage (aHR, 0.79; 95% CI, 0.54-1.17) and all-cause death (aHR, 0.96; 95% CI, 0.78-1.19) were not significantly different than those for patients prescribed warfarin treatment. Conclusions and Relevance: In this cohort study of patients with VTE who continued warfarin or DOAC anticoagulation beyond 6 months, DOAC treatment was associated with a lower risk of recurrent VTE, supporting the use of DOACs for the extended treatment of VTE in terms of clinical outcomes.
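The unadjusted event rates above are crude rates per 100 person-years. A minimal sketch of that calculation follows, with a simple normal-approximation 95% CI treating the event count as Poisson; the study's exact interval method is not stated, so the CI here is illustrative only.

```python
import math

def rate_per_100py(events, person_years):
    """Crude event rate per 100 person-years with a normal-approximation
    95% CI (treating the event count as Poisson). Returns (rate, lo, hi).
    """
    rate = 100.0 * events / person_years
    half = 1.96 * 100.0 * math.sqrt(events) / person_years
    return rate, rate - half, rate + half

# e.g., 50 events over 1000 person-years -> 5.0 per 100 person-years
rate, lo, hi = rate_per_100py(50, 1000.0)
```

Adjusted comparisons would then come from a Cox proportional hazards model rather than from these crude rates.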
Subject(s)
Venous Thromboembolism, Warfarin, Adult, Humans, Female, Warfarin/adverse effects, Venous Thromboembolism/drug therapy, Venous Thromboembolism/epidemiology, Cohort Studies, Anticoagulants/adverse effects, Hemorrhage/chemically induced, Hemorrhage/epidemiology
ABSTRACT
Importance: Physicians who attempt to continue breastfeeding after returning from childbearing leave identify numerous obstacles at work, which may affect job satisfaction, retention, and the diversity of the physician workforce. Objective: To study the association between improved lactation accommodation support and physician satisfaction. Design, Setting, and Participants: This cohort study compared the physician experience before and after a July 2020 intervention to improve physician lactation accommodation support at a large, urban, academic health system. The satisfaction of physicians returning from childbearing leave between July 1, 2018, and June 30, 2020 (preintervention), was compared with that of physicians returning from leave between July 1, 2020, and November 30, 2021 (postintervention). Initial data analysis was performed on February 22, 2022, with additional tests for interaction performed on May 18, 2023. Intervention: The intervention included creating functional lactation spaces, redesigning communication regarding lactation resources, establishing physician-specific lactation policies, and developing a program to reimburse faculty for time spent expressing breastmilk in the ambulatory setting. Main Outcomes and Measures: The main outcomes were (1) space improvements, use, and costs of the lactation accommodation program and (2) an ad hoc survey of physicians' reported experience with lactation accommodation support before and after the intervention. Survey data were collected using a 5-point Likert scale to assess physician perceptions of institutional support. Responses collected during the preintervention period were compared with those collected during the postintervention period using unpaired t tests. Results: In this study, 70 clinical faculty (mean [SD] age, 34.4 [2.9] years) took childbearing leave in the preintervention period compared with 52 (mean [SD] age, 34.8 [2.7] years) in the postintervention period. 
Fifty-eight physicians (83%) completed the preintervention survey and 48 completed the postintervention survey. When comparing the pre- and postintervention periods, faculty reported improvements in finding time in their clinical schedule to devote to pumping (mean [SD] response, 2.5 [1.3] vs 3.6 [1.5]; P < .001), initiatives to address the impact of lactation time on productivity (mean [SD] response, 2.0 [1.0] vs 3.0 [1.5]; P = .001), and a culture supportive of lactation (mean [SD] response, 2.8 [1.4] vs 3.4 [1.3]; P = .047). Forty childbearing faculty took advantage of lactation time reimbursement and were reimbursed a total of $242,744.37. Faculty whose return to work overlapped with the entire year of the study received financial support for lactation for a mean (SD) of 8.9 (0.2) months, with an average reimbursement of $9125.78. Conclusions and Relevance: The findings of this cohort study suggest that a multifaceted intervention to combat common challenges in lactation support in academic medical centers yielded improvements in faculty perceptions of institutional support for pumping breastmilk, addressing the impact of lactation time on productivity, and providing a culture supportive of lactation. These findings support the adoption of interventions to improve physician lactation accommodations.
Subject(s)
Breast Feeding, Physicians, Female, Humans, Adult, Cohort Studies, Faculty, Lactation
ABSTRACT
INTRODUCTION: Approximately 47% of women with an episode of preterm labor deliver at term; however, their infants are at greater risk of being small for gestational age and for neurodevelopmental disorders. In these cases, a pathologic insult may disrupt the homeostatic responses sustaining pregnancy. We tested the hypothesis of an involvement of components of the insulin-like growth factor (IGF) system. METHODS: This is a cross-sectional study in which maternal plasma concentrations of pregnancy-associated plasma protein (PAPP)-A, PAPP-A2, insulin-like growth factor-binding protein 1 (IGFBP-1), and IGFBP-4 were determined in the following groups of women: (1) no episodes of preterm labor, term delivery (controls, n = 100); (2) episode of preterm labor, term delivery (n = 50); (3) episode of preterm labor, preterm delivery (n = 100); (4) pregnant women at term not in labor (n = 61); and (5) pregnant women at term in labor (n = 61). Pairwise differences in maternal plasma concentrations of PAPP-A, PAPP-A2, IGFBP-1, and IGFBP-4 among study groups were assessed by fitting linear models on log-transformed data and included adjustment for relevant covariates. Significance of the group coefficient in the linear models was assessed via t-scores, with p < 0.05 deemed a significant result. RESULTS: Compared to controls, (1) women with an episode of premature labor, regardless of a preterm or a term delivery, had higher mean plasma concentrations of PAPP-A2 and IGFBP-1 (each p < 0.05); (2) women with an episode of premature labor who delivered at term also had a higher mean concentration of PAPP-A (p < 0.05); and (3) acute histologic chorioamnionitis and spontaneous labor at term were not associated with significant changes in these analytes. CONCLUSION: An episode of preterm labor involves the IGF system, supporting the view that the premature activation of parturition is a pathologic state, even in those women who delivered at term.
Subject(s)
Chorioamnionitis, Obstetric Labor, Premature, Somatomedins, Infant, Newborn, Female, Pregnancy, Humans, Insulin-Like Growth Factor Binding Protein 4/metabolism, Insulin-Like Growth Factor Binding Protein 1/metabolism, Cross-Sectional Studies, Pregnancy-Associated Plasma Protein-A/metabolism, Obstetric Labor, Premature/metabolism, Chorioamnionitis/metabolism, Somatomedins/metabolism, Amniotic Fluid/metabolism
ABSTRACT
In-hospital electrocardiographic (ECG) monitors are typically configured to alarm for premature ventricular complexes (PVCs) because of the potential association of PVCs with ventricular tachycardia (VT). However, no contemporary hospital-based studies have examined the association of PVCs with VT; hence, the benefit of PVC monitoring in hospitalized patients is largely unknown. This secondary analysis used a large PVC alarm data set to determine whether PVCs identified during continuous ECG monitoring were associated with VT, in-hospital cardiac arrest (IHCA), and/or death in a cohort of adult intensive care unit patients. Six PVC types were examined (i.e., isolated, bigeminy, trigeminy, couplets, R-on-T, and run PVCs) and were compared between patients with and without VT, IHCA, and/or death. Of 445 patients, 48 (10.8%) had VT, 11 (2.5%) had IHCA, and 49 (11%) died. Isolated and run PVC counts were higher in the VT group (p = 0.03 for both), but group differences were not seen for the other four PVC types. The regression models showed no significant associations between any of the six PVC types and VT or death, although confidence intervals were wide. Because of the small number of cases, we were unable to test for associations between PVCs and IHCA. Our findings call into question the clinical relevance of activating PVC alarms as a forewarning of VT; further work with larger sample sizes is warranted, as is a more precise characterization of clinically relevant PVCs that might be associated with VT.
Subject(s)
Tachycardia, Ventricular, Ventricular Premature Complexes, Adult, Humans, Ventricular Premature Complexes/diagnosis, Tachycardia, Ventricular/diagnosis, Electrocardiography
ABSTRACT
AIM: Nurses assess patients' pain using several validated tools. It is not known what disparities exist in pain assessment for medicine inpatients. Our purpose was to measure differences in pain assessment across patient characteristics, including race, ethnicity, and language status. METHODS: Retrospective cohort study of adult general medicine inpatients from 2013 to 2021. The primary exposures were race/ethnicity and limited English proficiency (LEP) status. The primary outcomes were (1) the type and odds of the pain assessment tool nurses used and (2) the relationship between pain assessments and daily opioid administration. RESULTS: Of 51,602 patient hospitalizations, 46.1% were white, 17.4% Black, 16.5% Asian, and 13.2% Latino; 13.2% of patients had LEP. The most common pain assessment tool was the Numeric Rating Scale (68.1%), followed by the Verbal Descriptor Scale (23.7%). Asian patients and patients with LEP were less likely to have their pain documented numerically. In multivariable logistic regression, patients with LEP (OR 0.61, 95% CI 0.58-0.65) and Asian patients (OR 0.74, 95% CI 0.70-0.78) had the lowest odds of numeric ratings. Latino, Multi-Racial, and patients classified as Other also had lower odds of numeric ratings than white patients. Asian patients and patients with LEP received the fewest daily opioids across all pain assessment categories. CONCLUSIONS: Asian patients and patients with LEP were less likely than other patient groups to have a numeric pain assessment and received the fewest opioids. These inequities may serve as the basis for the development of equitable pain assessment protocols.
Subject(s)
Opioid Analgesics, Ethnicity, Humans, Adult, Pain Measurement, Opioid Analgesics/therapeutic use, Retrospective Studies, Language, Pain/drug therapy
ABSTRACT
BACKGROUND: Identifying COVID-19 patients at the highest risk of poor outcomes is critical at emergency department (ED) presentation. Sepsis risk stratification scores can be calculated quickly for COVID-19 patients but have not been evaluated in a large cohort. OBJECTIVE: To determine whether well-known risk scores can predict poor outcomes among hospitalized COVID-19 patients. DESIGNS, SETTINGS, AND PARTICIPANTS: A retrospective cohort study of adults presenting with COVID-19 to 156 Hospital Corporation of America (HCA) Healthcare EDs, March 2, 2020, to February 11, 2021. INTERVENTION: Quick Sequential Organ Failure Assessment (qSOFA), Shock Index, National Early Warning System-2 (NEWS2), and quick COVID-19 Severity Index (qCSI) at presentation. MAIN OUTCOME AND MEASURES: The primary outcome was in-hospital mortality. Secondary outcomes included intensive care unit (ICU) admission, mechanical ventilation, and vasopressor receipt. Patients scored positive with qSOFA ≥ 2, Shock Index > 0.7, NEWS2 ≥ 5, and qCSI ≥ 4. Test characteristics and areas under the receiver operating characteristic curves (AUROCs) were calculated. RESULTS: We identified 90,376 patients with community-acquired COVID-19 (mean age 64.3 years, 46.8% female). 17.2% of patients died in-hospital, 28.6% went to the ICU, 13.7% received mechanical ventilation, and 13.6% received vasopressors. At ED triage, 3.8% of patients were qSOFA-positive, 45.1% Shock Index-positive, 49.8% NEWS2-positive, and 37.6% qCSI-positive. NEWS2 exhibited the highest AUROC for in-hospital mortality (0.593, confidence interval [CI]: 0.588-0.597), ICU admission (0.602, CI: 0.599-0.606), mechanical ventilation (0.614, CI: 0.610-0.619), and vasopressor receipt (0.600, CI: 0.595-0.604). CONCLUSIONS: Sepsis severity scores at presentation have low discriminative power to predict outcomes in COVID-19 patients and are not reliable for clinical use.
More effective risk-based triage will require severity scores built from features that accurately predict poor outcomes among COVID-19 patients.
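The four positivity cutoffs reported above are simple thresholds, and the Shock Index itself is just heart rate divided by systolic blood pressure. A minimal sketch; the component scoring for qSOFA, NEWS2, and qCSI is assumed to be computed elsewhere:

```python
def shock_index(heart_rate, systolic_bp):
    """Shock Index = heart rate (beats/min) / systolic BP (mm Hg)."""
    return heart_rate / systolic_bp


def flag_positive(qsofa, shock_idx, news2, qcsi):
    """Apply the study's positivity cutoffs to precomputed scores."""
    return {
        'qSOFA': qsofa >= 2,
        'Shock Index': shock_idx > 0.7,
        'NEWS2': news2 >= 5,
        'qCSI': qcsi >= 4,
    }
```

For example, a patient with heart rate 110 and systolic pressure 100 has a Shock Index of 1.1 and would screen positive on that score alone even with qSOFA of 1.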
Subject(s)
COVID-19, Sepsis, Adult, Humans, Female, Middle Aged, Male, COVID-19/diagnosis, Retrospective Studies, Point-of-Care Systems, Organ Dysfunction Scores, Emergency Service, Hospital, ROC Curve, Prognosis, Hospital Mortality, Intensive Care Units
ABSTRACT
OBJECTIVE: This study presents a comparison of the Test Device for Human Occupant Restraint (THOR) 50M and Hybrid III (HIII) 50M anthropomorphic test device (ATD) geometries and rear impact head and neck biofidelity to each other and to postmortem human surrogate (PMHS) data to evaluate the usefulness of the THOR in rear impact testing. METHODS: Both ATDs were scanned in a seated position on a rigid bench seat. A series of rear impact sled tests on the rigid bench seat, with no head restraint support, were conducted with a HIII-50M at 16 and 24 kph. Tests at each speed were performed twice with the THOR-50M to allow an assessment of its repeatability. The THOR-50M test results were compared to the results of a previous study that included PMHS. Rear impact sled tests with both ATDs in a modern seat were then conducted at 40 kph. RESULTS: The THOR-50M head was 48.4 mm farther rearward and 60.1 mm higher than the HIII-50M head when seated in the rigid bench seat. In the repeated rigid bench testing at 16 and 24 kph, the THOR-50M head longitudinal and vertical accelerations, upper neck moment, and overall kinematics showed good test-to-test repeatability. In the rigid bench tests at 16 kph, the THOR-50M neck experienced flexion prior to extension, whereas the HIII neck experienced only extension; at 24 kph, both ATDs experienced only extension. The THOR-50M head displaced farther rearward at both test velocities. The rigid bench tests show that the THOR-50M neck allows more extension motion, or articulation, than the HIII-50M neck, and that the head longitudinal and vertical accelerations, angular head kinematics, and upper neck moments were reasonably comparable between the ATDs. The THOR-50M results were closer to the average of the PMHS results than the HIII-50M results, with the exception of the upper neck.
In the 40 kph tests with a modern seat design, the THOR-50M produced more seatback deformation and greater head restraint loading than the HIII-50M, and its head backset distance was smaller. CONCLUSION: This study provides insight into the differences and similarities between the THOR-50M and HIII-50M ATD geometries, instrumentation responses, and kinematics, as well as the repeatability of the THOR-50M in rear impact testing. The overall geometries of the THOR-50M and the HIII-50M are similar; the seated head position of the THOR-50M is slightly farther rearward and higher than that of the HIII-50M. The results indicate that the THOR-50M matches the PMHS results more closely than the HIII-50M and may have improved neck biofidelity in rear impact testing, and that the studied THOR-50M responses are repeatable within expected test-to-test variations. Early data suggest that the THOR-50M can be used in rear impact testing, though a more complete understanding of its differences from the HIII ATDs will allow better correlation to the existing body of HIII rear impact testing.
Subject(s)
Accidents, Traffic, Physical Restraint, Humans, Cadaver, Head/physiology, Acceleration, Biomechanical Phenomena, Manikins
ABSTRACT
Candida albicans is a commensal organism of the human gastrointestinal tract and a prevalent opportunistic pathogen. It exhibits different morphogenic forms to survive in host niches with distinct environmental conditions (pH, temperature, oxidative stress, nutrients, serum, chemicals, radiation, etc.) and genetic factors (transcription factors and genes). The morphogenic forms of C. albicans include yeast, hyphal, and pseudohyphal cells; white, opaque, and transient gray cells; and planktonic and biofilm forms. These forms differ in parameters such as cellular phenotype, colony morphology, adhesion to solid surfaces, gene expression profile, and virulence traits. Each form is functionally distinct and responds discretely to the host immune system and antifungal drugs; hence, morphogenic plasticity is key to virulence. In this review, we address the characteristics and pathogenic potential of the different morphogenic forms and the conditions required for morphogenic transitions.
Subject(s)
Candida albicans, Transcription Factors, Humans, Candida albicans/genetics, Transcription Factors/genetics, Transcription Factors/metabolism, Yeasts/metabolism, Virulence/genetics, Biofilms, Hyphae/genetics, Hyphae/metabolism, Gene Expression Regulation, Fungal
ABSTRACT
BACKGROUND: False ventricular tachycardia (VT) alarms are common during in-hospital electrocardiographic (ECG) monitoring, and prior research shows that the majority of false VT alarms can be attributed to algorithm deficiencies. PURPOSE: The purpose of this study was (1) to describe the creation of a VT database annotated by ECG experts and (2) to determine true vs. false VT using a new VT algorithm created by our group. METHODS: The VT algorithm was applied to 5320 consecutive ICU patients with 572,574 h of ECG and physiologic monitoring. A search algorithm identified potential VT, defined as heart rate > 100 beats/min, QRS duration > 120 ms, and a change in QRS morphology in > 6 consecutive beats compared with the preceding native rhythm. Seven ECG channels, SpO2, and arterial blood pressure waveforms were processed and loaded into a web-based annotation software program, and five PhD-prepared nurse scientists performed the annotations. RESULTS: Of the 5320 ICU patients, 858 (16.13%) had 22,325 VT events. After three levels of iterative annotation, 11,970 (53.62%) were adjudicated as true, 6485 (29.05%) as false, and 3870 (17.33%) remained unresolved. The unresolved VT events were concentrated in 17 patients (1.98%). Of the 3870 unresolved VT events, 85.7% (n = 3281) were confounded by ventricular paced rhythm, 10.8% (n = 414) by underlying bundle branch block (BBB), and 3.5% (n = 133) by a combination of both. CONCLUSIONS: The database described here represents the single largest human-annotated VT database to date. It includes consecutive ICU patients with true, false, and challenging (unresolved) VT events and could serve as a gold standard for developing and testing new VT algorithms.
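The search rule described above (heart rate > 100 beats/min, QRS duration > 120 ms, and a morphology change in > 6 consecutive beats) can be sketched as a scan over per-beat measurements. The dict keys below are hypothetical; the study's actual feature extraction and morphology comparison are not shown:

```python
def candidate_vt(beats):
    """Return True if any stretch of more than 6 consecutive beats
    satisfies all three criteria of the search rule.

    beats: list of dicts with 'hr' (beats/min), 'qrs_ms' (QRS duration),
    and 'morph_changed' (bool vs. the preceding native rhythm).
    """
    streak = 0
    for beat in beats:
        if beat['hr'] > 100 and beat['qrs_ms'] > 120 and beat['morph_changed']:
            streak += 1
            if streak > 6:  # strictly more than 6 consecutive beats
                return True
        else:
            streak = 0  # any non-qualifying beat resets the run
    return False
```

Note that the rule only flags *candidate* episodes; as the annotation results show, roughly half of the flagged events were adjudicated as true VT.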
Subject(s)
Electrocardiography, Ventricular Tachycardia, Humans, Ventricular Tachycardia/diagnosis, Cardiac Arrhythmias, Heart Ventricles, Algorithms
ABSTRACT
Importance: Patients hospitalized with COVID-19 have higher rates of venous thromboembolism (VTE), but the risk and predictors of VTE among individuals with less severe COVID-19 managed in outpatient settings are less well understood. Objectives: To assess the risk of VTE among outpatients with COVID-19 and identify independent predictors of VTE. Design, Setting, and Participants: A retrospective cohort study was conducted at 2 integrated health care delivery systems in Northern and Southern California. Data for this study were obtained from the Kaiser Permanente Virtual Data Warehouse and electronic health records. Participants included nonhospitalized adults aged 18 years or older with COVID-19 diagnosed between January 1, 2020, and January 31, 2021, with follow-up through February 28, 2021. Exposures: Patient demographic and clinical characteristics identified from integrated electronic health records. Main Outcomes and Measures: The primary outcome was the rate per 100 person-years of diagnosed VTE, which was identified using an algorithm based on encounter diagnosis codes and natural language processing. Multivariable regression using a Fine-Gray subdistribution hazard model was used to identify variables independently associated with VTE risk. Multiple imputation was used to address missing data. Results: A total of 398,530 outpatients with COVID-19 were identified. The mean (SD) age was 43.8 (15.8) years, 53.7% were women, and 54.3% were of self-reported Hispanic ethnicity. There were 292 (0.1%) VTE events identified over the follow-up period, for an overall rate of 0.26 (95% CI, 0.24-0.30) per 100 person-years. The sharpest increase in VTE risk was observed during the first 30 days after COVID-19 diagnosis (unadjusted rate, 0.58; 95% CI, 0.51-0.67 per 100 person-years vs 0.09; 95% CI, 0.08-0.11 per 100 person-years after 30 days).
In multivariable models, the following variables were associated with a higher risk for VTE in the setting of nonhospitalized COVID-19: age 55 to 64 years (HR 1.85 [95% CI, 1.26-2.72]), 65 to 74 years (3.43 [95% CI, 2.18-5.39]), 75 to 84 years (5.46 [95% CI, 3.20-9.34]), greater than or equal to 85 years (6.51 [95% CI, 3.05-13.86]), male gender (1.49 [95% CI, 1.15-1.96]), prior VTE (7.49 [95% CI, 4.29-13.07]), thrombophilia (2.52 [95% CI, 1.04-6.14]), inflammatory bowel disease (2.43 [95% CI, 1.02-5.80]), body mass index 30.0-39.9 (1.57 [95% CI, 1.06-2.34]), and body mass index greater than or equal to 40.0 (3.07 [95% CI, 1.95-4.83]). Conclusions and Relevance: In this cohort study of outpatients with COVID-19, the absolute risk of VTE was low. Several patient-level factors were associated with higher VTE risk; these findings may help identify subsets of patients with COVID-19 who may benefit from more intensive surveillance or VTE preventive strategies.
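The reported overall rate of 0.26 per 100 person-years is a crude incidence rate: events divided by total follow-up time, scaled to 100 person-years. A minimal sketch using a standard log-normal approximation for the Poisson 95% CI (not necessarily the interval method the study used):

```python
import math


def rate_per_100_person_years(events, person_years):
    """Crude incidence rate per 100 person-years with an approximate
    95% CI computed as rate * exp(+/- 1.96 / sqrt(events))."""
    rate = 100 * events / person_years
    half_width = 1.96 / math.sqrt(events)
    return rate, rate * math.exp(-half_width), rate * math.exp(half_width)
```

With 292 events, this approximation yields a CI of roughly +/- 11% around the point estimate, consistent in width with the 0.24-0.30 interval reported above.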