ABSTRACT
BACKGROUND AND AIMS: Vitamin D is known to influence the risk of cardiovascular disease, which is a recognized risk factor for sudden cardiac arrest (SCA). However, the relationship between vitamin D and SCA is not well understood. Therefore, this study aimed to investigate the association between vitamin D and SCA in out-of-hospital cardiac arrest (OHCA) patients compared with healthy controls. METHODS AND RESULTS: Using the Phase II Cardiac Arrest Pursuit Trial with Unique Registration and Epidemiologic Surveillance (CAPTURES II) registry, a 1:1 propensity score-matched case-control study was conducted between 2017 and 2020. Serum 25-hydroxyvitamin D (vitamin D) levels in patients with OHCA (n = 454) and healthy controls (n = 454) were compared after matching for age, sex, cardiovascular risk factors, and lifestyle behaviors. The mean vitamin D levels were 14.5 ± 7.6 and 21.3 ± 8.3 ng/mL among SCA cases and controls, respectively. Logistic regression analysis was used, adjusting for cardiovascular risk factors, lifestyle behaviors, corrected serum calcium levels, and estimated glomerular filtration rate (eGFR). The adjusted odds ratio (aOR) for vitamin D was 0.89 (95% confidence interval [CI] 0.87-0.91). The dose-response relationship demonstrated that vitamin D deficiency was associated with SCA incidence (severe deficiency, aOR 10.87, 95% CI 4.82-24.54; moderate deficiency, aOR 2.24, 95% CI 1.20-4.20). CONCLUSION: Vitamin D deficiency was independently and strongly associated with an increased risk of SCA, irrespective of cardiovascular and lifestyle factors, corrected calcium levels, and eGFR.
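To make the matching-plus-adjustment pipeline concrete, here is a minimal Python sketch of 1:1 nearest-neighbor propensity-score matching followed by an adjusted logistic model. The column names, covariate set, and greedy matching-with-replacement strategy are illustrative assumptions, not the actual CAPTURES II analysis.

```python
# Hedged sketch: 1:1 propensity-score matching, then adjusted logistic
# regression for the vitamin D odds ratio. All names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_1to1(df: pd.DataFrame, covariates: list, case_col: str) -> pd.DataFrame:
    """Greedy nearest-neighbor matching on the propensity score
    (with replacement, for simplicity)."""
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df[case_col])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])
    cases = df[df[case_col] == 1]
    controls = df[df[case_col] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(controls[["ps"]])
    _, idx = nn.kneighbors(cases[["ps"]])
    return pd.concat([cases, controls.iloc[idx.ravel()]])

# After matching, the aOR per ng/mL of vitamin D would come from a logit model:
# fit = sm.Logit(matched["sca"], sm.add_constant(matched[["vitamin_d",
#        "smoking", "diabetes", "egfr", "corrected_ca"]])).fit()
# aOR = np.exp(fit.params["vitamin_d"])   # the abstract reports 0.89
```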
Subjects
Biomarkers, Sudden Cardiac Death, Out-of-Hospital Cardiac Arrest, Registries, Vitamin D Deficiency, Vitamin D, Humans, Vitamin D Deficiency/blood, Vitamin D Deficiency/epidemiology, Vitamin D Deficiency/complications, Vitamin D Deficiency/diagnosis, Male, Female, Vitamin D/blood, Vitamin D/analogs & derivatives, Middle Aged, Case-Control Studies, Risk Assessment, Aged, Sudden Cardiac Death/epidemiology, Sudden Cardiac Death/etiology, Sudden Cardiac Death/prevention & control, Out-of-Hospital Cardiac Arrest/blood, Out-of-Hospital Cardiac Arrest/diagnosis, Out-of-Hospital Cardiac Arrest/epidemiology, Out-of-Hospital Cardiac Arrest/physiopathology, Risk Factors, Biomarkers/blood
ABSTRACT
PURPOSE: The aim of the study was to investigate the diagnostic accuracy of initial and post-fluid resuscitation lactate levels in predicting 28-day mortality. MATERIALS AND METHODS: We retrospectively analyzed a multicenter registry of suspected septic shock cases prospectively collected between October 2015 and December 2018 from 11 emergency departments. The primary outcome was 28-day mortality. The diagnostic performance of the initial and post-fluid resuscitation lactate levels as predictors of 28-day mortality was assessed. RESULTS: A total of 2568 patients were included in the final analysis. The overall 28-day mortality rate was 23%. The area under the receiver operating characteristic curve (AUROC) of initial lactate for predicting 28-day mortality was 0.66 (95% CI, 0.64-0.69), that of post-fluid resuscitation lactate was 0.70 (95% CI, 0.67-0.72), and the difference was significant (p < 0.001). The optimal cutoff point of post-fluid resuscitation lactate was 4.4 mmol/L. Compared with this cutoff, the Sepsis-3 criterion of a lactate level of 2 mmol/L or more was more sensitive but less specific for predicting 28-day mortality. CONCLUSION: The post-fluid resuscitation lactate level was more accurate than the initial lactate level in predicting 28-day mortality in patients with suspected septic shock.
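As a toy illustration of how an AUROC and a Youden-index cutoff such as the 4.4 mmol/L reported above can be computed, the following sketch uses scikit-learn. The arrays are fabricated placeholders, and the paper's comparison of two correlated AUROCs would additionally require a test such as DeLong's, which scikit-learn does not provide.

```python
# Hedged sketch: AUROC and Youden-index cutoff for post-resuscitation
# lactate; the data below are toy values, not the registry data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

lactate_post = np.array([1.2, 3.8, 5.1, 2.0, 6.3, 4.5])  # mmol/L (toy)
died_28d = np.array([0, 0, 1, 0, 1, 1])                  # 28-day mortality

auc = roc_auc_score(died_28d, lactate_post)
fpr, tpr, thr = roc_curve(died_28d, lactate_post)
best_cutoff = thr[np.argmax(tpr - fpr)]  # maximizes sensitivity + specificity - 1
print(f"AUROC={auc:.2f}, Youden cutoff={best_cutoff} mmol/L")
```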
Subjects
Hospital Emergency Service, Fluid Therapy, Lactic Acid/blood, Resuscitation/methods, Septic Shock/mortality, Septic Shock/therapy, Aged, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Prognosis, Registries, Republic of Korea/epidemiology, Retrospective Studies
ABSTRACT
AIM: We evaluated the relationship between hyperkalemia and a wide QRS complex in patients with pulseless electrical activity (PEA) cardiac arrest. METHODS: This was a single-center, retrospective observational study of patients over the age of 18 treated for cardiac arrest at a tertiary referral hospital whose initial electrocardiogram rhythm was PEA, from February 2010 to December 2019. Wide QRS PEA was defined as a QRS interval of 120 ms or more. Hyperkalemia was defined as a serum potassium level > 5.5 mmol/L. The primary outcome was hyperkalemia. Multivariable logistic regression analysis was used to evaluate the relationship between wide QRS and hyperkalemia. RESULTS: Among 617 patients, we analyzed 111 episodes in the wide QRS group and 506 episodes in the narrow QRS group. The potassium level in the wide QRS group was significantly higher than that in the narrow QRS group (5.4 mmol/L, IQR 4.4-6.7 vs. 4.6 mmol/L, IQR 4.0-5.6; P < 0.001). In the wide QRS group, 49.6% (n = 55/111) of patients had hyperkalemia, significantly more than the 26.7% (n = 135/506) in the narrow QRS group (P < 0.001). In multivariable logistic regression analysis, wide QRS PEA was significantly associated with hyperkalemia (odds ratio = 2.86, 95% confidence interval: 1.80-4.53, P < 0.001). CONCLUSIONS: Wide QRS PEA as an initial cardiac rhythm was significantly associated with hyperkalemia in cardiac arrest patients.
Subjects
Hyperkalemia/diagnosis, Out-of-Hospital Cardiac Arrest/diagnosis, Aged, Electrocardiography, Hospital Emergency Service, Female, Humans, Male, Middle Aged, Republic of Korea, Retrospective Studies
ABSTRACT
OBJECTIVE: We aimed to describe the clinical manifestations of patients with sepsis who had the hollow adrenal gland sign (HAGS) during the acute phase of resuscitation and to evaluate its value in predicting in-hospital mortality. METHODS: We performed a single-center, retrospective study of patients with sepsis who visited the emergency department (ED) from November 2015 to December 2018. The patients were categorized into positive HAGS (pHAGS) and negative HAGS (nHAGS) groups based on its presence on initial dual-phase contrast-enhanced abdominal computed tomography (CT). The primary outcome was in-hospital mortality. A multiple logistic regression model was developed to assess variables related to in-hospital mortality. RESULTS: In all, 156 patients were included, and 36.5% (n = 57) were assigned to the pHAGS group. Both the maximal Sequential Organ Failure Assessment score within 24 h after ED arrival (10, interquartile range [IQR] 7-13 vs. 8, IQR 6-10, p < 0.01) and the APACHE II score (24, IQR 20-31 vs. 20, IQR 17-25, p < 0.01) were significantly higher in the pHAGS group than in the nHAGS group; the former group received significantly more interventions, including vasopressors, renal replacement therapy, mechanical ventilation, and transfusions; and in-hospital mortality was significantly higher in the former than in the latter group (29.8% vs. 10.1%, p < 0.01). pHAGS was an independent predictor of in-hospital mortality (adjusted odds ratio, 2.89; 95% confidence interval, 1.08-7.78; p = 0.04). CONCLUSIONS: Patients with sepsis who showed the HAGS had more severe illness than those who did not and had an increased need for organ-supportive interventions. Presence of the HAGS was independently associated with in-hospital mortality.
Subjects
Adrenal Glands/diagnostic imaging, Hospital Mortality, Sepsis/diagnostic imaging, APACHE, Aged, Blood Transfusion, Contrast Media, Critical Illness, Female, Humans, Logistic Models, Male, Middle Aged, Multidetector Computed Tomography, Organ Dysfunction Scores, Prognosis, Renal Replacement Therapy, Artificial Respiration, Retrospective Studies, Sepsis/therapy, Severity of Illness Index, Septic Shock/diagnostic imaging, Septic Shock/therapy, Vasoconstrictor Agents/therapeutic use
ABSTRACT
Background and Objectives: This retrospective study evaluated the clinical impact of enhanced personal protective equipment (PPE) on clinical outcomes in patients with out-of-hospital cardiac arrest. Moreover, focusing on the use of a powered air-purifying respirator (PAPR), we investigated medical personnel's perceptions of wearing a PAPR during cardiopulmonary resuscitation. Materials and Methods: According to the arrival time at the emergency department, patients were categorized into a conventional PPE group (1 August 2019 to 20 January 2020) and an enhanced PPE group (21 January 2020 to 31 August 2020). The primary outcome of this analysis was the return of spontaneous circulation (ROSC) rate. Additionally, the subjective perception of the medical staff regarding the effect of wearing enhanced PPE during cardiopulmonary resuscitation (CPR) was evaluated by conducting a survey. Results: This study included 130 out-of-hospital cardiac arrest (OHCA) patients, with 73 and 57 patients in the conventional and enhanced PPE groups, respectively. The median time intervals to first intubation and to report of the first arterial blood gas analysis results were longer in the enhanced PPE group than in the conventional PPE group (3 min vs. 2 min; p = 0.020 and 8 min vs. 3 min; p < 0.001, respectively). However, there were no significant differences in the ROSC rate (odds ratio (OR) = 0.79, 95% confidence interval (CI): 0.38-1.67; p = 0.542) or 1-month survival (OR 0.38, 95% CI: 0.07-2.10; p = 0.266) between the two groups. In total, 67 emergency department (ED) professionals responded to the questionnaire. Although a significant number of respondents experienced inconveniences with PAPR use, they agreed that PAPR was necessary during CPR for protection and reduction of infection transmission. Conclusion: The use of enhanced PPE, including PAPR, affected the performance of CPR to some extent but did not alter patient outcomes. PAPR use during the resuscitation of OHCA patients might positively impact the psychological stability of the medical staff.
Subjects
Coronavirus, Out-of-Hospital Cardiac Arrest, Humans, Out-of-Hospital Cardiac Arrest/therapy, Pandemics, Personal Protective Equipment, Retrospective Studies
ABSTRACT
Background and Objectives: We aimed to compare the accuracy of positive quick Sequential Organ Failure Assessment (qSOFA) scores and the RED sign in predicting critical care requirements (CCRs) in patients with suspected infection who presented to the emergency department (ED). Materials and Methods: In this retrospective observational study, we examined adult patients with suspected infection in the ED from June 2018 to September 2018. A positive qSOFA (qSOFA+) was defined as the presence of ≥2 of the following criteria: altered mental status (AMS), systolic blood pressure (SBP) < 100 mmHg, and respiratory rate (RR) ≥ 22 breaths/min. A positive RED sign (RED+) was defined as the presence of at least one of the RED sign criteria: AMS, skin mottling, SBP < 90 mmHg, heart rate > 130 beats/min, or RR > 30 breaths/min. A qSOFA/RED+ was defined as the presence of qSOFA+ or RED+. We applied these tools twice, using the initial values upon ED arrival and all values within 2 h after ED arrival. The accuracy of qSOFA+, RED+, and qSOFA/RED+ in predicting CCR was assessed. Results: Data from 5353 patients with suspected infection were analyzed. The area under the receiver operating characteristic curve (AUC) of RED+ (0.67, 95% confidence interval [CI]: 0.65-0.70) and that of qSOFA/RED+ (0.68, 95% CI: 0.66-0.70, p < 0.01) were higher than that of qSOFA+ (0.59, 95% CI: 0.57-0.60) in predicting CCR on ED arrival. qSOFA/RED+ within 2 h showed the highest accuracy (AUC 0.72, 95% CI: 0.70-0.75, p < 0.001). Conclusions: The RED sign was more accurate than qSOFA in predicting CCR in patients with suspected infection who presented to the ED. The combined use of the RED sign and qSOFA (positive qSOFA or RED sign) showed the highest accuracy.
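Because qSOFA+ and RED+ are simple threshold rules, they translate directly into code. The sketch below implements the definitions exactly as stated above; only the argument names are invented.

```python
# Sketch of the qSOFA+, RED+, and combined qSOFA/RED+ rules from the abstract.
def qsofa_positive(ams: bool, sbp: float, rr: float) -> bool:
    """Positive if >= 2 of: AMS, SBP < 100 mmHg, RR >= 22 /min."""
    return (int(ams) + int(sbp < 100) + int(rr >= 22)) >= 2

def red_positive(ams: bool, mottling: bool, sbp: float, hr: float, rr: float) -> bool:
    """Positive if >= 1 of: AMS, mottling, SBP < 90, HR > 130, RR > 30."""
    return ams or mottling or sbp < 90 or hr > 130 or rr > 30

def qsofa_or_red(ams, mottling, sbp, hr, rr) -> bool:
    return qsofa_positive(ams, sbp, rr) or red_positive(ams, mottling, sbp, hr, rr)

print(qsofa_or_red(ams=False, mottling=True, sbp=105, hr=112, rr=24))  # -> True
```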
Subjects
Critical Care/statistics & numerical data, Organ Dysfunction Scores, Sepsis/diagnosis, Symptom Assessment/statistics & numerical data, Aged, Area Under Curve, Hospital Emergency Service, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Prognosis, ROC Curve, Retrospective Studies, Risk Assessment, Symptom Assessment/methods
ABSTRACT
Background and Objectives: To compare the first-pass success (FPS) rate of the C-MAC video laryngoscope (C-MAC) and conventional Macintosh-type direct laryngoscopy (DL) during cardiopulmonary resuscitation (CPR) in the emergency department (ED). Materials and Methods: This was a single-center, retrospective study conducted from April 2014 to July 2018. Patients were categorized into either the C-MAC or the DL group according to the device used on the first endotracheal intubation (ETI) attempt. The primary outcome was the FPS rate. A multiple logistic regression model was developed to identify factors related to FPS. Results: A total of 573 ETIs were performed. Of the eligible cases, 263 and 310 patients were assigned to the C-MAC and DL groups, respectively. The overall FPS rate was 75% (n = 431/573). The FPS rate was higher in the C-MAC group than in the DL group, but the unadjusted difference was not statistically significant (79% vs. 72%, p = 0.075). In the multiple logistic regression analysis, C-MAC use was associated with a higher FPS rate than DL use (adjusted odds ratio: 1.80; 95% CI, 1.17-2.77; p = 0.007). Conclusions: C-MAC use on the first ETI attempt during CPR in the ED was associated with a higher FPS rate than DL use.
Subjects
Endotracheal Intubation/instrumentation, Laryngoscopes/standards, Resuscitation/instrumentation, Aged, Airway Management, Hospital Emergency Service/organization & administration, Hospital Emergency Service/statistics & numerical data, Female, Humans, Endotracheal Intubation/methods, Laryngoscopes/adverse effects, Male, Middle Aged, Health Care Outcome Assessment/standards, Health Care Outcome Assessment/statistics & numerical data, Resuscitation/methods, Retrospective Studies
ABSTRACT
Background: Lactate is a commonly used biomarker for sepsis, although it has limitations in certain cases, suggesting the need for novel biomarkers. We evaluated the diagnostic accuracy of plasma renin concentration and renin activity for mortality and kidney outcomes in patients with sepsis with hypoperfusion or hypotension. Methods: This was a multicenter, prospective, observational study of 117 patients with septic shock treated at three tertiary emergency departments between September 2021 and October 2022. The accuracy of renin activity, renin, and lactate concentrations in predicting 28-day mortality, acute kidney injury (AKI), and renal replacement therapy requirement was assessed using area under the ROC curve (AUC) analysis. Results: The AUCs of initial renin activity, renin, and lactate concentrations for predicting 28-day mortality were 0.66 (95% confidence interval [CI], 0.55-0.77), 0.63 (95% CI, 0.52-0.75), and 0.65 (95% CI, 0.53-0.77), respectively, and those at 24 h were 0.74 (95% CI, 0.62-0.86), 0.70 (95% CI, 0.56-0.83), and 0.67 (95% CI, 0.54-0.79). Renin concentration and renin activity outperformed initial lactate concentration in predicting AKI within 14 days: the AUCs of renin and lactate concentrations were 0.71 (95% CI, 0.61-0.80) and 0.57 (95% CI, 0.46-0.67), respectively (P=0.030), and the AUC of renin activity (0.70; 95% CI, 0.60-0.80) was also higher than that of lactate concentration (P=0.044). Conclusions: Renin concentration and renin activity show performance comparable to lactate concentration in predicting 28-day mortality in patients with septic shock but superior performance in predicting AKI.
Subjects
Acute Kidney Injury, Area Under Curve, Biomarkers, Hypotension, Lactic Acid, ROC Curve, Renin, Septic Shock, Humans, Renin/blood, Septic Shock/mortality, Septic Shock/blood, Septic Shock/diagnosis, Septic Shock/complications, Prospective Studies, Male, Female, Aged, Middle Aged, Acute Kidney Injury/diagnosis, Acute Kidney Injury/mortality, Acute Kidney Injury/blood, Hypotension/diagnosis, Hypotension/blood, Hypotension/complications, Hypotension/mortality, Biomarkers/blood, Lactic Acid/blood
ABSTRACT
Aim: We assessed the efficacy of anti-hyperkalemic agents for alleviating hyperkalemia and improving clinical outcomes in patients with out-of-hospital cardiac arrest (OHCA). Methods: This was a single-center, retrospective observational study of OHCA patients treated at a tertiary hospital between 2010 and 2020. Adult patients aged 18 years or older who were in cardiac arrest at the time of arrival and had records of potassium levels measured during cardiac arrest were included. A linear regression model was used to evaluate the relationship between changes in potassium levels and the use of anti-hyperkalemic medications. Cox proportional hazards regression analysis was performed to analyze the relationship between the use of anti-hyperkalemic agents and the achievement of return of spontaneous circulation (ROSC). Results: Among 839 episodes, 465 patients received anti-hyperkalemic medication before ROSC. The rate of ROSC was higher in the no-anti-hyperkalemic group than in the anti-hyperkalemic group (55.9% vs. 47.7%, P = 0.019). The decrease in potassium level from pre-ROSC to post-ROSC was significantly greater in the anti-hyperkalemic group than in the no-anti-hyperkalemic group (coefficient 0.38, 95% confidence interval [CI], 0.13-0.64, P = 0.003). In the Cox proportional hazards regression analysis, the use of anti-hyperkalemic medication was associated with a decreased ROSC rate in the overall group (adjusted hazard ratio [aHR] 0.66, 95% CI, 0.54-0.81, P < 0.001), but there were no differences among subgroups classified according to initial potassium levels. Conclusions: Anti-hyperkalemic agents were associated with substantial decreases in potassium levels in OHCA patients. However, administration of anti-hyperkalemic agents did not improve the achievement of ROSC.
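Treating time to ROSC as a survival outcome, the Cox model can be sketched with the lifelines library as below. The toy dataframe, its column names, and the single covariate are assumptions; the published model adjusted for additional covariates not shown here.

```python
# Hedged sketch: Cox proportional hazards model for time to ROSC.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "minutes_to_rosc": [12, 25, 40, 18, 33, 60, 22, 45],  # follow-up time (toy)
    "rosc": [1, 1, 0, 1, 0, 1, 1, 0],                     # 1 = ROSC achieved
    "anti_hyperkalemic": [0, 1, 1, 0, 1, 0, 0, 1],        # treatment indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="minutes_to_rosc", event_col="rosc")
print(cph.hazard_ratios_)  # HR < 1 = lower instantaneous rate of achieving ROSC
```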
ABSTRACT
As point-of-care ultrasound (POCUS) is increasingly being used in clinical settings, ultrasound education is expanding into student curricula. We aimed to determine the status and awareness of POCUS education in Korean medical schools using a nationwide cross-sectional survey. In October 2021, a survey questionnaire consisting of 20 questions was distributed via e-mail to professors in the emergency medicine (EM) departments of Korean medical schools. The questionnaire comprised 19 multiple-choice questions covering demographics, current education, perceptions, and barriers, and a final open-ended question seeking suggestions for POCUS education. All EM departments of the 40 medical schools responded, of which only 13 (33%) reported providing POCUS education. POCUS education was implemented primarily in the third and fourth years, with less than 4 hours of dedicated training time. Five schools offered hands-on education. Among the schools offering ultrasound education, POCUS training for trauma cases was the most common. Eight schools had designated professors responsible for POCUS education, and only two possessed educational ultrasound devices. Of the respondents, 64% believed that POCUS education for medical students is necessary, whereas 36%, including those with neutral opinions, did not consider it important. The identified barriers to POCUS education included faculty shortages (83%), infrastructure limitations (76%), training time constraints (74%), and limited awareness of POCUS (29%). POCUS education in Korean medical schools was limited to a minority of EM departments (33%). To successfully implement POCUS education in medical curricula, it is crucial to clarify learning objectives, enhance faculty recognition, and improve infrastructure. These findings provide valuable insights for advancing ultrasound training in medical schools to ensure high-quality POCUS education for future healthcare professionals.
Subjects
Curriculum, Point-of-Care Systems, Medical Schools, Ultrasonography, Cross-Sectional Studies, Humans, Republic of Korea, Ultrasonography/statistics & numerical data, Surveys and Questionnaires, Emergency Medicine/education
ABSTRACT
We investigated the prognostic performance of scoring systems by intensive care unit (ICU) type. This was a retrospective observational study using data from the Medical Information Mart for Intensive Care (MIMIC)-IV database. The primary outcome was in-hospital mortality. We obtained Sequential Organ Failure Assessment (SOFA), Acute Physiology and Chronic Health Evaluation (APACHE) III, and Simplified Acute Physiology Score (SAPS) II scores for each ICU type. Prognostic performance was evaluated with the area under the receiver operating characteristic curve (AUROC) and compared among ICU types. A total of 29,618 patients were analyzed, and in-hospital mortality was 12.4%. The overall prognostic performance of APACHE III was significantly higher than those of SOFA and SAPS II (AUROC 0.807 [95% confidence interval, 0.799-0.814], 0.785 [0.773-0.797], and 0.795 [0.787-0.811], respectively). The prognostic performance of the SOFA, APACHE III, and SAPS II scores differed significantly between ICU types: the AUROC ranges were 0.723-0.826, 0.728-0.860, and 0.759-0.819, respectively. The neurosurgical and surgical ICUs had lower prognostic performance than other ICU types. The prognostic performance of scoring systems in patients with suspected infection differs significantly according to ICU type. The APACHE III system had the highest prediction performance. ICU type may be a significant factor in prognostication.
ABSTRACT
Objective/Introduction: Sequential vital-sign information and trends in vital signs are useful for predicting changes in patient state. This study aimed to predict latent shock by observing sequential changes in patient vital signs. Methods: The dataset for this retrospective study contained a total of 93,194 emergency department (ED) visits made between January 1, 2016, and December 31, 2020, together with Medical Information Mart for Intensive Care (MIMIC)-IV-ED data. We divided the data into training and validation datasets by random sampling without replacement at a 7:3 ratio and carried out external validation with MIMIC-IV-ED. Our prediction models included logistic regression (LR), a random forest (RF) classifier, a multilayer perceptron (MLP), and a recurrent neural network (RNN). To analyze model performance, we used the area under the receiver operating characteristic curve (AUROC). Results: Data from 89,250 visits of patients who met prespecified criteria were used to develop the latent-shock prediction model. Data from 142,250 patient visits from MIMIC-IV-ED satisfying the same inclusion criteria were used for external validation. The AUROC values for predicting latent shock 3 h in advance were 0.822, 0.841, 0.852, and 0.830 with the RNN, MLP, RF, and LR methods, respectively, all higher than the shock index or adjusted shock index. Conclusion: We developed a latent-shock prediction model based on 24 h of time-varying vital-sign sequences that predicts latent shock at the individual level.
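For the RNN arm, a minimal PyTorch sketch of a sequence classifier over a 24 h vital-sign window is shown below. The feature set, layer sizes, and prediction head are illustrative assumptions rather than the authors' actual architecture.

```python
# Minimal sketch of a recurrent latent-shock classifier over a 24 h window
# of hourly vital signs; feature set and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class VitalSignRNN(nn.Module):
    def __init__(self, n_features: int = 5, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 24, n_features) -> one logit per visit
        _, h = self.gru(x)          # h: (1, batch, hidden), last hidden state
        return self.head(h.squeeze(0))

model = VitalSignRNN()
# e.g. hourly heart rate, SBP, DBP, respiratory rate, temperature
batch = torch.randn(32, 24, 5)
prob_latent_shock = torch.sigmoid(model(batch))  # probability at a 3 h horizon
```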
Subjects
Shock, Humans, Retrospective Studies, Shock/diagnosis, Hospital Emergency Service, Vital Signs, ROC Curve
ABSTRACT
Background: We compared the prognostic accuracy for in-hospital mortality of the initial Sequential Organ Failure Assessment (SOFAini) score, measured at the time of sepsis recognition and resuscitation, and the maximum SOFA score (SOFAmax), calculated from the worst variables in the 24 h after the initial measurement, in emergency department (ED) patients with septic shock. Methods: This was a retrospective observational study using a multicenter prospective registry of septic shock patients in the ED between October 2015 and December 2019. The primary outcome was in-hospital mortality. The prognostic accuracies of SOFAini and SOFAmax were evaluated using the area under the receiver operating characteristic curve (AUC). Results: A total of 4860 patients were included, and in-hospital mortality was 22.1%. In 59.7% of patients, SOFAmax increased relative to SOFAini, and the mean change in total SOFA score was 2.0 (standard deviation, 2.3). In-hospital mortality differed significantly according to the total SOFA score and the SOFA component scores, except for the cardiovascular SOFA score. The AUC of SOFAmax (0.71; 95% confidence interval [CI], 0.69-0.72) was significantly higher than that of SOFAini (AUC, 0.67; 95% CI, 0.66-0.69) in predicting in-hospital mortality. The AUCs of all six component scores were higher for the maximum values. Conclusion: The prognostic accuracy of the initial SOFA score at the time of sepsis recognition was lower than that of the 24-h maximal SOFA score in ED patients with septic shock. Follow-up assessment of organ failure may improve the discrimination of the SOFA score for predicting mortality.
ABSTRACT
We sought to determine whether blade size influences the first-pass success (FPS) rate when performing endotracheal intubation (ETI) with a C-MAC video laryngoscope (VL) in emergency department (ED) patients. This single-center, retrospective, observational study was conducted between August 2016 and July 2022. A total of 1467 patients were divided into two categories based on the blade size used during the first ETI attempt: the blade-3 (n = 365) and blade-4 (n = 1102) groups. The primary outcome was the FPS rate. The secondary outcomes included the glottic view, multiple-attempt rate, and ETI-related complications. We used propensity score matching to reduce potential confounding between the two groups, generating 363 matched pairs. The FPS rate did not differ between the blade-3 (84.8%) and blade-4 (87.3%) groups in the matched cohort (p = 0.335). The multiple-attempt rate also did not differ significantly between groups (3.9% vs. 2.5%, p = 0.289). The difficult glottic view (11.3% vs. 6.9%, p = 0.039) and complication (15.4% vs. 10.5%, p = 0.047) rates were significantly higher in the blade-3 group than in the blade-4 group. The FPS rates of ETI in the blade-3 and blade-4 groups did not differ significantly in adult ED patients.
ABSTRACT
Introduction: In this study we aimed to compare the prognostic accuracy for predicting in-hospital mortality of respiratory Sequential Organ Failure Assessment (SOFA) scores calculated by the conventional method of imputing missing partial pressure of oxygen (PaO2) values with a normal value against oxygen saturation (SpO2)-based estimation methods. Methods: This was a single-center, retrospective cohort study of patients with suspected infection in the emergency department (ED). The primary outcome was in-hospital mortality. We compared the area under the receiver operating characteristics curve (AUROC) and calibration results of the conventional method (normal-value imputation for missing PaO2) and six SpO2-based methods: with methods A and B, PaO2 is estimated by dividing SpO2 by a scaling factor; with methods C and D, PaO2 is estimated by a mathematical model from a previous study; with methods E and F, the respiratory SOFA score is estimated from SpO2 thresholds and respiratory support use. Methods A, C, and E apply SpO2-based estimation to all PaO2 values, whereas methods B, D, and F apply it only to missing PaO2 values. Results: Among the 15,119 patients included in the study, the in-hospital mortality rate was 4.9%, and 56.0% of PaO2 values were missing. The calibration plots were similar among all methods. The methods yielded AUROCs ranging from 0.735 to 0.772. The AUROC for the conventional method was 0.755 (95% confidence interval [CI] 0.736-0.773). The AUROC for method C (0.772; 95% CI 0.754-0.790), which applied SpO2-based estimation to all PaO2 values, was higher than that of the conventional method. The AUROC for the total SOFA score from method E (0.815; 95% CI 0.800-0.831), in which the respiratory SOFA score was calculated from predefined SpO2 cutoffs and oxygen support, was higher than that from the conventional method (0.806; 95% CI 0.790-0.822). Conclusion: In non-ICU settings, respiratory SOFA scores estimated from SpO2 might have acceptable prognostic accuracy for predicting in-hospital mortality. Our results suggest that SpO2-based respiratory SOFA score calculation might be an alternative for evaluating respiratory organ failure in the ED and in clinical research settings.
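As one concrete way to score respiratory SOFA without PaO2, the sketch below combines a commonly cited linear SpO2/FiO2-to-PaO2/FiO2 conversion (S/F ≈ 64 + 0.84 × P/F, an assumption introduced here for illustration; the paper's methods A-F use their own specific models) with the standard SOFA respiratory thresholds.

```python
# Hedged sketch: respiratory SOFA from SpO2 when PaO2 is missing. The
# S/F-to-P/F conversion below is an illustrative stand-in, not the paper's.
def pf_from_sf(spo2: float, fio2: float) -> float:
    """Estimate PaO2/FiO2 from SpO2/FiO2 (rough, for SpO2 below ~97%)."""
    sf = spo2 / fio2
    return (sf - 64.0) / 0.84

def respiratory_sofa(pf: float, on_respiratory_support: bool) -> int:
    """Standard SOFA respiratory sub-score from a P/F ratio in mmHg."""
    if pf <= 100 and on_respiratory_support:
        return 4
    if pf <= 200 and on_respiratory_support:
        return 3
    if pf <= 300:
        return 2
    if pf <= 400:
        return 1
    return 0

# Example: SpO2 92% on FiO2 0.40, receiving mechanical ventilation
pf = pf_from_sf(92.0, 0.40)           # ~197 mmHg
print(respiratory_sofa(pf, True))     # -> 3
```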
Subjects
Organ Dysfunction Scores, Respiratory Insufficiency, Humans, Hospital Mortality, Retrospective Studies, Prognosis, Oxygen, Respiratory Insufficiency/diagnosis, Intensive Care Units
ABSTRACT
Bacteremia is a life-threatening condition that has increased in prevalence over the past two decades. Prompt recognition of bacteremia is important; however, its identification requires 1 to 2 days. This retrospective cohort study, conducted from 10 November 2014 to November 2019 among patients with suspected infection who visited the emergency department (ED), aimed to develop and validate a simple tool for predicting bacteremia. The study population was randomly divided into derivation and validation cohorts. Candidate predictors of bacteremia were selected based on the literature and assessed by logistic regression. A weighted value was assigned to each predictor to develop a prediction model for bacteremia using the derivation cohort; discrimination was then assessed using the area under the receiver operating characteristic curve (AUC). Among the 22,519 patients enrolled, 18,015 were assigned to the derivation group and 4504 to the validation group. Sixteen candidate variables were selected, and all sixteen were significant predictors of bacteremia (model 1). Among the sixteen variables, the five with the highest odds ratios, namely procalcitonin, neutrophil-lymphocyte ratio (NLR), lactate level, platelet count, and body temperature, were used for the simple bacteremia score (model 2). The proportion of bacteremia increased with the simple bacteremia score in both cohorts. The AUC was 0.805 (95% confidence interval [CI] 0.785-0.824) for model 1 and 0.791 (95% CI 0.772-0.810) for model 2. The simple bacteremia prediction score using only five variables demonstrated performance comparable to the sixteen-variable model built on all laboratory results and vital signs. This simple score is useful for predicting bacteremia and supporting clinical decisions.
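Since model 2 is a simple additive score over five dichotomized predictors, it can be expressed as a short function. The five inputs match the abstract, but the cutoffs and equal one-point weights below are hypothetical placeholders; the published score's actual thresholds and weights are not reproduced here.

```python
# Illustrative sketch of a simple weighted bacteremia score; cutoffs and
# weights are hypothetical, only the five predictors come from the abstract.
def simple_bacteremia_score(procalcitonin, nlr, lactate, platelets, temp_c):
    score = 0
    score += 1 if procalcitonin > 0.5 else 0   # ng/mL, hypothetical cutoff
    score += 1 if nlr > 10 else 0              # neutrophil-lymphocyte ratio
    score += 1 if lactate > 2.0 else 0         # mmol/L, hypothetical cutoff
    score += 1 if platelets < 150 else 0       # x10^3/uL, hypothetical cutoff
    score += 1 if temp_c >= 38.3 else 0        # body temperature, hypothetical
    return score

print(simple_bacteremia_score(1.2, 14.0, 2.6, 120, 38.5))  # -> 5
```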
ABSTRACT
The prognostic value of low vitamin C levels has not been well investigated in patients with septic shock. We aimed to evaluate the association of vitamin C deficiency with mortality in patients with septic shock. We conducted a retrospective analysis of 165 patients with septic shock from a prospective multicenter trial and an institutional sepsis registry between April 2018 and January 2020. The primary outcome was 28-day mortality. The patients were categorized into vitamin C deficiency and normal groups based on a vitamin C cutoff level of 11.4 μmol/L. Multivariable Cox regression analysis was performed to examine the association between vitamin C levels and 28-day mortality. A total of 165 patients were included in the analysis, and 77 (46.7%) had vitamin C deficiency. There was no significant difference in the unadjusted 28-day mortality rate between the vitamin C deficiency group and the normal group (23.4% (n = 18/77) vs. 13.6% (n = 12/88), p = 0.083). Multivariable Cox proportional hazards analysis showed vitamin C deficiency to be associated with an increased risk of 28-day mortality (adjusted hazard ratio, 2.65; 95% confidence interval (CI), 1.08-6.45; p = 0.032). Initial vitamin C deficiency was associated with a higher risk of 28-day mortality in patients with septic shock after adjusting for intravenous administration of vitamin C and thiamine, baseline characteristics, laboratory findings, and severity of illness.
ABSTRACT
We sought to determine the minimum number of endotracheal intubation (ETI) attempts necessary for a novice emergency medicine (EM) trainee to become proficient with this procedure. This single-center study retrospectively analyzed data obtained from the institutional airway registry during the period from April 2014 to March 2021. All ETI attempts made by EM trainees starting their residency programs between 2014 and 2018 were evaluated. We used a first-attempt success (FAS) rate of 85% as a proxy for ETI proficiency. Generalized linear mixed models were used to evaluate the association between FAS and cumulative ETI experience. The number of ETI attempts required to achieve an FAS rate of ≥85% was estimated using the regression coefficients obtained from the model. The study period yielded 2077 ETI cases from a total of 1979 patients. The FAS rate was 78.6% (n = 1632/2077). After adjusting for confounding factors, the cumulative number of ETI cases was associated with increased FAS (adjusted odds ratio, 1.010 per additional ETI case; 95% confidence interval 1.006-1.013; p < 0.001). A minimum of 119 ETI cases was required to establish a ≥85% likelihood of FAS. Thus, at least 119 ETI cases were required for EM trainees to achieve an FAS rate of ≥85% in the emergency department.
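The reported threshold can be reproduced from the logistic learning-curve model: with logit(FAS) = b0 + b1 · n, the caseload for a target success probability is n = (logit(target) − b0) / b1. In the sketch below, the slope follows from the reported aOR of 1.010 per case, while the intercept is a hypothetical value chosen only so that the example reproduces the reported figure of 119.

```python
# Back-calculating the caseload needed for a target first-attempt success
# rate from a logistic learning curve: logit(p) = b0 + b1 * n_cases.
import math

def cases_for_target(p_target: float, b0: float, b1: float) -> float:
    """Solve logit(p_target) = b0 + b1 * n for n."""
    return (math.log(p_target / (1.0 - p_target)) - b0) / b1

b1 = math.log(1.010)  # slope implied by the reported aOR of 1.010 per case
b0 = 0.55             # hypothetical baseline log-odds (not from the paper)
print(round(cases_for_target(0.85, b0, b1)))  # -> 119
```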
Subjects
Clinical Competence, Learning Curve, Hospital Emergency Service, Humans, Endotracheal Intubation/methods, Retrospective Studies
ABSTRACT
OBJECTIVE: We evaluated the performance of the diastolic shock index (DSI) and lactate in predicting vasopressor requirement among hypotensive patients with suspected infection in an emergency department. METHODS: This was a single-center, retrospective observational study of adult patients with suspected infection and hypotension in the emergency department from 2018 to 2019. The study population was split into derivation and validation cohorts (70/30). We derived a simple risk score to predict vasopressor requirement using DSI and lactate cutoff values determined by the Youden index. We tested the score using the area under the receiver operating characteristic curve (AUC). We performed a multivariable regression analysis to evaluate the association between the timing of vasopressor treatment and 28-day mortality. RESULTS: A total of 1,917 patients were included. We developed a score assigning 1 point each for high DSI (≥2.0) and high lactate (≥2.5 mmol/L). In the derivation cohort, the AUCs of the score were 0.741 (95% confidence interval [CI], 0.715-0.768) at hypotension onset and 0.736 (95% CI, 0.708-0.763) after the initial fluid challenge; in the validation cohort, they were 0.676 (95% CI, 0.631-0.719) and 0.688 (95% CI, 0.642-0.733), respectively. In patients with scores of 2 points, early initiation of vasopressor therapy was significantly associated with decreased 28-day mortality (adjusted odds ratio, 0.37; 95% CI, 0.14-0.94). CONCLUSION: A prediction model based on DSI and lactate levels might be useful for identifying patients who are more likely to need vasopressor administration among hypotensive patients with suspected infection.
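The score itself is a two-item rule, so it can be written as a short function. The sketch below uses the conventional definition of the diastolic shock index (heart rate divided by diastolic blood pressure, an assumption here since the abstract does not spell it out) and the cutoffs reported above.

```python
# Minimal sketch of the two-point vasopressor-need score from the abstract:
# 1 point for DSI (heart rate / diastolic BP) >= 2.0, 1 point for
# lactate >= 2.5 mmol/L; the DSI formula is the conventional definition.
def vasopressor_risk_score(heart_rate: float, dbp: float, lactate: float) -> int:
    dsi = heart_rate / dbp
    return int(dsi >= 2.0) + int(lactate >= 2.5)

print(vasopressor_risk_score(heart_rate=118, dbp=52, lactate=3.1))  # -> 2
```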
ABSTRACT
This study sought to determine whether the C-MAC video laryngoscope (VL) performed better than a direct laryngoscope (DL) when attempting endotracheal intubation (ETI) in the emergency department (ED) while wearing personal protective equipment (PPE). This was a retrospective single-center observational study conducted in an academic ED between February 2020 and March 2022. All emergency medical personnel who participated in any ETI procedure were required to wear PPE. The patients were divided into the C-MAC VL group and the DL group based on the device used during the first ETI attempt. The primary outcome measure was the first-pass success (FPS) rate. A multiple logistic regression was used to determine the factors associated with FPS. Of the 756 eligible patients, 650 were assigned to the C-MAC group and 106 to the DL group. The overall FPS rate was 83.5% (n = 631/756). The C-MAC group had a significantly higher FPS rate than the DL group (85.7% vs. 69.8%, p < 0.001). In the multivariable logistic regression analysis, C-MAC use was significantly associated with an increased FPS rate (adjusted odds ratio, 2.86; 95% confidence interval, 1.69-4.08; p < 0.001). In this study, we found that the FPS rate of ETI was significantly higher when the C-MAC VL was used than when a DL was used by emergency physicians constrained by cumbersome PPE.