ABSTRACT
BACKGROUND: Evaluating the performance of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) serological assays and clearly articulating the utility of selected antigens, isotypes, and thresholds is crucial to understanding the prevalence of infection within selected communities. METHODS: This cross-sectional study, implemented in 2020, screened PCR-confirmed coronavirus disease 2019 patients (n = 86), banked prepandemic and negative samples (n = 96), healthcare workers and family members (n = 552), and university employees (n = 327) for anti-SARS-CoV-2 receptor-binding domain, trimeric spike protein, and nucleocapsid protein immunoglobulin (Ig)G and IgA antibodies with a laboratory-developed enzyme-linked immunosorbent assay, and tested how antigen, isotype, and threshold choices affected the seroprevalence estimates. The following threshold methods were evaluated: (i) the mean plus 3 standard deviations of the negative controls; (ii) 100% specificity for each antigen-isotype combination; and (iii) the maximal Youden index. RESULTS: We found vastly different seroprevalence estimates depending on the selected antigens and isotypes and the applied threshold method, ranging from 0.0% to 85.4%. Subsequently, we maximized specificity and reported a seroprevalence, based on more than one antigen, ranging from 9.3% to 25.9%. CONCLUSIONS: This study revealed the importance of evaluating serosurvey tools for antigen-, isotype-, and threshold-specific sensitivity and specificity in order to interpret qualitative serosurvey outcomes reliably and consistently across studies.
Subjects
COVID-19, SARS-CoV-2, Humans, COVID-19/epidemiology, Seroepidemiologic Studies, Cross-Sectional Studies, Nucleocapsid Proteins, Enzyme-Linked Immunosorbent Assay/methods, Sensitivity and Specificity, Immunoglobulin G, Antibodies, Viral, Spike Glycoprotein, Coronavirus
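The three threshold methods compared in the abstract above can each be expressed in a few lines of code. The sketch below is a minimal, hypothetical illustration (the function names and simulated optical-density values are invented, not the study's data): a cutoff from the negative-control mean plus 3 standard deviations, a cutoff giving 100% specificity, and the cutoff maximizing the Youden index J = sensitivity + specificity - 1. A sample is called positive when its value exceeds the cutoff.

```python
import numpy as np

def threshold_mean_3sd(neg):
    """Cutoff = mean + 3 sample standard deviations of the negative controls."""
    return np.mean(neg) + 3 * np.std(neg, ddof=1)

def threshold_full_specificity(neg):
    """Smallest cutoff classifying every negative control as negative (100% specificity)."""
    return np.max(neg)

def threshold_max_youden(pos, neg):
    """Cutoff maximizing J = sensitivity + specificity - 1 over all observed values."""
    candidates = np.sort(np.concatenate([pos, neg]))
    best_j, best_t = -1.0, candidates[0]
    for t in candidates:
        sens = np.mean(pos > t)    # fraction of PCR-confirmed cases called positive
        spec = np.mean(neg <= t)   # fraction of prepandemic controls called negative
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# Hypothetical optical-density values for one antigen-isotype pair.
rng = np.random.default_rng(0)
neg = rng.normal(0.2, 0.05, 96)   # prepandemic/negative controls
pos = rng.normal(0.9, 0.30, 86)   # PCR-confirmed cases
print("mean + 3 SD:", round(threshold_mean_3sd(neg), 3))
print("100% specificity:", round(threshold_full_specificity(neg), 3))
print("max Youden:", round(threshold_max_youden(pos, neg), 3))
```

Because each method trades sensitivity against specificity differently, the same assay data can yield very different qualitative seroprevalence estimates, which is the abstract's central point.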
ABSTRACT
INTRODUCTION: Lactate measurement has been used to identify critical medical illness and initiate early treatment strategies. The prehospital environment offers an opportunity for very early identification of critical illness and commencement of care. HYPOTHESIS: The investigators hypothesized that point-of-care lactate measurement in the prehospital aeromedical environment would: (1) identify medical patients with high mortality; (2) influence fluid, transfusion, and intubation decisions; and (3) increase early central venous catheter (CVC) placement. METHODS: Critically ill, medical, non-trauma patients transported from September 2007 through February 2009 by University of Massachusetts (UMass) Memorial LifeFlight, a university-based emergency medical helicopter service, were eligible for enrollment. Patients were prospectively randomized to receive a fingerstick whole-blood lactate measurement on an alternate-day schedule. Flight crews were not blinded to results and were asked to inform the receiving attending physician of them. The primary endpoint was the ability of a high prehospital lactate value (> 4 millimoles per liter [mmol/L]) to identify mortality. Secondary endpoints included differences in post-transport fluid, transfusion, and intubation, and decreased time to CVC placement. Categorical variables were compared between groups by Fisher's exact test, and continuous variables were compared by t-test. RESULTS: Patients (N = 59) were well matched for age, gender, and acuity. In the lactate cohort (n = 20), the mean lactate was 7 mmol/L (standard error of the mean [SEM] = 1). Initial analysis revealed that prehospital lactate levels ≥ 4 mmol/L showed a trend toward higher mortality, with an odds ratio of 2.1 (95% CI, 0.3-13.8). Secondary endpoints did not show a statistically significant change in management between the lactate and non-lactate groups. There was a trend toward decreased time to post-transport CVC placement in the non-lactate group. CONCLUSION: Prehospital aeromedical point-of-care lactate levels ≥ 4 mmol/L may help stratify mortality. Further investigation is needed, as this is a small, limited study. The initial analysis did not find a significant change in post-transport management.
Subjects
Air Ambulances, Critical Illness, Emergency Medical Services/organization & administration, Lactates/analysis, Point-of-Care Systems, Endpoint Determination, Female, Hospital Mortality, Humans, Male, Massachusetts, Middle Aged, Predictive Value of Tests, Prospective Studies
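As a worked illustration of the primary analysis described in the abstract above, the association between a prehospital lactate ≥ 4 mmol/L and mortality can be summarized as an odds ratio from a 2 × 2 table and tested with Fisher's exact test. The sketch below uses invented cell counts chosen only so the sample odds ratio lands near the reported 2.1; they are not the study's data.

```python
import math
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = lactate >= 4 mmol/L (yes/no), cols = (died, survived).
# Counts are illustrative only, not taken from the study.
table = [[5, 8],    # high lactate: died, survived
         [3, 10]]   # low lactate:  died, survived

odds_ratio, p_value = fisher_exact(table)  # sample OR = (5*10)/(8*3) ~ 2.1

# Woolf (log) 95% confidence interval for the odds ratio.
(a, b), (c, d) = table
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci = (math.exp(log_or - 1.96 * se_log_or), math.exp(log_or + 1.96 * se_log_or))

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), Fisher p = {p_value:.3f}")
```

A confidence interval spanning 1 (here roughly 0.3 to 13.8 in the study) is why the abstract reports only a trend toward higher mortality rather than a significant association.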
ABSTRACT
Objective: Accurate measurement of physicians' time spent during patient care stands to inform emergency department (ED) improvement efforts. Direct observation is time consuming and cost prohibitive, so we sought to determine if physician self-estimation of time spent during patient care was accurate. Methods: We performed a prospective, convenience-sample study in which research assistants measured time spent by ED physicians in patient care. At the conclusion of each observed encounter, physicians estimated their time spent. Using Mann-Whitney U tests and Spearman's rho, we compared physician estimates to actual time spent and assessed for associations between encounter characteristics and physician estimation. Results: Among 214 encounters across 10 physicians, we observed a medium-sized correlation between actual and estimated time (Spearman's rho = 0.63, p < 0.001), and in aggregate, physicians underestimated time spent by a median of 0.1 min. An equal number of encounters were overestimated and underestimated. Underestimated encounters were underestimated by a median of 5.1 min (interquartile range [IQR] 2.5-9.8) and overestimated encounters were overestimated by a median of 4.3 min (IQR 2.5-11.6), corresponding to 26.3% and 27.9% discrepancy, respectively. In terms of actual time spent, underestimated encounters (median 19.3 min, IQR 13.5-28.3) were significantly longer than overestimated encounters (median 15.3 min, IQR 11.3-20.5) (p < 0.001). Conclusions: Physician self-estimation of time spent was accurate in aggregate, providing evidence that it is a valid surrogate marker for larger-scale process improvement and research activities, but likely not at the encounter level. Investigations exploring mechanisms to augment physician self-estimation, including modeling and technological support, may yield pathways to make self-estimation valid at the encounter level as well.
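The analysis framework in this abstract (Spearman's rho for agreement between actual and estimated minutes, Mann-Whitney U for comparing encounter groups) is straightforward to reproduce. The sketch below uses simulated encounter times, not the study's measurements, purely to show the mechanics.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical paired data: observed minutes vs. physician self-estimates.
actual = rng.gamma(shape=4, scale=4.5, size=214)
estimated = actual * rng.normal(1.0, 0.3, size=214)  # estimates with ~30% noise

rho, p = spearmanr(actual, estimated)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")

# Signed estimation error; split encounters into under- and overestimated.
error = estimated - actual
under = actual[error < 0]   # actual duration of underestimated encounters
over = actual[error > 0]    # actual duration of overestimated encounters

# Were underestimated encounters longer, as the study found?
u_stat, p = mannwhitneyu(under, over, alternative="two-sided")
print(f"median under = {np.median(under):.1f} min, "
      f"median over = {np.median(over):.1f} min, p = {p:.3g}")
```

Nonparametric tests are the natural choice here because encounter durations are right-skewed, which is also why the abstract reports medians and IQRs rather than means.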
ABSTRACT
OBJECTIVE: To determine whether victim behavior and interaction with triage personnel would conform to expected actions as dictated by the Simple Triage and Rapid Treatment (START) triage methodology, which assumes that victims will accept their assigned triage category. METHODS: In total, 105 volunteers were recruited to complete a 32-question survey after portraying victims in a triage-focused mass casualty incident (MCI) simulation. Questions covered sociodemographic characteristics, willingness to follow commands of first responders, and willingness to help first responders. The authors examined whether the outcomes differed by demographics, healthcare experience, or disaster exposure of participants. RESULTS: The survey response rate was 90 percent (95/105). The mean age of participants was 31 years (58 percent women). Half of respondents indicated that they would ask responders to change their triage color if they disagreed with it, and 75 percent would ask first responders to change their friend or family members' triage colors. Twenty-one percent of victims reported that they would alter their own triage tag to receive treatment faster, and 38 percent would alter a friend or family member's triage color. The youngest (<20 years) and oldest (>40 years) respondents were most likely to act maladaptively. CONCLUSION: Triage algorithms rely upon victims following the instructions of rescuers. This study suggests that maladaptive behavior by some victims should be anticipated.
Subjects
Disaster Planning, Emergency Responders, Mass Casualty Incidents, Adult, Disaster Victims, Female, Humans, Surveys and Questionnaires, Triage
ABSTRACT
INTRODUCTION: The most commonly used methods for triage in mass-casualty incidents (MCIs) rely upon providers to take exact counts of vital signs or other patient parameters. The acuity and volume of patients that can be present during an MCI make this a time-consuming and potentially costly process. HYPOTHESIS: This study evaluates and compares the speed of the commonly used Simple Triage and Rapid Treatment (START) triage method with that of an "intuitive triage" method, which relies instead upon the ability of an experienced first responder to determine the triage category of each victim based upon an overall first-impression assessment. The research team hypothesized that intuitive triage would be faster, without loss of accuracy in assigning triage categories. METHODS: Local adult volunteers were recruited for a staged MCI simulation (active-shooter scenario) utilizing local police, Emergency Medical Services (EMS), public services, and government leadership. Using these same volunteers, a cluster-randomized simulation was completed comparing START and intuitive triage. Outcomes consisted of the time and accuracy of the two methods. RESULTS: The overall mean speed of the triage process was significantly faster with intuitive triage (72.18 seconds) than with START (106.57 seconds). This effect was especially dramatic for Red (94.40 vs 138.83 seconds) and Yellow (55.99 vs 91.43 seconds) patients. There were 17 episodes of disagreement between intuitive triage and START, with no statistical difference in the incidence of over- and under-triage between the two groups in a head-to-head comparison. CONCLUSION: Significant time may be saved using the intuitive triage method. Comparing the START and intuitive triage groups, there was a very high degree of agreement between triage categories. More prospective research is needed to validate these results. Hart A, Nammour E, Mangolds V, Broach J. Intuitive versus algorithmic triage. Prehosp Disaster Med. 2018;33(4):355-361.
Subjects
Mass Casualty Incidents, Triage, Waiting Lists, Algorithms, Computer Simulation, Disaster Planning, Emergency Medical Services, Emergency Responders, Humans
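START, the comparator method in the study above, assigns a category from a fixed sequence of checks (ambulation, then respiration, perfusion, and mental status). The sketch below is a simplified, hypothetical encoding of the adult START decision tree for illustration; field protocols vary, and this is not the study's implementation.

```python
from dataclasses import dataclass

@dataclass
class Victim:
    walking: bool
    breathing: bool          # breathing after airway repositioning if needed
    respiratory_rate: int    # breaths per minute
    radial_pulse: bool       # perfusion surrogate (or capillary refill < 2 s)
    obeys_commands: bool

def start_triage(v: Victim) -> str:
    """Simplified adult START decision tree: GREEN, YELLOW, RED, or BLACK."""
    if v.walking:
        return "GREEN"       # walking wounded: minor
    if not v.breathing:
        return "BLACK"       # not breathing after airway opened: expectant
    if v.respiratory_rate > 30:
        return "RED"         # respiration check fails: immediate
    if not v.radial_pulse:
        return "RED"         # perfusion check fails: immediate
    if not v.obeys_commands:
        return "RED"         # mental-status check fails: immediate
    return "YELLOW"          # all checks pass but not ambulatory: delayed

print(start_triage(Victim(walking=False, breathing=True,
                          respiratory_rate=24, radial_pulse=True,
                          obeys_commands=True)))   # -> YELLOW
```

Because each non-ambulatory victim requires this full sequence of measurements, the per-victim time cost of START is what an experienced responder's single first-impression assessment ("intuitive triage") was hypothesized to undercut.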
ABSTRACT
OBJECTIVE: Glycemic control in the critically ill intensive care unit (ICU) patient has been shown to improve morbidity and mortality. We sought to investigate the effect of early glycemic control in critically ill emergency department (ED) patients in a small pilot trial. METHODS: Adult non-trauma, non-pregnant ED patients presenting to a university tertiary referral center and identified as critically ill were eligible for enrollment on a convenience basis. Critical illness was determined upon assignment for ICU admission. Patients were randomized to either ED standard care or glycemic control. Glycemic control involved use of an insulin drip to maintain blood glucose levels between 80 and 140 mg/dL, and continued until ED discharge. Standard-care patients were managed at the ED attending physician's discretion. We assessed severity of illness by calculating the APACHE II score. The primary endpoint was in-hospital mortality. Secondary endpoints included vasopressor requirement, hospital length of stay, and mechanical ventilation requirement. RESULTS: Fifty patients were randomized, 24 to the glycemic control group and 26 to the standard care group. Four of the 24 patients (17%) in the treatment arm did not receive insulin despite protocol requirements. While receiving insulin, three of 24 patients (13%) had an episode of hypoglycemia. By chance, patients in the treatment group trended toward higher acuity by APACHE II score. Patient mortality and morbidity were similar despite the acuity difference. CONCLUSION: There was no difference in morbidity and mortality between the two groups. The benefit of glycemic control may depend on the source of illness and the degree of glycemic control achieved, or there may be no benefit at all. These questions bear future investigation.
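As a small illustration of the protocol logic described above, the sketch below encodes the 80-140 mg/dL target band as a simple check that flags when an insulin-drip adjustment would be considered. It is a hypothetical simplification for illustration only, not the trial's actual titration protocol.

```python
def glycemic_action(glucose_mg_dl: float, low: float = 80, high: float = 140) -> str:
    """Return a coarse protocol action for a single blood glucose reading.

    Hypothetical simplification of an 80-140 mg/dL target band; real insulin-drip
    protocols titrate by current rate and glucose trend, not a single reading.
    """
    if glucose_mg_dl < low:
        return "hold insulin drip; treat hypoglycemia and recheck"
    if glucose_mg_dl > high:
        return "increase insulin drip rate; recheck glucose"
    return "in target range; continue current rate"

for reading in (65, 120, 210):
    print(reading, "->", glycemic_action(reading))
```

The hypoglycemia branch is the safety-relevant one: the 13% hypoglycemia rate reported in the treatment arm is a reminder of why tight-control protocols require frequent rechecks.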