1.
Am J Emerg Med ; 80: 29-34, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38490096

ABSTRACT

INTRODUCTION: Chest pain (CP), a common presentation in the emergency department (ED), is associated with significant morbidity and mortality when emergency clinicians miss the diagnosis of acute coronary syndrome (ACS). The HEART (History, Electrocardiogram, Age, Risk Factors, Troponin) score has been validated for risk-stratifying patients at high risk for ACS and major adverse cardiac events (MACE). However, the use of cocaine as a risk factor in the HEART score is controversial. We hypothesized that cocaine-positive (COP) patients would not be at higher risk of 30-day MACE than cocaine-negative (CON) patients. METHODS: This retrospective study included adult patients who presented to 13 EDs of a university medical system between August 7, 2017 and August 19, 2021. Patients who had CP, a prospectively calculated HEART score, and urine toxicology testing as part of their clinical evaluation were eligible. Areas under the receiver operating characteristic curve (AUROC) were calculated for the performance of the HEART score against 30-day MACE in each group. RESULTS: This study analyzed 46,210 patient charts; 663 (1.4%) were COP patients. Mean age was statistically similar between groups, but there were fewer females in the COP group (26.2% vs 53.2%, p < 0.001). Mean (±SD) HEART score was 3.7 (1.4) in the COP group versus 3.1 (1.8) in the CON group (p < 0.001). Although more COP patients (54%) had moderate HEART scores (4-6) than CON patients (35.2%, p < 0.001), the rate of 30-day MACE was 1.1% in both groups. The HEART score's AUROC was 0.72 for the COP group and 0.78 for the CON group. The AUROC for the Risk Factor component among COP patients, which includes cocaine use, was poor (0.54). CONCLUSION: This study, which used prospectively calculated HEART scores, demonstrated that overall performance of the HEART score was reasonable. Specifically, our analysis showed that the rate of 30-day MACE was not affected by cocaine use as a risk factor. We recommend that clinicians consider the HEART score for this patient group.
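The abstract reports HEART scores and the moderate band (4-6) without restating the rubric. As a minimal illustration, assuming the conventional scoring of 0-2 points per component (the study's exact chart-abstraction rules are not given here), a sketch in Python:

```python
# Conventional HEART scoring: 0-2 points per component, summed to 0-10,
# then bucketed into low (0-3), moderate (4-6), and high (7-10) risk.
def heart_score(history: int, ecg: int, age_years: int,
                n_risk_factors: int, known_atherosclerosis: bool,
                troponin_ratio: float) -> int:
    """history/ecg are pre-graded 0-2; troponin_ratio = measured value / upper normal limit."""
    age = 0 if age_years < 45 else 1 if age_years < 65 else 2
    risk = 2 if (known_atherosclerosis or n_risk_factors >= 3) else 1 if n_risk_factors >= 1 else 0
    trop = 0 if troponin_ratio <= 1 else 1 if troponin_ratio <= 3 else 2
    return history + ecg + age + risk + trop

def risk_band(score: int) -> str:
    return "low (0-3)" if score <= 3 else "moderate (4-6)" if score <= 6 else "high (7-10)"

s = heart_score(history=1, ecg=0, age_years=67, n_risk_factors=2,
                known_atherosclerosis=False, troponin_ratio=0.8)
print(s, risk_band(s))   # 4 moderate (4-6)
```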


Subjects
Chest Pain, Cocaine-Related Disorders, Electrocardiography, Hospital Emergency Service, Humans, Female, Male, Retrospective Studies, Chest Pain/etiology, Middle Aged, Risk Assessment/methods, Adult, Cocaine-Related Disorders/complications, Risk Factors, Acute Coronary Syndrome/diagnosis, Acute Coronary Syndrome/complications, ROC Curve, Troponin/blood, Aged
2.
Intern Emerg Med ; 18(8): 2377-2384, 2023 11.
Article in English | MEDLINE | ID: mdl-37491562

ABSTRACT

Coronavirus disease 2019 (COVID-19) is known to be associated with cardiovascular complications, but whether the validated HEART score for chest pain remains applicable to these patients is unknown. This study aims to identify the association of COVID-19 co-infection with 30-day major adverse cardiac events (MACE) in patients presenting to the emergency department (ED) with chest pain and a calculated HEART score. This is a multicenter, retrospective observational study that included adult (age ≥ 18 years) patients visiting 13 EDs with chest pain who were evaluated using a HEART score. The primary outcome was the percentage of 30-day MACE, which included acute myocardial infarction, emergency percutaneous coronary intervention (PCI), coronary artery bypass graft (CABG), or death, among patients who presented with chest pain and had COVID-19 co-infection. The sensitivity and specificity of the HEART score for MACE among patients with COVID-19 co-infection were assessed with the receiver operating characteristic (ROC) curve. We analyzed records of 46,210 eligible patients, of whom 327 (0.7%) were identified as infected with COVID-19. Patients with COVID-19 had a higher mean total HEART score of 3.3 (SD 1.7), compared with 3.1 (SD 1.8) in patients without COVID-19 (P = 0.048). The rate of MACE was similar between groups: only 2 (0.6%) COVID-19 patients had MACE, compared with 504 (1.1%) patients in the control group. The total HEART score was associated with an area under the ROC curve (AUROC) of 0.99 in the COVID-19 group, compared with 0.78 in the control group. History was associated with a high AUROC in both the COVID-19 (0.74) and control (0.76) groups. Age had a higher AUROC in COVID-19 patients (0.89) than in control patients (0.63). Among patients presenting to the ED with chest pain and COVID-19 infection, the HEART score had predictive capability for MACE similar to that in patients without COVID-19 infection. Further studies with more COVID-19 patients are necessary to confirm our observation.
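A brief sketch of the ROC analysis this abstract describes: AUROC of the HEART score against 30-day MACE, plus sensitivity and specificity at an example cutoff. All data below are synthetic and the cutoff of 4 is illustrative, not taken from the study:

```python
# Synthetic example of the ROC analysis: AUROC of HEART score for 30-day MACE,
# plus sensitivity/specificity at an illustrative cutoff of >= 4 points.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
mace = rng.integers(0, 2, size=500)                    # 1 = 30-day MACE (synthetic)
heart = np.clip(np.round(rng.normal(3 + 2 * mace, 1.5)), 0, 10)

auroc = roc_auc_score(mace, heart)
cutoff = 4
sens = ((heart >= cutoff) & (mace == 1)).sum() / (mace == 1).sum()
spec = ((heart < cutoff) & (mace == 0)).sum() / (mace == 0).sum()
print(f"AUROC={auroc:.2f}  sens={sens:.2f}  spec={spec:.2f} at HEART >= {cutoff}")
```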


Assuntos
COVID-19 , Coinfecção , Infarto do Miocárdio , Intervenção Coronária Percutânea , Adulto , Humanos , Adolescente , Medição de Risco , Valor Preditivo dos Testes , COVID-19/complicações , Dor no Peito/etiologia , Serviço Hospitalar de Emergência , Fatores de Risco , Eletrocardiografia
3.
J Am Coll Emerg Physicians Open ; 4(3): e12969, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37304858

ABSTRACT

Introduction: Cellulitis is commonly diagnosed in emergency departments (EDs), yet roughly one third of ED patients admitted for presumed cellulitis have another, usually benign, condition instead (eg, stasis dermatitis). This suggests an opportunity to reduce health care resource use through improved diagnosis at the point of care. This study tests whether a clinical decision support (CDS) tool interoperable with the electronic medical record (EMR) can reduce inappropriate hospital admissions and drive more appropriate and accurate care. Methods: This study was a trial of an EMR-interoperable, image-based CDS tool for the evaluation of ED patients with suspected cellulitis. At the point of assigning a provisional diagnosis of cellulitis in the EMR, the clinician was randomly prompted to use the CDS. Based on the patient features entered by the clinician, the CDS provided a list of likely diagnoses. Patient demographics, disposition, final diagnosis, and whether antibiotics were prescribed were recorded. Logistic regression was used to determine the impact of CDS engagement on the primary outcome of admission for cellulitis, adjusted for patient factors; antibiotic use was a secondary end point. Results: From September 2019 to February 2020 (7 months), the CDS tool was deployed in the EMR at 4 major hospitals in the University of Maryland Medical System. There were 1269 encounters for cellulitis during the study period. Engagement with the CDS was low (24.1%, 95/394), but engagement was associated with an absolute reduction in admissions (7.1%, p = 0.03). After adjusting for age greater than 65 years, female sex, non-White race, and private insurance, CDS engagement was associated with a significant reduction in admissions (adjusted OR = 0.62, 95% confidence interval [CI]: 0.40-0.97, p = 0.04) and antibiotic use (adjusted OR = 0.63, 95% CI: 0.40-0.99, p = 0.04). Conclusions: CDS engagement was associated with decreased admissions for cellulitis and decreased antibiotic use, despite low levels of engagement. Further research should examine the impact of CDS engagement in other practice environments and measure longer-term outcomes in patients discharged from the ED.
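The adjusted odds ratios reported here come from a logistic regression of admission on CDS engagement and patient factors. A hedged sketch of that kind of model; the file and column names (cellulitis_encounters.csv, cds_engaged, admitted, and so on) are assumptions, not the authors' variables:

```python
# Logistic regression of admission on CDS engagement, adjusted for the factors
# named in the abstract; exponentiated coefficients give adjusted odds ratios.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cellulitis_encounters.csv")   # hypothetical encounter-level extract
model = smf.logit(
    "admitted ~ cds_engaged + age_over_65 + female + non_white + private_insurance",
    data=df,
).fit()

or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["adjusted_OR", "ci_low", "ci_high"]
print(or_table.loc["cds_engaged"])
```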

4.
J Emerg Med ; 61(2): 189-197, 2021 08.
Article in English | MEDLINE | ID: mdl-34006422

ABSTRACT

BACKGROUND: Training programs for resident physicians struggle to balance the need for clinical experience with the impact of fatigue on patient safety. The length of shifts worked by emergency medicine (EM) residents is likely an important determinant of resident fatigue. OBJECTIVE: To assess the impact of a longer clinical shift on procedural competency. METHODS: We conducted a retrospective chart review of arterial line placements, central venous catheterizations, tube thoracostomies, endotracheal intubations, and lumbar punctures performed by EM residents working 12-h shifts in the emergency department of an academic medical center over one academic year. We compared complication rates between procedures performed in the first 8 vs. the last 4 h of a 12-h shift. Procedures without complication were defined as successful on the first-pass attempt and free of downstream mechanical or medical complications. Multivariable modified Poisson regression was used to simultaneously control for possible confounders affecting procedure success. RESULTS: We identified 548 eligible procedures: 307 performed in the first 8 h of a 12-h shift and 241 in the last 4 h. The complication rate across all procedures was higher in the last 4 h of the shift (pooled risk ratio 1.41, 95% confidence interval 1.18-1.67). This effect persisted after adjusting for potential confounders (adjusted risk ratio 1.42, 95% confidence interval 1.19-1.69). CONCLUSION: Overall, complication rates for the included procedures performed by EM residents were higher during the last 4 vs. the first 8 h of a 12-h shift. Training programs should consider the impact of resident fatigue on patient safety when making work schedules.
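The "multivariable modified Poisson regression" mentioned here is commonly implemented as a Poisson GLM with robust (sandwich) standard errors, so that exponentiated coefficients estimate adjusted risk ratios for a binary outcome. A minimal sketch under that assumption; column names such as complication and late_shift are illustrative:

```python
# "Modified Poisson" regression: Poisson GLM with robust (HC0) standard errors,
# so exp(coefficient) is an adjusted risk ratio for the binary complication outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("resident_procedures.csv")     # hypothetical procedure-level extract
model = smf.glm(
    "complication ~ late_shift + pgy_year + C(procedure_type)",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC0")                           # sandwich variance for valid CIs

rr = np.exp(model.params["late_shift"])
ci = np.exp(model.conf_int().loc["late_shift"])
print(f"adjusted RR = {rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```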


Subjects
Emergency Medicine, Internship and Residency, Emergency Medicine/education, Hospital Emergency Service, Humans, Personnel Staffing and Scheduling, Retrospective Studies
5.
Am J Infect Control ; 49(3): 319-326, 2021 03.
Article in English | MEDLINE | ID: mdl-33640109

ABSTRACT

BACKGROUND: Published bundles to reduce Clostridioides difficile infection (CDI) frequently lack information on compliance with individual elements. We piloted a computerized clinical decision support-based intervention bundle and conducted a detailed evaluation of several intervention-related measures. METHODS: A quasi-experimental study of a bundled intervention was performed at 2 acute care community hospitals in Maryland. The bundle had five components: (1) timely placement in enteric precautions, (2) appropriate CDI testing, (3) reducing proton-pump inhibitor (PPI) use, (4) reducing use of antibiotics with high CDI risk, and (5) optimizing use of a sporicidal agent for environmental cleaning. Chi-square and Kruskal-Wallis tests were used to compare differences in measures. An interrupted time series analysis was used to evaluate the impact on hospital-onset (HO) CDI. RESULTS: Placement of suspected CDI cases in enteric precautions before test results did not change. Only hospital B decreased the frequency of CDI testing and reduced inappropriate testing related to laxative use. Both hospitals reduced the use of PPIs and high-risk antibiotics. A 75% decrease in HO-CDI immediately post-implementation was observed for hospital B only. CONCLUSION: A CDI reduction bundle showed variable impact on relevant measures. Hospital-specific differential uptake of bundle elements may explain differences in effectiveness and emphasizes the importance of measuring processes and intermediate outcomes.
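A possible form of the interrupted time series analysis mentioned is a segmented regression on monthly HO-CDI rates, with terms for the pre-existing trend, the level change at implementation, and the post-implementation trend change. The sketch below assumes that specification and illustrative column names:

```python
# Segmented regression on monthly hospital-onset CDI rates: 'month_index' is the
# underlying trend, 'post' the level change at go-live, 'months_post' the slope change.
import pandas as pd
import statsmodels.formula.api as smf

ts = pd.read_csv("monthly_ho_cdi.csv")   # columns: month_index, post, months_post, ho_cdi_rate
its = smf.ols("ho_cdi_rate ~ month_index + post + months_post", data=ts).fit()
print(its.summary().tables[1])
```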


Subjects
Clostridioides difficile, Clostridium Infections, Cross Infection, Clostridium Infections/diagnosis, Clostridium Infections/prevention & control, Cross Infection/diagnosis, Cross Infection/prevention & control, Hospitals, Humans, Maryland
6.
Clin Pharmacol Ther ; 110(1): 179-188, 2021 07.
Article in English | MEDLINE | ID: mdl-33428770

ABSTRACT

The value of a multigene pharmacogenetic panel for tailoring pharmacotherapy is contingent on the prevalence of prescribed medications with an actionable pharmacogenetic association. The Clinical Pharmacogenetics Implementation Consortium (CPIC) has categorized over 35 gene-drug pairs as "level A," for which there is sufficiently strong evidence to recommend that genetic information be used to guide drug prescribing. The opportunity to use genetic information to tailor pharmacotherapy among adult patients was determined by measuring exposure to CPIC level A drugs across 11 Implementing Genomics In Practice (IGNITE) Network-affiliated health systems in the US. Inpatient and/or outpatient electronic prescribing data were collected between January 1, 2011 and December 31, 2016 for patients ≥ 18 years of age who had at least one medical encounter eligible for drug prescribing in a calendar year. A median of approximately 7.2 million adult patients was available for assessment of drug prescribing per year. From 2011 to 2016, the estimated annual prevalence of exposure to at least one CPIC level A drug among unique patients ranged from 15,719 (95% confidence interval (CI): 15,658-15,781) per 100,000 patients in 2011 to 17,335 (CI: 17,283-17,386) per 100,000 patients in 2016. The estimated annual exposure to at least 2 such drugs exceeded 7,200 per 100,000 patients in most study years, reaching a peak of 7,660 (CI: 7,632-7,687) per 100,000 patients in 2014. An estimated 4,748 per 100,000 prescribing events were potentially eligible for a genotype-guided intervention. These results show that a substantial portion of adults treated at medical institutions across the United States is exposed to medications for which genetic information, if available, should be used to guide prescribing.
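The per-100,000 prevalence figures with 95% CIs can be reproduced with a simple normal-approximation interval for a proportion. A sketch with made-up counts (not the study's data):

```python
# Prevalence per 100,000 patients with a normal-approximation 95% CI.
import math

def prevalence_per_100k(exposed: int, population: int):
    p = exposed / population
    se = math.sqrt(p * (1 - p) / population)
    return p * 1e5, (p - 1.96 * se) * 1e5, (p + 1.96 * se) * 1e5

est, lo, hi = prevalence_per_100k(exposed=1_130_000, population=7_200_000)  # made-up counts
print(f"{est:,.0f} per 100,000 (95% CI {lo:,.0f}-{hi:,.0f})")
```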


Subjects
Drug Prescriptions/statistics & numerical data, Genotype, Pharmacogenetics, Pharmacogenomic Testing, Adult, Aged, Electronic Prescribing/statistics & numerical data, Female, Humans, Male, Middle Aged, United States
7.
JAMA Netw Open ; 3(12): e2029411, 2020 12 01.
Article in English | MEDLINE | ID: mdl-33315113

ABSTRACT

Importance: Genotype-guided prescribing in pediatrics could prevent adverse drug reactions and improve therapeutic response. Clinical pharmacogenetic implementation guidelines are available for many medications commonly prescribed to children. The frequencies of medication prescription and of actionable genotypes (genotypes for which a prescribing change may be indicated) inform the potential value of pharmacogenetic implementation. Objective: To assess potential opportunities for genotype-guided prescribing in pediatric populations across multiple health systems by examining the prevalence of prescriptions for each drug with the highest level of evidence (Clinical Pharmacogenetics Implementation Consortium level A) and estimating the prevalence of potentially actionable prescribing decisions. Design, Setting, and Participants: This serial cross-sectional study of prescribing prevalence in 16 health systems included electronic health record data from pediatric inpatient and outpatient encounters from January 1, 2011, to December 31, 2017. The health systems included academic medical centers with free-standing children's hospitals and community hospitals that were part of an adult health care system. Participants included approximately 2.9 million patients younger than 21 years observed per year. Data were analyzed from June 5, 2018, to April 14, 2020. Exposures: Prescription of 38 level A medications, based on electronic health records. Main Outcomes and Measures: Annual prevalence of level A medication prescribing and estimated actionable exposures, calculated by combining estimated site-year prevalences across sites with each site weighted equally. Results: Data from approximately 2.9 million pediatric patients (median age, 8 [interquartile range, 2-16] years; 50.7% female, 62.3% White) were analyzed for a typical calendar year. The annual prescribing prevalence of at least 1 level A drug ranged from 7987 to 10,629 per 100,000 patients, with an increasing trend from 2011 to 2014. The most prescribed level A drug was the antiemetic ondansetron (annual prevalence of exposure, 8107 [95% CI, 8077-8137] per 100,000 children). Among commonly prescribed opioids, annual prevalence per 100,000 patients was 295 (95% CI, 273-317) for tramadol, 571 (95% CI, 557-586) for codeine, and 2116 (95% CI, 2097-2135) for oxycodone. The antidepressants citalopram, escitalopram, and amitriptyline were also commonly prescribed (annual prevalence, approximately 250 per 100,000 patients each). Estimated prevalences of actionable exposures were highest for oxycodone and ondansetron (>300 per 100,000 patients annually). CYP2D6 and CYP2C19 substrates were prescribed more frequently than medications influenced by other genes. Conclusions and Relevance: These findings suggest that opportunities for pharmacogenetic implementation among pediatric patients in the US are abundant. As expected, the greatest opportunity exists with implementing CYP2D6 and CYP2C19 pharmacogenetic guidance for commonly prescribed antiemetics, analgesics, and antidepressants.


Subjects
Child Health Services, Drug Dosage Calculations, Pharmacogenomic Testing, Medical Practice Patterns, Prescription Drugs, Child, Child Health Services/standards, Child Health Services/statistics & numerical data, Cross-Sectional Studies, Cytochrome P-450 CYP2C19/genetics, Cytochrome P-450 CYP2D6/genetics, Electronic Health Records/statistics & numerical data, Female, Genetic Profile, Humans, Male, Pediatrics/methods, Pediatrics/standards, Pharmacogenomic Testing/methods, Pharmacogenomic Testing/statistics & numerical data, Medical Practice Patterns/standards, Medical Practice Patterns/statistics & numerical data, Precision Medicine/methods, Prescription Drugs/classification, Prescription Drugs/therapeutic use, United States
8.
J Emerg Med ; 58(6): 882-891, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32370928

ABSTRACT

BACKGROUND: Decompensation on the medical floor is associated with increased in-hospital mortality. OBJECTIVE: Our aim was to determine the accuracy of the National Early Warning Score (NEWS) in predicting early, unplanned escalation of care in patients admitted to the hospital from the emergency department (ED), compared with the Shock Index (SI) and the quick Sepsis-Related Organ Failure Assessment (qSOFA) score. METHODS: We conducted a retrospective cohort study of patients admitted directly from the ED to monitored or unmonitored beds (November 9, 2015 to April 30, 2018) in 3 hospitals. Interhospital transfers were excluded. Patient data, vital status, and bed assignment were extracted from the electronic medical record. Scores were calculated using the last set of vital signs recorded before the patient left the ED. The primary endpoint was in-hospital death or placement in an intermediate or intensive care unit within 24 h of admission from the ED. Scores were compared using the area under the receiver operating characteristic curve (AUROC). RESULTS: Of 46,018 ED admissions during the study window, 39,491 (85.8%) had complete data, of which 3.7% underwent escalation in level of care within 24 h of admission. NEWS (AUROC 0.69; 95% confidence interval [CI] 0.68-0.69) outperformed qSOFA (AUROC 0.63; 95% CI 0.62-0.63; p < 0.001) and SI (AUROC 0.60; 95% CI 0.60-0.61; p < 0.001) in predicting unplanned escalation or death at 24 h. CONCLUSIONS: This multicenter study found that NEWS was superior to the qSOFA score and SI in predicting early, unplanned escalation of care for ED patients admitted to a general medical-surgical floor.
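The comparison described here scores each admission with NEWS, qSOFA, and the Shock Index from the last ED vital signs and then compares AUROCs for 24-hour escalation or death. A sketch assuming qSOFA's standard criteria (respiratory rate ≥ 22, systolic BP ≤ 100, altered mentation) and Shock Index = heart rate / systolic BP; NEWS is treated as precomputed and all column names are assumptions:

```python
# Compute qSOFA and Shock Index from the last ED vitals, then compare each score's
# AUROC for the composite outcome (escalation of care or death within 24 h).
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ed_admissions.csv")           # hypothetical extract of last ED vitals

df["shock_index"] = df["heart_rate"] / df["systolic_bp"]
df["qsofa"] = (
    (df["resp_rate"] >= 22).astype(int)
    + (df["systolic_bp"] <= 100).astype(int)
    + df["altered_mentation"].astype(int)
)

for score in ["news", "qsofa", "shock_index"]:   # NEWS assumed precomputed
    print(score, round(roc_auc_score(df["escalation_or_death_24h"], df[score]), 2))
```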


Subjects
Early Warning Score, Sepsis, Hospital Emergency Service, Hospital Mortality, Humans, Intensive Care Units, Organ Dysfunction Scores, Prognosis, ROC Curve, Retrospective Studies
9.
J Crit Care ; 57: 246-252, 2020 06.
Article in English | MEDLINE | ID: mdl-31911086

ABSTRACT

PURPOSE: To measure how an integrated smart list developed for critically ill patients would change intensive care unit (ICU) length of stay (LOS), mortality, and charges. MATERIALS AND METHODS: Propensity-score analysis of adult patients admitted to one of 14 surgical and medical ICUs between June 2017 and May 2018. The smart list prompted clinicians to address certain preventative measures for all critically ill patients (e.g., removing unneeded catheters, starting thromboembolic prophylaxis) and was integrated into the electronic health record workflows at the study hospitals. RESULTS: During the study period, 11,979 patients were treated in the 14 participating ICUs by 518 unique providers. Patients for whom the smart list was used during ≥60% of their ICU stay (N = 432 patients, 3.6%) were significantly more likely to have a shorter ICU LOS (HR = 1.20, 95% CI: 1.0 to 1.4, p = 0.015), with an average decrease of $1218 per day in charges (95% CI: $607 to $1830, P < 0.001). The intervention cohort had fewer average ventilator days (3.05, SD = 2.55) than propensity score-matched controls (3.99, SD = 4.68, p = 0.015), but no difference in mortality (16.7% vs 16.0%, p = 0.78). CONCLUSIONS: An integrated smart list shortened LOS and lowered charges in a diverse cohort of critically ill patients.
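One common way to run the propensity-score analysis described is to model the probability of high smart-list use from baseline covariates and then match treated to control stays 1:1 on that score before comparing outcomes. A sketch under those assumptions; the covariates and column names are illustrative, not the study's:

```python
# Estimate each stay's propensity for high smart-list use, then 1:1 nearest-neighbor
# match treated stays to controls on that score before comparing ICU LOS.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("icu_stays.csv")                       # hypothetical stay-level extract
covars = ["age", "severity_score", "surgical", "vented_on_admit"]

ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["smartlist_ge_60pct"])
df["ps"] = ps_model.predict_proba(df[covars])[:, 1]

treated = df[df["smartlist_ge_60pct"] == 1]
control = df[df["smartlist_ge_60pct"] == 0]
_, idx = NearestNeighbors(n_neighbors=1).fit(control[["ps"]]).kneighbors(treated[["ps"]])
matched = control.iloc[idx.ravel()]

print("mean ICU LOS (days), treated vs matched controls:",
      treated["icu_los_days"].mean(), matched["icu_los_days"].mean())
```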


Subjects
Checklist, Critical Illness/therapy, Electronic Health Records, Intensive Care Units, Length of Stay, Adult, Aged, Catheterization, Cohort Studies, Critical Illness/economics, Female, Health Care Costs, Humans, Male, Medical Informatics, Middle Aged, Propensity Score, Proportional Hazards Models, Reproducibility of Results, Retrospective Studies, Sensitivity and Specificity, Software, User-Computer Interface, Mechanical Ventilators
10.
Ann Emerg Med ; 75(3): 370-381, 2020 03.
Article in English | MEDLINE | ID: mdl-31455571

ABSTRACT

STUDY OBJECTIVE: In 2014, Maryland launched a population-based payment model that replaced fee-for-service payments with global budgets for all hospital-based services. This global budget revenue program gives hospitals strong incentives to tightly control patient volume and meet budget targets. We examined the effects of the global budget revenue model on rates of admission to the hospital from emergency departments (EDs). METHODS: We used medical record and billing data to examine adult ED encounters from January 1, 2012, to December 31, 2015, in 25 hospital-based EDs, including 10 Maryland global budget revenue hospitals, 10 matched non-Maryland hospitals (primary control), and 5 Maryland Total Patient Revenue hospitals (secondary control). Total Patient Revenue hospitals adopted global budgeting in 2010 under a pilot Maryland program targeting rural hospitals. We conducted difference-in-differences analyses of overall ED admission rates, ED admission rates for ambulatory-care-sensitive and non-ambulatory-care-sensitive conditions, and admission rates for clinical conditions that commonly lead to admission. RESULTS: Across 3,175,210 ED encounters, the ED admission rate for Maryland global budget revenue hospitals decreased by 0.6% (95% confidence interval -0.8% to -0.4%) compared with non-Maryland controls after global budget revenue implementation, a 3.0% relative decline, and decreased by 1.9% (95% confidence interval -2.2% to -1.7%) compared with Total Patient Revenue hospitals, a 9.5% relative decline. Relative declines in ED admission rates were similar for ambulatory-care-sensitive and non-ambulatory-care-sensitive encounters. Admission rate declines varied across clinical conditions. CONCLUSION: Implementation of the global budget revenue model led to statistically significant although modest declines in ED admission rates within its first 2 years, with declines most pronounced for certain clinical conditions.
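The difference-in-differences estimate described compares the change in ED admission rates at global budget revenue hospitals with the change at control hospitals over the same period. A minimal sketch of that interaction model with standard errors clustered by hospital; variable names are assumptions:

```python
# Difference-in-differences: the interaction of global-budget hospital and
# post-2014 period estimates the policy effect on ED admission, with SEs
# clustered by hospital.
import pandas as pd
import statsmodels.formula.api as smf

enc = pd.read_csv("ed_encounters.csv")          # hypothetical encounter-level extract
did = smf.ols("admitted ~ gbr_hospital * post_2014", data=enc).fit(
    cov_type="cluster", cov_kwds={"groups": enc["hospital_id"]}
)
print(did.params["gbr_hospital:post_2014"],
      did.conf_int().loc["gbr_hospital:post_2014"].tolist())
```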


Subjects
Budgets/methods, Hospital Economics/statistics & numerical data, Hospital Emergency Service/statistics & numerical data, Patient Admission/statistics & numerical data, Hospital Economics/organization & administration, Hospital Emergency Service/economics, Female, Humans, Insurance Coverage/statistics & numerical data, Health Insurance/statistics & numerical data, Male, Maryland/epidemiology, Middle Aged, Patient Admission/economics
12.
J Emerg Med ; 52(5): 684-689, 2017 May.
Article in English | MEDLINE | ID: mdl-27955985

ABSTRACT

BACKGROUND: Computed tomography (CT) is a useful and necessary part of many emergency department (ED) assessments. However, the costs of imaging and the health risks associated with radiation exposure have sparked national efforts to reduce CT ordering in EDs. STUDY OBJECTIVE: We analyzed CT ordering habits before and after implementation of a feedback tool at a community hospital. METHODS: In this intervention study, we identified the CT-ordering habits of physicians and mid-level providers (physician assistants and nurse practitioners) at baseline and after implementation of a system that sent quarterly feedback reports comparing each provider's ordering habits with those of their peers. Variability in ordering and subgroup analyses by body region were included in these reports. RESULTS: We examined the records of 104,454 patients seen between October 1, 2013 and December 31, 2014. During the baseline period, 5552 patients (21.0%) underwent CT imaging. We observed an absolute reduction in imaging of 2.3% (95% confidence interval 1.7-2.8%) after implementation, avoiding approximately $400,000 in costs, 22 days of scanning time, and radiation exposure equivalent to 33,000 chest films annually. These changes occurred across physicians and mid-level providers, regardless of years of practice or board certification. CONCLUSIONS: Implementation of a feedback mechanism reduced CT use by emergency medicine practitioners, with concomitant reductions in cost and radiation exposure. The change was similar across levels of medical care. Future studies will examine the effect of the feedback reporting system at other institutions in our hospital network.
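A peer-comparison feedback report of the kind described can be generated directly from encounter-level ordering data: each provider's CT rate next to the pooled peer rate. A sketch with assumed column names (the actual reports also broke results out by body region):

```python
# Per-provider CT ordering rate alongside the pooled peer rate, as a feedback report.
import pandas as pd

visits = pd.read_csv("ed_visits.csv")           # columns: provider_id, ct_ordered (0/1)
report = visits.groupby("provider_id")["ct_ordered"].agg(encounters="count", ct_rate="mean")
report["peer_ct_rate"] = visits["ct_ordered"].mean()
print(report.sort_values("ct_rate", ascending=False).round(3))
```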


Subjects
Evaluation Studies as Topic, Medical Practice Patterns/standards, X-Ray Computed Tomography/statistics & numerical data, Utilization Review/methods, Clinical Competence/standards, Hospital Emergency Service/organization & administration, Hospital Emergency Service/statistics & numerical data, Humans, Medical Practice Patterns/statistics & numerical data, X-Ray Computed Tomography/economics, Utilization Review/statistics & numerical data
14.
Resuscitation ; 85(10): 1330-6, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24992873

ABSTRACT

BACKGROUND: Hands-on defibrillation (HOD), which reduces interruption of chest compressions after cardiac arrest, has been suggested as a means of improving resuscitation outcomes. The potential danger of this strategy with regard to exposing rescuers to electrical energy is still being debated. This study seeks to determine the plausible worst-case energy-transfer scenario that rescuers might encounter while performing routine resuscitative measures. METHODS: Six cadavers were acquired and prepared for defibrillation. A custom instrumentation-amplifier circuit was built to measure differential voltages at various points on the bodies. Several skin preparations were used to determine the effects of contact resistance on the voltage measurements. Resistance and exposure voltage data were acquired for a representative number of anatomic landmarks and were used to map rescuers' voltage exposure. A formula for rescuer-received dose (RRD) was derived to represent the proportion of energy a rescuer could receive from a shock delivered to a patient, and cadaver measurements were used to estimate a range of RRD. RESULTS: Defibrillation resulted in rescuer exposure voltages ranging from approximately 200 V to 827 V, depending on cadaver and anatomic location. The RRD under the test scenarios ranged from 1 to 8 J, which is in excess of accepted energy exposure levels. CONCLUSIONS: HOD using currently available personal protective equipment and resuscitative procedures poses a risk to rescuers. The practice should be considered potentially dangerous until equipment and techniques that protect rescuers are developed.


Subjects
Electric Countershock/adverse effects, Electric Injuries/etiology, Emergency Medical Technicians, Occupational Injuries/etiology, Cadaver, Cross-Sectional Studies, Humans, Risk Assessment
15.
J Emerg Nurs ; 39(5): 502-7, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23657007

ABSTRACT

INTRODUCTION: Procedural sedation and analgesia is a core competency in emergency medicine, and propofol is replacing midazolam in many emergency departments. Barriers to performing procedural sedation include resource utilization. We hypothesized that dedicated emergency nursing time would be shorter with propofol than with midazolam, without an increase in complications. METHODS: Retrospective analysis of a procedural sedation registry for two community emergency departments with a combined census of 100,000 patients/year. Demographics, procedure, and ASA physical status classification of adult patients receiving procedural sedation with midazolam or propofol between 2007 and 2010 were analyzed. The primary outcome was dedicated emergency nursing time. Secondary outcomes were procedural success, ED length of stay, and complication rate. Comparative statistics were performed with the Mann-Whitney, Kruskal-Wallis, chi-square, or Fisher's exact test. Linear regression was performed on log-transformed procedural sedation time to define predictors. RESULTS: Of 328 procedural sedations, 316 met inclusion criteria, of which 60 used midazolam and 256 propofol. Sex distribution differed between groups (midazolam 3% male; propofol 55% male; P = 0.04). Age, procedure, and ASA status were not significantly different. Propofol was associated with shorter procedural sedation time (propofol 32.5 ± 24.2 minutes; midazolam 78.7 ± 51.5 minutes; P < 0.001) and a higher rate of procedural success (propofol 98%; midazolam 92%; P = 0.02). There were no significant differences in complication rates (propofol 14%; midazolam 13%; P = 0.88) or emergency department length of stay (propofol 262.5 ± 132.8 minutes; midazolam 288.6 ± 130.6 minutes; P = 0.09). DISCUSSION: Use of propofol resulted in shorter emergency nursing time and a higher procedural success rate than midazolam, with a comparable safety profile.
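The group comparisons reported here (nursing time by Mann-Whitney U, procedural success by chi-square or Fisher's exact test) can be sketched as follows; the registry file and column names are assumptions:

```python
# Mann-Whitney U for nursing time by agent; chi-square (or Fisher's exact when
# expected counts are small) for procedural success.
import pandas as pd
from scipy.stats import mannwhitneyu, chi2_contingency, fisher_exact

sed = pd.read_csv("sedation_registry.csv")      # columns: agent, nursing_minutes, success
prop = sed[sed["agent"] == "propofol"]
mid = sed[sed["agent"] == "midazolam"]

_, p_time = mannwhitneyu(prop["nursing_minutes"], mid["nursing_minutes"])
table = pd.crosstab(sed["agent"], sed["success"])
chi2, p_success, _, expected = chi2_contingency(table)
if (expected < 5).any():                        # small expected counts -> Fisher's exact
    _, p_success = fisher_exact(table)
print(f"nursing time p={p_time:.3f}, success p={p_success:.3f}")
```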


Subjects
Emergency Nursing/methods, Emergency Nursing/statistics & numerical data, Hospital Emergency Service/statistics & numerical data, Hypnotics and Sedatives, Patient Safety/statistics & numerical data, Propofol, Female, Humans, Male, Midazolam, Middle Aged, Retrospective Studies, Time Factors
16.
Am J Emerg Med ; 30(7): 1296-305, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22795992
17.
Prehosp Disaster Med ; 27(2): 167-71, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22591633

ABSTRACT

OBJECTIVE: The objective of this study was to determine whether Emergency Medical Services (EMS) records can identify bars that serve a disproportionate number of minors, and whether government officials would use these data to direct underage-drinking enforcement efforts. METHODS: EMS call logs for all bars in the study area were cross-referenced with a local hospital's records, and the records of patients with alcohol-related complaints were analyzed. Outlier bars were identified and presented to government officials, who completed a survey to assess whether this information would prompt new enforcement efforts. RESULTS: EMS responded to 149 establishments during the study period. Eighty-four responses were distributed across six bars, and 78 were matched with the hospital's records. Fifty-one patients, 18 (35%) of whom were underage, were treated for alcohol intoxication, with 46% of the cases originating from four bars. Government officials found the information useful and planned to initiate new operations based on it. CONCLUSIONS: Alcohol consumption by minors can lead to lifelong abuse, with high personal, financial, and societal costs. EMS response data and hospital records can be used to identify bars that allow underage drinking, which is useful for directing law enforcement efforts.
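The cross-referencing step described is essentially a record linkage between EMS call logs and hospital visit records, followed by a per-bar tally of alcohol-related and underage cases. A sketch with assumed keys and column names:

```python
# Link EMS calls at bars to hospital visit records, then tally alcohol-related
# and underage cases per establishment.
import pandas as pd

ems = pd.read_csv("ems_calls.csv")      # incident_id, bar_name, patient_key, call_date
hosp = pd.read_csv("ed_visits.csv")     # patient_key, visit_date, alcohol_related, age

linked = ems.merge(hosp, left_on=["patient_key", "call_date"],
                   right_on=["patient_key", "visit_date"], how="inner")
alcohol = linked[linked["alcohol_related"] == 1]
per_bar = (alcohol.assign(underage=alcohol["age"] < 21)
                  .groupby("bar_name")
                  .agg(cases=("patient_key", "count"), underage_cases=("underage", "sum")))
print(per_bar.sort_values("cases", ascending=False))
```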


Subjects
Alcohol Drinking/prevention & control, Emergency Medical Services, Hospital Information Systems, Medical Record Linkage, Adolescent, Alcohol Drinking/legislation & jurisprudence, Baltimore, Female, Humans, Law Enforcement, Male