Results 1 - 20 of 56
1.
Am J Emerg Med ; 43: 77-80, 2021 05.
Article in English | MEDLINE | ID: mdl-33545550

ABSTRACT

Skin and soft tissue infections, such as cellulitis, are commonly diagnosed in the emergency department, and these patients are often admitted to the hospital for intravenous antibiotic therapy. Oritavancin is a novel antibiotic, approved for the treatment of skin and soft tissue infections, that is administered as a one-time infusion. While oritavancin has demonstrated efficacy comparable to multi-dose parenteral antibiotics in clinical trials and has been proposed as an alternative to admission for emergency department patients, there is a paucity of real-world effectiveness data. In this case series, we describe the characteristics and outcomes of ten patients with high-risk skin and soft tissue infections who received oritavancin and were discharged from the emergency department.


Subject(s)
Anti-Bacterial Agents/administration & dosage, Cellulitis/drug therapy, Lipoglycopeptides/administration & dosage, Soft Tissue Infections/drug therapy, Adult, Aged, Aged, 80 and over, Emergency Service, Hospital, Female, Humans, Infusions, Intravenous, Male, Middle Aged, Retrospective Studies
2.
Am J Emerg Med ; 46: 160-164, 2021 08.
Article in English | MEDLINE | ID: mdl-33071089

ABSTRACT

OBJECTIVE: The objective of this study was to compare the rate and clinical yield of computed tomography (CT) imaging between patients presenting with abdominal pain initially seen by a physician in triage (PIT) versus those seen only by physicians working in the main emergency department (ED). METHODS: A retrospective study was conducted of all self-arrivals >18 years old presenting to a single ED with abdominal pain. Nine hundred patients were randomly selected from each of the PIT and traditional patient-flow groups, and rates and yields of CT imaging were compared, both alone and in a model controlling for potential confounders. Predetermined criteria for CT significance included need for admission, consult, or targeted medications. RESULTS: The overall rate of CT imaging (unadjusted) did not differ between the PIT and traditional groups: 48.7% (95% CI 45.4-51.9) vs. 45.1% (95% CI 41.8-48.4), respectively (p = .13). The CT yield for patients seen in the PIT group was also similar to that of the traditional group: 49.1% (95% CI 44.4-53.8) vs. 50.5% (95% CI 45.6-55.4) (p = .68). In the logistic regression model, controlling for age, gender, ESI acuity, race, and insurance payor, PIT vs. traditional flow was not a predictor of CT ordering (OR 1.14, 95% CI 0.94-1.38). CONCLUSIONS: For patients with abdominal pain, we found no significant differences in rates of CT ordering or CT yield between the PIT and traditional models, suggesting that the increased efficiencies offered by PIT models do not come at the cost of altered imaging utilization.
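The unadjusted comparison of CT rates above can be reproduced with a pooled two-proportion z-test; a minimal sketch, assuming counts of roughly 438/900 and 406/900 reconstructed from the reported percentages:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_two_sided

# 48.7% of 900 PIT visits vs. 45.1% of 900 traditional visits
z, p = two_prop_z(438, 900, 406, 900)  # z ≈ 1.51, p ≈ .13 — matches the reported p = .13
```

This reproduces the non-significant unadjusted comparison; the abstract's adjusted analysis additionally conditions on age, gender, ESI acuity, race, and payor via logistic regression.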


Subject(s)
Abdominal Pain/diagnostic imaging, Emergency Service, Hospital, Practice Patterns, Physicians'/statistics & numerical data, Tomography, X-Ray Computed, Triage, Adult, Female, Humans, Male, Middle Aged, Radiography, Abdominal, Retrospective Studies
3.
Emerg Med J ; 30(5): 363-70, 2013 May.
Article in English | MEDLINE | ID: mdl-22634831

ABSTRACT

BACKGROUND: Emergency department (ED) communication needs improvement, and ED patients have repeatedly demonstrated poor comprehension of the care they receive. Through patient focus groups, the authors developed a novel tool designed to improve communication and patient comprehension. STUDY DESIGN: This is a prospective, randomised controlled clinical trial to test the efficacy of a novel, patient-centred communication tool. Patients in a small community hospital ED were randomised to receive the instrument, which was utilised by the entire ED care team and served as a checklist or guide to the patient's ED stay. At the end of the ED stay, patients completed a survey of their comprehension of the care received and a communication assessment tool-team survey (a validated instrument to assess satisfaction with communication). Three blinded chart reviewers scored patients' comprehension of their ED care as concordant, partially concordant or discordant with charted care. The authors tested for a difference in satisfaction using a two-sample t test and for a difference in comprehension using ordinal logistic regression analysis. RESULTS: 146 patients were enrolled in the study, with 72 randomised to receive the communication instrument. There was no significant difference between groups in comprehension (OR=0.65, 95% CI 0.34 to 1.23, p=0.18) or communication assessment tool-team scores (difference=0.2, 95% CI -3.4 to 3.8, p=0.91). CONCLUSIONS: Using their novel communication tool, the authors were not able to show a statistically significant improvement in either comprehension or satisfaction, though a tendency towards improved comprehension was seen.


Subject(s)
Communication, Comprehension, Emergency Service, Hospital, Patient Education as Topic/methods, Patient Satisfaction, Adult, Aged, Aged, 80 and over, Female, Focus Groups, Humans, Male, Michigan, Middle Aged, Professional-Patient Relations, Prospective Studies, Young Adult
4.
J Patient Saf ; 17(8): e843-e849, 2021 12 01.
Article in English | MEDLINE | ID: mdl-30395000

ABSTRACT

OBJECTIVES: Traditional approaches to safety and quality screening in the emergency department (ED) are porous and low yield for identifying adverse events (AEs). A better approach may be the use of trigger tool methodology. We recently developed a novel ED trigger tool using a multidisciplinary, multicenter approach. Here we conduct a multicenter test of this tool and assess its performance. METHODS: We studied the ED trigger tool over a 13-month period at four EDs. All patients 18 years and older with Emergency Severity Index acuity levels of 1 to 3 seen by a provider were eligible. Reviewers completed standardized training modules. Each site reviewed 50 randomly selected visits per month. A first-level reviewer screened for the presence of predefined triggers (findings that increase the probability of an AE). If no trigger was present, the review was deemed complete. When present, a trigger prompted an in-depth review for an AE. Any event identified was assigned a level of harm using the Medication Event Reporting and Prevention (MERP) Index, ranging from a near miss (A) to patient death (I). Events were noted as present on arrival or arising in the ED, as acts of commission or omission, and were assigned one of four event categories. A second-level physician performed a confirmatory review of all AEs and independently reviewed 10% of cases to estimate the false-negative rate. All AEs or potential AEs were reviewed in monthly group calls to reach consensus on findings. The primary outcome is the proportion of visits in which an AE is identified, overall and by site. Secondary outcomes include categories of events, distribution of harm ratings, and association of AEs with sociodemographic and clinical factors and triggers. We present sociodemographic data and details about AEs, along with results of logistic regression for associations of AEs with triggers, sociodemographics, and clinical variables.
RESULTS: We captured 2594 visits that are representative, within site, of their patient populations. Overall, the sample was 64% white and 54% female, with a mean age of 51 years. Variability was observed between sites for age, race, and insurance, but not sex. A total of 240 events were identified in 228 visits (8.8%), of which 53.3% were present on arrival, 19.7% were acts of omission, and 44.6% were medication-related, with some variability across sites. A MERP F score (contributing to need for admission, higher level of care, or prolonged hospitalization) was the most common severity level (35.4% of events). Overall, 185 (77.1%) of 240 events involved patient harm (MERP level ≥ E), affecting 175 visits (6.7%). Triggers were present in 951 visits (36.6%). Presence of any trigger was strongly associated with an AE (adjusted odds ratio = 4.6, 95% confidence interval = 3.2-6.6). Ten triggers were individually associated with AEs (adjusted odds ratios 2.1-7.7). Variability was observed across sites in individual trigger associations, event rates, and categories, but not in severity ratings of events. The overall false-negative rate was 6.1%. CONCLUSIONS: The trigger tool approach was successful in identifying meaningful events. The ED trigger tool seems to be a promising approach for identifying all-cause harm in the ED.
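The trigger-AE association above is reported as an adjusted odds ratio; the crude version comes from a 2x2 table. A minimal sketch of the crude odds ratio with a Woolf (log-method) confidence interval, using a hypothetical cell split chosen only to be consistent with the reported margins (951 trigger visits, 228 AE visits, 2594 total), not the study's actual cells:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a 95% CI (Woolf / log method) for the 2x2 table:
       a = trigger+, AE+;  b = trigger+, AE-;  c = trigger-, AE+;  d = trigger-, AE-."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# hypothetical counts (rows sum to 951/1643, columns to 228/2366, total 2594)
or_, lo, hi = odds_ratio_ci(150, 801, 78, 1565)
```

The study's 4.6 is an adjusted estimate from logistic regression; this sketch shows only the unadjusted mechanics.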


Subject(s)
Emergency Service, Hospital, Patient Harm, Female, Humans, Logistic Models, Male, Mass Screening, Middle Aged, Patient Safety
5.
Pediatr Qual Saf ; 6(2): e390, 2021.
Article in English | MEDLINE | ID: mdl-38571520

ABSTRACT

Introduction: Rapid time to antibiotics (TTA) for pediatric patients with fever and neutropenia in an emergency department decreases in-hospital mortality. Additionally, national guidelines recommend outpatient antibiotic management strategies for low-risk fever and neutropenia (LRFN). This study had two specific aims: (1) improve the percentage of patients with suspected fever and neutropenia who receive antibiotics within 60 minutes of arrival from 55% to 90%, and (2) develop and operationalize a process for outpatient management of LRFN patients by October 2018. Methods: Using Lean methodologies, we implemented Plan-Do-Check-Act cycles focused on guideline development, electronic medical record reminders, order-set development, and an LRFN pathway, addressing the root causes identified for improvement. We used statistical process control charts to assess results. Results: The project, conducted from July 2016 to October 2018, showed special-cause improvement in December 2016 on a G-chart. A monthly Xbar-chart showed improvement in average TTA from 68.5 minutes to 42.5 minutes. A P-chart showed improvement in the percentage of patients receiving antibiotics within 60 minutes, from 55% to 86.4%. An LRFN guideline and workflow were developed and implemented in October 2017. Conclusions: Implementation of guidelines, electronic medical record reminders, and order sets are useful tools for improving TTA for suspected fever and neutropenia. Utilizing more sensitive statistical process control charts early in projects with fewer patients can help recognize and guide process improvement. The development of workflows for outpatient management of LRFN may be possible, though it requires further study.
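The P-chart used above tracks a proportion against binomial 3-sigma control limits. A minimal sketch of how those limits are computed, with hypothetical monthly counts (the subgroup sizes below are illustrative, not from the study):

```python
import math

def p_chart_limits(conforming, subgroup_sizes):
    """Center line and per-subgroup 3-sigma control limits for a P-chart."""
    p_bar = sum(conforming) / sum(subgroup_sizes)  # pooled proportion (center line)
    limits = []
    for n in subgroup_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        lcl = max(0.0, p_bar - 3 * sigma)  # proportions are bounded to [0, 1]
        ucl = min(1.0, p_bar + 3 * sigma)
        limits.append((lcl, ucl))
    return p_bar, limits

# hypothetical: patients treated within 60 minutes each month, out of monthly eligible visits
on_time = [11, 14, 17, 18]
totals = [20, 20, 20, 20]
p_bar, limits = p_chart_limits(on_time, totals)
```

A monthly proportion falling outside its (lcl, ucl) pair signals special-cause variation, the kind of shift the authors report detecting after their interventions.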

6.
Cochrane Database Syst Rev ; (4): CD006657, 2010 Apr 14.
Article in English | MEDLINE | ID: mdl-20393950

ABSTRACT

BACKGROUND: Primary malaria prevention on a large scale depends on two vector control interventions: indoor residual spraying (IRS) and insecticide-treated mosquito nets (ITNs). Historically, IRS has reduced malaria transmission in many settings in the world, but the health effects of IRS have never been properly quantified. This is important, and will help compare IRS with other vector control interventions. OBJECTIVES: To quantify the impact of IRS alone, and to compare the relative impacts of IRS and ITNs, on key malariological parameters. SEARCH STRATEGY: We searched the Cochrane Infectious Diseases Group Specialized Register (September 2009), CENTRAL (The Cochrane Library 2009, Issue 3), MEDLINE (1966 to September 2009), EMBASE (1974 to September 2009), LILACS (1982 to September 2009), mRCT (September 2009), reference lists, and conference abstracts. We also contacted researchers in the field, organizations, and manufacturers of insecticides (June 2007). SELECTION CRITERIA: Cluster randomized controlled trials (RCTs), controlled before-and-after studies (CBAs) and interrupted time series (ITS) of IRS compared to no IRS or to ITNs. Studies examining the impact of IRS on special groups not representative of the general population, or using insecticides and dosages not recommended by the World Health Organization (WHO), were excluded. DATA COLLECTION AND ANALYSIS: Two authors independently reviewed trials for inclusion. Two authors extracted data, assessed risk of bias and analysed the data. Where possible, we adjusted confidence intervals (CIs) for clustering. Studies were grouped into those comparing IRS with no IRS, and IRS compared with ITNs, and then stratified by malaria endemicity. MAIN RESULTS: IRS versus no IRS. Stable malaria (entomological inoculation rate (EIR) > 1): In one RCT in Tanzania, IRS reduced re-infection with malaria parasites detected by active surveillance in children following treatment; protective efficacy (PE) 54%.
In the same setting, malaria case incidence assessed by passive surveillance was marginally reduced in children aged one to five years (PE 14%), but not in children older than five years (PE -2%). In the IRS group, malaria prevalence was slightly lower, but this was not significant (PE 6%), and mean haemoglobin was higher (mean difference 0.85 g/dL). In one CBA trial in Nigeria, IRS showed protection against malaria prevalence during the wet season (PE 26%; 95% CI 20 to 32%) but not in the dry season (PE 6%; 95% CI -4 to 15%). In one ITS in Mozambique, prevalence was reduced substantially over a period of 7 years (from 60-65% to 4-8%); the weighted before-after PE was 74% (95% CI 72 to 76%). Unstable malaria (EIR < 1): In two RCTs, IRS reduced the incidence rate of all malaria infections: PE 31% in India, and 88% (95% CI 69 to 96%) in Pakistan. By malaria species, IRS also reduced the incidence of P. falciparum (PE 93%, 95% CI 61 to 98% in Pakistan) and P. vivax (PE 79%, 95% CI 45 to 90% in Pakistan). There were similar impacts on malaria prevalence for any infection: PE 76% in Pakistan; PE 28% in India. When looking separately by parasite species, for P. falciparum there was a PE of 92% in Pakistan and 34% in India; for P. vivax there was a PE of 68% in Pakistan and no impact demonstrated in India (PE -2%). IRS versus insecticide-treated nets (ITNs). Stable malaria (EIR > 1): Only one RCT was done in an area of stable transmission (in Tanzania). When comparing parasitological re-infection by active surveillance after treatment in short-term cohorts, ITNs appeared better, but the difference was likely not significant, as the unadjusted CIs approached 1 (risk ratio IRS:ITN = 1.22). When the incidence of malaria episodes was measured by passive case detection, no difference was found in children aged one to five years (risk ratio = 0.88, direction in favour of IRS).
No difference was found for malaria prevalence or haemoglobin. Unstable malaria (EIR < 1): Two studies; for incidence and prevalence, malaria rates were higher in the IRS group than in the ITN group. Malaria incidence was higher in the IRS arm in India (risk ratio IRS:ITN = 1.48) and in South Africa (risk ratio 1.34, but the cluster-unadjusted CIs included 1). For malaria prevalence, ITNs appeared to give better protection against any infection compared to IRS in India (risk ratio IRS:ITN = 1.70), and also for both P. falciparum (risk ratio IRS:ITN = 1.78) and P. vivax (risk ratio IRS:ITN = 1.37). AUTHORS' CONCLUSIONS: Historical and programme documentation has clearly established the impact of IRS. However, the number of high-quality trials is too few to quantify the size of effect in different transmission settings. The evidence from randomized comparisons of IRS versus no IRS confirms that IRS reduces malaria incidence in unstable malaria settings, but randomized trial data from stable malaria settings are very limited. Some limited data suggest that ITNs give better protection than IRS in unstable areas, but more trials are needed to compare the effects of ITNs with IRS, as well as to quantify their combined effects.
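Protective efficacy (PE) throughout this review is derived from the risk (or rate) ratio as PE = (1 − RR) × 100%, so a negative PE means the intervention arm fared worse. A minimal sketch with illustrative numbers (the risks below are hypothetical, not trial data):

```python
def protective_efficacy(risk_intervention, risk_control):
    """PE (%) = (1 - risk ratio) * 100; negative values indicate no protection."""
    rr = risk_intervention / risk_control
    return 100 * (1 - rr)

# e.g. a hypothetical wet-season prevalence of 37% with IRS vs. 50% without
pe = protective_efficacy(0.37, 0.50)  # ≈ 26%, the scale of the Nigerian wet-season estimate
```

The same identity explains entries like "risk ratio IRS:ITN = 1.22": a ratio above 1 corresponds to a negative PE for IRS relative to ITNs.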


Subject(s)
Insect Vectors, Insecticide-Treated Bednets, Insecticides, Malaria/prevention & control, Mosquito Control/methods, Africa South of the Sahara/epidemiology, Animals, Humans, Incidence, India/epidemiology, Malaria/epidemiology, Pakistan/epidemiology, Pesticide Residues, Randomized Controlled Trials as Topic
7.
West J Emerg Med ; 21(4): 748-751, 2020 May 22.
Article in English | MEDLINE | ID: mdl-32726234

ABSTRACT

INTRODUCTION: SARS-CoV-2, a novel coronavirus, manifests as a respiratory syndrome (COVID-19) and is the cause of an ongoing pandemic. The response to COVID-19 in the United States has been hampered by an overall lack of diagnostic testing capacity. To address uncertainty about ongoing levels of SARS-CoV-2 community transmission early in the pandemic, we aimed to develop a surveillance tool using readily available emergency department (ED) operations data extracted from the electronic health record (EHR). This involved optimizing the identification of acute respiratory infection (ARI)-related encounters and then comparing metrics for these encounters before and after the confirmation of SARS-CoV-2 community transmission. METHODS: We performed an observational study using operational EHR data from two Midwest EDs with a combined annual census of over 80,000. Data were collected three weeks before and after the first confirmed case of local SARS-CoV-2 community transmission. To optimize capture of ARI cases, we compared various metrics, including chief complaint, discharge diagnoses, and ARI-related orders. Operational metrics for ARI cases, including volume, pathogen identification, and illness severity, were compared between the pre- and post-community transmission timeframes using chi-square tests of independence. RESULTS: Compared to our combined definition of ARI, chief complaint, discharge diagnoses, and isolation orders each individually identified less than half of the cases. Respiratory pathogen testing was the top-performing individual ARI definition but still identified only 72.2% of cases. From the pre to the post period, we observed significant increases in ED volumes due to ARI and in ARI cases without an identified pathogen. CONCLUSION: Certain methods for identifying ARI cases in the ED may be inadequate, and multiple criteria should be used to optimize capture.
In the absence of widely available SARS-CoV-2 testing, operational metrics for ARI-related encounters, especially the proportion of cases involving negative pathogen testing, are useful indicators for active surveillance of potential COVID-19 related ED visits.
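The pre/post comparisons above use a chi-square test of independence. A minimal sketch for a 2x2 table, with illustrative counts (not the study's data):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    without continuity correction, using the closed-form identity
    chi2 = n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# illustrative: ARI visits with vs. without an identified pathogen, pre vs. post
chi2 = chi_square_2x2(120, 80, 150, 40)
# chi2 > 3.84 indicates p < .05 at 1 degree of freedom
```

With one degree of freedom, the 5% critical value is 3.84, so the hypothetical table above would count as a significant shift in the pathogen-identification mix.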


Subject(s)
Betacoronavirus, Coronavirus Infections/transmission, Electronic Health Records, Pneumonia, Viral/transmission, COVID-19, COVID-19 Testing, Clinical Laboratory Techniques, Coronavirus Infections/diagnosis, Emergency Service, Hospital, Humans, Pandemics, Pneumonia, Viral/diagnosis, SARS-CoV-2
8.
J Patient Saf ; 16(4): e245-e249, 2020 12.
Article in English | MEDLINE | ID: mdl-28661998

ABSTRACT

OBJECTIVES: Quality and safety review for performance improvement is important for systems of care and is required for US academic emergency departments (EDs). Assessment of the impact of patient safety initiatives, in the context of increasing burdens of quality measurement, compels standardized, meaningful, high-yield approaches to performance review. Limited data describe how quality and safety reviews are currently conducted and how well they perform in detecting patient harm and areas for improvement. We hypothesized that the decades-old approaches used in many academic EDs are inefficient and low yield for identifying patient harm. METHODS: We conducted a prospective observational study to evaluate the efficiency and yield of current quality review processes at five academic EDs over a 12-month period. Sites provided descriptions of their current practice and collected summary data on the number and severity of events identified in their reviews and the referral sources that led to their capture. Categories of common referral sources were established at the beginning of the study. Sites used the Institute for Healthcare Improvement's definition of an adverse event and a modified National Coordinating Council for Medication Error Reporting and Prevention (MERP) Index for grading severity of events. RESULTS: Participating sites had similar processes for quality review, including a two-level review process, monthly reviews and conferences, similar screening criteria, and a grading system for evaluating cases. In 60 site-months of data collection, we reviewed a total of 4735 cases and identified 381 events. These included 287 near misses and errors/events without harm (MERP A-D) and 94 adverse events (AEs) (MERP E-I). The overall AE rate (event rate with harm) was 1.99% (95% confidence interval = 1.62%-2.43%), ranging from 1.24% to 3.47% across sites. The overall rate of quality concerns (events without harm) was 6.06% (5.42%-6.78%), ranging from 2.96% to 10.95% across sites.
Seventy-two-hour returns were the most frequent referral source, accounting for 47% of the cases reviewed but with a yield of only 0.81% in identifying harm. Other referral sources similarly had very low yields. External referrals were the highest-yield referral source, with 14.34% (10.64%-19.03%) identifying AEs; external referrals also accounted for 41.49% of the 94 AEs identified. CONCLUSIONS: With an overall adverse event rate of 1.99%, commonly used referral sources seem to be low yield and inefficient for detecting patient harm. Approximately 6% of the cases identified by these criteria yielded a near miss or quality concern. New approaches to quality and safety review in the ED are needed to optimize their yield and efficiency for identifying harm and areas for improvement.
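The event-rate confidence intervals above are consistent with a Wilson score interval for a binomial proportion. A minimal sketch, assuming the 1.99% rate corresponds to 94 adverse events among the 4735 reviewed cases:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(94, 4735)
# ≈ (0.0163, 0.0242): close to the reported 95% CI of 1.62%-2.43%
```

Unlike the simple normal-approximation interval, the Wilson interval behaves well for small proportions like the ~2% AE rate here, which is why it is a common default for rare-event rates.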


Subject(s)
Emergency Service, Hospital/standards, Patient Safety/standards, Quality of Health Care/standards, Humans, Prospective Studies, United States
9.
J Patient Saf ; 16(1): e11-e17, 2020 03.
Article in English | MEDLINE | ID: mdl-27314201

ABSTRACT

OBJECTIVE: This study aimed to develop an emergency department (ED) trigger tool that improves the identification of adverse events in the ED and can be used to direct patient safety and quality improvement. This work describes the first step toward the development of an ED all-cause harm measurement tool by experts in the field. METHODS: We identified a multidisciplinary group of emergency medicine safety experts from whom we solicited candidate triggers. We then conducted a modified Delphi process consisting of 4 stages: (1) a systematic literature search and review, including an independent oversampling of review for inclusion; (2) solicitation of empiric triggers from participants; (3) a Web-based survey ranking triggers on specific performance constructs; and (4) a final in-person meeting to arrive at consensus triggers for testing. Results of each stage were shared with participants before the next. RESULTS: Among an initial 804 unique articles found using our search criteria, we identified 94 that were suitable for further review. Interrater reliability was high (κ = 0.80). Review of these articles yielded 56 candidate triggers. These were supplemented by 58 participant-submitted triggers, for a total of 114 candidate triggers, which were shared with team members electronically along with their definitions. Team members then voted on each measure via a Web-based survey, ranking triggers on their face validity, utility for quality improvement, and fidelity (sensitivity/specificity). Participants could also flag any trigger about which they had questions or that they felt merited further discussion at the in-person meeting. Triggers were ranked by combining the first 2 categories (face validity and utility), and information on fidelity was reviewed for decision making at the in-person meeting. Seven redundant triggers were eliminated.
At an in-person meeting including representatives from all facilities, we presented the 50 top-ranked triggers as well as those flagged on the survey by 2 or more participants. We reviewed each trigger individually, identifying 41 triggers with clear agreement for inclusion. Of the 7 additional triggers that required subsequent voting via e-mail, 5 were adopted, for a total of 46 consensus-derived triggers. CONCLUSIONS: Our modified Delphi process resulted in the identification of 46 final triggers for the detection of adverse events among ED patients. These triggers should be pilot field-tested to quantify their individual and collective performance in detecting all-cause harm to ED patients.
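The interrater reliability figure above (κ = 0.80) is a Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch on toy ratings (the labels and data below are illustrative, not the study's screening decisions):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement: product of each rater's marginal label frequencies
    p_chance = sum((c1[label] / n) * (c2[label] / n) for label in c1.keys() | c2.keys())
    return (p_observed - p_chance) / (1 - p_chance)

# toy example: two reviewers screening 4 articles for inclusion
k = cohens_kappa(["in", "in", "out", "out"], ["in", "in", "out", "in"])  # → 0.5
```

Here raw agreement is 75%, but chance agreement is 50%, so kappa is only 0.5; values around 0.8, as reported above, are conventionally read as substantial-to-excellent agreement.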


Subject(s)
Delphi Technique, Quality Improvement/standards, Emergency Service, Hospital/standards, Humans, Reproducibility of Results
10.
PLoS Med ; 6(4): e1000055, 2009 Apr 14.
Article in English | MEDLINE | ID: mdl-19365539

ABSTRACT

BACKGROUND: Although the molecular basis of resistance to a number of common antimalarial drugs is well known, a geographic description of the emergence and dispersal of resistance mutations across Africa has not been attempted. To that end we have characterised the evolutionary origins of antifolate resistance mutations in the dihydropteroate synthase (dhps) gene and mapped their contemporary distribution. METHODS AND FINDINGS: We used microsatellite polymorphism flanking the dhps gene to determine which resistance alleles shared common ancestry and found five major lineages each of which had a unique geographical distribution. The extent to which allelic lineages were shared among 20 African Plasmodium falciparum populations revealed five major geographical groupings. Resistance lineages were common to all sites within these regions. The most marked differentiation was between east and west African P. falciparum, in which resistance alleles were not only of different ancestry but also carried different resistance mutations. CONCLUSIONS: Resistant dhps has emerged independently in multiple sites in Africa during the past 10-20 years. Our data show the molecular basis of resistance differs between east and west Africa, which is likely to translate into differing antifolate sensitivity. We have also demonstrated that the dispersal patterns of resistance lineages give unique insights into recent parasite migration patterns.


Subject(s)
Antimalarials/pharmacology, Dihydropteroate Synthase/genetics, Drug Resistance/genetics, Malaria, Falciparum/drug therapy, Membrane Transport Proteins/genetics, Plasmodium falciparum/drug effects, Protozoan Proteins/genetics, Africa/epidemiology, Alleles, Animals, Antimalarials/therapeutic use, Chloroquine/pharmacology, Chloroquine/therapeutic use, DNA, Protozoan/genetics, Drug Combinations, Humans, Malaria, Falciparum/epidemiology, Malaria, Falciparum/parasitology, Malaria, Falciparum/prevention & control, Microsatellite Repeats, Phylogeny, Plasmodium falciparum/enzymology, Plasmodium falciparum/genetics, Plasmodium falciparum/isolation & purification, Polymorphism, Single Nucleotide, Population Surveillance, Pyrimethamine/pharmacology, Pyrimethamine/therapeutic use, Selection, Genetic, Sulfadoxine/pharmacology, Sulfadoxine/therapeutic use
11.
West J Emerg Med ; 20(3): 454-459, 2019 May.
Article in English | MEDLINE | ID: mdl-31123545

ABSTRACT

INTRODUCTION: Most emergency departments (EDs) use patient experience surveys (e.g., Press Ganey) that include specific physician assessment fields. Our ED group currently staffs two EDs: one at a large, tertiary-care hospital and the other at a small, affiliated community site. Both are staffed by the same physicians. The goals of this study were to determine whether Press Ganey ED satisfaction scores for emergency physicians working at two different sites were consistent between sites, and to identify factors contributing to any variation. METHODS: We conducted a retrospective study of patients seen at either ED between September 2015 and March 2016 who returned a Press Ganey satisfaction survey. We compiled a database linking each patient visit with the patient's responses, on a 1-5 scale, to questions that included "overall rating of emergency room care" and five physician-specific questions. Operational metrics, including time to room, time to physician, overall length of stay, labs received, prescriptions received, demographic data, and the attending physician, were also linked. We averaged scores for physicians staffing both EDs and compared them between sites using t-tests. Multiple logistic regression was used to determine the impact of visit-specific metrics on survey scores. RESULTS: A total of 1,012 ED patients met the inclusion criteria (site 1 = 457; site 2 = 555). The overall rating-of-care metric was significantly lower at the tertiary-care hospital ED than at the lower-volume ED (4.30 vs 4.65). The same trend was observed when the five physician-specific metrics were summed (22.06 vs 23.32). Metrics associated with scores included arrival-to-first-attending time (p=0.013) and arrival-to-ED-departure time (p=0.038), both of which were longer at the tertiary-care hospital ED. CONCLUSION: Press Ganey satisfaction scores for the same group of emergency physicians varied significantly between sites, suggesting that these scores depend more on site-specific factors, such as wait times, than on the quality of care provided by the physician.


Subject(s)
Emergency Medical Services/standards, Physicians/psychology, Attitude of Health Personnel, Emergency Service, Hospital/statistics & numerical data, Female, Humans, Male, Middle Aged, Patient Satisfaction/statistics & numerical data, Personal Satisfaction, Research Design, Retrospective Studies, Surveys and Questionnaires
12.
Acad Emerg Med ; 26(6): 670-679, 2019 06.
Article in English | MEDLINE | ID: mdl-30859666

ABSTRACT

OBJECTIVES: An adverse event (AE) is a physical harm experienced by a patient due to health care, requiring intervention. Describing and categorizing AEs is important for quality and safety assessment and identifying areas for improvement. Safety science suggests that improvement efforts should focus on preventing and mitigating harm rather than on error, which is commonplace but infrequently leads to AEs. Most taxonomies fail to describe harm experienced by patients (e.g., hypoxia, hemorrhage, anaphylaxis), focusing instead on errors, and use categorizations that are too broad to be useful (e.g., "communication error"). We set out to create a patient-centered, emergency department (ED)-specific framework for describing AEs and near misses to advance quality and safety in the acute care setting. METHODS: We performed a critical review of existing taxonomies of harm, evaluating their applicability to the ED. We identified and adopted a classification framework and developed a taxonomy using an iterative process categorizing approximately 600 previously identified AEs and near misses. We reviewed this taxonomy with collaborators at four medical centers, receiving feedback and providing clarification. We then disseminated a set of representative scenarios for these safety experts to categorize independently using the taxonomy. We calculated interrater reliability and performance compared to our criterion standard. RESULTS: Our search identified candidate taxonomies for detailed review. We selected the Adventist Health Systems AE taxonomy and modified this for use in the ED, adopting a framework of categories, subcategories, and up to three modifiers to further describe events. On testing, overall reviewer agreement with the criterion standard was 92% at the category level and 88% at the subcategory level. Three of the four raters concurred in 55 of 59 scenarios (93%) and all four concurred in 46 of 59 scenarios (78%). 
At the subcategory level, there was complete agreement in 40 of 59 scenarios (68%) and majority agreement in 55 of 59 (93%). Performance of individual raters ranged from very good (88%, 52/59) to near perfect (98%, 58/59) at the main category level. CONCLUSIONS: We developed a taxonomy of AEs and near misses for the ED, modified from an existing framework. Testing of the tool with minimal training yielded high performance and good interrater reliability. This taxonomy can be adapted and modified by EDs seeking to enhance their quality and safety reviews and to characterize harm occurring in their EDs for quality improvement purposes.
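The agreement figures reported here (e.g., majority concurrence in 55 of 59 scenarios) are simple proportions of rater-versus-criterion matches. A minimal sketch of how such statistics can be computed, using invented labels rather than the study's data:

```python
def agreement_stats(criterion, ratings_by_rater, majority=3):
    """Percent agreement of each rater with a criterion standard, plus the
    fraction of scenarios where all raters, or at least `majority` raters,
    matched the criterion. Inputs are equal-length label lists."""
    n = len(criterion)
    per_rater = [sum(r[i] == criterion[i] for i in range(n)) / n
                 for r in ratings_by_rater]
    # For each scenario, count how many raters matched the criterion label.
    concur = [sum(r[i] == criterion[i] for r in ratings_by_rater)
              for i in range(n)]
    frac_all = sum(c == len(ratings_by_rater) for c in concur) / n
    frac_majority = sum(c >= majority for c in concur) / n
    return per_rater, frac_all, frac_majority

# Toy example: 4 raters, 4 scenarios (category labels are hypothetical).
gold = ["airway", "med-error", "delay", "procedure"]
raters = [
    ["airway", "med-error", "delay", "procedure"],  # agrees 4/4
    ["airway", "med-error", "delay", "other"],      # agrees 3/4
    ["airway", "med-error", "other", "other"],      # agrees 2/4
    ["airway", "other", "other", "other"],          # agrees 1/4
]
per_rater, frac_all, frac_majority = agreement_stats(gold, raters)
```

With these toy inputs, per-rater agreement is 100%, 75%, 50%, and 25%; all four raters concur on 25% of scenarios and at least three concur on 50%, mirroring the "all four concurred" and "three of four concurred" metrics in the abstract.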


Subject(s)
Emergency Service, Hospital/standards , Medical Errors/classification , Near Miss, Healthcare/classification , Risk Management/methods , Humans , Quality Improvement , Reproducibility of Results
13.
Am J Trop Med Hyg ; 78(2): 256-61, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18256426

ABSTRACT

The prevalence and frequency of the dihydrofolate reductase (dhfr) and dihydropteroate synthetase (dhps) mutations associated with sulfadoxine-pyrimethamine (SP) resistance at 13 sentinel surveillance sites in southern Mozambique were examined regularly between 1999 and 2004. Frequency of the dhfr triple mutation increased from 0.26 in 1999 to 0.96 in 2003, remaining high in 2004. The dhps double mutation frequency peaked in 2001 (0.22) but declined to baseline levels (0.07) by 2004. Similarly, parasites with both dhfr triple and dhps double mutations had increased in 2001 (0.18) but decreased by 2004 (0.05). The peaking of SP resistance markers in 2001 coincided with a SP-resistant malaria epidemic in neighboring KwaZulu-Natal, South Africa. The decline in dhps (but not dhfr) mutations corresponded with replacement of SP with artemether-lumefantrine as malaria treatment policy in KwaZulu-Natal. Our results show that drug pressure can exert its influence at a regional level rather than merely at a national level.


Subject(s)
Dihydropteroate Synthase/genetics , Drug Resistance/genetics , Mutation , Plasmodium falciparum/genetics , Tetrahydrofolate Dehydrogenase/genetics , Adolescent , Animals , Child , Child, Preschool , DNA, Protozoan/analysis , Drug Resistance, Multiple/genetics , Female , Gene Flow , Gene Frequency , Genotype , Humans , Malaria, Falciparum/drug therapy , Malaria, Falciparum/epidemiology , Male , Mozambique , Mutation/drug effects , Mutation/genetics , Plasmodium falciparum/drug effects , Plasmodium falciparum/enzymology
14.
Malar J ; 7: 258, 2008 Dec 17.
Article in English | MEDLINE | ID: mdl-19091114

ABSTRACT

BACKGROUND: Five large insecticide-treated net (ITN) programmes and two indoor residual spraying (IRS) programmes were compared using a standardized costing methodology. METHODS: Costs were measured locally or derived from existing studies and focused on the provider perspective, but included the direct costs of net purchases by users; all costs are reported in 2005 USD. Effectiveness was estimated by combining programme outputs with standard impact indicators. FINDINGS: Conventional ITNs: the cost per treated net-year of protection ranged from USD 1.21 in Eritrea to USD 6.05 in Senegal. The cost per child death averted ranged from USD 438 to USD 2,199 when targeting to children was successful. Long-lasting insecticidal nets (LLINs) of five years' duration: the cost per treated net-year of protection ranged from USD 1.38 in Eritrea to USD 1.90 in Togo. The cost per child death averted ranged from USD 502 to USD 692. IRS: the costs per person-year of protection for all ages were USD 3.27 in KwaZulu-Natal and USD 3.90 in Mozambique. If only children under five years of age were included in the denominator, the cost per person-year of protection was higher: USD 23.96 and USD 21.63, respectively. As a result, the cost per child death averted was higher than for ITNs: USD 3,933-4,357. CONCLUSION: Both ITNs and IRS are highly cost-effective vector control strategies. Integrated free ITN distribution campaigns appeared to be the most efficient way to rapidly increase ITN coverage; other approaches were as or more cost-effective and appeared better suited to "keep-up" coverage levels. ITNs are more cost-effective than IRS in highly endemic settings, especially if high ITN coverage can be achieved with some demographic targeting.
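The jump from USD 3.27 per person-year (all ages) to USD 23.96 (children under five only) is purely a denominator effect: restricting the protected population to a subgroup scales the metric by the inverse of that subgroup's share. A minimal sketch, with a hypothetical programme cost and coverage, and an under-five fraction back-solved (about 13.65%) so the outputs match the abstract's figures:

```python
def cost_per_person_year(total_annual_cost, people_protected, frac_in_group=1.0):
    """Cost per person-year of protection. Restricting the denominator to a
    subgroup (e.g., children under five) scales the metric by 1/frac_in_group."""
    return total_annual_cost / (people_protected * frac_in_group)

total_cost = 327_000.0  # hypothetical annual IRS programme cost (USD)
covered = 100_000       # hypothetical people protected for one full year

all_ages = cost_per_person_year(total_cost, covered)            # -> 3.27
under_five = cost_per_person_year(total_cost, covered, 0.1365)  # -> ~23.96
```

Neither input is from the study; the point is that the same spending yields a roughly sevenfold higher per-person-year cost once only under-fives count toward the denominator, which in turn drives the higher cost per child death averted reported for IRS.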


Subject(s)
Malaria/epidemiology , Malaria/prevention & control , Mosquito Control/economics , Mosquito Control/methods , Africa/epidemiology , Bedding and Linens/economics , Child, Preschool , Cost-Benefit Analysis , Humans , Infant
15.
WMJ ; 117(5): 214-218, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30674099

ABSTRACT

INTRODUCTION: Patient "handoffs" or "sign-outs" in medicine are widely recognized as times highly vulnerable to medical error. The emergency department (ED) has been identified as an environment where these transitions of care at shift change are particularly high-risk due to a variety of factors, including frequent interruptions, which can lead to errors in the transfer of information. Our primary objective was to evaluate whether simple interventions could minimize interruptions during the sign-out period, in an attempt to improve patient safety. METHODS: Multiple low-cost interventions were implemented, including an overhead chime, clerical staff diversion of phone calls and electrocardiograms, and prominent positioning of a movable pedestal sign. Using a before-and-after study design, we directly observed team sign-outs at various shift changes throughout the day over 2-month periods before and after implementation. Our primary outcome measure was the number of interruptions that occurred during designated sign-out times. We also assessed total time spent in sign-out, and a survey was sent to clinicians to assess their perception of sign-out safety. RESULTS: Total sign-out interruptions decreased significantly as a result of the interventions (average 6.1 vs 1.1; P < 0.01). Total time spent in sign-out was reduced (14.1 vs 11.4 minutes; P < 0.04), and clinicians' perception of safety improved significantly, with the proportion of Likert scores of 4 or 5 on a 5-point scale increasing from 47.4% before to 91.7% after implementation. CONCLUSION: Sign-out at shift change is a vulnerable time for patient safety and transitions of care, with interruptions further compromising the safe transfer of information. Simple interventions significantly decreased interruptions and were associated with shorter sign-out periods and improved provider perception of sign-out safety.


Subject(s)
Emergency Service, Hospital/organization & administration , Medical Errors/prevention & control , Patient Handoff/standards , Patient Safety , Academic Medical Centers , Humans , Quality Improvement , Surveys and Questionnaires , Wisconsin
16.
West J Emerg Med ; 19(2): 392-397, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29560071

ABSTRACT

INTRODUCTION: Opioid prescribing patterns have come under increasing scrutiny with the recent rise in opioid prescriptions, opioid misuse and abuse, and opioid-related adverse events. To date, there have been limited studies on the effect of default tablet quantities in emergency department (ED) electronic order entry. Our goal was to evaluate opioid prescribing patterns before and after the removal of a default quantity of 20 tablets from ED electronic order entry. METHODS: We performed a retrospective observational study at a single academic, urban ED with 58,000 annual visits. We identified all adult patients (18 years or older) seen in the ED and discharged home with prescriptions for tablet forms of hydrocodone and oxycodone (including mixed formulations with acetaminophen). We compared the quantity of tablets prescribed per opioid prescription during the 12 months before and 10 months after the default quantity of 20 tablets was removed from electronic order entry and replaced with no default. No specific messaging was given to providers, to avoid influencing prescribing patterns. We used the two-sample Wilcoxon rank-sum test, two-sample test of proportions, and Pearson's chi-squared test, where appropriate, for statistical analysis. RESULTS: A total of 4,104 adult patients received discharge prescriptions for opioids in the pre-intervention period (151.6 prescriptions per 1,000 discharged adult patients), and 2,464 in the post-intervention period (106.69 prescriptions per 1,000 discharged adult patients). The median quantity of opioid tablets prescribed decreased from 20 (interquartile range [IQR] 10-20) to 15 (IQR 10-20) (p<0.0001) after removal of the default quantity.
While the most frequent quantity of tablets received in both groups was 20, the proportion of patients whose discharge prescriptions contained 20 tablets decreased from 0.50 (95% confidence interval [CI] 0.48-0.52) to 0.23 (95% CI 0.21-0.24) (p<0.001) after removal of the default. CONCLUSION: Although the median number of tablets differed significantly before and after the intervention, the clinical significance of this difference is unclear. The wider distribution of tablet quantities observed after removal of the default may reflect more appropriate prescribing (i.e., less severe indications receiving fewer tablets and more severe indications receiving more). A default value of 20 tablets for opioid prescriptions may be a case in which the electronic medical record's ability to reduce practice variability in medication orders actually counteracts optimal patient care.
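A minimal sketch of the two-sample test of proportions and the single-proportion Wald interval behind the figures above. The counts are hypothetical reconstructions from the stated proportions and group sizes (0.50 of 4,104 pre; 0.23 of 2,464 post), not the study's raw data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-sample z-statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def wald_ci(x, n, z=1.96):
    """95% Wald confidence interval for a single proportion."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Pre: ~50% of 4,104 prescriptions for exactly 20 tablets; post: ~23% of 2,464.
z = two_proportion_z(2052, 4104, 567, 2464)
lo, hi = wald_ci(2052, 4104)  # rounds to (0.48, 0.52), as reported
```

With these reconstructed counts the z-statistic is far beyond the threshold for p<0.001, consistent with the reported significance, and the pre-intervention interval reproduces the published 0.48-0.52 CI.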


Subject(s)
Analgesics, Opioid/therapeutic use , Electronic Health Records/standards , Emergency Service, Hospital/organization & administration , Practice Patterns, Physicians'/organization & administration , Electronic Health Records/organization & administration , Humans , Hydrocodone/therapeutic use , Oxycodone/therapeutic use , Retrospective Studies
17.
Am J Trop Med Hyg ; 76(6): 1027-32, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17556606

ABSTRACT

The Bioko Island Malaria Control Project (BIMCP) has carried out intensive interventions since early 2004 to reduce malaria transmission through indoor residual spraying (IRS) and case management. Annual parasite prevalence surveys have been carried out to monitor the effectiveness of the program. Significant overall reductions in prevalence of infection have been observed, with 42% fewer infections occurring in 2006 compared with baseline. Nevertheless, there is evidence of considerable heterogeneity in impact of the intervention. Prevalence of infection was significantly associated with spray status of the child's house, spray coverage with effective insecticide of the neighborhood of the house, bed net use, and time elapsed since last spray. Careful scheduling of spray coverage is therefore essential to maximize the effectiveness of IRS and to ensure consistent reductions in parasite prevalence. This can only be achieved if comprehensive monitoring systems are in place for both the management and evaluation of the intervention.


Subject(s)
Fumigation/methods , Insect Control/methods , Malaria, Falciparum/prevention & control , Plasmodium falciparum/growth & development , Adolescent , Animals , Antimalarials/therapeutic use , Child , Child, Preschool , City Planning , Equatorial Guinea/epidemiology , Fumigation/standards , Humans , Insecticides , Logistic Models , Malaria, Falciparum/drug therapy , Malaria, Falciparum/epidemiology , Prevalence , Rural Population
18.
Am J Trop Med Hyg ; 76(1): 42-7, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17255227

ABSTRACT

The Lubombo Spatial Development Initiative is a joint development program between the governments of Mozambique, Swaziland, and South Africa, which includes malaria control as a core component of the initiative. Vector control through indoor residual spraying (IRS) was incrementally introduced in southern Mozambique between November 2000 and February 2004. Surveillance to monitor its impact was conducted by annual cross-sectional surveys to assess the prevalence of Plasmodium falciparum infection, entomologic monitoring, and malaria case notification in neighboring South Africa and Swaziland. In southern Mozambique, there was a significant reduction in P. falciparum prevalence after the implementation of IRS, with an overall relative risk of 0.74 for each intervention year (P < 0.001), ranging from 0.66 after the first year to 0.93 after the fifth intervention year. Substantial reductions in notified malaria cases were reported in South Africa and Swaziland over the same period. The success of the program in reducing malaria transmission throughout the target area provides a strong argument for investment in regional malaria control.


Subject(s)
Malaria, Falciparum/prevention & control , Adolescent , Animals , Antimalarials/therapeutic use , Child , Child, Preschool , Eswatini/epidemiology , Humans , Insect Vectors , International Cooperation , Malaria, Falciparum/epidemiology , Mosquito Control/methods , Mozambique/epidemiology , Plasmodium falciparum/isolation & purification , Prevalence , South Africa/epidemiology , Time Factors
19.
Malar J ; 6: 52, 2007 May 02.
Article in English | MEDLINE | ID: mdl-17474975

ABSTRACT

BACKGROUND: A comprehensive malaria control intervention was initiated in February 2004 on Bioko Island, Equatorial Guinea. This manuscript reports on the continuous entomological monitoring of the indoor residual spray (IRS) programme during the first two years of its implementation. METHODS: Mosquitoes were captured daily using window traps at 16 sentinel sites and analysed for species identification, sporozoite rates, and knockdown resistance (kdr) using the polymerase chain reaction (PCR) to assess the efficacy of the vector control initiative from December 2003 to December 2005. RESULTS: A total of 2,807 Anopheles funestus and 10,293 Anopheles gambiae s.l. were captured over the study period. Both the M and S molecular forms of An. gambiae s.s., as well as Anopheles melas, were identified. Prior to the first round of IRS, sporozoite rates were 6.0, 8.3, and 4.0 for An. gambiae s.s., An. melas, and An. funestus, respectively, showing An. melas to be an important vector in areas where it occurred. After the third spray round, no infective mosquitoes were identified. After the first spray round, which used a pyrethroid, numbers of An. gambiae s.s. were not reduced, owing to the presence of the kdr gene, but An. funestus and An. melas populations declined from 23.5 to 3.1 and from 5.3 to 0.8 per trap per 100 nights, respectively. After the introduction of a carbamate insecticide in the second round, An. gambiae s.s. declined from 25.5 to 1.9 per trap per 100 nights, and An. funestus and An. melas remained at very low levels. Kdr was found only in the M form of An. gambiae s.s., with the highest frequency at Punta Europa (85%). CONCLUSION: All three vectors responsible for malaria transmission before the start of the intervention were successfully controlled once an effective insecticide was used. Continuous entomological surveillance, including resistance monitoring, is of critical importance in any IRS-based malaria vector control programme.
This paper demonstrates that sufficient resources for such monitoring should be included in any proposal in order to avoid programme failures.


Subject(s)
Anopheles , Fumigation/methods , Insect Vectors , Insecticides , Malaria/prevention & control , Mosquito Control/methods , Animals , Anopheles/growth & development , Carbamates/pharmacology , Equatorial Guinea , Genes, Insect , Humans , Insect Vectors/growth & development , Malaria/transmission , Polymerase Chain Reaction , Pyrethrins/pharmacology , Sporozoites/growth & development
20.
Malar J ; 6: 142, 2007 Oct 31.
Article in English | MEDLINE | ID: mdl-17973989

ABSTRACT

BACKGROUND: Indoor residual spraying (IRS) has again become popular for malaria control in Africa. This, combined with WHO's affirmation that DDT is appropriate for use in some malaria-endemic settings in the absence of longer-lasting insecticide formulations, has led to an increase in IRS with DDT as a major malaria vector control intervention in Africa. DDT was re-introduced into Mozambique's IRS programme in 2005 and is increasingly the main insecticide used for malaria vector control in Mozambique. The selection of DDT as the insecticide of choice in Mozambique is evidence-based, taking account of the susceptibility of Anopheles funestus to all available insecticide choices as well as the operational costs of spraying. Lambda-cyhalothrin had previously replaced DDT in Mozambique in 1993; however, resistance to this insecticide appeared quickly, and in 2000 the pyrethroid was phased out and the carbamate bendiocarb introduced. Low-level bendiocarb resistance was detected by biochemical assay in 1999 in both An. funestus and Anopheles arabiensis, although this was not evident in WHO bioassays of the same population. METHODS: Sentinel sites were established and monitored for insecticide resistance using WHO bioassays. These assays were conducted on 1-3-day-old F1 offspring of field-collected adult An. funestus females to determine levels of insecticide resistance in the malaria vector population. WHO biochemical assays were carried out to determine the frequency of insecticide resistance genes within the same population. RESULTS: In surveys conducted between 2002 and 2006, low levels of bendiocarb resistance were detected in An. funestus populations using WHO bioassays, probably owing to the significantly elevated acetylcholinesterase levels found in the same populations. Pyrethroid resistance was also detected and linked to elevated P450 monooxygenase activity. One site showed a reduction in pyrethroid resistance relative to the 1999 baseline.


Subject(s)
Anopheles , Insect Vectors , Insecticide Resistance , Insecticides , Malaria/prevention & control , Mosquito Control/methods , Animals , DDT , Female , Fumigation/economics , Fumigation/methods , Housing , Mosquito Control/economics , Mozambique