Results 1 - 7 of 7
1.
J Interprof Care ; : 1-6, 2020 Dec 08.
Article in English | MEDLINE | ID: mdl-33290114

ABSTRACT

Interprofessional trust is essential for effective team-based care. Medical students are transient members of clinical teams during clerkship rotations, and there may be limited focus on developing competency in interprofessional collaboration. Within a pediatric clerkship rotation, we developed a novel simulation activity involving an interprofessional conflict, aiming to foster trusting interprofessional relationships. Active participants included a nurse educator and a medical student, with additional students using a checklist to actively observe. The debrief focused on teaching points related to interprofessional competencies and conflict resolution. Students completed a written evaluation immediately following the simulation. Descriptive statistics were used to analyze Likert-type scale questions. Conventional content analysis was used to analyze open-ended responses. Two hundred and fourteen students participated in the simulation between June 2018 and June 2019. Most students indicated that the simulation was effective (86%) and improved their confidence to constructively manage disagreements about patient care (88%). Students described anticipated changes in practice, including developing their role on the interprofessional team as a medical student, developing a shared mental model, and establishing a shared goal. Our findings suggest that simulation-based learning may present an opportunity for developing interprofessional trust in academic health centers.

2.
Simul Healthc ; 11(3): 209-17, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27254527

ABSTRACT

STATEMENT: Debriefing is a critical component in the process of learning through healthcare simulation. This critical review examines the timing, facilitation, conversational structures, and process elements used in healthcare simulation debriefing. Debriefing occurs either after (postevent) or during (within-event) the simulation. The debriefing conversation can be guided by either a facilitator (facilitator-guided) or the simulation participants themselves (self-guided). Postevent facilitator-guided debriefing may incorporate several conversational structures. These conversational structures break the debriefing discussion into a series of 3 or more phases to help organize the debriefing and ensure the conversation proceeds in an orderly manner. Debriefing process elements are an array of techniques to optimize reflective experience and maximize the impact of debriefing. These are divided here into the following 3 categories: essential elements, conversational techniques/educational strategies, and debriefing adjuncts. This review provides both novice and advanced simulation educators with an overview of various methods of conducting healthcare simulation debriefing. Future research will investigate which debriefing methods are best for which contexts and for whom, and also explore how lessons from simulation debriefing translate to debriefing in clinical practice.


Subject(s)
Feedback , Simulation Training , Clinical Competence , Educational Measurement , Humans , Learning , Models, Educational
3.
JAMA Pediatr ; 167(6): 528-36, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23608924

ABSTRACT

IMPORTANCE: Resuscitation training programs use simulation and debriefing as an educational modality, with limited standardization of debriefing format and content. Our study attempted to address this issue by using a debriefing script to standardize debriefings. OBJECTIVE: To determine whether use of a scripted debriefing by novice instructors and/or simulator physical realism affects knowledge and performance in simulated cardiopulmonary arrests. DESIGN: Prospective, randomized, factorial study design. SETTING: The study was conducted from 2008 to 2011 at 14 Examining Pediatric Resuscitation Education Using Simulation and Scripted Debriefing (EXPRESS) network simulation programs. Interprofessional health care teams participated in 2 simulated cardiopulmonary arrests, before and after debriefing. PARTICIPANTS: We randomized 97 participants (23 teams) to nonscripted low-realism; 93 participants (22 teams) to scripted low-realism; 103 participants (23 teams) to nonscripted high-realism; and 94 participants (22 teams) to scripted high-realism groups. INTERVENTION: Participants were randomized to 1 of 4 arms: permutations of scripted vs nonscripted debriefing and high-realism vs low-realism simulators. MAIN OUTCOMES AND MEASURES: Percentage difference (0%-100%) in multiple-choice question (MCQ) test (individual scores), Behavioral Assessment Tool (BAT) (team leader performance), and Clinical Performance Tool (CPT) (team performance) scores, postintervention vs preintervention comparison (PPC). RESULTS: There was no significant difference at baseline between nonscripted and scripted groups for MCQ (P = .87), BAT (P = .99), and CPT (P = .95) scores. Scripted debriefing showed greater improvement in knowledge (mean [95% CI] MCQ-PPC, 5.3% [4.1%-6.5%] vs 3.6% [2.3%-4.7%]; P = .04) and team leader behavioral performance (median [interquartile range (IQR)] BAT-PPC, 16% [7.4%-28.5%] vs 8% [0.2%-31.6%]; P = .03).
Improvement in clinical performance during simulated cardiopulmonary arrests was not significantly different between groups (median [IQR] CPT-PPC, 7.9% [4.8%-15.1%] vs 6.7% [2.8%-12.7%]; P = .18). Level of physical realism of the simulator had no independent effect on these outcomes. CONCLUSIONS AND RELEVANCE: The use of a standardized script by novice instructors to facilitate team debriefings improves acquisition of knowledge and team leader behavioral performance during subsequent simulated cardiopulmonary arrests. Implementation of debriefing scripts in resuscitation courses may help improve learning outcomes and standardize delivery of debriefing, particularly for novice instructors.


Subject(s)
Cardiopulmonary Resuscitation/education , Heart Arrest/therapy , Manikins , Teaching/methods , Clinical Competence , Double-Blind Method , Humans , Infant , Patient Care Team , Prospective Studies , Video Recording
4.
Simul Healthc ; 7(5): 288-94, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22902606

ABSTRACT

INTRODUCTION: This study examined the reliability of the scores of an assessment instrument, the Debriefing Assessment for Simulation in Healthcare (DASH), in evaluating the quality of health care simulation debriefings. The secondary objective was to evaluate whether the instrument's scores demonstrate evidence of validity. METHODS: Two aspects of reliability were examined, interrater reliability and internal consistency. To assess interrater reliability, intraclass correlations were calculated for 114 simulation instructors enrolled in webinar training courses in the use of the DASH. The instructors reviewed a series of 3 standardized debriefing sessions. To assess internal consistency, Cronbach α was calculated for this cohort. Finally, 1 measure of validity was examined by comparing the scores across 3 debriefings of different quality. RESULTS: Intraclass correlation coefficients for the individual elements were predominantly greater than 0.6. The overall intraclass correlation coefficient for the combined elements was 0.74. Cronbach α was 0.89 across the webinar raters. There were statistically significant differences among the ratings for the 3 standardized debriefings (P < 0.001). CONCLUSIONS: The DASH scores showed evidence of good reliability and preliminary evidence of validity. Additional work will be needed to assess the generalizability of the DASH based on the psychometrics of DASH data from other settings.
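The internal-consistency statistic reported above, Cronbach α, is computed from a raters-by-elements score matrix. A minimal sketch follows; the rating values are hypothetical illustrations, not DASH data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: rows = raters, columns = instrument elements."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of elements
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each element
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of per-rater totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical: 5 raters scoring 4 elements on a 1-7 behaviorally anchored scale
ratings = np.array([
    [6, 5, 6, 7],
    [5, 5, 6, 6],
    [7, 6, 7, 7],
    [4, 4, 5, 5],
    [6, 6, 6, 7],
])
print(round(cronbach_alpha(ratings), 2))  # about 0.96 for these made-up ratings
```

A high α indicates that the elements move together across raters, which is the sense in which the study reports α = 0.89 for its webinar cohort.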


Subject(s)
Checklist/standards , Computer Simulation , Educational Measurement/methods , Professional Competence/standards , Canada , Helping Behavior , Humans , Medical Staff, Hospital , Observer Variation , Pediatrics , Pilot Projects , Psychometrics , Reproducibility of Results , Resuscitation/education , United States , User-Computer Interface
5.
Simul Healthc ; 6(2): 71-7, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21358564

ABSTRACT

INTRODUCTION: Robustly tested instruments for quantifying clinical performance during pediatric resuscitation are lacking. The Examining Pediatric Resuscitation Education through Simulation and Scripting Collaborative was established to conduct multicenter trials of simulation education in pediatric resuscitation, evaluating performance with multiple instruments, one of which is the Clinical Performance Tool (CPT). We hypothesized that the CPT would measure clinical performance during simulated pediatric resuscitation in a reliable and valid manner. METHODS: Using a pediatric resuscitation scenario as a basis, a scoring system comprising 21 tasks was designed based on Pediatric Advanced Life Support algorithms. Each task was scored as follows: task not performed (0 points); task performed partially, incorrectly, or late (1 point); and task performed completely, correctly, and within the recommended time frame (2 points). Study teams at 14 children's hospitals went through the scenario twice (PRE and POST) with an interposed 20-minute debriefing. Both scenarios for each of 8 study teams were scored by multiple raters. A generalizability study, based on the PRE scores, was conducted to investigate the sources of measurement error in the CPT total scores. Inter-rater reliability was estimated based on the variance components. Validity was assessed by repeated-measures analysis of variance comparing PRE and POST scores. RESULTS: Sixteen resuscitation scenarios were reviewed and scored by 7 raters. Inter-rater reliability for the overall CPT score was 0.63. POST scores were significantly improved compared with PRE scores when controlled for within-subject covariance (F1,15 = 4.64, P < 0.05). The variance component ascribable to rater was 2.4%. CONCLUSIONS: Reliable and valid measures of performance in simulated pediatric resuscitation can be obtained from the CPT.
Future studies should examine the applicability of trichotomous scoring instruments to other clinical scenarios, as well as performance during actual resuscitations.
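The trichotomous scoring rule described in the methods (0, 1, or 2 points per task across 21 tasks) is simple arithmetic, sketched below. The task names and scores are hypothetical illustrations, not items from the actual CPT:

```python
from typing import Mapping

MAX_SCORE_PER_TASK = 2  # 0 = not performed, 1 = partial/incorrect/late, 2 = complete

def cpt_total(task_scores: Mapping[str, int]) -> float:
    """Return the team's score as a percentage of the maximum possible."""
    for task, score in task_scores.items():
        if score not in (0, 1, 2):
            raise ValueError(f"{task}: score must be 0, 1, or 2")
    max_possible = len(task_scores) * MAX_SCORE_PER_TASK
    return 100 * sum(task_scores.values()) / max_possible

# Hypothetical example showing 3 of the 21 tasks
scores = {"assess_responsiveness": 2, "start_compressions": 1, "call_for_help": 2}
print(round(cpt_total(scores), 2))  # 5 of 6 possible points -> 83.33
```

Percentage totals of this kind are what the PRE-vs-POST comparisons in the study operate on.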


Subject(s)
Cardiopulmonary Resuscitation/methods , Pediatrics , Psychometrics/methods , Research Design , Algorithms , Analysis of Variance , Child , Current Procedural Terminology , Educational Measurement , Educational Status , Humans , Reproducibility of Results , Statistics as Topic , Task Performance and Analysis , Video Recording
6.
Pediatrics ; 121(3): e597-603, 2008 Mar.
Article in English | MEDLINE | ID: mdl-18283069

ABSTRACT

BACKGROUND: Competency in pediatric resuscitation is an essential goal of pediatric residency training. Both the exigencies of patient care and the Accreditation Council for Graduate Medical Education require assessment of this competency. Although there are standard courses in pediatric resuscitation, no published, validated assessment tool exists for pediatric resuscitation competency. OBJECTIVE: The purpose of this work was to develop a simulation-based tool for the assessment of pediatric residents' resuscitation competency and to evaluate the tool's reliability and, preliminarily, its validity in a pilot study. METHODS: We developed a 72-question yes-or-no questionnaire, the Tool for Resuscitation Assessment Using Computerized Simulation, representing 4 domains of resuscitation competency: basic resuscitation, airway support, circulation and arrhythmia management, and leadership behavior. We enrolled 25 subjects at each of 5 different training levels, all of whom participated in 3 standardized code scenarios using the Laerdal SimMan universal patient simulator. Performances were videotaped and then reviewed by 2 independent expert raters. RESULTS: The final version of the tool is presented. The intraclass correlation coefficient between the 2 raters ranged from 0.70 to 0.76 for the 4 domain scores and was 0.80 for the overall summary score. Between the 2 raters, the mean percent exact agreement across items in each domain ranged from 81.0% to 85.1% and averaged 82.1% across all of the items in the tool. Across subject groups, there was a trend toward increasing scores with increased training, which was statistically significant for the airway and summary scores. CONCLUSIONS: In this pilot study, the Tool for Resuscitation Assessment Using Computerized Simulation demonstrated good interrater reliability within each domain and for summary scores.
Performance analysis shows trends toward improvement with increasing years of training, providing preliminary construct validity.
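The percent exact agreement statistic reported for the 2 raters is the share of yes/no items on which both raters gave the same answer. A minimal sketch with hypothetical item responses (not items from the actual 72-question tool):

```python
def percent_exact_agreement(rater_a: list[bool], rater_b: list[bool]) -> float:
    """Percentage of items on which two raters recorded the same yes/no answer."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must score the same set of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Two hypothetical raters on 8 yes/no items; they agree on 6 of 8
a = [True, True, False, True, False, True, True, False]
b = [True, False, False, True, True, True, True, False]
print(percent_exact_agreement(a, b))  # 75.0
```

Unlike the intraclass correlation also reported above, this statistic does not correct for agreement expected by chance, which is why studies typically report both.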


Subject(s)
Cardiopulmonary Resuscitation/education , Clinical Competence , Computer Simulation , Internship and Residency , Adult , Cardiopulmonary Resuscitation/instrumentation , Education, Medical, Graduate/methods , Female , Humans , Male , Observer Variation , Pediatrics/education , Pilot Projects , Sensitivity and Specificity , Surveys and Questionnaires
7.
CJEM ; 7(4): 227, 2005 Jul.
Article in English | MEDLINE | ID: mdl-17355675