1.
BMC Med Educ; 22(1): 636, 2022 Aug 22.
Article in English | MEDLINE | ID: mdl-35989331

ABSTRACT

BACKGROUND: Various rating tools aim to assess simulation debriefing quality, but their use may be limited by complexity and subjectivity. The Debriefing Assessment in Real Time (DART) tool represents an alternative debriefing aid that uses quantitative measures to estimate quality and requires minimal training to use. The DART uses a cumulative tally of instructor questions (IQ), instructor statements (IS) and trainee responses (TR). Ratios for IQ:IS and TR:[IQ + IS] may estimate the level of debriefer inclusivity and participant engagement.

METHODS: Experienced faculty from four geographically disparate university-affiliated simulation centers rated video-based debriefings and a transcript using the DART. The primary endpoint was an assessment of the estimated reliability of the tool. The small sample size confined the analysis to descriptive statistics and coefficients of variation (CV%) as an estimate of reliability.

RESULTS: Ratings for Video A (n = 7), Video B (n = 6), and Transcript A (n = 6) demonstrated mean CV% for IQ (27.8%), IS (39.5%), TR (34.8%), IQ:IS (40.8%), and TR:[IQ + IS] (28.0%). The higher CV% observed for IS and TR may be attributable to raters characterizing longer contributions as either lumped or split. The lower variance in IQ and TR:[IQ + IS] suggests overall consistency regardless of whether scores were lumped or split.

CONCLUSION: The DART tool appears reliable for recording data that may be useful for informing feedback to debriefers. Future studies should assess reliability across a wider pool of debriefings and examine potential uses in faculty development.


Subject(s)
Clinical Competence , Simulation Training , Computer Simulation , Delivery of Health Care , Humans , Pilot Projects , Reproducibility of Results
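The DART's counting scheme is simple enough to sketch in code. The snippet below is a minimal illustration, not the published tool: it tallies a hypothetical sequence of utterance labels into the IQ, IS, and TR counts and derives the two ratios described in the abstract above. The event labels and function name are assumptions made for illustration.

    from collections import Counter

    def dart_ratios(events):
        """Tally DART events and compute the two summary ratios.

        `events` is a sequence of labels: 'IQ' (instructor question),
        'IS' (instructor statement), or 'TR' (trainee response).
        """
        tally = Counter(events)
        iq, is_, tr = tally['IQ'], tally['IS'], tally['TR']
        return {
            'IQ': iq,
            'IS': is_,
            'TR': tr,
            # IQ:IS -- higher values suggest a more question-led, inclusive debrief
            'IQ:IS': iq / is_ if is_ else float('inf'),
            # TR:(IQ+IS) -- higher values suggest greater participant engagement
            'TR:(IQ+IS)': tr / (iq + is_) if (iq + is_) else float('inf'),
        }

    # Example: a short, hypothetical debriefing coded in real time
    print(dart_ratios(['IQ', 'TR', 'TR', 'IS', 'IQ', 'TR']))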
2.
Adv Simul (Lond); 8(1): 9, 2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36918946

ABSTRACT

BACKGROUND: Debriefing is crucial for enhancing learning following healthcare simulation. Various validated tools have been shown to have contextual value for assessing debriefers. The Debriefing Assessment in Real Time (DART) tool may offer an alternative or additional assessment of conversational dynamics during debriefings.

METHODS: This was a multi-method international study investigating reliability and validity. Enrolled raters (n = 12) were active simulation educators. Following tool training, the raters were asked to score a mixed sample of debriefings. Descriptive statistics were recorded, with the coefficient of variation (CV%) and Cronbach's α used to estimate reliability. Raters returned a detailed reflective survey after completing their contribution. Kane's framework was used to construct validity arguments.

RESULTS: The 8 debriefings (µ = 15.4 min, SD 2.7) included 45 interdisciplinary learners at various levels of training. Reliability (mean CV%) for the key components was as follows: instructor questions µ = 14.7%, instructor statements µ = 34.1%, and trainee responses µ = 29.0%. Cronbach's α ranged from 0.852 to 0.978 across the debriefings. Post-experience responses suggested that the DART can highlight suboptimal practices, including unqualified lecturing by debriefers.

CONCLUSION: The DART demonstrated acceptable reliability and may have a limited role in the assessment of healthcare simulation debriefing. The inherent complexity and emergent properties of debriefing practice should be accounted for when using this tool.
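Both reliability estimates used in this study are standard formulas: CV% = 100 × (standard deviation / mean) across raters, and Cronbach's α = k/(k−1) × (1 − Σ item variances / variance of totals). A minimal sketch of both, assuming a small matrix of hypothetical rater scores (the values below are illustrative, not the study's data):

    import statistics

    def cv_percent(scores):
        """Coefficient of variation: sample SD as a percentage of the mean."""
        return 100 * statistics.stdev(scores) / statistics.mean(scores)

    def cronbach_alpha(ratings):
        """Cronbach's alpha for a raters-by-items matrix (list of rows)."""
        k = len(ratings[0])                      # number of items
        items = list(zip(*ratings))              # columns: one per item
        item_vars = sum(statistics.variance(col) for col in items)
        totals = [sum(row) for row in ratings]   # each rater's total score
        return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

    # Hypothetical example: four raters tallying the same debriefing
    iq_counts = [14, 16, 15, 13]                 # instructor-question tallies
    print(f"CV% = {cv_percent(iq_counts):.1f}")  # low CV% -> raters agree

    ratings = [[14, 30, 22], [16, 33, 25], [15, 28, 21], [13, 31, 24]]
    print(f"alpha = {cronbach_alpha(ratings):.3f}")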

3.
Adv Simul (Lond); 7(1): 7, 2022 Mar 07.
Article in English | MEDLINE | ID: mdl-35256014

ABSTRACT

BACKGROUND: Debriefing is an essential skill for simulation educators, and feedback for debriefers is recognised as important in progression to mastery. Existing assessment tools, such as the Debriefing Assessment for Simulation in Healthcare (DASH), may assist in rating performance, but their utility is limited by subjectivity and complexity. The use of quantitative data measurements for feedback has been shown to improve the performance of clinicians but has not been studied as a focus for debriefer feedback.

METHODS: A multi-centre sample of interdisciplinary debriefings was observed. Total debriefing time, length of individual contributions, and demographics were recorded. DASH scores from simulation participants, debriefers, and supervising faculty were collected after each event. Conversational diagrams were drawn in real time by supervising faculty using an approach described by Dieckmann. For each debriefing, the data points listed above were compiled on a single page and then used as a focus for feedback to the debriefer.

RESULTS: Twelve debriefings were included (µ = 6.5 simulation participants per event). The debriefers receiving feedback from supervising faculty were physicians or nurses with a range of experience (n = 7). In 9/12 cases the ratio of debriefer to simulation participant contribution length was ≥ 1:1; the diagrams for these debriefings typically resembled a fan shape. Debriefings with a ratio < 1:1 (n = 3) received higher DASH ratings than the ≥ 1:1 group (p = 0.038) and generated star-shaped diagrams. Debriefer self-rated DASH scores (µ = 5.08/7.0) were lower than simulation participant scores (µ = 6.50/7.0); the difference reached statistical significance for all 6 DASH elements. Debriefers rated the 'usefulness' of the feedback highly (µ = 4.6/5).

CONCLUSION: Basic quantitative data measures collected during debriefings may represent a useful focus for immediate debriefer feedback in a healthcare simulation setting.
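The headline metric in this study, the ratio of debriefer to participant contribution length, is straightforward to derive from timed, speaker-tagged turns. A minimal sketch under that assumption follows; the turn format and labels are illustrative, not the study's instrument.

    def contribution_ratio(turns):
        """Ratio of debriefer talk time to participant talk time.

        `turns` is a list of (speaker, seconds) tuples, where speaker is
        'debriefer' or 'participant'. A result >= 1.0 means the debriefer
        spoke at least as long as all participants combined (the 'fan'
        pattern); < 1.0 matches the higher-rated 'star' pattern above.
        """
        debriefer = sum(s for who, s in turns if who == 'debriefer')
        participants = sum(s for who, s in turns if who == 'participant')
        return debriefer / participants

    # Hypothetical ten-minute debriefing
    turns = [('debriefer', 95), ('participant', 140),
             ('debriefer', 120), ('participant', 180),
             ('participant', 65)]
    print(f"{contribution_ratio(turns):.2f}:1")  # < 1 here: participant-led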
