Pilot study of the DART tool - an objective healthcare simulation debriefing assessment instrument.
Baliga, Kaushik; Coggins, Andrew; Warburton, Sandra; Mathias, Divya; Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P.
Affiliations
  • Baliga K; Sydney Medical School, Westmead Hospital, Block K, Level 6, Sydney, NSW, 2145, Australia.
  • Coggins A; Simulated Learning Environment for Clinical Training (SiLECT), Westmead Hospital, Sydney, NSW, 2145, Australia. andrew.coggins@health.nsw.gov.au.
  • Warburton S; Simulated Learning Environment for Clinical Training (SiLECT), Westmead Hospital, Sydney, NSW, 2145, Australia.
  • Mathias D; The Australian Institute of Medical Simulation and Innovation (AIMSi), Blacktown Hospital, Sydney, NSW, 2148, Australia.
  • Yamada NK; Department of Pediatrics, Division of Neonatal and Developmental Medicine, Stanford University School of Medicine, Palo Alto, CA, USA.
  • Fuerch JH; Department of Pediatrics, Division of Neonatal and Developmental Medicine, Stanford University School of Medicine, Palo Alto, CA, USA.
  • Halamek LP; Department of Pediatrics, Division of Neonatal and Developmental Medicine, Stanford University School of Medicine, Palo Alto, CA, USA.
BMC Med Educ; 22(1): 636, 2022 Aug 22.
Article in English | MEDLINE | ID: mdl-35989331
ABSTRACT

BACKGROUND:

Various rating tools aim to assess simulation debriefing quality, but their use may be limited by complexity and subjectivity. The Debriefing Assessment in Real Time (DART) tool represents an alternative debriefing aid that uses quantitative measures to estimate quality and requires minimal training to use. The DART uses a cumulative tally of instructor questions (IQ), instructor statements (IS), and trainee responses (TR). Ratios for IQ:IS and TR:[IQ + IS] may estimate the level of debriefer inclusivity and participant engagement.
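The ratio arithmetic is simple enough to sketch. Below is a minimal, illustrative Python example of how the two DART ratios could be computed from a cumulative tally; the class and method names are assumptions for illustration, not part of the published tool.

    from dataclasses import dataclass

    @dataclass
    class DartTally:
        """Cumulative DART counts for one debriefing (illustrative names)."""
        iq: int   # instructor questions
        is_: int  # instructor statements ("is" is a Python keyword)
        tr: int   # trainee responses

        def inclusivity(self) -> float:
            """IQ:IS ratio -- a rough proxy for debriefer inclusivity."""
            return self.iq / self.is_

        def engagement(self) -> float:
            """TR:[IQ + IS] ratio -- a rough proxy for participant engagement."""
            return self.tr / (self.iq + self.is_)

    # Example: 20 questions, 10 statements, and 40 trainee responses
    tally = DartTally(iq=20, is_=10, tr=40)
    print(f"IQ:IS = {tally.inclusivity():.2f}")       # 2.00
    print(f"TR:[IQ+IS] = {tally.engagement():.2f}")   # 1.33

On this reading, a higher IQ:IS value reflects a debriefer who asks more than tells, and a higher TR:[IQ + IS] value reflects trainees contributing more relative to the debriefer.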

METHODS:

Experienced faculty from four geographically disparate university-affiliated simulation centers rated video-based debriefings and a transcript using the DART. The primary endpoint was an assessment of the estimated reliability of the tool. The small sample size confined analysis to descriptive statistics and the coefficient of variation (CV%) as an estimate of reliability.
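For reference, CV% is the standard deviation of the raters' counts divided by their mean, expressed as a percentage; lower values indicate closer agreement among raters. A minimal sketch in Python, using hypothetical rater counts (the data shown are not from the study):

    import statistics

    def cv_percent(ratings: list[float]) -> float:
        """Coefficient of variation across raters: sample SD / mean * 100."""
        mean = statistics.mean(ratings)
        sd = statistics.stdev(ratings)  # sample SD (n - 1 denominator)
        return 100 * sd / mean

    # Hypothetical IQ counts from five raters scoring the same video
    iq_counts = [18, 22, 25, 20, 30]
    print(f"CV% = {cv_percent(iq_counts):.1f}")  # 20.4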

RESULTS:

Ratings for Video A (n = 7), Video B (n = 6), and Transcript A (n = 6) demonstrated mean CV% for IQ (27.8%), IS (39.5%), TR (34.8%), IQ:IS (40.8%), and TR:[IQ + IS] (28.0%). The higher CV% observed for IS and TR may be attributable to raters either lumping or splitting longer contributions when counting them. The lower variances in IQ and TR:[IQ + IS] suggest overall consistency regardless of whether scores were lumped or split.

CONCLUSION:

The DART tool appears to be a reliable means of recording data that may be useful for informing feedback to debriefers. Future studies should assess its reliability in a wider pool of debriefings and examine its potential uses in faculty development.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Clinical Competence / Simulation Training Study type: Prognostic studies Limit: Humans Language: English Publication year: 2022 Document type: Article
