Subjects
Physician-Patient Relations, Physicians, Emergency Service (Hospital), Humans, Patients

ABSTRACT
Background: Didactics play a key role in medical education, yet there is no standardized didactic evaluation tool to assess lecture quality and provide feedback to instructors. Cognitive load theory provides a framework for lecture evaluation. We sought to develop an evaluation tool, rooted in cognitive load theory, to assess the quality of didactic lectures.

Methods: We used a modified Delphi method to achieve expert consensus on items for a lecture evaluation tool. Nine emergency medicine educators with expertise in cognitive load participated in three modified Delphi rounds. In the first two rounds, experts rated the importance of including each item in the evaluation rubric on a 1-to-9 Likert scale, with 1 labeled "not at all important" and 9 labeled "extremely important." In the third round, experts made a binary choice of whether each item should be included in the final evaluation tool. In each round, experts were invited to provide written comments, edits, and suggestions for additional items. Modifications were made between rounds based on item scores and expert feedback. We calculated descriptive statistics for item scores.

Results: We completed three Delphi rounds, each with a 100% response rate. After Round 1, we removed one item, made major changes to two items, made minor wording changes to nine items, and modified the scale of one item. Following Round 2, we eliminated three items, made major wording changes to one item, and made minor wording changes to one item. After the third round, we made minor wording changes to two items. We also reordered and categorized items for ease of use. The final evaluation tool consisted of nine items.

Conclusions: We developed a lecture assessment tool rooted in cognitive load theory and specific to medical education. This tool can be applied to assess the quality of instruction and provide important feedback to speakers.
ABSTRACT
BACKGROUND: Successful trauma resuscitation relies on multidisciplinary collaboration. In most academic programs, general surgery (GS) and emergency medicine (EM) residents rarely train together before functioning as a team.

METHODS: In our Multi-Disciplinary Trauma Evaluation and Management Simulation (MD-TEAMS), EM and GS residents completed manikin-based trauma scenarios and were evaluated on resuscitation and communication skills. Residents were surveyed on their confidence in the training objectives.

RESULTS: Residents showed improved confidence in running trauma scenarios in multidisciplinary teams. Residents received lower communication scores from same-discipline than from cross-discipline faculty. EM residents scored higher in the evaluation and planning domains; GS residents scored higher in action processes; the groups scored equally in team management. A strong correlation existed between team leader communication and resuscitative skill completion.

CONCLUSION: MD-TEAMS demonstrated a correlation between communication and resuscitation checklist item completion, as well as communication differences by resident specialty. In the future, we plan to evaluate training-related changes in resident behavior and specialty-specific differences in resident communication.