Subject(s)
Cells , Journalism, Medical , Aptitude , Communication , Female , Formative Feedback , Humans , Medical Laboratory Personnel/psychology , Research , Research Personnel/psychology
ABSTRACT
BACKGROUND: Although polyp size dictates surveillance intervals, endoscopists often estimate polyp size inaccurately. We hypothesized that an intervention providing didactic instruction and real-time feedback could significantly improve polyp size classification. METHODS: We conducted a multicenter randomized controlled trial to evaluate the impact of different components of an online educational module on polyp sizing. Participants were randomized to control (no video, no feedback), video only, feedback only, or video + feedback. The primary outcome was accuracy of polyp size classification into clinically relevant categories (diminutive [1-5 mm], small [6-9 mm], large [≥10 mm]). Secondary outcomes included accuracy of exact polyp size (in mm), learning curves, and directionality of inaccuracy (over- vs. underestimation). RESULTS: A total of 36 trainees from five training programs provided 1360 polyp size assessments. The feedback only (80.1%, P=0.01) and video + feedback (78.9%, P=0.02) groups had higher accuracy of polyp size classification compared with controls (71.6%). There was no significant difference in accuracy between the video only group (74.4%) and controls (P=0.42). Groups receiving feedback had higher accuracy of exact polyp size (in mm) and higher peak learning curves. Polyps were more likely to be overestimated than underestimated, and 29.3% of size inaccuracies impacted recommended surveillance intervals. CONCLUSIONS: Our online educational module significantly improved polyp size classification. Real-time feedback appeared to be a critical component in improving accuracy. This scalable and no-cost educational module could significantly decrease under- and overutilization of colonoscopy, improving patient outcomes while increasing colonoscopy access.
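For readers who want to see how the primary outcome can be operationalised, the sketch below (not the authors' code) bins size estimates into the categories named in the abstract and computes the share that match a reference measurement; the example sizes are hypothetical.

```python
# Illustrative sketch: classify estimated polyp sizes into the surveillance-
# relevant categories named in the abstract and compute classification accuracy.

def size_category(size_mm: float) -> str:
    """Map a polyp size in mm to diminutive (1-5), small (6-9), or large (>=10)."""
    if size_mm >= 10:
        return "large"
    if size_mm >= 6:
        return "small"
    return "diminutive"

def classification_accuracy(estimates_mm, reference_mm) -> float:
    """Fraction of estimates whose category matches the reference measurement."""
    correct = sum(
        size_category(est) == size_category(ref)
        for est, ref in zip(estimates_mm, reference_mm)
    )
    return correct / len(reference_mm)

# Hypothetical example: five size estimates against reference sizes (mm).
estimates = [4, 7, 9, 12, 5]
references = [5, 6, 10, 11, 8]
print(f"category accuracy: {classification_accuracy(estimates, references):.1%}")
```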
Subject(s)
Clinical Competence , Colonic Polyps , Colonoscopy , Humans , Colonic Polyps/pathology , Colonic Polyps/diagnosis , Colonoscopy/education , Colonoscopy/methods , Female , Male , Formative Feedback , Learning Curve , Computer-Assisted Instruction/methods , Adult , Middle Aged
ABSTRACT
BACKGROUND: The learning curve in minimally invasive surgery (MIS) is longer than in open surgery. It has been reported that structured feedback and training in teams of two trainees improve MIS training and MIS performance. Annotation of surgical images and videos may prove beneficial for surgical training. This study investigated whether structured feedback and video debriefing, including annotation of the critical view of safety (CVS), have beneficial learning effects in a predefined, multi-modal MIS training curriculum in teams of two trainees. METHODS: This randomized-controlled single-center study included medical students without MIS experience (n = 80). The participants first completed a standardized and structured multi-modal MIS training curriculum. They were then randomly divided into two groups (n = 40 each), and each performed four laparoscopic cholecystectomies (LCs) on ex-vivo porcine livers. Students in the intervention group received structured feedback after each LC, consisting of LC performance evaluations through tutor-trainee joint video debriefing and CVS video annotation. Performance was evaluated using global and LC-specific Objective Structured Assessments of Technical Skills (OSATS) and Global Operative Assessment of Laparoscopic Skills (GOALS) scores. RESULTS: The participants in the intervention group had higher global and LC-specific OSATS as well as global and LC-specific GOALS scores than the participants in the control group (25.5 ± 7.3 vs. 23.4 ± 5.1, p = 0.003; 47.6 ± 12.9 vs. 36 ± 12.8, p < 0.001; 17.5 ± 4.4 vs. 16 ± 3.8, p < 0.001; 6.6 ± 2.3 vs. 5.9 ± 2.1, p = 0.005). The intervention group achieved CVS more often than the control group (first LC: 20 vs. 10 participants, p = 0.037; second LC: 24 vs. 8, p = 0.001; third LC: 31 vs. 8, p < 0.001; fourth LC: 31 vs. 10, p < 0.001). CONCLUSIONS: Structured feedback and video debriefing with CVS annotation improve CVS achievement and ex-vivo porcine LC training performance as measured by OSATS and GOALS scores.
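The CVS achievement results above are per-group counts (e.g., 20 vs. 10 of 40 participants at the first LC). As an illustration only, a 2×2 test of proportions such as the sketch below is one common way to compare such counts; the abstract does not state which test the authors used, and the numbers are taken from the first LC comparison.

```python
# Illustrative sketch: compare CVS achievement counts between two groups with
# Fisher's exact test on a 2x2 table (achieved vs. not achieved).
from scipy.stats import fisher_exact

achieved_intervention, n_intervention = 20, 40   # first LC, intervention group
achieved_control, n_control = 10, 40             # first LC, control group

table = [
    [achieved_intervention, n_intervention - achieved_intervention],
    [achieved_control, n_control - achieved_control],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```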
Subject(s)
Cholecystectomy, Laparoscopic , Clinical Competence , Video Recording , Cholecystectomy, Laparoscopic/education , Humans , Swine , Animals , Female , Male , Learning Curve , Curriculum , Adult , Students, Medical , Formative Feedback , Young Adult , Feedback
ABSTRACT
BACKGROUND: Active engagement with feedback is crucial for feedback to be effective and to improve students' learning and achievement. Medical students are provided feedback on their development in the progress test (PT), which has been implemented in various medical curricula, although its format, integration and feedback differ across institutions. Existing research on engagement with feedback in the context of the PT is not sufficient to make a definitive judgement on what works and which barriers exist. Therefore, we conducted an interview study to explore students' feedback use in medical progress testing. METHODS: All Dutch medical students participate in a national, curriculum-independent PT four times a year. This mandatory test, composed of multiple-choice questions, provides students with written feedback on their scores. Furthermore, an answer key is available for students to review their answers. Semi-structured interviews were conducted with 21 preclinical and clinical medical students who participated in the PT. Template analysis was performed on the qualitative data using a priori themes based on previous research on feedback use. RESULTS: Template analysis revealed that students faced challenges in crucial internal psychological processes that impact feedback use, including 'awareness', 'cognizance', 'agency' and 'volition'. Factors such as stakes, available time, feedback timing and feedback presentation contributed to these difficulties, ultimately hindering feedback use. Notably, feedback engagement was higher during clinical rotations, and students were interested in the feedback when seeking insights into their performance level and career perspectives. CONCLUSION: Our study enhanced the understanding of students' feedback utilisation in medical progress testing by identifying key processes and factors that impact feedback use. By recognising and addressing barriers in feedback use, we can improve both student and teacher feedback literacy, thereby transforming the PT into a more valuable learning tool.
Subject(s)
Education, Medical, Undergraduate , Educational Measurement , Qualitative Research , Students, Medical , Humans , Students, Medical/psychology , Educational Measurement/methods , Male , Female , Netherlands , Interviews as Topic , Formative Feedback , Feedback , Curriculum , Clinical Competence
ABSTRACT
Multisource feedback has long been a recommended tool to assess clinical competencies within graduate medical education. Additionally, incorporating feedback supplied by patients and other members of the healthcare team can provide a framework for trainees to bridge perspectives and viewpoints that may differ from their own. This, in effect, can help fortify the values of diversity, equity, and inclusivity by developing more knowledgeable, empathetic, and respectful future healthcare providers.
Subject(s)
Clinical Competence , Cultural Diversity , Education, Medical, Graduate , Humans , Clinical Competence/standards , Feedback , Internship and Residency , Formative Feedback
ABSTRACT
Primary care education is a unique clinical experience for medical students. It is community-based and provides an opportunity for students to learn consultation skills with multiple sources of workplace-based feedback. Meaningful and demonstrable utilisation of this feedback by students remains an educational challenge. We showcase achievable changes to educational tasks in an established curriculum, which aim to improve student feedback literacy and to create a feedback loop that improves on the previous provision of unidirectional, terminal feedback. The changes have been well received, and engagement from both students and educators has been positive. Students have demonstrated critical reflection on feedback, and development in consultation and clinical reasoning skills.
Subject(s)
Primary Health Care , Humans , Formative Feedback , Feedback , Clinical Competence , Students, Medical/psychology , Curriculum , Education, Medical, Undergraduate/organization & administration
ABSTRACT
INTRODUCTION: The development of clinical skills requires the appropriate use of self-regulated learning (SRL). Students' use of key SRL processes as they perform a clinical skill can be identified by SRL microanalysis and used to provide feedback. SRL-microanalysis feedback targeting only students' key SRL processes has not previously been studied in the development of clinical skills. The aim of this study was to investigate whether SRL-microanalysis feedback on students' key SRL processes alone can improve both their use of SRL and their clinical skill performance. METHODS: Twenty-three final-year medical students with no experience in the clinical skill required for mechanical ventilation participated in this study. Key SRL processes and clinical skill performance were measured before and after SRL-microanalysis feedback. RESULTS: Overall, we found an improvement in the key SRL processes of planning and monitoring of performance, with a significant difference in monitoring. We also found an increase in students' clinical skill performance. DISCUSSION: This study, the first in clinical skills, demonstrated that SRL-microanalysis feedback on key SRL processes alone can improve both students' SRL and their clinical skill performance. Further studies are recommended with a greater number of students and across a variety of clinical skills.
Subject(s)
Clinical Competence , Students, Medical , Humans , Pilot Projects , Students, Medical/psychology , Learning , Female , Education, Medical, Undergraduate , Male , Feedback , Formative Feedback , Respiration, Artificial , Educational Measurement/methods
ABSTRACT
BACKGROUND: Feedback processes are crucial for learning, guiding improvement, and enhancing performance. In workplace-based learning settings, the design and implementation of diverse teaching and assessment activities is advocated, generating feedback that students, with proper guidance, use to close the gap between current and desired performance levels. Since productive feedback processes rely on observed information regarding a student's performance, it is imperative to establish structured feedback activities within undergraduate workplace-based learning settings. However, these settings are characterized by their unpredictable nature, which can either promote learning or present challenges in offering structured learning opportunities for students. This scoping review maps the literature on how feedback processes are organised in undergraduate clinical workplace-based learning settings, providing insight into the design and use of feedback. METHODS: A scoping review was conducted. Studies were identified from seven databases and ten relevant journals in medical education. The screening process was performed independently in duplicate with the support of the StArt program. Data were organized in a data chart and analyzed using thematic analysis. The feedback loop with a sociocultural perspective was used as a theoretical framework. RESULTS: The search yielded 4,877 papers, and 61 were included in the review. Two themes were identified in the qualitative analysis: (1) the organization of feedback processes in workplace-based learning settings, and (2) sociocultural factors influencing the organization of feedback processes. The literature describes multiple teaching and assessment activities that generate feedback information. Most papers described experiences and perceptions of diverse teaching and assessment feedback activities. Few studies described how feedback processes improve performance. Sociocultural factors such as establishing a feedback culture, enabling stable and trustworthy relationships, and enhancing student feedback agency are crucial for productive feedback processes. CONCLUSIONS: This review identified concrete ideas regarding how feedback could be organized within the clinical workplace to promote feedback processes. The feedback encounter should be organized to allow follow-up of the feedback, i.e., working on the required learning and performance goals on the next occasion. Educational programs should design feedback processes by appropriately planning subsequent tasks and activities. More insight is needed into designing a full-loop feedback process, with specific attention to effective feedforward practices.
Subject(s)
Education, Medical, Undergraduate , Workplace , Humans , Formative Feedback , Feedback , Health Occupations/education , Learning
ABSTRACT
BACKGROUND: Feedback and psychological safety are well-established concepts within medical education, vital for student learning and progress. However, these concepts remain unexplored in the context of international students. This area deserves attention given the unique challenges faced by overseas medical students due to cultural differences. The present study examines international students' experiences of psychological safety in feedback interactions in a Scottish undergraduate medical programme. METHODS: A focused ethnographic approach was adopted to explore international students' experiences and perceptions of psychological safety in their feedback experiences. Data were collected in the form of field observations and semi-structured interviews, involving both student and faculty participants. Approximately 13 hours of fieldwork and a total of 11 interviews were conducted. These were analysed using a combination of inductive and deductive thematic analysis. RESULTS: Data analysis identified four key themes: feedback delivery, educator attributes, cultural factors and longitudinal educational relationships. Both staff and student participants highlighted how environmental factors such as room design and group size functioned as enablers of or barriers to psychological safety in feedback episodes. Additionally, students appreciated tutors who expressed vulnerability and demonstrated awareness of their cultural backgrounds. Students described significant differences between the feedback approaches in the host (UK) institute and those in their home countries. Longitudinal associations fostered trust and familiarity with peers and tutors, enhancing students' receptivity to learning and feedback. CONCLUSION: This study highlights cultural differences in feedback practices across countries and their impact on psychological safety among international students. It stresses the importance of integrating overseas students by considering group dynamics, environment and diverse student needs. Staff awareness of cultural variability, openness to tutor vulnerability and fostering long-term educational relationships can greatly enhance psychological safety in learning and teaching activities. These insights are relevant amidst the growing globalisation of medical education and the mobility of students across borders, advocating for tailored integration to optimise their learning experience and achievement.
Subject(s)
Anthropology, Cultural , Education, Medical, Undergraduate , Students, Medical , Humans , Students, Medical/psychology , Female , Male , Scotland , Qualitative Research , Formative Feedback , Psychological Safety
ABSTRACT
BACKGROUND: Effective feedback is fundamental in clinical education, as it allows trainers to constantly diagnose the trainees' condition, determine their weaknesses, and intervene at proper times. Recently, different feedback-based approaches have been introduced in clinical training; however, the effectiveness of such interventions still needs to be studied extensively, especially in the perioperative field. Therefore, this study sought to compare the effects of apprenticeship training using sandwich feedback and traditional methods on the perioperative competence and performance of Operating Room (OR) technology students. METHODS: Thirty final-semester undergraduate OR technology students taking the apprenticeship courses were randomly allocated into experimental (n = 15) and control (n = 15) groups through a stratified randomization approach. The students in the experimental group experienced Feedback-Based Learning (FBL) using a sandwich model, and the students in the control group participated in Traditional-Based Training (TBT), in six five-hour sessions weekly for three consecutive weeks. All students completed the Persian version of the Perceived Perioperative Competence Scale-Revised (PPCS-R) on the first and last days of the intervention. Also, a blinded rater completed a checklist to evaluate all students' performance via Direct Observation of Procedural Skills (DOPS) on the last intervention day. In addition, the students in the FBL group filled out a questionnaire regarding their attitude toward the implemented program. RESULTS: The mean total score of the PPCS-R was significantly higher in the FBL group than in the TBT group on the last intervention day (P < 0.001). Additionally, the increase in the PPCS-R total score from the first to the last day was significantly greater in the FBL group (P < 0.001). Likewise, the FBL students had higher DOPS scores than the TBT students (P < 0.001). Most FBL students also had a good attitude toward the implemented program (n = 8; 53.3%). CONCLUSION: Apprenticeship training using a sandwich feedback-based approach was superior to the traditional method for enhancing the perioperative competence and performance of final-semester OR technology students. Additional studies are required to determine the sustainability of these findings.
Subject(s)
Clinical Competence , Operating Rooms , Humans , Male , Female , Operating Rooms/standards , Formative Feedback , Young Adult , Educational Measurement
ABSTRACT
BACKGROUND: Very Short Answer Questions (VSAQs) reduce cueing and better simulate real clinical practice compared with multiple-choice questions (MCQs). While integrating them into formative exams has potential, addressing marking time and the ideal number of occasions and items is crucial. This study gathers validity evidence for a novel immediate self-feedback VSAQ (ISF-VSAQ) format and determines the optimal number of items and occasions for reliable assessment. METHODS: Ninety-four third-year pre-clinical students took two ten-item ISF-VSAQ exams on cardiovascular drugs. Each question comprised two sections: (1) the question with space for the student's response and (2) a list of possible correct answers offering partial-credit scores ranging from 0.00 to 1.00, along with self-marking and self-feedback options to indicate whether the student fully, partially, or did not understand the possible answers. Messick's validity framework guided the collection of validity evidence. RESULTS: Validity evidence included five sources: (1) Content: An expert reviewed the ISF-VSAQ format, and the questions were aligned with a standard examination blueprint. (2) Response process: Before starting, students received an example and a guide to the ISF-VSAQ, and the teacher detailed the steps in the initial session to aid self-assessment. Unexpected answers were comprehensively reviewed by experts. (3) Internal structure: The Cronbach alphas were good for both occasions (≥ 0.70). A generalizability study revealed Phi-coefficients of 0.60, 0.71, 0.76, and 0.79 for one to four occasions with ten items, respectively. A single occasion requires twenty-five items for acceptable reliability (Phi-coefficient = 0.72). (4) Relations to other variables: Inter-rater reliability between self-marking and teacher marking was excellent for each item (rs(186) = 0.87-0.98, p = 0.001). (5) Consequences: Path analysis revealed that the self-reflected understanding score on the second attempt directly affected the final MCQ score (β = 0.25, p = 0.033), whereas the VSAQ score did not. Regarding perceptions, over 80% of students strongly agreed or agreed that the ISF-VSAQ format enhances problem analysis, presents realistic scenarios, develops knowledge, offers feedback, and supports electronic usability. CONCLUSION: Electronic ISF-VSAQs enhanced understanding and elevated learning outcomes, rendering them suitable for formative assessments with clinical scenarios. Increasing the number of occasions effectively enhances reliability. While self-marking is reliable and may reduce grading effort, instructors should review answers to identify common student errors.
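A brief illustration of the generalizability (decision) study mentioned above: the sketch assumes a fully crossed person × occasion × item design and uses hypothetical variance components (the abstract does not report them) to show how the Phi coefficient is projected for different numbers of occasions with ten items each; it is not intended to reproduce the study's exact figures.

```python
# Minimal D-study sketch under an assumed crossed person x occasion x item design.
# Phi (absolute) = person variance / (person variance + absolute error variance).

def phi_coefficient(var, n_occasions: int, n_items: int) -> float:
    """Project the Phi coefficient for a given number of occasions and items."""
    absolute_error = (
        var["o"] / n_occasions
        + var["i"] / n_items
        + var["po"] / n_occasions
        + var["pi"] / n_items
        + var["oi"] / (n_occasions * n_items)
        + var["poi_e"] / (n_occasions * n_items)
    )
    return var["p"] / (var["p"] + absolute_error)

# Hypothetical variance components, for illustration only.
components = {"p": 0.30, "o": 0.02, "i": 0.05, "po": 0.12,
              "pi": 0.08, "oi": 0.01, "poi_e": 0.20}

for occasions in (1, 2, 3, 4):
    phi = phi_coefficient(components, occasions, n_items=10)
    print(f"{occasions} occasion(s), 10 items: Phi = {phi:.2f}")
```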
Subject(s)
Educational Measurement , Formative Feedback , Students, Medical , Humans , Educational Measurement/methods , Reproducibility of Results , Education, Medical, Undergraduate/methods , Female
ABSTRACT
BACKGROUND: Feedback is integral to medical education, enabling students to improve their knowledge, skills, and attitudes. Feedback practices may vary according to prevalent cultural and contextual factors. This study aimed to explore how feedback is conceptualized and practised in the clinical education of medical students in Sri Lanka. METHODS: The study was conducted in three medical schools and affiliated hospitals that represent the cultural diversity of Sri Lanka. Purposive sampling was utilized to recruit clinical teachers and students who would provide rich information for the study. The study had three components: an observation study, interviews with clinical teachers, and focus group discussions with clinical students. During the observation study, video recording was used as a data collection tool to observe feedback in real-life clinical teaching/learning settings. A constructivist grounded theory approach was adopted for the analysis to explore current practices and perceptions inductively. RESULTS: Feedback was conceptualised as the spontaneous, unidirectional provision of information for the improvement of students. It was often provided in public settings and in student groups. Error correction was the primary focus of feedback, but both teachers and students desired a balanced approach with reinforcement and reflection. Although the direct approach to corrective feedback was found beneficial for student learning, participants agreed that harsh feedback was to be avoided. The hierarchical culture and the lack of programmed feedback in the curricula influenced feedback practices, suggesting the need for modification. CONCLUSIONS: This study highlighted feedback practices in the local context, emphasizing the need to address the hierarchical gap in clinical settings, balance reinforcement and correction, and promote dialogue and reflection in the feedback process. The findings will help clinical teachers from both the global south and the global north to recognize cultural and contextual differences in providing feedback.
Subject(s)
Education, Medical, Undergraduate , Qualitative Research , Students, Medical , Humans , Sri Lanka , Students, Medical/psychology , Male , Focus Groups , Formative Feedback , Female , Feedback , Teaching , Faculty, Medical , Curriculum , Grounded Theory
ABSTRACT
INTRODUCTION: The long case is used to assess medical students' proficiency in performing clinical tasks. As a formative assessment, its purpose is to offer feedback on performance, aiming to enhance and expedite clinical learning. The long case stands out as one of the primary formative assessment methods for clinical clerkships in low-resource settings but has received little attention in the literature. OBJECTIVE: To explore the experiences of medical students and faculty regarding the use of the long case as a formative assessment method at a tertiary care teaching hospital in a low-resource setting. METHODOLOGY: A qualitative study design was used. The study was conducted at Makerere University, in a low-resource setting. The study participants were third- and fifth-year medical students as well as lecturers. Purposive sampling was utilized to recruit participants. Data collection comprised six focus group discussions with students and five key informant interviews with lecturers. The qualitative data were analyzed by inductive thematic analysis. RESULTS: Three themes emerged from the study: ward placement, case presentation, and case assessment and feedback. The findings revealed that students conduct their long cases at the patients' bedside within specific wards/units assigned for the entire clerkship. Effective supervision, feedback, and marks were highlighted as crucial practices that positively impact the learning process. However, challenges such as insufficient orientation to the long case, the super-specialization of the hospital wards, pressure to hunt for marks, and inadequate feedback practices were identified. CONCLUSION: The long case offers students exposure to real patients in a clinical setting. However, in tertiary care teaching hospitals, it is crucial to ensure the proper design and implementation of this practice to enable students' exposure to a variety of cases. Adequate and effective supervision and feedback create valuable opportunities for each learner to present cases and receive corrections.
Subject(s)
Clinical Clerkship , Clinical Competence , Hospitals, Teaching , Qualitative Research , Students, Medical , Humans , Students, Medical/psychology , Faculty, Medical , Focus Groups , Male , Tertiary Care Centers , Educational Measurement , Formative Feedback , Female , Education, Medical, Undergraduate/methods , Resource-Limited Settings
ABSTRACT
BACKGROUND: Pathologies of the locomotor system are frequent and can cause disability and impact the quality of life of the people affected. In recent years, online training and feedback have emerged as learning tools in many fields of medicine. OBJECTIVE: This study aims to evaluate medical interns' musculoskeletal examination performance after completing an online training and feedback module. METHODS: This study employed a quasi-experimental design. Medical interns were invited to complete a 4-week musculoskeletal physical examination training and feedback module via an e-learning platform. The course included written and audiovisual content pertaining to medical history, physical examination, and specific tests for the diagnosis of the most common knee, spine, shoulder, ankle, and foot conditions. Before and after completing the module, the interns' ability to perform the physical examination was evaluated using an objective structured clinical examination (OSCE) with simulated patients that took place face-to-face. A control group of experts was assessed using the OSCE, and their performance was compared to that of the interns before and after the training. At the end of the module, feedback on the OSCE was provided to participants asynchronously through the platform, and two evaluation questions about the user experience were administered at the end of the study. RESULTS: A total of 35 subjects were assessed using the OSCE, including 29 interns and 6 experts. At the beginning of the training module, the interns obtained an average score of 50.6 ± 15.1. At the end of the module, 18 interns retook the OSCE, and their performance increased significantly to an average of 76.6 ± 12.8 (p < 0.01). Prior to the training, the experts performed significantly better than the interns (71.2 vs. 50.6; p = 0.01). After the interns received the training and feedback, there were no significant differences between the two groups (71.2 vs. 76.6; p = 0.43). The two evaluation questions administered at the end of the study revealed that 93% of the participants affirmed that the training module will be useful in their clinical practice, and 100% of the participants would recommend the training module to a colleague. CONCLUSION: The online training and feedback module enhances the musculoskeletal examination performance of medical interns.
Subject(s)
Clinical Competence , Internship and Residency , Musculoskeletal Diseases , Physical Examination , Humans , Physical Examination/standards , Female , Musculoskeletal Diseases/diagnosis , Male , Adult , Educational Measurement , Formative Feedback , Computer-Assisted Instruction/methods , Education, Distance , Feedback
ABSTRACT
BACKGROUND: Medical students benefit from direct observation of clinical performance and timely feedback from multiple sources. While self and peer feedback have been the focus of numerous research studies, how they influence feedback engagement and receptivity in medical students of varying achievement levels in the clinical skills laboratory setting remains relatively unexplored. METHODS: We conducted an exploratory qualitative study to investigate students' engagement with and receptivity to self and peer feedback across academic performance levels at a medical teaching institution. Data from five focus groups with third-year medical students (n = 25) were thematically analysed. RESULTS: The ways in which low- and high-performing students engaged with self-assessment and peer feedback were divided into three categories: affective (or interpersonal), orientational, and transformational. Lower achievers believed they lacked the necessary skills to effectively engage in self and peer feedback interventions, but they agreed with higher achievers, who recognized that peer feedback combined with prior knowledge of learning objectives allowed them to take ownership of monitoring their own development over time. Learners' emotional maturity in response to feedback ratings and to feedback from activities testing clinical cognition had an impact on self-regulated learning. CONCLUSIONS: Creating a trusting environment is critical for improving the acceptability of peer feedback. It is also critical to provide multiple opportunities for self-assessment in order to improve one's judgment. Giving learners the ability to actively seek and engage with feedback encourages participation in the feedback cycle, focusing on self-regulation rather than reliance on feedback providers. Feedback and action plan development can be improved by teaching students how to understand criticism, manage emotions constructively, and practice developing evaluative judgment and self-regulation skills. Based on the study findings, an integrated three-stage training model is recommended for effective self- and peer-feedback practice in undergraduate medical education.
Subject(s)
Academic Performance , Clinical Competence , Education, Medical, Undergraduate , Focus Groups , Peer Group , Self-Assessment , Students, Medical , Humans , Students, Medical/psychology , Clinical Competence/standards , Male , Female , Qualitative Research , Formative Feedback , Feedback
ABSTRACT
BACKGROUND: While communication is an essential skill for providing effective medical care, it is infrequently taught or directly assessed, limiting targeted feedback and behavior change. We sought to evaluate the impact of a multi-departmental longitudinal residency communication coaching program. We hypothesized that program implementation would result in improved confidence in residents' communication skills and higher-quality faculty feedback. METHODS: The program was implemented over a 3-year period (2019-2022) for surgery and neurology residents at a single institution. Trained faculty coaches met with assigned residents for coaching sessions. Each session included an observed clinical encounter, self-reflection, feedback, and goal setting. Eligible residents completed baseline and follow-up surveys regarding their perceptions of feedback and communication. Quantitative responses were analyzed using paired t-tests; qualitative responses were analyzed using content analysis. RESULTS: The baseline and follow-up survey response rates were 90.0% (126/140) and 50.5% (46/91), respectively. In a paired analysis of 40 respondents, residents reported greater confidence in their ability to communicate with patients (inpatient: 3.7 vs. 4.3, p < 0.001; outpatient: 3.5 vs. 4.2, p < 0.001), self-reflect (3.3 vs. 4.3, p < 0.001), and set goals (3.6 vs. 4.3, p < 0.001), as measured on a 5-point scale. Residents also reported greater usefulness of faculty feedback (3.3 vs. 4.2, p = 0.001). The content analysis revealed helpful elements of the program, challenges, and opportunities for improvement. Receiving mentorship was indicated as a core program strength, among others, whereas resolving session coordination and scheduling issues and lowering the coach-to-resident ratio were suggested as areas for improvement. CONCLUSIONS: These findings suggest that direct observation of communication in clinical encounters by trained faculty coaches can facilitate long-term trainee growth across multiple core competencies. Future studies should evaluate the impact on patient outcomes and workplace-based assessments.
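As a minimal illustration of the paired analysis reported above, the sketch below applies a paired t-test to hypothetical pre/post ratings on a 5-point scale; the individual-level data are not available from the abstract, so the numbers are invented for demonstration only.

```python
# Illustrative sketch: paired t-test on pre/post confidence ratings, as the
# abstract describes for quantitative survey responses.
from scipy.stats import ttest_rel

# Hypothetical 5-point confidence ratings from the same residents before and
# after the coaching program (one value per resident, matched order).
baseline = [3, 4, 3, 4, 4, 3, 4, 3, 4, 3]
followup = [4, 5, 4, 4, 5, 4, 5, 4, 4, 4]

result = ttest_rel(followup, baseline)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```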
Subject(s)
Communication , Internship and Residency , Mentoring , Humans , Clinical Competence , Female , Male , Program Evaluation , Formative Feedback , Feedback , Surveys and Questionnaires
ABSTRACT
BACKGROUND: Laparoscopic surgery is associated with a prolonged learning curve for emerging surgeons, and simulation-based training (SBT) has become increasingly prominent in this context due to stringent working time regulations and heightened concerns regarding patient safety. While SBT offers a safe and ethical learning environment, the accuracy of simulators in evaluating surgical skills remains uncertain. This study aims to assess the precision of a laparoscopic simulator in evaluating surgical performance and to identify the instructor's role in SBT. MATERIALS AND METHODS: This retrospective study focused on surgical residents in their 1st through 5th years at the Department of Surgery of Linkou Chang Gung Memorial Hospital. The residents participated in a specially designed SBT program using the LapSim laparoscopic simulator. Following the training session, each resident was required to perform a laparoscopic procedure and received individualized feedback from an instructor. Both the simulator and the instructor evaluated trainees' performance on the LapSim, with a focus on identifying correlations between the simulator's metrics and traditional assessments. RESULTS: Senior residents (n = 15), who performed more complex laparoscopic procedures, exhibited greater improvements after receiving instructor feedback than did junior residents (n = 17). Notably, a stronger correlation between the simulator and instructor assessments was observed in the junior group (junior Global Operative Assessment of Laparoscopic Skills (GOALS): adjusted R² = 0.285, p = 0.016), while no such correlation was observed in the senior group. CONCLUSION: A well-designed, step-by-step SBT program can be a valuable tool in laparoscopic surgical training. The LapSim simulator has demonstrated its potential for assessing surgical performance during the early stages of surgical training. However, instructors must provide intuitive feedback to ensure appropriate learning in later stages.
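To illustrate the kind of correlation analysis reported above (an adjusted R² relating simulator metrics to instructor GOALS ratings), the sketch below fits an ordinary least squares regression on hypothetical trainee-level scores; it is not the authors' analysis code, and the values are invented.

```python
# Illustrative sketch: relate a simulator metric to instructor GOALS ratings
# with OLS and report adjusted R^2 and the overall F-test p-value.
import numpy as np
import statsmodels.api as sm

simulator_score = np.array([55, 60, 48, 72, 65, 58, 70, 52, 63, 68])   # hypothetical LapSim metric
instructor_goals = np.array([14, 16, 12, 20, 17, 15, 19, 13, 16, 18])  # hypothetical GOALS rating

X = sm.add_constant(simulator_score)          # intercept + simulator score
model = sm.OLS(instructor_goals, X).fit()
print(f"adjusted R^2 = {model.rsquared_adj:.3f}, p = {model.f_pvalue:.3f}")
```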
Subject(s)
Clinical Competence , Internship and Residency , Laparoscopy , Simulation Training , Tertiary Care Centers , Humans , Retrospective Studies , Laparoscopy/education , Clinical Competence/standards , Internship and Residency/standards , Formative Feedback , Male , Female , Adult , Educational Measurement
ABSTRACT
OBJECTIVE: The authors sought to assess whether an Ask-Tell-Ask feedback model augmented with bidirectional feedback improves perception of feedback. METHODS: Implementation occurred on an inpatient psychiatry unit at University of North Carolina (UNC) Hospitals from July 2022 to June 2023 among attending and resident physicians and medical students. Attending physicians were educated on the Ask-Tell-Ask model and encouraged to hold weekly bidirectional feedback sessions with trainees. Surveys containing rating scales and free-text responses were distributed by email before and after rotations to assess perceptions of feedback: whether feedback was clearly stated, occurred on a predictable basis, included actionable goals, and fostered bidirectional feedback with attendings. For statistical analysis, survey responses were assigned numerical values of 1 (strongly disagree) to 5 (strongly agree). Differences between mean numerical correlates of responses from pre-rotation and post-rotation surveys were analyzed with unpaired t-tests; p < 0.05 indicated statistical significance. The authors independently developed themes from free-text responses, which were then consolidated into themes agreed upon by all authors. RESULTS: Mean ratings for all survey items improved following the intervention; all differences were statistically significant (p < 0.0001). Pre-rotation, the feedback culture was described as constrained, fraught, non-actionable, inconsistent, improving, and hierarchical. Post-rotation, the feedback culture within the UNC Department of Psychiatry was described as constructive, consistent, improving, strength-based, approachable, and nonhierarchical. CONCLUSIONS: An Ask-Tell-Ask feedback model with an added emphasis on giving and receiving feedback significantly improves perception of feedback and feedback culture.
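The analysis described above (Likert labels coded 1 through 5 and pre- vs. post-rotation responses compared with unpaired t-tests) can be sketched as follows; the responses shown are hypothetical and the sketch is illustrative only.

```python
# Illustrative sketch: map Likert labels to 1-5 and compare pre- vs.
# post-rotation responses with an unpaired t-test, as the abstract describes.
from scipy.stats import ttest_ind

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

pre = ["neutral", "agree", "disagree", "neutral", "agree", "neutral"]
post = ["agree", "strongly agree", "agree", "agree", "strongly agree", "agree"]

pre_scores = [LIKERT[r] for r in pre]
post_scores = [LIKERT[r] for r in post]

result = ttest_ind(post_scores, pre_scores)   # unpaired, as reported
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```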
Subject(s)
Internship and Residency , Psychiatry , Humans , Psychiatry/education , Students, Medical/psychology , Surveys and Questionnaires , North Carolina , Feedback , Formative Feedback
ABSTRACT
OBJECTIVE: Feedback is a critically important tool in medical education. This pilot program applies and evaluates a competency-based approach to develop residents' skills in providing feedback to medical students. METHODS: In 2018-2019, a competency-based resident feedback skills program incorporating videorecording of skills, multi-source feedback using assessment tools with validity evidence, and sequential deliberate practice was piloted in a single-center, prospective study at the University of Rochester. Study participants included eight second-year psychiatry residents and 23 third-year clerkship students. After an introduction to foundational feedback concepts in didactic sessions, residents were videorecorded providing feedback to medical students. Recordings were reviewed with a faculty member for feedback. Feedback skills were assessed by the students who had received the resident's feedback, by the residents themselves, and by faculty, using a tool with validity evidence. Observations were repeated a total of three times. RESULTS: Mean feedback scores increased from 2.70 at the first feedback observation to 2.77 at the second and 2.89 at the third (maximum 3.00 points). The differences between the first and third sessions (0.19) and between the second and third sessions (0.12) were statistically significant (p < .001 and p = .007, with SE of 0.4 and 0.4, respectively). CONCLUSIONS: The observed competency-based feedback skills training program for residents using sequential, multi-source review and feedback was feasible and effective. Direct observation is a key component of high-quality feedback, and videorecording is an efficient methodology for observations, enabling both direct observation by the assessor and an opportunity for enhanced self-assessment as residents view themselves in the feedback encounter.
Subject(s)
Clinical Competence , Competency-Based Education , Internship and Residency , Psychiatry , Humans , Competency-Based Education/methods , Psychiatry/education , Pilot Projects , Formative Feedback , Prospective Studies , Students, Medical , Feedback , Educational Measurement , Clinical Clerkship , Adult , Female , Male
ABSTRACT
INTRODUCTION: This study aimed to develop a module that incorporates hands-on activities and reflective feedback in teaching dental materials science and subsequently to analyse undergraduate dental students' learning experiences with the module. MATERIALS AND METHODS: The module was developed based on the ADDIE (Analyse, Design, Develop, Implement, Evaluate) model. First, a needs analysis was conducted, followed by designing the module to address the identified needs. Next, expert feedback on the module was sought, and the module was piloted. The revised module was implemented among all second-year undergraduate dental students. Finally, a validated questionnaire (5-point Likert scale items and open-ended questions) was used to evaluate students' learning experiences. The Likert scale items were analysed descriptively, whereas the open-ended responses were analysed using content analysis. RESULTS: In the analysis phase, a slight misalignment in cognitive competency levels was observed, alongside a need for the inclusion of more hands-on activities. In the design phase, learning objectives and resources were listed. Subsequently, a module consisting of four teaching sessions (3 h each) was developed, and the pilot test yielded favourable feedback. The module was then implemented in small groups of 10-12 students. In the evaluation phase, 72 students (97% response rate) completed the questionnaire. The majority of students agreed with all items, with mean scores ranging from 4.53 to 4.72. Open-ended responses highlighted that the hands-on activities and reflective feedback sessions were useful. CONCLUSION: Students demonstrated positive learning experiences after participating in the module, advocating for dental educators to consider more hands-on activities and reflective feedback sessions in teaching dental materials science.