Results 1 - 2 of 2
1.
J Surg Educ; 77(6): e187-e195, 2020.
Article in English | MEDLINE | ID: mdl-32600891

ABSTRACT

OBJECTIVE: In surgery residency programs, Accreditation Council for Graduate Medical Education-mandated performance assessment can include assessment in the operating room to demonstrate that necessary quality and autonomy goals are achieved by the conclusion of training. For the past 3 years, our institution has used the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) instrument to assess and track operative skills. Evaluation is accomplished in near real time using a secure web-based platform for data management and analytics (Firefly). When the platform's case-logging function is accessed, the O-SCORE instrument is delivered to faculty members for rapid completion, facilitating the quality and timeliness of feedback. We sought to demonstrate the platform's utility in detecting operative performance changes over time in response to focused educational interventions, based on stored case log and O-SCORE data.

DESIGN: Stored resident performance assessments for the most frequently performed laparoscopic procedures (cholecystectomy, appendectomy, inguinal hernia repair, ventral hernia repair) were examined for 3 successive academic years (2016-2019). During this time, 4 of 36 residents had received program-assigned supplemental simulation training to improve laparoscopic skills. O-SCORE data for these residents were extracted separately from peer data, which were used for comparison. Assigned training consisted of a range of videoscopic and virtual reality skills drills with performance objectives. O-SCORE responses were converted to integers, and autonomy scores for items pertaining to technical skill were compared before and after the educational interventions (Student's t-tests). These scores were also compared to aggregate scores in the nonintervention group. Bayesian-modeled learning curves were used to characterize patterns of improvement over time.

SETTING: University of Massachusetts Medical School-Baystate Surgery Residency and Baystate Medical Center.

PARTICIPANTS: General surgery residents (n = 36).

RESULTS: During the period of review, 3325 resident cases were identified meeting the case-type criteria. As expected, overall autonomy increased with the number of cases performed. The 4 residents who had been assigned supplemental training (6-18 months) had preintervention score averages that were lower than those of the nonintervention group (2.25 ± 0.43 vs 3.57 ± 1.02; p < 0.0001). During the respective intervention periods, all 4 residents improved their autonomy scores (increase to 3.40 ± 0.61; p < 0.0001). Similar improvements were observed for the tissue handling, instrument handling, bimanual dexterity, visuospatial skill, and operative efficiency component skills. Postintervention scores were not significantly different from scores for the nonintervention group. Bayesian-modeled learning curves showed a similar pattern of postintervention performance improvement.

CONCLUSIONS: The data management platform proved to be an effective tool for tracking responses to supplemental training that was deemed necessary to close defined skills gaps in laparoscopic surgery. This could be seen in both individual and aggregated data. We were gratified that, at the conclusion of the supplemental training, O-SCORE results for the intervention group were not different from those seen in the nonintervention group.
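A minimal sketch of the score comparison described above, assuming a hypothetical CSV export of case-linked O-SCORE responses; the file name, column names, and anchor wording are illustrative assumptions, not the Firefly platform's actual schema or the authors' code:

import pandas as pd
from scipy import stats

# Hypothetical export: one row per completed O-SCORE item response.
df = pd.read_csv("oscore_export.csv")  # assumed columns: resident_id, item, response, phase

# Map the 5-point O-SCORE anchors to integers (anchor wording assumed for illustration).
scale = {
    "I had to do": 1,
    "I had to talk them through": 2,
    "I had to prompt them from time to time": 3,
    "I needed to be in the room just in case": 4,
    "I did not need to be there": 5,
}
df["score"] = df["response"].map(scale)

# Compare autonomy scores before vs after the supplemental training (Student's t-test).
autonomy = df[df["item"] == "autonomy"]
pre = autonomy.loc[autonomy["phase"] == "pre", "score"]
post = autonomy.loc[autonomy["phase"] == "post", "score"]
t, p = stats.ttest_ind(pre, post)
print(f"pre {pre.mean():.2f} ± {pre.std():.2f}, post {post.mean():.2f} ± {post.std():.2f}, p = {p:.4g}")

The same per-case extraction could also feed the Bayesian-modeled learning curves mentioned above; that modeling step is not sketched here.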


Subjects
General Surgery, Internship and Residency, Bayes Theorem, Clinical Competence, Data Management, Graduate Medical Education, Educational Measurement, General Surgery/education, Humans, Internet
2.
J Surg Educ; 76(6): e209-e216, 2019.
Article in English | MEDLINE | ID: mdl-31515199

ABSTRACT

OBJECTIVE: The purpose of this study was to determine whether an automated platform for evaluation selection and delivery would increase participation from surgical teaching faculty in submitting resident operative performance evaluations.

DESIGN: We built a HIPAA-compliant, web-based platform to track resident operative assignments and to link embedded evaluation instruments to procedure type. The platform matched appropriate evaluations to surgeons' scheduled procedures and delivered multiple evaluation types, including Ottawa Surgical Competency Operating Room Evaluation (O-Score) and Operative Performance Rating System (OPRS) evaluations. Prompts to complete evaluations were made through a system of automatic electronic notifications. We compared the time spent in the platform to complete each evaluation type. As a metric for the platform's effect on faculty participation, we considered a task that would typically be infeasible without workflow optimization: the evaluator could choose to complete multiple, complementary evaluations for the same resident in the same case. For cases with multiple evaluations, correlation was analyzed by Spearman rank test. Evaluation data were compared between PGY levels using repeated-measures ANOVA.

SETTING: The study took place at 4 general surgery residency programs: the University of Massachusetts Medical School-Baystate, the University of Connecticut School of Medicine, the University of Iowa Carver College of Medicine, and Maimonides Medical Center.

PARTICIPANTS: From March 2017 to February 2019, the study included 70 surgical teaching faculty and 101 general surgery residents.

RESULTS: Faculty completed 1230 O-Score evaluations and 106 OPRS evaluations. Evaluations were completed quickly, with a median time of 36 ± 18 seconds for O-Score evaluations and 53 ± 51 seconds for OPRS evaluations. Without optional comments, 89% of O-Score and 55% of OPRS evaluations were completed within 1 minute, and 99% of O-Score and 82% of OPRS evaluations within 2 minutes. Of the cases eligible for both evaluation types, attendings completed both evaluations on 74 of 221 (33%). These paired evaluations were strongly correlated on resident performance (Spearman coefficient = 0.84, p < 0.00001). Both evaluation types stratified operative skill level by program year (p < 0.00001).

CONCLUSIONS: Evaluation initiatives can be hampered by the challenge of making multiple surgical evaluation instruments available when needed for appropriate clinical situations, including specific case types. As a test of the optimized evaluation workflow, and to lay the groundwork for future data-driven design of evaluations, we tested the impact of simultaneously delivering 2 evaluation instruments via a secure web-based education platform. We measured the evaluation completion rates of faculty surgeon evaluators when rating resident operative performance, and how effectively the evaluation results could be analyzed and compared, taking advantage of highly integrated management of the evaluative information.
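As a rough illustration of the paired-evaluation analysis described above, the sketch below pairs O-Score and OPRS ratings by case and computes the Spearman rank correlation; the export layout and column names (case_id, instrument, overall_score) are assumptions for illustration, not the platform's actual schema:

import pandas as pd
from scipy import stats

# Hypothetical export: one row per completed evaluation.
evals = pd.read_csv("evaluations_export.csv")  # assumed columns: case_id, instrument, overall_score

# Keep only cases that received both an O-Score and an OPRS evaluation.
paired = (
    evals.pivot_table(index="case_id", columns="instrument", values="overall_score")
    .dropna(subset=["O-Score", "OPRS"])
)

# Spearman rank correlation between the paired ratings.
rho, p = stats.spearmanr(paired["O-Score"], paired["OPRS"])
print(f"n = {len(paired)}, Spearman rho = {rho:.2f}, p = {p:.2g}")

The repeated-measures ANOVA across PGY levels reported above would additionally require the per-resident longitudinal structure and is not sketched here.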


Subjects
Clinical Competence, Educational Measurement/methods, General Surgery/education, Competency-Based Education, Graduate Medical Education, Formative Feedback, Humans, Internet, Internship and Residency, Task Performance and Analysis, United States