C-SATS: Assessing Surgical Skills Among Urology Residency Applicants.
Vernez, Simone L; Huynh, Victor; Osann, Kathryn; Okhunov, Zhamshid; Landman, Jaime; Clayman, Ralph V.
Affiliation
  • Vernez SL; 1 Department of Urology, University of California, Irvine, Orange, California.
  • Huynh V; 1 Department of Urology, University of California, Irvine, Orange, California.
  • Osann K; 2 Hematology-Oncology Division, Department of Medicine, University of California, Irvine, Orange, California.
  • Okhunov Z; 1 Department of Urology, University of California, Irvine, Orange, California.
  • Landman J; 1 Department of Urology, University of California, Irvine, Orange, California.
  • Clayman RV; 1 Department of Urology, University of California, Irvine, Orange, California.
J Endourol; 31(S1): S95-S100, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27633332
ABSTRACT

BACKGROUND:

We hypothesized that surgical skills assessment could aid in the selection of medical student applicants to a surgical program. Recently, crowdsourcing has been shown to provide an accurate assessment of surgical skills at all levels of training. We compared expert and crowd assessments of surgical tasks performed by resident applicants during their interview day at the urology program at the University of California, Irvine.

MATERIALS AND METHODS:

Twenty-five resident interviewees performed four tasks: open square knot tying, laparoscopic peg transfer, robotic suturing, and skill task 8 on the LAP Mentor™ (Simbionix Ltd., Lod, Israel). Faculty experts and crowd workers (Crowd-Sourced Assessment of Technical Skills [C-SATS], Seattle, WA) assessed the recorded performances using three validated assessment tools: the Objective Structured Assessment of Technical Skills (OSATS), the Global Evaluative Assessment of Robotic Skills (GEARS), and the Global Operative Assessment of Laparoscopic Skills (GOALS).
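The abstract does not describe how individual crowd ratings were combined into a per-applicant score; the sketch below shows one plausible aggregation, averaging hypothetical GEARS domain ratings across crowd raters and summing the domains. The domain names follow the published GEARS rubric, but the data and the aggregation rule are assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' code): aggregate crowd ratings into a
# per-applicant score for one task. GEARS domains are each rated 1-5.
from statistics import mean

GEARS_DOMAINS = [
    "depth_perception", "bimanual_dexterity", "efficiency",
    "force_sensitivity", "autonomy", "robotic_control",
]

def aggregate_crowd_score(ratings):
    """Average each domain across crowd raters, then sum the domain means."""
    domain_means = {d: mean(r[d] for r in ratings) for d in GEARS_DOMAINS}
    return sum(domain_means.values())

# Example: three hypothetical crowd raters scoring one robotic-suturing clip.
ratings = [
    {d: 3 for d in GEARS_DOMAINS},
    {d: 4 for d in GEARS_DOMAINS},
    {d: 3 for d in GEARS_DOMAINS},
]
print(aggregate_crowd_score(ratings))  # total GEARS score for this applicant
```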

RESULTS:

Overall, 3938 crowd assessments were obtained for the four tasks in less than 3.5 hours, whereas the average time to receive 150 expert assessments was 22 days. Inter-rater agreement between expert and crowd scores was 0.62 for open knot tying, 0.92 for laparoscopic peg transfer, and 0.86 for robotic suturing. Agreement between applicant rank derived from the LAP Mentor's built-in assessment of skill task 8 and the crowd assessment was 0.32. The crowd match rank based solely on skills performance agreed only moderately with the final faculty match rank list (0.46); however, none of the bottom five crowd-rated applicants appeared in the top five expert-rated applicants, and none of the top five crowd-rated applicants appeared in the bottom five expert-rated applicants.
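The abstract does not state which statistic was used for the rank comparisons; purely as an illustration, the sketch below compares a crowd-derived and an expert-derived applicant ordering with a Spearman rank correlation over hypothetical scores.

```python
# Minimal sketch (assumed statistic, hypothetical data): compare crowd-derived
# and expert-derived applicant rankings with a Spearman rank correlation.
from scipy.stats import spearmanr

crowd_scores  = [18.2, 22.5, 15.0, 20.1, 19.4]   # mean crowd score per applicant
expert_scores = [17.0, 23.1, 14.2, 18.8, 20.0]   # mean expert score per applicant

rho, p_value = spearmanr(crowd_scores, expert_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```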

CONCLUSIONS:

Crowd-sourced assessment of resident applicants' surgical skills showed good inter-rater agreement with expert physician raters, but not with a computer-based objective motion-metrics software assessment. Overall applicant rank was affected to some degree by the crowd performance rating.

Full text: 1 Databases: MEDLINE Main subject: Personnel Selection / Medical Students / Urology / Clinical Competence / Internship and Residency Study type: Qualitative_research Limits: Humans Language: En Journal: J Endourol Journal subject: UROLOGY Publication year: 2017 Document type: Article
