A comparative analysis of video-based surgical assessment: is evaluation of the entire critical portion of the operation necessary?
Barnhill, Caleb W; Kaplan, Stephen J; Alseidi, Adnan A; Deal, Shanley B.
Affiliation
  • Barnhill CW; Virginia Mason Medical Center, 1100 9th Ave., C6-GS, Seattle, WA, 98101, USA. Caleb.barnhill@virginiamason.org.
  • Kaplan SJ; Virginia Mason Medical Center, 1100 9th Ave., C6-GS, Seattle, WA, 98101, USA.
  • Alseidi AA; Virginia Mason Medical Center, 1100 9th Ave., C6-GS, Seattle, WA, 98101, USA.
  • Deal SB; Virginia Mason Medical Center, 1100 9th Ave., C6-GS, Seattle, WA, 98101, USA.
Surg Endosc; 36(9): 6719-6723, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35146556
ABSTRACT

BACKGROUND:

Previous studies of video-based operative assessments using crowdsourcing have established the efficacy of non-expert evaluations. Our group sought to determine whether abbreviated video content yields operative assessments equivalent to those of full-length recordings.

METHODS:

A single-institution video repository of six core general surgery operations was submitted for evaluation. Each core surgery included three unique surgical performances, totaling 18 unique operative videos. Each video was edited using four different protocols based on the critical portion of the operation: (1) custom-edited critical portion, (2) condensed critical portion, (3) first 20 s of every minute of the critical portion, and (4) first 10 s of every minute of the critical portion. In total, 72 individually edited operative videos were submitted to the Crowd-Sourced Assessment of Technical Skills (C-SATS) platform for evaluation. Aggregate scores across the study protocols were compared using the Kruskal-Wallis test. A multivariable, multilevel mixed-effects model was constructed to predict total skill assessment scores.
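The interval-sampling protocols (3) and (4) can be sketched as follows — a minimal illustration, assuming the protocol keeps the first N seconds of each successive minute of the critical portion; the function name and clip representation are illustrative, not from the study:

```python
def sample_segments(duration_s: float, keep_s: int) -> list[tuple[float, float]]:
    """Return (start, end) clips keeping the first `keep_s` seconds
    of every minute of a critical portion lasting `duration_s` seconds.
    Hypothetical sketch of the study's interval-sampling edit protocols."""
    segments = []
    start = 0.0
    while start < duration_s:
        # Clip runs from the top of the minute for keep_s seconds,
        # truncated at the end of the critical portion.
        end = min(start + keep_s, duration_s)
        segments.append((start, end))
        start += 60.0  # advance to the next minute
    return segments

# Example: a 150 s critical portion under the 20 s protocol
clips = sample_segments(150, 20)
# clips == [(0.0, 20.0), (60.0, 80.0), (120.0, 140.0)]
edited_length = sum(end - start for start, end in clips)  # 60.0 s total
```

Under this scheme the 20 s protocol retains roughly a third of the critical portion and the 10 s protocol roughly a sixth, consistent with the shorter median video lengths reported for those arms.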

RESULTS:

Median video lengths for each protocol were: custom, 620 (IQR 527-728); condensed, 1035 (850-1206); 10 s, 435 (211-609); and 20 s, 909 (420-1214). There was no difference in aggregate median score among the four study protocols: custom, 15.7 (14.4-16.2); condensed, 15.8 (15.2-16.4); 10 s, 15.8 (15.3-16.1); 20 s, 16.0 (15.1-16.3); χ² = 1.661, p = 0.65. Regression modeling demonstrated a significant but minimal effect of the 10 s and 20 s editing protocols, compared to the custom method, on individual video score: condensed, +0.33 (−0.05 to 0.70), p = 0.09; 10 s, +0.29 (0.04 to 0.55), p = 0.03; 20 s, +0.40 (0.15 to 0.66), p = 0.002.

CONCLUSION:

A standardized protocol for editing surgical performance videos into abbreviated form yields reproducible assessments of surgical aptitude when evaluated by non-experts.

Full text: 1 Database: MEDLINE Main subject: Clinical Competence / Crowdsourcing Study type: Prognostic_studies Limit: Humans Language: En Year of publication: 2022 Document type: Article