Evaluating robotic-assisted surgery training videos with multi-task convolutional neural networks.
Wang, Yihao; Dai, Jessica; Morgan, Tara N; Elsaied, Mohamed; Garbens, Alaina; Qu, Xingming; Steinberg, Ryan; Gahan, Jeffrey; Larson, Eric C.
Affiliation
  • Wang Y; Department of Computer Science, Southern Methodist University, Dallas, USA.
  • Dai J; Department of Urology, University of Texas Southwestern Medical Center, Dallas, USA.
  • Morgan TN; Department of Urology, University of Texas Southwestern Medical Center, Dallas, USA.
  • Elsaied M; Department of Computer Science, Southern Methodist University, Dallas, USA.
  • Garbens A; Department of Urology, University of Texas Southwestern Medical Center, Dallas, USA.
  • Qu X; Department of Computer Science, Southern Methodist University, Dallas, USA.
  • Steinberg R; Department of Urology, University of Texas Southwestern Medical Center, Dallas, USA.
  • Gahan J; Department of Urology, University of Texas Southwestern Medical Center, Dallas, USA.
  • Larson EC; Department of Computer Science, Southern Methodist University, Dallas, USA. eclarson@smu.edu.
J Robot Surg; 16(4): 917-925, 2022 Aug.
Article in En | MEDLINE | ID: mdl-34709538
ABSTRACT
We seek to understand whether an automated algorithm can replace human scoring of surgical trainees performing the urethrovesical anastomosis in radical prostatectomy on synthetic tissue. Specifically, we investigate neural networks for predicting the surgical proficiency score (GEARS score) from video clips. We evaluate videos of surgeons performing the urethral anastomosis on synthetic tissue. The algorithm tracks surgical instrument locations in the video, saving the positions of key points on the instruments over time. These positional features are used to train a multi-task convolutional network to infer each sub-category of the GEARS score and thereby determine the proficiency level of trainees. Experimental results demonstrate that the proposed method achieves good performance, with scores matching manual inspection in 86.1% of all GEARS sub-categories. Furthermore, the model can detect the difference in proficiency (novice to expert) in 83.3% of videos. Evaluation of GEARS sub-categories with artificial neural networks is possible for novice and intermediate surgeons, but additional research is needed to understand whether expert surgeons can be evaluated with a similar automated system.
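The abstract describes a multi-task architecture: a shared trunk processes the time series of instrument key-point positions, and one output head per GEARS sub-category predicts a proficiency score. The paper does not publish its architecture details, so the following is only a minimal NumPy sketch of that idea under assumed dimensions: 8 positional channels per timestep, a single shared 1-D convolution with global average pooling, six heads (GEARS conventionally has six sub-categories) each classifying a score from 1 to 5. Weights are random here; a real system would train them on scored videos.

```python
import numpy as np

rng = np.random.default_rng(0)

T, C = 128, 8              # timesteps, key-point coordinate channels (assumed)
N_TASKS, N_LEVELS = 6, 5   # GEARS sub-categories, score levels 1-5

def conv1d_relu(x, w):
    """Valid 1-D convolution along time with ReLU.
    x: (T, C_in) input sequence; w: (k, C_in, C_out) filter bank."""
    k = w.shape[0]
    out = np.stack([np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
                    for t in range(x.shape[0] - k + 1)])
    return np.maximum(out, 0.0)

# shared trunk weights and one linear head per GEARS sub-category
w_conv = rng.normal(scale=0.1, size=(5, C, 16))
heads = [rng.normal(scale=0.1, size=(16, N_LEVELS)) for _ in range(N_TASKS)]

x = rng.normal(size=(T, C))               # one clip's key-point trajectories
h = conv1d_relu(x, w_conv).mean(axis=0)   # global average pooling -> (16,)
scores = [int(np.argmax(h @ wh)) + 1 for wh in heads]  # one score per task
print(scores)
```

The multi-task design reflects the paper's evaluation: the shared trunk learns motion features common to all sub-categories, while the separate heads allow each GEARS dimension to be scored and compared against manual inspection independently.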
Full text: 1 Database: MEDLINE Main subject: Robotic Surgical Procedures / Surgeons Study type: Prognostic_studies Limits: Humans / Male Language: En Publication year: 2022 Document type: Article
