An artificial intelligence model that automatically labels roux-en-Y gastric bypasses, a comparison to trained surgeon annotators.
Fer, Danyal; Zhang, Bokai; Abukhalil, Rami; Goel, Varun; Goel, Bharti; Barker, Jocelyn; Kalesan, Bindu; Barragan, Irene; Gaddis, Mary Lynn; Kilroy, Pablo Garcia.
Affiliation
  • Fer D; University of California, San Francisco-East Bay, General Surgery, Oakland, CA, USA.
  • Zhang B; Johnson & Johnson MedTech, New Brunswick, NJ, USA.
  • Abukhalil R; Johnson & Johnson MedTech, New Brunswick, NJ, USA.
  • Goel V; Johnson & Johnson MedTech, New Brunswick, NJ, USA. rabukhal@its.jnj.com.
  • Goel B; , 5490 Great America Parkway, Santa Clara, CA, 95054, USA. rabukhal@its.jnj.com.
  • Barker J; University of California, San Francisco-East Bay, General Surgery, Oakland, CA, USA.
  • Kalesan B; Johnson & Johnson MedTech, New Brunswick, NJ, USA.
  • Barragan I; Johnson & Johnson MedTech, New Brunswick, NJ, USA.
  • Gaddis ML; Johnson & Johnson MedTech, New Brunswick, NJ, USA.
  • Kilroy PG; Johnson & Johnson MedTech, New Brunswick, NJ, USA.
Surg Endosc; 37(7): 5665-5672, 2023 Jul.
Article in En | MEDLINE | ID: mdl-36658282
ABSTRACT

INTRODUCTION:

Artificial intelligence (AI) can automate certain tasks to improve data collection. Models have been created to annotate the steps of Roux-en-Y gastric bypass (RYGB); however, model performance has not been compared with the performance of individual surgeon annotators. We developed a model that automatically labels RYGB steps and compared its performance to that of surgeons.

METHODS AND PROCEDURES:

545 videos of laparoscopic RYGB procedures (17 surgeons) were collected, and an annotation guide (12 steps, 52 tasks) was developed. Steps were annotated by 11 surgeons: each video was annotated by two surgeons, and a third reconciled the differences. A convolutional AI model was trained to identify the steps and compared with the manual annotation, using 390 videos for training, 95 for validation, and 60 for testing. The AI model and the manual annotations were compared using analysis of variance (ANOVA) on the 60-video testing subset. Model performance was assessed at each step, with poor performance defined as an F1-score below 80%.
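The abstract does not disclose the model architecture or training details; the following is a minimal sketch, assuming a generic torchvision ResNet-18 backbone in PyTorch, of how a frame-level classifier over the 12 RYGB steps might be set up. All names and hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' implementation): a frame-level convolutional
# classifier over the 12 RYGB steps. Backbone, optimizer, and hyperparameters
# are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

NUM_STEPS = 12  # steps defined in the annotation guide

class StepClassifier(nn.Module):
    def __init__(self, num_steps: int = NUM_STEPS):
        super().__init__()
        # Assumed backbone; the abstract only states that the model is convolutional.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_steps)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) video frames -> per-frame step logits
        return self.backbone(frames)

model = StepClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative optimization step on dummy data; real training would iterate
# over frames from the 390 training videos and validate on the 95-video set.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_STEPS, (8,))
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()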

RESULTS:

The convolutional model identified the 12 steps of the RYGB architecture. Model performance varied by step (F1 > 90% for 7 steps, and > 80% for 2 more). The reconciled manual annotation data (F1 > 80% for more than 5 steps) performed better than the individual trainee annotations (F1 > 80% for 2-5 steps for 4 annotators, and for fewer than 2 steps for the other 4 annotators). In the testing subset, certain steps had low performance, indicating potential ambiguities in surgical landmarks. Additionally, some videos were easier to annotate than others, suggesting variability between cases. After controlling for this variability, the AI algorithm was comparable to manual annotation (p < 0.0001).
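The per-step evaluation can be illustrated with a short scikit-learn sketch that computes F1 for each of the 12 steps and flags those below the 80% threshold; the label arrays are random placeholders, not study data.

# Illustrative only: per-step F1 against reconciled reference labels, flagging
# steps below the paper's 80% threshold. Labels are random placeholders.
import numpy as np
from sklearn.metrics import f1_score

NUM_STEPS = 12
reference = np.random.randint(0, NUM_STEPS, size=10_000)  # reconciled manual labels (placeholder)
predicted = np.random.randint(0, NUM_STEPS, size=10_000)  # model or trainee labels (placeholder)

per_step_f1 = f1_score(reference, predicted, labels=list(range(NUM_STEPS)), average=None)
for step, f1 in enumerate(per_step_f1):
    status = "poor (<80%)" if f1 < 0.80 else "ok"
    print(f"step {step:2d}: F1 = {f1:.2f} [{status}]")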

CONCLUSION:

AI can identify surgical landmarks in RYGB comparably to the manual process, and it recognized some landmarks more accurately than the surgeons did. This technology has the potential to improve surgical training by assessing the learning curves of surgeons at scale.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Morbid Obesity / Gastric Bypass / Laparoscopy / Surgeons Study type: Observational_studies / Prognostic_studies Limit: Humans Language: En Publication year: 2023 Document type: Article