Reducing annotating load: Active learning with synthetic images in surgical instrument segmentation.
Peng, Haonan; Lin, Shan; King, Daniel; Su, Yun-Hsuan; Abuzeid, Waleed M; Bly, Randall A; Moe, Kris S; Hannaford, Blake.
Affiliations
  • Peng H; University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA. Electronic address: penghn@uw.edu.
  • Lin S; University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA.
  • King D; University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA.
  • Su YH; Mount Holyoke College, 50 College St, South Hadley, MA 01075, USA.
  • Abuzeid WM; University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA.
  • Bly RA; University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA.
  • Moe KS; University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA.
  • Hannaford B; University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA.
Med Image Anal; 97: 103246, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38943835
ABSTRACT
Accurate instrument segmentation in the endoscopic vision of minimally invasive surgery is challenging due to complex instruments and environments. Deep learning techniques have shown competitive performance in recent years. However, deep learning usually requires a large amount of labeled data to achieve accurate prediction, which imposes a significant annotation workload. To alleviate this workload, we propose an active learning-based framework that generates synthetic images for efficient neural network training. In each active learning iteration, a small number of informative unlabeled images are first queried by active learning and manually labeled. Next, synthetic images are generated from these selected images: instruments and backgrounds are cropped out and randomly recombined, with blending and fusion applied near the instrument boundaries. The proposed method leverages the advantages of both active learning and synthetic images. Its effectiveness is validated on two sinus surgery datasets and one intra-abdominal surgery dataset. The results indicate a considerable performance improvement, especially when the size of the annotated dataset is small. All the code is open-sourced at https://github.com/HaonanPeng/active_syn_generator.
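The abstract describes generating synthetic training images by cropping instruments from the few actively selected, manually labeled frames and pasting them onto other backgrounds, with blending near the boundary. The following is a minimal sketch of that general idea only, not the authors' implementation (see the linked GitHub repository for that); the function name, parameters, and the choice of a Gaussian-softened alpha mask are illustrative assumptions.

```python
# Minimal sketch: paste an instrument crop onto a new background and
# blend near the mask boundary to avoid a hard cut-out seam.
# Assumes images are HxWx3 uint8 arrays and the mask is HxW with values in {0, 1}.
import numpy as np
import cv2


def compose_synthetic(instrument_img: np.ndarray,
                      instrument_mask: np.ndarray,
                      background_img: np.ndarray,
                      blur_ksize: int = 15):
    """Blend an instrument onto a background, fusing pixels near the boundary."""
    h, w = background_img.shape[:2]
    # Resize the instrument frame and its binary mask to the background size.
    inst = cv2.resize(instrument_img, (w, h))
    mask = cv2.resize(instrument_mask, (w, h), interpolation=cv2.INTER_NEAREST)

    # Soften the mask edge so the composite is fused near the boundary.
    soft = cv2.GaussianBlur(mask.astype(np.float32), (blur_ksize, blur_ksize), 0)
    soft = np.clip(soft, 0.0, 1.0)[..., None]  # HxWx1 alpha

    # Alpha-blend: instrument pixels inside the mask, background elsewhere.
    synthetic = soft * inst.astype(np.float32) + \
        (1.0 - soft) * background_img.astype(np.float32)
    return synthetic.astype(np.uint8), (mask > 0).astype(np.uint8)


# Usage (hypothetical variable names):
# synthetic_img, synthetic_label = compose_synthetic(inst_frame, inst_mask, bg_frame)
```

The returned image/label pair can then be added to the training set alongside the manually labeled images from each active learning iteration.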
Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Deep Learning Limit: Humans Language: English Journal: Med Image Anal Journal subject: Diagnostic Imaging Year: 2024 Document type: Article