Reducing annotating load: Active learning with synthetic images in surgical instrument segmentation.
Med Image Anal; 97: 103246, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38943835
ABSTRACT
Accurate instrument segmentation in the endoscopic vision of minimally invasive surgery is challenging due to complex instruments and environments. Deep learning techniques have shown competitive performance in recent years. However, deep learning usually requires a large amount of labeled data to achieve accurate prediction, which poses a significant annotation workload. To alleviate this workload, we propose an active learning-based framework to generate synthetic images for efficient neural network training. In each active learning iteration, a small number of informative unlabeled images are first queried by active learning and manually labeled. Next, synthetic images are generated based on these selected images: the instruments and backgrounds are cropped out and randomly recombined, with blending and fusion near the boundary. The proposed method leverages the advantages of both active learning and synthetic images. Its effectiveness is validated on two sinus surgery datasets and one intra-abdominal surgery dataset. The results indicate a considerable performance improvement, especially when the size of the annotated dataset is small. All the code is open-sourced at https://github.com/HaonanPeng/active_syn_generator.
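The crop-and-blend synthesis step described in the abstract can be illustrated with a minimal Python/NumPy sketch. This is not the authors' released implementation (see the repository above); the function name blend_instrument, the feather_sigma parameter, and the use of scipy.ndimage.gaussian_filter to soften the instrument boundary are assumptions chosen only to show the idea of pasting a masked instrument onto a new background with soft fusion near the edge.

# Minimal sketch (not the authors' released code) of the crop-and-blend step:
# an instrument is cut out of a labeled frame with its mask and pasted onto
# another background, with the boundary softened by Gaussian feathering.
# Random placement and other augmentations are omitted for brevity.
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_instrument(background: np.ndarray,
                     instrument_img: np.ndarray,
                     instrument_mask: np.ndarray,
                     feather_sigma: float = 3.0):
    """Paste an instrument onto a background with soft blending near the edge.

    background:      HxWx3 float image in [0, 1]
    instrument_img:  HxWx3 float image containing the cropped instrument
    instrument_mask: HxW binary mask (1 = instrument pixel)
    Returns the synthetic image and its segmentation mask.
    """
    # Feather the binary mask so the transition at the instrument boundary is gradual.
    alpha = gaussian_filter(instrument_mask.astype(np.float32), sigma=feather_sigma)
    alpha = np.clip(alpha, 0.0, 1.0)[..., None]              # HxWx1 soft alpha map

    # Alpha-composite the instrument onto the new background.
    synthetic = alpha * instrument_img + (1.0 - alpha) * background
    synthetic_mask = (alpha[..., 0] > 0.5).astype(np.uint8)  # label stays binary
    return synthetic, synthetic_mask

# Example: combine a randomly paired background and instrument crop.
rng = np.random.default_rng(0)
bg = rng.random((256, 320, 3)).astype(np.float32)
inst = rng.random((256, 320, 3)).astype(np.float32)
mask = np.zeros((256, 320), dtype=np.uint8)
mask[80:160, 100:220] = 1                                    # toy rectangular "instrument"
img, lbl = blend_instrument(bg, inst, mask)

In the full framework, this generation step would be repeated over the images selected by the active-learning query in each iteration, so that each manually labeled frame yields many synthetic training pairs.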
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Main subject: Deep Learning
Limits: Humans
Language: En
Journal: Med Image Anal
Journal subject: Diagnostic Imaging
Year: 2024
Document type: Article