Uncertainty-based Active Learning by Bayesian U-Net for Multi-label Cone-beam CT Segmentation.
Huang, Jiayu; Farpour, Nazbanoo; Yang, Bingjian J; Mupparapu, Muralidhar; Lure, Fleming; Li, Jing; Yan, Hao; Setzer, Frank C.
Affiliation
  • Huang J; School of Computing and Augmented Intelligence, Arizona State University, Tempe, Arizona.
  • Farpour N; Department of Endodontics, University of Pennsylvania, Philadelphia, Pennsylvania.
  • Yang BJ; Department of Endodontics, University of Pennsylvania, Philadelphia, Pennsylvania.
  • Mupparapu M; Department of Oral Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
  • Lure F; MS Technologies Corporation, Rockville, Maryland.
  • Li J; School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia.
  • Yan H; School of Computing and Augmented Intelligence, Arizona State University, Tempe, Arizona.
  • Setzer FC; Department of Endodontics, University of Pennsylvania, Philadelphia, Pennsylvania. Electronic address: fsetzer@upenn.edu.
J Endod; 50(2): 220-228, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37979653
INTRODUCTION: Training of Artificial Intelligence (AI) for biomedical image analysis depends on large annotated datasets. This study assessed the efficacy of Active Learning (AL) strategies for training AI models for accurate multi-label segmentation and detection of periapical lesions in cone-beam CT (CBCT) volumes using a limited dataset.

METHODS: Limited field-of-view CBCT volumes (n = 20) were segmented by clinicians (clinician segmentation [CS]) and by Bayesian U-Net-based AL strategies. Two AL acquisition functions, Bayesian Active Learning by Disagreement (BALD) and Max_Entropy (ME), were used for multi-label segmentation ("Lesion", "Tooth Structure", "Bone", "Restorative Materials", "Background") and compared to a non-AL Bayesian U-Net benchmark. The training-to-testing set ratio was 4:1. The AL and benchmark functions were compared against CS by evaluating segmentation accuracy with the Dice index and lesion detection accuracy. The Kruskal-Wallis test was used to assess statistically significant differences.

RESULTS: The final training set contained 26 images. After 8 AL iterations, lesion detection sensitivity was 84.0% for BALD, 76.0% for ME, and 32.0% for the Bayesian U-Net benchmark, a statistically significant difference (P < .0001; H = 16.989). The mean Dice index across all labels was 0.680 ± 0.155 for BALD and 0.703 ± 0.166 for ME after 8 AL iterations, compared with 0.601 ± 0.267 for the Bayesian U-Net benchmark averaged over all iterations. The Dice index for "Lesion" was 0.504 for BALD and 0.501 for ME after 8 AL iterations, versus a maximum of 0.288 for the Bayesian U-Net benchmark.

CONCLUSIONS: Both AL strategies based on uncertainty quantification from the Bayesian U-Net, BALD and ME, improved segmentation and lesion detection accuracy for CBCT volumes. AL may help reduce the extensive labeling needed to train AI algorithms for biomedical image analysis in dentistry.
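For readers unfamiliar with the two acquisition functions, the following is a minimal sketch (not the authors' implementation) of how Max_Entropy and BALD uncertainty scores are typically computed from Monte Carlo dropout samples of a Bayesian U-Net, together with a Dice index as used for evaluation. The array shapes, helper names, and the toy query step are illustrative assumptions.

```python
# Illustrative sketch only: ME and BALD acquisition scores from MC-dropout
# softmax samples, plus a per-label Dice index. Shapes and names are assumed.
import numpy as np

def predictive_entropy(mc_probs: np.ndarray) -> np.ndarray:
    """Max_Entropy (ME) score: entropy of the mean softmax over T MC-dropout passes.

    mc_probs: shape (T, C, H, W) -- T stochastic forward passes, C class labels.
    Returns a per-pixel uncertainty map of shape (H, W).
    """
    mean_probs = mc_probs.mean(axis=0)                           # (C, H, W)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=0)

def bald_score(mc_probs: np.ndarray) -> np.ndarray:
    """BALD score: mutual information between predictions and model weights.

    BALD = H[E_w p(y|x,w)] - E_w H[p(y|x,w)],
    i.e. predictive entropy minus the expected per-pass entropy.
    """
    expected_entropy = -np.mean(
        np.sum(mc_probs * np.log(mc_probs + 1e-12), axis=1), axis=0
    )                                                            # (H, W)
    return predictive_entropy(mc_probs) - expected_entropy

def dice_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice index between a binary predicted mask and the clinician mask."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

if __name__ == "__main__":
    # Toy active-learning query: score each unlabeled volume by its mean
    # per-pixel BALD uncertainty and select the most uncertain one for labeling.
    rng = np.random.default_rng(0)
    T, C, H, W = 10, 5, 64, 64   # 5 labels: Lesion, Tooth Structure, Bone,
                                 # Restorative Materials, Background
    unlabeled = [
        rng.dirichlet(np.ones(C), size=(T, H, W)).transpose(0, 3, 1, 2)
        for _ in range(3)
    ]
    scores = [bald_score(v).mean() for v in unlabeled]
    print("query volume index:", int(np.argmax(scores)))
```

In such a setup, the image (or volume) with the highest aggregated ME or BALD score would be passed to a clinician for annotation and added to the training set at each AL iteration.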
Full text: 1 Database: MEDLINE Main subject: Algorithms / Artificial Intelligence Language: English Journal: J Endod Year: 2024 Document type: Article
