Synergizing Deep Learning-Enabled Preprocessing and Human-AI Integration for Efficient Automatic Ground Truth Generation.
Collazo, Christopher; Vargas, Ian; Cara, Brendon; Weinheimer, Carla J; Grabau, Ryan P; Goldgof, Dmitry; Hall, Lawrence; Wickline, Samuel A; Pan, Hua.
Affiliations
  • Collazo C; College of Engineering, University of South Florida, Tampa, FL 33620, USA.
  • Vargas I; The Heart Institute, College of Medicine, University of South Florida, Tampa, FL 33602, USA.
  • Cara B; The Heart Institute, College of Medicine, University of South Florida, Tampa, FL 33602, USA.
  • Weinheimer CJ; Department of Medicine, Washington University in St. Louis, St. Louis, MO 63110, USA.
  • Grabau RP; The Heart Institute, College of Medicine, University of South Florida, Tampa, FL 33602, USA.
  • Goldgof D; College of Engineering, University of South Florida, Tampa, FL 33620, USA.
  • Hall L; College of Engineering, University of South Florida, Tampa, FL 33620, USA.
  • Wickline SA; The Heart Institute, College of Medicine, University of South Florida, Tampa, FL 33602, USA.
  • Pan H; Department of Medicine, Washington University in St. Louis, St. Louis, MO 63110, USA.
Bioengineering (Basel) ; 11(5)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38790302
ABSTRACT
The incorporation of deep learning into medical image interpretation has been greatly hindered by the tremendous cost and time required to generate ground truth for supervised machine learning, alongside concerns about the inconsistent quality of the acquired images. Active learning offers a potential solution to the problem of expanding dataset ground truth by algorithmically selecting the most informative samples for ground truth labeling. Still, this effort incurs human labeling costs, which need to be minimized. Furthermore, automatic labeling approaches that employ active learning often exhibit overfitting tendencies: they select samples closely aligned with the training set distribution and exclude out-of-distribution samples that could potentially improve the model's effectiveness. We propose that the majority of out-of-distribution instances can be attributed to inconsistencies across images. Since the FDA approved the first whole-slide image system for medical diagnosis in 2017, whole-slide images have provided enriched, critical information to advance the field of automated histopathology. Here, we exemplify the benefits of a novel deep learning strategy that utilizes high-resolution whole-slide microscopic images. We quantitatively assess and visually highlight the inconsistencies within the whole-slide image dataset employed in this study. Accordingly, we introduce a deep learning-based preprocessing algorithm designed to normalize unknown samples to the training set distribution, effectively mitigating the overfitting issue. Consequently, our approach significantly increases the amount of automatic region-of-interest ground truth labeling on high-resolution whole-slide images using active deep learning. We accept 92% of the automatic labels generated for our unlabeled data cohort, expanding the labeled dataset by 845%. Additionally, we demonstrate expert time savings of 96% relative to manual expert ground-truth labeling.
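The abstract describes a pipeline in which unlabeled whole-slide image regions are first normalized toward the training set distribution and then labeled automatically, with only low-confidence samples routed to an expert. The sketch below illustrates that workflow under stated assumptions only: the function names, the intensity-matching normalization (a simple stand-in for the paper's deep learning-based preprocessing), the random "model confidence" placeholder, and the 0.9 acceptance threshold are all illustrative and not taken from the authors' code.

```python
# Illustrative sketch (not the authors' implementation) of the labeling loop:
# normalize unlabeled tiles toward the training-set distribution, accept
# high-confidence automatic labels, and queue the rest for expert review.
import numpy as np

rng = np.random.default_rng(0)

def normalize_to_training(tile: np.ndarray,
                          train_mean: float,
                          train_std: float) -> np.ndarray:
    """Shift/scale a tile's intensities to match the training distribution
    (a simple stand-in for the paper's deep learning-based preprocessing)."""
    return (tile - tile.mean()) / (tile.std() + 1e-8) * train_std + train_mean

def model_confidence(tile: np.ndarray) -> float:
    """Placeholder for a trained segmentation model's per-tile confidence."""
    return float(rng.uniform(0.5, 1.0))

# Unlabeled cohort: random tiles standing in for whole-slide image regions.
unlabeled = [rng.normal(loc=120, scale=40, size=(256, 256)) for _ in range(100)]
train_mean, train_std = 128.0, 30.0   # statistics of the labeled training set
threshold = 0.9                       # assumed acceptance cutoff

auto_labeled, expert_queue = [], []
for tile in unlabeled:
    tile = normalize_to_training(tile, train_mean, train_std)
    if model_confidence(tile) >= threshold:
        auto_labeled.append(tile)     # accept the automatic label
    else:
        expert_queue.append(tile)     # most informative samples go to the expert

print(f"accepted {len(auto_labeled)}/{len(unlabeled)} automatic labels; "
      f"{len(expert_queue)} tiles sent for expert annotation")
```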
Keywords

Full text: 1 Database: MEDLINE Language: English Journal: Bioengineering (Basel) Year: 2024 Document type: Article