Few-shot Tumor Bud Segmentation Using Generative Model in Colorectal Carcinoma.
Su, Ziyu; Chen, Wei; Leigh, Preston J; Sajjad, Usama; Niu, Shuo; Rezapour, Mostafa; Frankel, Wendy L; Gurcan, Metin N; Niazi, M Khalid Khan.
Affiliation
  • Su Z; Center for Artificial Intelligence Research, Wake Forest University, School of Medicine, Winston-Salem, NC, USA.
  • Chen W; Department of Pathology, The Ohio State University.
  • Leigh PJ; Department of Biomedical Engineering, The University of Arizona.
  • Sajjad U; Center for Artificial Intelligence Research, Wake Forest University, School of Medicine, Winston-Salem, NC, USA.
  • Niu S; Department of Pathology, Wake Forest University, School of Medicine, Winston-Salem, NC, USA.
  • Rezapour M; Center for Artificial Intelligence Research, Wake Forest University, School of Medicine, Winston-Salem, NC, USA.
  • Frankel WL; Department of Pathology, The Ohio State University.
  • Gurcan MN; Center for Artificial Intelligence Research, Wake Forest University, School of Medicine, Winston-Salem, NC, USA.
  • Niazi MKK; Center for Artificial Intelligence Research, Wake Forest University, School of Medicine, Winston-Salem, NC, USA.
Article in English | MEDLINE | ID: mdl-38756441
ABSTRACT
Current deep learning methods in histopathology are limited by the scarcity of available data and the time-consuming nature of data labeling. Colorectal cancer (CRC) tumor budding quantification, performed on H&E-stained slides, is crucial for cancer staging and prognosis but is subject to labor-intensive annotation and human bias. Thus, acquiring a large-scale, fully annotated dataset for training a tumor budding (TB) segmentation/detection system is difficult. Here, we present a DatasetGAN-based approach that can generate an essentially unlimited number of images with TB masks from a moderate number of unlabeled images and a few annotated images. The images generated by our model closely resemble real colon tissue on H&E-stained slides. We test the performance of this model by training a downstream segmentation model, UNet++, on the generated images and masks. Our results show that the trained UNet++ model achieves reasonable TB segmentation performance, especially at the instance level. This study demonstrates the potential of developing an annotation-efficient segmentation model for automatic TB detection and quantification.
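The abstract describes training a downstream UNet++ model on the generated image/mask pairs. As an illustration only, the following is a minimal sketch of that downstream step, assuming synthetic H&E-like patches and binary TB masks have already been produced by a DatasetGAN-style generator and saved to disk; the file layout, hyperparameters, and use of the segmentation_models_pytorch library are assumptions, not details from the paper.

```python
# Hedged sketch: train UNet++ on synthetic image/mask pairs
# (hypothetical layout: synthetic/images/*.npy, synthetic/masks/*.npy).
import glob
import os

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
import segmentation_models_pytorch as smp


class SyntheticTBDataset(Dataset):
    """Loads pre-generated RGB patches and binary tumor-bud masks stored as .npy files."""

    def __init__(self, image_dir, mask_dir):
        self.image_paths = sorted(glob.glob(os.path.join(image_dir, "*.npy")))
        self.mask_paths = sorted(glob.glob(os.path.join(mask_dir, "*.npy")))

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = np.load(self.image_paths[idx]).astype(np.float32) / 255.0  # HxWx3
        mask = np.load(self.mask_paths[idx]).astype(np.float32)            # HxW
        image = torch.from_numpy(image).permute(2, 0, 1)  # -> 3xHxW
        mask = torch.from_numpy(mask).unsqueeze(0)        # -> 1xHxW
        return image, mask


def train(image_dir="synthetic/images", mask_dir="synthetic/masks",
          epochs=10, lr=1e-4, batch_size=8):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    loader = DataLoader(SyntheticTBDataset(image_dir, mask_dir),
                        batch_size=batch_size, shuffle=True)

    # UNet++ with a standard ImageNet-pretrained encoder; a single output
    # channel for the binary tumor-bud mask.
    model = smp.UnetPlusPlus(encoder_name="resnet34",
                             encoder_weights="imagenet",
                             in_channels=3, classes=1).to(device)
    loss_fn = smp.losses.DiceLoss(mode="binary")
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(epochs):
        model.train()
        running = 0.0
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optimizer.step()
            running += loss.item()
        print(f"epoch {epoch + 1}: mean dice loss {running / len(loader):.4f}")
    return model


if __name__ == "__main__":
    train()
```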
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Proc SPIE Int Soc Opt Eng Year: 2024 Document type: Article Country of affiliation: United States