Deep Interactive Learning-based ovarian cancer segmentation of H&E-stained whole slide images to study morphological patterns of BRCA mutation.
Ho, David Joon; Chui, M Herman; Vanderbilt, Chad M; Jung, Jiwon; Robson, Mark E; Park, Chan-Sik; Roh, Jin; Fuchs, Thomas J.
Affiliation
  • Ho DJ; Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
  • Chui MH; Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
  • Vanderbilt CM; Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
  • Jung J; Department of Pathology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea.
  • Robson ME; Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
  • Park CS; Department of Pathology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea.
  • Roh J; Department of Pathology, Ajou University School of Medicine, Suwon, Republic of Korea.
  • Fuchs TJ; Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
J Pathol Inform ; 14: 100160, 2023.
Article in English | MEDLINE | ID: mdl-36536772
Deep learning has been widely used to analyze digitized hematoxylin and eosin (H&E)-stained histopathology whole slide images. Automated cancer segmentation using deep learning can be used to diagnose malignancy and to find novel morphological patterns that predict molecular subtypes. To train pixel-wise cancer segmentation models, manual annotation by pathologists is generally a bottleneck due to its time-consuming nature. In this paper, we propose Deep Interactive Learning with a segmentation model pretrained on a different cancer type to reduce manual annotation time. Instead of annotating all pixels of cancer and non-cancer regions on giga-pixel whole slide images, an iterative process of annotating regions mislabeled by the segmentation model and training/fine-tuning the model with the additional annotation reduces annotation time. In particular, starting from a pretrained segmentation model reduces the time further than annotating from scratch. Using a pretrained breast cancer segmentation model and 3.5 hours of manual annotation, we trained an ovarian cancer segmentation model that achieved an intersection-over-union of 0.74, a recall of 0.86, and a precision of 0.84. With automatically extracted high-grade serous ovarian cancer patches, we then attempted to train an additional deep learning classification model to predict BRCA mutation. The segmentation model and code have been released at https://github.com/MSKCC-Computational-Pathology/DMMN-ovary.
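For reference, the reported evaluation metrics (intersection-over-union, recall, and precision) can be computed pixel-wise from a predicted cancer mask and a pathologist-annotated ground-truth mask. The sketch below is a minimal illustration assuming binary NumPy masks; the function name and example data are illustrative and are not taken from the released repository.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray):
    """Compute pixel-wise IoU, recall, and precision for binary cancer masks.

    pred, target: arrays of the same shape; nonzero/True marks pixels
    predicted (pred) or annotated (target) as cancer.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()    # true-positive pixels
    fp = np.logical_and(pred, ~target).sum()   # false-positive pixels
    fn = np.logical_and(~pred, target).sum()   # false-negative pixels

    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return iou, recall, precision

# Illustrative usage with random masks (a real evaluation would use the
# model's predicted mask and the pathologist's annotation).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.random((512, 512)) > 0.5
    pred = rng.random((512, 512)) > 0.5
    iou, recall, precision = segmentation_metrics(pred, target)
    print(f"IoU={iou:.2f} recall={recall:.2f} precision={precision:.2f}")
```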
Keywords

Full text: 1 Database: MEDLINE Study type: Guideline / Prognostic_studies Language: English Journal: J Pathol Inform Year of publication: 2023 Document type: Article Country of affiliation: United States
