Explainable nucleus classification using Decision Tree Approximation of Learned Embeddings.
Amgad, Mohamed; Atteya, Lamees A; Hussein, Hagar; Mohammed, Kareem Hosny; Hafiz, Ehab; Elsebaie, Maha A T; Mobadersany, Pooya; Manthey, David; Gutman, David A; Elfandy, Habiba; Cooper, Lee A D.
Affiliation
  • Amgad M; Department of Pathology, Northwestern University, Chicago, IL, USA.
  • Atteya LA; Egyptian Ministry of Health, Cairo, Egypt.
  • Hussein H; Department of Pathology, Nasser Institute for Research and Treatment, Cairo, Egypt.
  • Mohammed KH; Department of Pathology and Laboratory Medicine, University of Pennsylvania, Philadelphia, PA, USA.
  • Hafiz E; Department of Clinical Laboratory Research, Theodor Bilharz Research Institute, Giza, Egypt.
  • Elsebaie MAT; Department of Medicine, Cook County Hospital, Chicago, IL, USA.
  • Mobadersany P; Department of Pathology, Northwestern University, Chicago, IL, USA.
  • Manthey D; Kitware Inc., Clifton Park, NY, USA.
  • Gutman DA; Department of Neurology, Emory University, Atlanta, GA, USA.
  • Elfandy H; Department of Pathology, National Cancer Institute, Cairo, Egypt.
  • Cooper LAD; Department of Pathology, Northwestern University, Chicago, IL, USA.
Bioinformatics; 38(2): 513-519, 2022 Jan 03.
Article in English | MEDLINE | ID: mdl-34586355
MOTIVATION: Nucleus detection, segmentation and classification are fundamental to high-resolution mapping of the tumor microenvironment using whole-slide histopathology images. The growing interest in leveraging deep learning to achieve state-of-the-art performance often comes at the cost of explainability, yet there is broad consensus that explainability is critical for trustworthiness and widespread clinical adoption. Unfortunately, current explainability paradigms that rely on pixel saliency heatmaps or superpixel importance scores are not well suited to nucleus classification: techniques like Grad-CAM or LIME provide explanations that are indirect, qualitative and/or nonintuitive to pathologists.

RESULTS: In this article, we present techniques that enable scalable nucleus detection, segmentation and explainable classification. First, we show how modifications to the widely used Mask R-CNN architecture, including decoupling the detection and classification tasks, improve accuracy and enable learning from hybrid annotation datasets like NuCLS, which contain mixtures of bounding boxes and segmentation boundaries. Second, we introduce an explainability method called Decision Tree Approximation of Learned Embeddings (DTALE), which provides explanations of classification model behavior both globally and for individual nuclear predictions. DTALE explanations are simple and quantitative, and can flexibly use any measurable morphological features that make sense to practicing pathologists, without sacrificing model accuracy. Together, these techniques are a step toward realizing the promise of computational pathology for computer-aided diagnosis and the discovery of morphologic biomarkers.

AVAILABILITY AND IMPLEMENTATION: Relevant code can be found at github.com/CancerDataScience/NuCLS.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
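The surrogate-model idea underlying DTALE, fitting an interpretable decision tree on pathologist-measurable morphological features so that it mimics a deep classifier's predictions, can be illustrated with a short sketch. The following is a minimal, hypothetical Python example using scikit-learn and synthetic data; the feature names and the stand-in blackbox_predictions array are assumptions for illustration, not the authors' actual DTALE implementation (see the NuCLS repository for that).

# Minimal sketch of a surrogate-tree explanation, assuming synthetic data.
# A shallow decision tree is fit on interpretable morphological features
# to approximate the class predictions of a black-box nucleus classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n_nuclei = 1000

# Hypothetical interpretable morphological features per nucleus.
feature_names = ["area", "circularity", "solidity", "mean_intensity"]
X = rng.random((n_nuclei, len(feature_names)))

# Stand-in for the deep model's predicted classes (e.g. tumor / stromal /
# lymphocyte); in practice these would come from the trained classifier.
blackbox_predictions = rng.integers(0, 3, size=n_nuclei)

# Fit a shallow tree to mimic the black-box model's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, blackbox_predictions)

# Fidelity: how often the surrogate agrees with the black box. High fidelity
# means the tree's simple rules are a faithful global explanation.
fidelity = (surrogate.predict(X) == blackbox_predictions).mean()
print(f"surrogate fidelity: {fidelity:.3f}")

# Human-readable decision rules over the morphological features; the path
# taken by a single nucleus serves as an explanation of that prediction.
print(export_text(surrogate, feature_names=feature_names))

Fidelity against the deep model, rather than accuracy against ground truth, is the relevant quality measure here: the tree explains the classifier, and its decision path for one nucleus yields a per-prediction explanation in terms pathologists already use.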
Subject(s)
  • Cell Nucleus
Full text: 1 | Collection: 01-internacional | Database: MEDLINE | Main subject: Cell Nucleus | Type of study: Clinical_trials / Health_economic_evaluation / Prognostic_studies / Qualitative_research | Language: English | Journal: Bioinformatics | Journal subject: Medical Informatics | Year: 2022 | Document type: Article
