One model is all you need: Multi-task learning enables simultaneous histology image segmentation and classification.
Graham, Simon; Vu, Quoc Dang; Jahanifar, Mostafa; Raza, Shan E Ahmed; Minhas, Fayyaz; Snead, David; Rajpoot, Nasir.
Affiliation
  • Graham S; Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom. Electronic address: simon.graham@warwick.ac.uk.
  • Vu QD; Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom.
  • Jahanifar M; Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom.
  • Raza SEA; Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom.
  • Minhas F; Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom.
  • Snead D; Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom.
  • Rajpoot N; Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom.
Med Image Anal; 83: 102685, 2023 Jan.
Article in En | MEDLINE | ID: mdl-36410209
ABSTRACT
The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly when adapted to an increasing number of different tasks. In addition, supervised deep learning models are data hungry and rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for the segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. By ensuring that the tasks are aligned by tissue type and resolution, a single network can make meaningful simultaneous predictions. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our Cerberus model on a large dataset consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million nuclei, 900 thousand glands and 2.1 million lumina. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.
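The following is a minimal PyTorch sketch of the shared-backbone, multi-head idea summarised in the abstract: one encoder feeds several task-specific heads so that nuclei, gland and lumen segmentation and patch-level tissue classification are predicted in a single forward pass. It is illustrative only and does not reproduce the authors' Cerberus architecture; all class names, channel sizes and head counts are assumptions chosen for brevity.

# Illustrative sketch (not the authors' Cerberus code): a single shared encoder
# feeds several task-specific heads, so segmentation and classification tasks
# are predicted simultaneously by one network.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Small convolutional backbone shared by all tasks (stand-in for a real backbone)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.features(x)

class SegmentationHead(nn.Module):
    """Per-task head producing a pixel-wise map (e.g. nuclei, glands or lumina)."""
    def __init__(self, feat_ch=64, n_classes=2):
        super().__init__()
        self.head = nn.Conv2d(feat_ch, n_classes, 1)
    def forward(self, feats):
        return self.head(feats)

class ClassificationHead(nn.Module):
    """Patch-level classifier (e.g. tissue type) on pooled shared features."""
    def __init__(self, feat_ch=64, n_classes=5):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(feat_ch, n_classes)
    def forward(self, feats):
        return self.fc(self.pool(feats).flatten(1))

class MultiTaskModel(nn.Module):
    """One network, several outputs: the multi-task setup described in the abstract."""
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        self.heads = nn.ModuleDict({
            "nuclei": SegmentationHead(n_classes=2),
            "glands": SegmentationHead(n_classes=2),
            "lumina": SegmentationHead(n_classes=2),
            "tissue": ClassificationHead(n_classes=5),
        })
    def forward(self, x):
        feats = self.encoder(x)
        # Every head sees the same shared features, so representations are learned jointly.
        return {name: head(feats) for name, head in self.heads.items()}

if __name__ == "__main__":
    model = MultiTaskModel()
    out = model(torch.randn(1, 3, 256, 256))  # one patch, four simultaneous predictions
    print({k: tuple(v.shape) for k, v in out.items()})

Because all heads share the encoder, the learned representation can in principle be reused or fine-tuned for further tasks (such as the nuclear classification and signet ring cell detection mentioned above), which is the transfer-learning benefit the paper reports.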

Full text: 1 Database: MEDLINE Main subject: Biomedical Research Study type: Prognostic studies Limit: Humans Language: En Journal: Med Image Anal Journal subject: Diagnostic Imaging Year of publication: 2023 Document type: Article
