Cluster-based histopathology phenotype representation learning by self-supervised multi-class-token hierarchical ViT.
Ye, Jiarong; Kalra, Shivam; Miri, Mohammad Saleh.
Affiliation
  • Ye J; Roche Diagnostics Solutions, Santa Clara, CA, USA.
  • Kalra S; Roche Diagnostics Solutions, Santa Clara, CA, USA. Shivam.Kalra@roche.com.
  • Miri MS; Roche Diagnostics Solutions, Santa Clara, CA, USA.
Sci Rep; 14(1): 3202, 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38331955
ABSTRACT
Developing a clinical AI model requires a large, highly curated dataset carefully annotated by multiple medical experts, which increases development time and cost. Self-supervised learning (SSL) enables AI models to leverage unlabelled data to acquire domain-specific background knowledge that can enhance their performance on various downstream tasks. In this work, we introduce CypherViT, a cluster-based histopathology phenotype representation learning approach built on a self-supervised multi-class-token hierarchical Vision Transformer (ViT). CypherViT is a novel backbone that can be integrated into an SSL pipeline, accommodating both coarse- and fine-grained feature learning for histopathological images via a hierarchical feature agglomerative attention module with multiple classification (cls) tokens in the ViT. Our qualitative analysis shows that the approach learns semantically meaningful regions of interest that align with morphological phenotypes. To validate the model, we use the DINO SSL framework to train CypherViT on a substantial dataset of unlabeled breast cancer histopathological images. The trained model proves to be a generalizable and robust feature extractor for colorectal cancer images, demonstrating promising performance in patch-level tissue phenotyping tasks across four public datasets. Our quantitative experiments highlight significant advantages over existing state-of-the-art SSL models and traditional transfer learning baselines, such as those relying on ImageNet pre-training.
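The abstract describes a ViT backbone that prepends multiple classification (cls) tokens to the patch sequence so that different tokens can summarize different coarse- or fine-grained phenotype clusters. The paper text does not include implementation details, so the following is only a minimal NumPy sketch of the multi-cls-token idea: several cls tokens and the patch tokens jointly attend in one self-attention pass, and each cls token's output serves as a separate summary vector. All names (`multi_cls_attention`, `num_cls`) and the random stand-ins for learned weights are hypothetical, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_cls_attention(patch_tokens, num_cls=4, seed=0):
    """Prepend several cls tokens to the patch sequence and run one
    self-attention pass; each cls token can specialize to a different
    coarse- or fine-grained summary of the patches.

    patch_tokens: (n_patches, dim) array of patch embeddings.
    Returns (cls_summaries, updated_patch_tokens).
    """
    rng = np.random.default_rng(seed)
    n, d = patch_tokens.shape
    # Stand-ins for learned parameters (random here, trained in practice).
    cls_tokens = rng.standard_normal((num_cls, d)) * 0.02
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

    # Joint sequence: cls tokens first, then patch tokens.
    x = np.concatenate([cls_tokens, patch_tokens], axis=0)  # (num_cls + n, d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))  # every token attends to all tokens
    out = attn @ v

    return out[:num_cls], out[num_cls:]

# Usage: 196 patch embeddings of dimension 64, summarized by 4 cls tokens.
patches = np.random.default_rng(1).standard_normal((196, 64))
cls_summaries, updated_patches = multi_cls_attention(patches, num_cls=4)
```

In a trained model, each of the `num_cls` summary vectors would be a learned parameter optimized under the SSL objective; the sketch only shows how the extra tokens flow through attention alongside the patches.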
Subject(s)

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Electric Power Supplies / Self-Management Study type: Qualitative_research Limit: Humans Language: English Journal: Sci Rep Year: 2024 Document type: Article Affiliation country: United States