A visual-language foundation model for computational pathology.
Lu, Ming Y; Chen, Bowen; Williamson, Drew F K; Chen, Richard J; Liang, Ivy; Ding, Tong; Jaume, Guillaume; Odintsov, Igor; Le, Long Phi; Gerber, Georg; Parwani, Anil V; Zhang, Andrew; Mahmood, Faisal.
Affiliation
  • Lu MY; Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Chen B; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
  • Williamson DFK; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA.
  • Chen RJ; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA.
  • Liang I; Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA.
  • Ding T; Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Jaume G; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
  • Odintsov I; Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Le LP; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
  • Gerber G; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA.
  • Parwani AV; Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Zhang A; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
  • Mahmood F; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA.
Nat Med ; 30(3): 863-874, 2024 Mar.
Article in En | MEDLINE | ID: mdl-38504017
ABSTRACT
The accelerated adoption of digital pathology and advances in deep learning have enabled the development of robust models for various pathology tasks across a diverse array of diseases and patient cohorts. However, model training is often difficult due to label scarcity in the medical domain, and a model's usage is limited by the specific task and disease for which it is trained. Additionally, most models in histopathology leverage only image data, a stark contrast to how humans teach each other and reason about histopathologic entities. We introduce CONtrastive learning from Captions for Histopathology (CONCH), a visual-language foundation model developed using diverse sources of histopathology images, biomedical text and, notably, over 1.17 million image-caption pairs through task-agnostic pretraining. Evaluated on a suite of 14 diverse benchmarks, CONCH can be transferred to a wide range of downstream tasks involving histopathology images and/or text, achieving state-of-the-art performance on histology image classification, segmentation, captioning, and text-to-image and image-to-text retrieval. CONCH represents a substantial leap over concurrent visual-language pretrained systems for histopathology, with the potential to directly facilitate a wide array of machine learning-based workflows requiring minimal or no further supervised fine-tuning.
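The pretraining objective described above (contrastive learning over image-caption pairs) can be illustrated with a minimal sketch of a CLIP-style symmetric contrastive loss. This is not the authors' implementation; the function, batch construction, and temperature value are illustrative assumptions, shown here in plain NumPy on toy embeddings rather than real histopathology data.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, txt_emb: (N, D) arrays; row i of each is a matched pair.
    """
    # L2-normalize so the dot product becomes cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # Row-wise softmax cross-entropy; the matched pair (diagonal)
        # is the target class for each row.
        l = l - l.max(axis=1, keepdims=True)  # numeric stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
# Toy batch: 4 matched image/text embedding pairs in 8 dimensions.
imgs = rng.normal(size=(4, 8))
texts = imgs + 0.01 * rng.normal(size=(4, 8))  # nearly aligned pairs
loss_aligned = contrastive_loss(imgs, texts)
loss_random = contrastive_loss(imgs, rng.normal(size=(4, 8)))
```

Minimizing this loss pulls each image embedding toward its own caption and away from the other captions in the batch, which is what makes zero-shot transfer (classification and cross-modal retrieval via text prompts) possible downstream; aligned pairs should therefore score a lower loss than mismatched ones.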
Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Machine Learning / Language Limits: Humans Language: En Journal: Nat Med Journal subject: BIOLOGIA MOLECULAR / MEDICINA Year: 2024 Document type: Article Affiliation country: