Merlin: A Vision Language Foundation Model for 3D Computed Tomography.
Blankemeier, Louis; Cohen, Joseph Paul; Kumar, Ashwin; Van Veen, Dave; Gardezi, Syed Jamal Safdar; Paschali, Magdalini; Chen, Zhihong; Delbrouck, Jean-Benoit; Reis, Eduardo; Truyts, Cesar; Bluethgen, Christian; Jensen, Malte Engmann Kjeldskov; Ostmeier, Sophie; Varma, Maya; Valanarasu, Jeya Maria Jose; Fang, Zhongnan; Huo, Zepeng; Nabulsi, Zaid; Ardila, Diego; Weng, Wei-Hung; Amaro, Edson; Ahuja, Neera; Fries, Jason; Shah, Nigam H; Johnston, Andrew; Boutin, Robert D; Wentland, Andrew; Langlotz, Curtis P; Hom, Jason; Gatidis, Sergios; Chaudhari, Akshay S.
Affiliation
  • Blankemeier L; Department of Electrical Engineering, Stanford University.
  • Cohen JP; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Kumar A; Department of Radiology, Stanford University.
  • Van Veen D; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Gardezi SJS; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Paschali M; Department of Radiology, Stanford University.
  • Chen Z; Department of Electrical Engineering, Stanford University.
  • Delbrouck JB; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Reis E; Department of Radiology, Stanford University.
  • Truyts C; Department of Radiology, University of Wisconsin-Madison.
  • Bluethgen C; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Jensen MEK; Department of Radiology, Stanford University.
  • Ostmeier S; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Varma M; Department of Radiology, Stanford University.
  • Valanarasu JMJ; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Fang Z; Department of Radiology, Stanford University.
  • Huo Z; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Nabulsi Z; Department of Radiology, Stanford University.
  • Ardila D; Department of Radiology, Hospital Israelita Albert Einstein.
  • Weng WH; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Amaro E; Department of Radiology, University Hospital Zurich.
  • Ahuja N; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Fries J; Department of Radiology, Stanford University.
  • Shah NH; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Johnston A; Department of Radiology, Stanford University.
  • Boutin RD; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Wentland A; Department of Radiology, Stanford University.
  • Langlotz CP; Department of Computer Science, Stanford University.
  • Hom J; Stanford Center for Artificial Intelligence in Medicine and Imaging, Stanford University.
  • Gatidis S; Department of Radiology, Stanford University.
  • Chaudhari AS; Department of Computer Science, Stanford University.
Res Sq ; 2024 Jun 28.
Article in En | MEDLINE | ID: mdl-38978576
ABSTRACT
Over 85 million computed tomography (CT) scans are performed annually in the US, of which approximately one quarter focus on the abdomen. Given the current shortage of both general and specialized radiologists, there is a strong impetus to use artificial intelligence to alleviate the burden of interpreting these complex imaging studies while simultaneously using the images to extract novel physiological insights. Prior state-of-the-art approaches for automated medical image interpretation leverage vision-language models (VLMs) that utilize both the image and the corresponding textual radiology report. However, current medical VLMs are generally limited to 2D images and short reports. To overcome these shortcomings for abdominal CT interpretation, we introduce Merlin, a 3D VLM that leverages both structured electronic health records (EHR) and unstructured radiology reports for pretraining, without requiring additional manual annotations. We train Merlin on a high-quality clinical dataset of paired CT scans (6+ million images from 15,331 CTs), EHR diagnosis codes (1.8+ million codes), and radiology reports (6+ million tokens). We comprehensively evaluate Merlin on 6 task types and 752 individual tasks. The non-adapted (off-the-shelf) tasks include zero-shot findings classification (31 findings), phenotype classification (692 phenotypes), and zero-shot cross-modal retrieval (image to findings and image to impressions), while model-adapted tasks include 5-year chronic disease prediction (6 diseases), radiology report generation, and 3D semantic segmentation (20 organs). We perform internal validation on a test set of 5,137 CTs, and external validation on 7,000 clinical CTs and on two public CT datasets (VerSe, TotalSegmentator). Beyond these clinically relevant evaluations, we assess the efficacy of various network architectures and training strategies to demonstrate that Merlin achieves favorable performance compared to existing task-specific baselines.
We derive data scaling laws to empirically assess training data needs for requisite downstream task performance. Furthermore, unlike conventional VLMs that require hundreds of GPUs for training, we perform all training on a single GPU. This computationally efficient design can help democratize foundation model training, especially for health systems with compute constraints. We plan to release our trained models, code, and dataset, pending manual removal of all protected health information.
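The abstract describes pretraining a 3D vision encoder against paired radiology report text, and zero-shot findings classification and cross-modal retrieval imply embeddings of images and text that are scored in a shared space. The abstract does not state the training objective, so the following is only an illustrative sketch of a CLIP-style bidirectional contrastive loss, the standard objective for this class of VLM, written in plain NumPy with hypothetical embedding inputs; it is not the authors' implementation:

```python
import numpy as np

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE loss for a batch of paired embeddings.

    image_emb, text_emb: (n, d) arrays where row i of each is a matched
    CT-volume / report pair. Matched pairs sit on the diagonal of the
    similarity matrix and are treated as the positive class.
    """
    # L2-normalize so similarities are cosine similarities
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (n, n) pairwise similarities
    n = logits.shape[0]

    def cross_entropy_diag(l):
        # log-softmax over each row; the target for row i is column i
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(n), np.arange(n)].mean()

    # symmetric: image-to-text and text-to-image directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

Under this kind of objective, zero-shot findings classification reduces to embedding each candidate finding as text and ranking findings by cosine similarity to the CT embedding, which is consistent with the off-the-shelf tasks the abstract lists.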

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Res Sq Publication year: 2024 Document type: Article
