Universal and extensible language-vision models for organ segmentation and tumor detection from abdominal computed tomography.
Liu, Jie; Zhang, Yixiao; Wang, Kang; Yavuz, Mehmet Can; Chen, Xiaoxi; Yuan, Yixuan; Li, Haoliang; Yang, Yang; Yuille, Alan; Tang, Yucheng; Zhou, Zongwei.
Affiliation
  • Liu J; City University of Hong Kong, Hong Kong.
  • Zhang Y; Johns Hopkins University, United States of America.
  • Wang K; University of California, San Francisco, United States of America.
  • Yavuz MC; University of California, San Francisco, United States of America.
  • Chen X; University of Illinois Urbana-Champaign, United States of America.
  • Yuan Y; Chinese University of Hong Kong, Hong Kong.
  • Li H; City University of Hong Kong, Hong Kong.
  • Yang Y; University of California, San Francisco, United States of America.
  • Yuille A; Johns Hopkins University, United States of America.
  • Tang Y; NVIDIA, United States of America. Electronic address: yuchengt@nvidia.com.
  • Zhou Z; Johns Hopkins University, United States of America. Electronic address: zzhou82@jh.edu.
Med Image Anal; 97: 103226, 2024 Jun 04.
Article in En | MEDLINE | ID: mdl-38852215
ABSTRACT
The advancement of artificial intelligence (AI) for organ segmentation and tumor detection is propelled by the growing availability of computed tomography (CT) datasets with detailed, per-voxel annotations. However, these AI models often lack flexibility for partially annotated datasets and extensibility to new classes, due to limitations of one-hot encoding, architectural design, and learning schemes. To overcome these limitations, we propose a universal, extensible framework enabling a single model, termed Universal Model, to deal with multiple public datasets and adapt to new classes (e.g., organs/tumors). First, we introduce a novel language-driven parameter generator that leverages language embeddings from large language models, providing richer semantic encoding than one-hot encoding. Second, the conventional output layers are replaced with lightweight, class-specific heads, allowing Universal Model to simultaneously segment 25 organs and six types of tumors and easing the addition of new classes. We train Universal Model on 3410 CT volumes assembled from 14 publicly available datasets and then test it on 6173 CT volumes from four external datasets. Universal Model achieves first place on six CT tasks in the Medical Segmentation Decathlon (MSD) public leaderboard and leading performance on the Beyond The Cranial Vault (BTCV) dataset. In summary, Universal Model exhibits remarkable computational efficiency (6× faster than other dataset-specific models), demonstrates strong generalization across different hospitals, transfers well to numerous downstream tasks, and, more importantly, facilitates extensibility to new classes while alleviating catastrophic forgetting of previously learned classes. Codes, models, and datasets are available at https://github.com/ljwztc/CLIP-Driven-Universal-Model.
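To make the two mechanisms in the abstract concrete, below is a minimal PyTorch sketch of a language-driven parameter generator feeding lightweight class-specific heads. It is an illustration under stated assumptions, not the authors' implementation: the class name, layer sizes, and the structure of the parameter generator are invented for clarity; consult the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class LanguageDrivenHeads(nn.Module):
    """Sketch (assumed design, not the paper's exact code): for each class,
    a text embedding from a pretrained language-vision model is combined
    with a global image feature and mapped to the weights of a lightweight
    per-class 1x1x1 conv head, replacing a fixed one-hot output layer."""

    def __init__(self, text_dim=512, image_dim=256, feat_dim=48):
        super().__init__()
        self.feat_dim = feat_dim
        # Parameter generator: (text embedding, global image feature) ->
        # kernel weights + bias for one class-specific head.
        self.param_gen = nn.Sequential(
            nn.Linear(text_dim + image_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, feat_dim + 1),
        )

    def forward(self, decoder_feats, global_feat, text_embeds):
        # decoder_feats: (B, feat_dim, D, H, W) from the segmentation decoder
        # global_feat:   (B, image_dim) pooled image representation
        # text_embeds:   (K, text_dim), one embedding per class (organ/tumor)
        B = decoder_feats.shape[0]
        masks = []
        for k in range(text_embeds.shape[0]):
            cond = torch.cat([text_embeds[k].expand(B, -1), global_feat], dim=1)
            params = self.param_gen(cond)                 # (B, feat_dim + 1)
            w, b = params[:, :self.feat_dim], params[:, self.feat_dim]
            # Apply the generated 1x1x1 conv as a weighted sum over channels.
            logits = torch.einsum("bcdhw,bc->bdhw", decoder_feats, w)
            logits = logits + b[:, None, None, None]
            masks.append(torch.sigmoid(logits))           # binary mask per class
        return torch.stack(masks, dim=1)                  # (B, K, D, H, W)
```

Because each class produces an independent sigmoid mask rather than competing in a shared softmax, a model built this way can be trained on partially annotated datasets (losses computed only on labeled classes) and extended by adding a new text embedding and head without disturbing existing ones, which is consistent with the extensibility and forgetting-mitigation claims in the abstract.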
Keywords

Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: En | Journal: Med Image Anal | Journal subject: Diagnostic Imaging | Publication year: 2024 | Document type: Article | Affiliation country: Hong Kong