Medical Vision-Language Pre-Training for Brain Abnormalities.
Monajatipoor, Masoud; Dou, Zi-Yi; Chien, Aichi; Peng, Nanyun; Chang, Kai-Wei.
Affiliation
  • Monajatipoor M; UCLA.
  • Dou ZY; UCLA.
  • Chien A; UCLA.
  • Peng N; UCLA.
  • Chang KW; UCLA.
Proc Conf Assoc Comput Linguist Meet; 2024(LREC/COLING): 11159-11164, 2024 May.
Article in English | MEDLINE | ID: mdl-39006531
ABSTRACT
Vision-language models have become increasingly powerful for tasks that require an understanding of both visual and linguistic elements, bridging the gap between these modalities. In the context of multimodal clinical AI, there is a growing need for models that possess domain-specific knowledge, as existing models often lack the expertise required for medical applications. In this paper, we take brain abnormalities as an example to demonstrate how to automatically collect medical image-text aligned data for pretraining from public resources such as PubMed. In particular, we present a pipeline that streamlines the pre-training process by first collecting a large brain image-text dataset from case reports and published journals and then constructing a high-performance vision-language model tailored to specific medical tasks. We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain. We evaluate the resulting model with both quantitative and qualitative intrinsic evaluations. The resulting dataset and our code are available at https://github.com/masoud-monajati/MedVL_pretraining_pipeline.
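A minimal sketch of what the first stage of such a pipeline could look like, assuming Python with the requests library against the public NCBI E-utilities API; the query string, function names, and the regex-based subcaption splitter are illustrative assumptions, not the authors' released code (see the GitHub repository above for that).

import re
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pmc(query: str, retmax: int = 100) -> list[str]:
    """Return PMC IDs of open-access articles matching the query."""
    resp = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={
            "db": "pmc",
            "term": f'{query} AND "open access"[filter]',
            "retmode": "json",
            "retmax": retmax,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

def split_subcaptions(caption: str) -> dict[str, str]:
    """Split a compound figure caption into subcaptions keyed by panel
    label, e.g. "(A) ... (B) ..." -> {"A": "...", "B": "..."}."""
    parts = re.split(r"\(([A-Za-z])\)", caption)
    # re.split keeps the captured panel labels at odd indices.
    return {parts[i]: parts[i + 1].strip(" .;")
            for i in range(1, len(parts) - 1, 2)}

if __name__ == "__main__":
    # Illustrative query only; the paper's actual search terms may differ.
    ids = search_pmc("brain abnormality case report")
    print(f"{len(ids)} candidate PMC articles")
    print(split_subcaptions("(A) Axial T2-weighted MRI. (B) Sagittal view."))

Marker-based splitting of this kind is only a naive baseline; the paper identifies subfigure-to-subcaption mapping in medical articles as a harder alignment problem in its own right.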
Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Proc Conf Assoc Comput Linguist Meet Year: 2024 Document type: Article
