Vision-Language Model for Generating Textual Descriptions From Clinical Images: Model Development and Validation Study.
Ji, Jia; Hou, Yongshuai; Chen, Xinyu; Pan, Youcheng; Xiang, Yang.
Affiliation
  • Ji J; Shenzhen Institute of Information Technology, Shenzhen, China.
  • Hou Y; Peng Cheng Laboratory, Shenzhen, China.
  • Chen X; Harbin Institute of Technology, Shenzhen, China.
  • Pan Y; Peng Cheng Laboratory, Shenzhen, China.
  • Xiang Y; Peng Cheng Laboratory, Shenzhen, China.
JMIR Form Res; 8: e32690, 2024 Feb 08.
Article in En | MEDLINE | ID: mdl-38329788
ABSTRACT

BACKGROUND:

The automatic generation of radiology reports, which seeks to create a free-text description from a clinical radiograph, is emerging as a pivotal intersection between clinical medicine and artificial intelligence. Leveraging natural language processing technologies can accelerate report creation, enhancing health care quality and standardization. However, most existing studies have not yet fully tapped into the combined potential of advanced language and vision models.

OBJECTIVE:

The purpose of this study was to explore the integration of pretrained vision-language models into radiology report generation. This would enable the vision-language model to automatically convert clinical images into high-quality textual reports.

METHODS:

In our research, we introduced a radiology report generation model named ClinicalBLIP, building upon the foundational InstructBLIP model and refining it using clinical image-to-text data sets. A multistage fine-tuning approach via low-rank adaptation was proposed to deepen the semantic comprehension of the visual encoder and the large language model for clinical imagery. Furthermore, prior knowledge was integrated through prompt learning to enhance the precision of the reports generated. Experiments were conducted on both the IU X-RAY and MIMIC-CXR data sets, with ClinicalBLIP compared to several leading methods.
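The low-rank adaptation (LoRA) technique named above can be sketched in a minimal, framework-free form: rather than updating a full pretrained weight matrix W, LoRA learns a small rank-r update B·A, and the adapted layer computes W·x plus a scaled low-rank correction. The class and parameter names below are illustrative assumptions, not taken from the ClinicalBLIP implementation.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: a frozen weight W plus a trainable low-rank update B @ A."""

    def __init__(self, d_out, d_in, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
        self.A = rng.standard_normal((r, d_in)) * 0.01  # trainable, rank r
        self.B = np.zeros((d_out, r))                   # trainable, initialized to zero
        self.scale = alpha / r                          # standard LoRA scaling factor

    def forward(self, x):
        # Frozen-layer output plus the scaled low-rank correction.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

layer = LoRALinear(d_out=6, d_in=4)
x = np.ones(4)
# Because B starts at zero, the adapted layer initially equals the frozen layer.
assert np.allclose(layer.forward(x), layer.W @ x)
```

Initializing B to zero means fine-tuning starts exactly from the pretrained model's behavior; only the small A and B matrices are updated during training, which is what makes multistage fine-tuning of both the visual encoder and the large language model tractable.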

RESULTS:

Experimental results revealed that ClinicalBLIP obtained superior scores of 0.570/0.365 and 0.534/0.313 on the IU X-RAY/MIMIC-CXR test sets for the Metric for Evaluation of Translation with Explicit Ordering (METEOR) and the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) evaluations, respectively. This performance notably surpasses that of existing state-of-the-art methods. Further evaluations confirmed the effectiveness of the multistage fine-tuning and the integration of prior information, leading to substantial improvements.
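The ROUGE score cited above, in report-generation work, is commonly the ROUGE-L variant, which rewards the longest common subsequence (LCS) shared by a candidate report and its reference. The following is a minimal illustrative sketch of ROUGE-L F-measure, not the paper's evaluation code.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence via classic dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a, 1):
        for j, tb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ta == tb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference, beta=1.2):
    """ROUGE-L F-measure over whitespace tokens; beta weights recall over precision."""
    c, r = candidate.split(), reference.split()
    l = lcs_len(c, r)
    if l == 0:
        return 0.0
    prec, rec = l / len(c), l / len(r)
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)

# High token overlap between candidate and reference yields a score near 1.
score = rouge_l_f1("no acute findings", "no acute cardiopulmonary findings")
```

METEOR similarly scores candidate-reference alignment but adds stemming, synonym matching, and a fragmentation penalty, which is why papers typically report both metrics side by side.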

CONCLUSIONS:

The proposed ClinicalBLIP model demonstrated robustness and effectiveness in enhancing clinical radiology report generation, suggesting significant promise for real-world clinical applications.

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Journal: JMIR Form Res Publication year: 2024 Document type: Article