Leveraging Professional Radiologists' Expertise to Enhance LLMs' Evaluation for Radiology Reports.
Zhu, Qingqing; Chen, Xiuying; Jin, Qiao; Hou, Benjamin; Mathai, Tejas Sudharshan; Mukherjee, Pritam; Gao, Xin; Summers, Ronald M; Lu, Zhiyong.
Affiliation
  • Zhu Q; National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
  • Chen X; Bioscience Research Center, King Abdullah University of Science & Technology, Saudi Arabia.
  • Jin Q; National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
  • Hou B; Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA.
  • Mathai TS; Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA.
  • Mukherjee P; Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA.
  • Gao X; Bioscience Research Center, King Abdullah University of Science & Technology, Saudi Arabia.
  • Summers RM; Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA.
  • Lu Z; National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
ArXiv; 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38903745
ABSTRACT
In radiology, Artificial Intelligence (AI) has significantly advanced report generation, but automatic evaluation of these AI-produced reports remains challenging. Current metrics, such as conventional Natural Language Generation (NLG) metrics and Clinical Efficacy (CE) measures, often fall short in capturing the semantic intricacies of clinical contexts or overemphasize clinical details, undermining report clarity. To overcome these issues, our proposed method combines the expertise of professional radiologists with Large Language Models (LLMs) such as GPT-3.5 and GPT-4. Using In-Context Instruction Learning (ICIL) and Chain-of-Thought (CoT) reasoning, our approach aligns LLM evaluations with radiologist standards, enabling detailed comparisons between human-written and AI-generated reports. This is further enhanced by a regression model that aggregates sentence-level evaluation scores. Experimental results show that our "Detailed GPT-4 (5-shot)" model achieves a score of 0.48, outperforming the METEOR metric by 0.19, while our "Regressed GPT-4" model aligns even more closely with expert evaluations, exceeding the best existing metric by a margin of 0.35. Moreover, the robustness of our explanations has been validated through a thorough iterative strategy. We plan to publicly release our radiology expert annotations, setting a new standard for accuracy in future assessments. This underscores the potential of our approach in enhancing the quality assessment of AI-driven medical reports.
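The pipeline the abstract describes (few-shot in-context instruction, chain-of-thought scoring of individual sentences, then a regression over the sentence scores) can be sketched in Python as below. This is a minimal illustration under stated assumptions, not the authors' released code: the prompt wording, the 0.0-1.0 score scale, the two toy exemplars, the parsing regex, and the feature set fed to the regression are all placeholders, and the OpenAI chat-completions client (openai>=1.0) stands in for whatever model interface the paper actually used.

    import re
    import numpy as np
    from openai import OpenAI
    from sklearn.linear_model import LinearRegression

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Few-shot exemplars for in-context instruction learning (ICIL). These
    # two toy pairs are illustrative only; the paper draws its exemplars
    # from radiologist annotations and uses five shots in the
    # "Detailed GPT-4 (5-shot)" setting.
    FEW_SHOT = """\
    Reference: No focal consolidation is seen.
    Candidate: The lungs are clear without consolidation.
    Reasoning: Same clinical finding, equivalent phrasing. Score: 1.0

    Reference: Small right pleural effusion.
    Candidate: No pleural effusion.
    Reasoning: The candidate contradicts a positive finding. Score: 0.0
    """

    def score_sentence(reference: str, candidate: str,
                       model: str = "gpt-4") -> float:
        """Elicit a chain-of-thought rationale, then parse the final score."""
        prompt = (
            "You are a radiologist grading a generated report sentence "
            "against a reference sentence. Think step by step, then end "
            "your answer with 'Score: <value between 0.0 and 1.0>'.\n\n"
            + FEW_SHOT
            + f"\nReference: {reference}\nCandidate: {candidate}\nReasoning:"
        )
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        text = resp.choices[0].message.content
        match = re.search(r"Score:\s*([01](?:\.\d+)?)", text)
        return float(match.group(1)) if match else 0.0

    def report_features(sentence_scores: list[float]) -> list[float]:
        """Summary features over per-sentence scores (an assumed feature set)."""
        s = np.asarray(sentence_scores, dtype=float)
        return [s.mean(), s.min(), s.max(), float(len(s))]

    def fit_aggregator(per_report_scores: list[list[float]],
                       expert_ratings: list[float]) -> LinearRegression:
        """Sketch of the "Regressed GPT-4" step: fit a mapping from
        sentence-score features to expert report-level ratings."""
        X = np.array([report_features(s) for s in per_report_scores])
        y = np.array(expert_ratings)
        return LinearRegression().fit(X, y)

In this sketch the aggregator is an ordinary least-squares fit over summary statistics of the sentence scores; the abstract does not specify the regression family or features, so both are assumptions chosen for simplicity.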

Full text: 1 Database: MEDLINE Language: English Journal: ArXiv Year: 2024 Document type: Article Country of affiliation: United States
