Evaluating Large Language Models for Automated Reporting and Data Systems Categorization: Cross-Sectional Study.
Wu, Qingxia; Li, Huali; Wang, Yan; Bai, Yan; Wu, Yaping; Yu, Xuan; Li, Xiaodong; Dong, Pei; Xue, Jon; Shen, Dinggang; Wang, Meiyun.
Affiliation
  • Wu Q; Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, China.
  • Wu Q; Research Intelligence Department, Beijing United Imaging Research Institute of Intelligent Imaging, Beijing, China.
  • Li H; Research and Collaboration, United Imaging Intelligence (Beijing) Co, Ltd, Beijing, China.
  • Wang Y; Department of Radiology, Luoyang Central Hospital, Luoyang, China.
  • Bai Y; Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, China.
  • Wu Y; Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, China.
  • Yu X; Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, China.
  • Li X; Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, China.
  • Dong P; Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, China.
  • Xue J; Research Intelligence Department, Beijing United Imaging Research Institute of Intelligent Imaging, Beijing, China.
  • Shen D; Research and Collaboration, United Imaging Intelligence (Beijing) Co, Ltd, Beijing, China.
  • Wang M; Research and Collaboration, Shanghai United Imaging Intelligence Co, Ltd, Shanghai, China.
JMIR Med Inform ; 12: e55799, 2024 Jul 17.
Article in En | MEDLINE | ID: mdl-39018102
ABSTRACT

BACKGROUND:

Large language models show promise for improving radiology workflows, but their performance on structured radiological tasks such as Reporting and Data Systems (RADS) categorization remains unexplored.

OBJECTIVE:

This study aims to evaluate 3 large language model chatbots (Claude-2, GPT-3.5, and GPT-4) on assigning RADS categories to radiology reports and to assess the impact of different prompting strategies.

METHODS:

This cross-sectional study compared 3 chatbots using 30 radiology reports (10 per RADS criterion) and a 3-level prompting strategy: zero-shot, few-shot, and guideline PDF-informed prompts. The cases, meticulously prepared by board-certified radiologists, were grounded in Liver Imaging Reporting & Data System (LI-RADS) version 2018, Lung CT (computed tomography) Screening Reporting & Data System (Lung-RADS) version 2022, and Ovarian-Adnexal Reporting & Data System (O-RADS) magnetic resonance imaging. Each report underwent 6 assessments. Two blinded reviewers assessed the chatbots' responses at the level of patient-level RADS categorization and overall ratings. Agreement across repetitions was assessed using Fleiss κ.
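The 3-level prompting strategy described above can be sketched as a simple prompt builder. This is an illustrative reconstruction only: the function name, the placeholder report text, exemplars, and guideline excerpt are all hypothetical, not the study's actual prompts.

```python
# Hypothetical sketch of a 3-level prompting strategy (zero-shot, few-shot,
# guideline-informed). All strings below are placeholders, not study materials.

def build_prompt(report, level, exemplars=None, guideline_text=None):
    """Assemble a chatbot prompt at one of three escalating levels.

    level 0: zero-shot (task + report only)
    level 1: adds structured few-shot exemplars ("prompt-1")
    level 2: additionally adds guideline excerpts, e.g. text extracted
             from the RADS guideline PDF ("prompt-2")
    """
    parts = ["Assign the appropriate RADS category to the following "
             "radiology report and give an overall rating.\n"]
    if level >= 1 and exemplars:
        # Few-shot exemplars: worked report/category pairs.
        parts += [f"Example report:\n{ex['report']}\nCategory: {ex['category']}\n"
                  for ex in exemplars]
    if level >= 2 and guideline_text:
        # Guideline-informed context from the criteria document.
        parts.append(f"Relevant guideline excerpt:\n{guideline_text}\n")
    parts.append(f"Report:\n{report}\nCategory:")
    return "\n".join(parts)

# Zero-shot vs. guideline-informed prompt for a placeholder report.
p0 = build_prompt("Liver lesion, 25 mm, APHE, washout.", level=0)
p2 = build_prompt("Liver lesion, 25 mm, APHE, washout.", level=2,
                  exemplars=[{"report": "Example text", "category": "LR-5"}],
                  guideline_text="LI-RADS v2018 major features: ...")
```

Each escalation only adds context; the task instruction and the report itself are identical across levels, which keeps the comparison between prompt levels clean.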

RESULTS:

Claude-2 achieved the highest accuracy in overall ratings with few-shot prompts and guideline PDFs (prompt-2), attaining 57% (17/30) average accuracy over 6 runs and 50% (15/30) accuracy with k-pass voting. Without prompt engineering, all chatbots performed poorly. The introduction of a structured exemplar prompt (prompt-1) increased the accuracy of overall ratings for all chatbots. Providing prompt-2 further improved Claude-2's performance, an enhancement not replicated by GPT-4. The interrun agreement was substantial for Claude-2 (κ=0.66 for overall rating and κ=0.69 for RADS categorization), fair for GPT-4 (κ=0.39 for both), and fair for GPT-3.5 (κ=0.21 for overall rating and κ=0.39 for RADS categorization). All chatbots showed significantly higher accuracy with LI-RADS version 2018 than with Lung-RADS version 2022 and O-RADS (P<.05); with prompt-2, Claude-2 achieved the highest overall rating accuracy of 75% (45/60) in LI-RADS version 2018.
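The two analysis steps reported above, k-pass (majority) voting over repeated runs and Fleiss κ for interrun agreement, can be sketched in a few lines. This is a minimal illustration using made-up run outputs and counts, not the study's data.

```python
# Sketch of k-pass voting and Fleiss kappa; the example inputs are illustrative.
from collections import Counter

def majority_vote(runs):
    """k-pass voting: the category most runs agree on (ties -> first seen)."""
    return Counter(runs).most_common(1)[0][0]

def fleiss_kappa(counts):
    """Fleiss kappa for a subjects x categories count matrix.

    counts[i][j] = number of runs assigning subject i to category j;
    every row must sum to the same number of raters/runs n.
    """
    N = len(counts)                  # number of subjects (reports)
    n = sum(counts[0])               # runs per subject
    k = len(counts[0])               # number of categories
    # Marginal proportion of assignments falling in each category.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # Per-subject agreement: fraction of agreeing run pairs.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N             # mean observed agreement
    P_e = sum(p * p for p in p_j)    # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Example: 6 runs of one report, then a 3-report x 3-category count matrix.
vote = majority_vote(["LR-5", "LR-5", "LR-4", "LR-5", "LR-5", "LR-5"])
kappa = fleiss_kappa([[6, 0, 0], [4, 2, 0], [0, 3, 3]])
```

Under the usual reading of the Landis-Koch benchmarks, κ in (0.21, 0.40] is "fair" and (0.61, 0.80] is "substantial", matching how the agreement values are labeled in the results.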

CONCLUSIONS:

When equipped with structured prompts and guideline PDFs, Claude-2 demonstrated potential in assigning RADS categories to radiology cases according to established criteria such as LI-RADS version 2018. However, the current generation of chatbots lags in accurately categorizing cases based on more recent RADS criteria.
Full text: 1 Database: MEDLINE Language: En Year of publication: 2024 Document type: Article