Assessing GPT-4's Performance in Delivering Medical Advice: Comparative Analysis With Human Experts.
Jo, Eunbeen; Song, Sanghoun; Kim, Jong-Ho; Lim, Subin; Kim, Ju Hyeon; Cha, Jung-Joon; Kim, Young-Min; Joo, Hyung Joon.
Affiliation
  • Jo E; Department of Medical Informatics, Korea University College of Medicine, Seoul, Republic of Korea.
  • Song S; Department of Linguistics, Korea University, Seoul, Republic of Korea.
  • Kim JH; Korea University Research Institute for Medical Bigdata Science, Korea University, Seoul, Republic of Korea.
  • Lim S; Department of Cardiology, Cardiovascular Center, Korea University College of Medicine, Seoul, Republic of Korea.
  • Kim JH; Division of Cardiology, Department of Internal Medicine, Korea University Anam Hospital, Seoul, Republic of Korea.
  • Cha JJ; Division of Cardiology, Department of Internal Medicine, Korea University Anam Hospital, Seoul, Republic of Korea.
  • Kim YM; Division of Cardiology, Department of Internal Medicine, Korea University Anam Hospital, Seoul, Republic of Korea.
  • Joo HJ; School of Interdisciplinary Industrial Studies, Hanyang University, Seoul, Republic of Korea.
JMIR Med Educ ; 10: e51282, 2024 Jul 08.
Article in En | MEDLINE | ID: mdl-38989848
ABSTRACT

Background:

Accurate medical advice is paramount in ensuring optimal patient care, and misinformation can lead to misguided decisions with potentially detrimental health outcomes. The emergence of large language models (LLMs) such as OpenAI's GPT-4 has spurred interest in their potential health care applications, particularly in automated medical consultation. Yet, rigorous investigations comparing their performance to human experts remain sparse.

Objective:

This study aimed to compare the medical accuracy of GPT-4 with that of human experts in providing medical advice using real-world user-generated queries, with a specific focus on cardiology. It also sought to analyze the performance of GPT-4 and human experts in specific question categories, including drug or medication information and preliminary diagnoses.

Methods:

We collected 251 pairs of cardiology-specific questions from general users and answers from human experts via an internet portal. GPT-4 was tasked with generating responses to the same questions. Three independent cardiologists (SL, JHK, and JJC) evaluated the answers provided by both human experts and GPT-4. Using a computer interface, each evaluator compared the paired answers, judged which was superior, and rated the clarity and complexity of the questions as well as the accuracy and appropriateness of the responses on a 3-tiered grading scale (low, medium, and high). Furthermore, a linguistic analysis was conducted to compare the length and vocabulary diversity of the responses using word count and type-token ratio.
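The word count and type-token ratio (TTR) metrics mentioned above can be sketched as follows. This is a minimal illustration, not the authors' code: the abstract does not specify their tokenization or preprocessing, so a simple regex-based word split is assumed here.

```python
import re

def _tokenize(text: str) -> list[str]:
    # Assumed tokenization: lowercase alphabetic words (incl. apostrophes).
    # The study's exact preprocessing is not described in the abstract.
    return re.findall(r"[a-z']+", text.lower())

def word_count(text: str) -> int:
    """Total number of word tokens in a response."""
    return len(_tokenize(text))

def type_token_ratio(text: str) -> float:
    """TTR = unique words (types) / total words (tokens).

    Lower values indicate a less diverse (more repetitive) vocabulary,
    as reported for GPT-4's responses relative to human experts'.
    """
    tokens = _tokenize(text)
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)
```

For example, `type_token_ratio("the cat sat on the mat")` yields 5/6, since "the" repeats among the 6 tokens.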

Results:

GPT-4 and human experts displayed comparable efficacy in medical accuracy ("GPT-4 is better" at 132/251, 52.6% vs "Human expert is better" at 119/251, 47.4%). In accuracy level categorization, humans had more high-accuracy responses than GPT-4 (50/237, 21.1% vs 30/238, 12.6%) but also a greater proportion of low-accuracy responses (11/237, 4.6% vs 1/238, 0.4%; P=.001). GPT-4 responses were generally longer and used a less diverse vocabulary than those of human experts, potentially enhancing their comprehensibility for general users (sentence count mean 10.9, SD 4.2 vs mean 5.9, SD 3.7; P<.001; type-token ratio mean 0.69, SD 0.07 vs mean 0.79, SD 0.09; P<.001). Nevertheless, human experts outperformed GPT-4 in specific question categories, notably those related to drug or medication information and preliminary diagnoses. These findings highlight the limitations of GPT-4 in providing advice based on clinical experience.
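The low-accuracy comparison above (11/237 vs 1/238) can be checked with a standard two-proportion test. The abstract does not state which test produced P=.001, so the pooled two-proportion z-test below is an assumed, plausible choice for illustration; the authors may have used a different procedure (e.g., a chi-square test over all three accuracy tiers), which would give a somewhat different P value.

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided pooled two-proportion z-test.

    Returns (z statistic, two-sided p-value). Assumes large-sample
    normal approximation; not necessarily the test used in the study.
    """
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided tail probability
    return z, p_value

# Low-accuracy responses: human experts 11/237 vs GPT-4 1/238.
z, p = two_proportion_z(11, 237, 1, 238)
```

Here z comes out near 2.9 with p below .01, consistent in direction with the reported significant difference.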

Conclusions:

GPT-4 has shown promising potential in automated medical consultation, with medical accuracy comparable to that of human experts. However, challenges remain, particularly in the realm of nuanced clinical judgment. Future improvements in LLMs may require the integration of specific clinical reasoning pathways and regulatory oversight for safe use. Further research is needed to understand the full potential of LLMs across various medical specialties and conditions.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Cardiology Limits: Humans Language: En Journal: JMIR Med Educ Year: 2024 Document type: Article