ChatGPT vs Medical Professional: Analyzing Responses to Laboratory Medicine Questions on Social Media.
Girton, Mark R; Greene, Dina N; Messerlian, Geralyn; Keren, David F; Yu, Min.
Affiliation
  • Girton MR; Department of Pathology, University of Michigan, Ann Arbor, MI, United States.
  • Greene DN; Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, United States.
  • Messerlian G; Department of Pathology, Women and Infants Hospital, Brown University, Providence, RI, United States.
  • Keren DF; Department of Pathology, University of Michigan, Ann Arbor, MI, United States.
  • Yu M; Department of Pathology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, United States.
Clin Chem; 2024 Jul 16.
Article in En | MEDLINE | ID: mdl-39013110
ABSTRACT

BACKGROUND:

The integration of ChatGPT, a large language model (LLM) developed by OpenAI, into healthcare has sparked significant interest due to its potential to enhance patient care and medical education. With the increasing trend of patients accessing laboratory results online, there is a pressing need to evaluate the effectiveness of ChatGPT in providing accurate laboratory medicine information. Our study evaluates ChatGPT's effectiveness in addressing patient questions in this area, comparing its performance with that of medical professionals on social media.

METHODS:

This study sourced patient questions and medical professional responses from Reddit and Quora, comparing them with responses generated by ChatGPT versions 3.5 and 4.0. Experienced laboratory medicine professionals evaluated the responses for quality and preference. Evaluation results were further analyzed using R software.

RESULTS:

The study analyzed 49 questions, with evaluators reviewing responses from both medical professionals and ChatGPT. ChatGPT's responses were preferred by 75.9% of evaluators and generally received higher ratings for quality. They were noted for their comprehensive and accurate information, whereas responses from medical professionals were valued for their conciseness. The interrater agreement was fair, indicating some subjectivity but a consistent preference for ChatGPT's detailed responses.
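The abstract reports interrater agreement only qualitatively ("fair"); on the commonly used Landis–Koch scale, "fair" corresponds to an agreement coefficient of roughly 0.21–0.40. The study analyzed its results in R, but as an illustration of how an agreement statistic of this kind is computed for multiple raters, here is a minimal Fleiss' kappa sketch in Python (the rating matrix is hypothetical, not the study's data):

```python
def fleiss_kappa(ratings):
    """Compute Fleiss' kappa for multiple raters.

    ratings: N x k matrix where ratings[i][j] is the number of raters
    who assigned subject i to category j. Every subject must be rated
    by the same number of raters.
    """
    N = len(ratings)              # number of subjects
    k = len(ratings[0])           # number of categories
    n = sum(ratings[0])           # raters per subject (assumed constant)

    # Proportion of all assignments falling in each category
    total = N * n
    p = [sum(row[j] for row in ratings) / total for j in range(k)]

    # Expected agreement by chance
    P_e = sum(pj ** 2 for pj in p)

    # Observed agreement, averaged over subjects
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N

    return (P_bar - P_e) / (1 - P_e)


# Hypothetical example: 3 questions, 2 raters, 2 preference categories
example = [[2, 0],   # both raters prefer option A
           [0, 2],   # both raters prefer option B
           [1, 1]]   # raters split
print(round(fleiss_kappa(example), 4))  # → 0.3333, "fair" on Landis-Koch
```

A kappa in this range indicates agreement above chance but with notable subjectivity, consistent with the abstract's characterization of the evaluators' ratings.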

CONCLUSIONS:

ChatGPT demonstrates potential as an effective tool for addressing queries in laboratory medicine, often surpassing medical professionals in response quality. These results support the need for further research to confirm ChatGPT's utility and explore its integration into healthcare settings.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Clin Chem Journal subject: Clinical Chemistry Year: 2024 Document type: Article Affiliation country: United States