Quality of Answers of Generative Large Language Models Versus Peer Users for Interpreting Laboratory Test Results for Lay Patients: Evaluation Study.
He, Zhe; Bhasuran, Balu; Jin, Qiao; Tian, Shubo; Hanna, Karim; Shavor, Cindy; Arguello, Lisbeth Garcia; Murray, Patrick; Lu, Zhiyong.
Affiliation
  • He Z; School of Information, Florida State University, Tallahassee, FL, United States.
  • Bhasuran B; School of Information, Florida State University, Tallahassee, FL, United States.
  • Jin Q; National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Tian S; National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Hanna K; Morsani College of Medicine, University of South Florida, Tampa, FL, United States.
  • Shavor C; Morsani College of Medicine, University of South Florida, Tampa, FL, United States.
  • Arguello LG; Morsani College of Medicine, University of South Florida, Tampa, FL, United States.
  • Murray P; Morsani College of Medicine, University of South Florida, Tampa, FL, United States.
  • Lu Z; National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
J Med Internet Res; 26: e56655, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38630520
ABSTRACT

BACKGROUND:

Although patients have easy access to their electronic health records and laboratory test result data through patient portals, laboratory test results are often confusing and hard to understand. Many patients turn to web-based forums or question-and-answer (Q&A) sites to seek advice from their peers. However, the quality of answers to health-related questions on social Q&A sites varies significantly, and not all responses are accurate or reliable. Large language models (LLMs) such as ChatGPT have opened a promising avenue for patients to have their questions answered.

OBJECTIVE:

We aimed to assess the feasibility of using LLMs to generate relevant, accurate, helpful, and safe responses to laboratory test-related questions asked by patients, and to identify potential issues that could be mitigated using augmentation approaches.

METHODS:

We collected laboratory test result-related Q&A data from Yahoo! Answers and selected 53 Q&A pairs for this study. Using the LangChain framework and the ChatGPT web portal, we generated responses to the 53 questions from 5 LLMs: GPT-4, GPT-3.5, LLaMA 2, MedAlpaca, and ORCA_mini. We assessed the similarity of their answers using standard Q&A similarity-based evaluation metrics, including Recall-Oriented Understudy for Gisting Evaluation (ROUGE), Bilingual Evaluation Understudy (BLEU), Metric for Evaluation of Translation With Explicit Ordering (METEOR), and Bidirectional Encoder Representations from Transformers Score (BERTScore). We used an LLM-based evaluator to judge whether a target model's response was of higher quality than the baseline model's in terms of relevance, correctness, helpfulness, and safety. We performed a manual evaluation with medical experts for all the responses to 7 selected questions on the same 4 aspects.
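The abstract does not include the evaluation code, so the following is a minimal sketch of how one answer could be scored against a reference answer with the four named metrics, assuming the commonly used Python packages rouge-score, nltk, and bert-score; the pairing of the GPT-4 output as the reference and the example texts are illustrative assumptions, not the authors' released code.

```python
# Sketch: score one candidate answer against a reference with ROUGE, BLEU,
# METEOR, and BERTScore. Package choices and example strings are assumptions.
# Requires: pip install rouge-score nltk bert-score
import nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer
from bert_score import score as bert_score

nltk.download("wordnet", quiet=True)  # METEOR relies on WordNet

reference = "A TSH of 6.2 mIU/L is mildly elevated and may warrant follow-up."
candidate = "Your TSH result is slightly above the typical reference range."

# ROUGE-1 and ROUGE-L F1 (reference first, candidate second)
rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge_scores = rouge.score(reference, candidate)

# BLEU with smoothing (answers are short, so smoothing avoids zero scores)
ref_tokens, cand_tokens = reference.split(), candidate.split()
bleu = sentence_bleu([ref_tokens], cand_tokens,
                     smoothing_function=SmoothingFunction().method1)

# METEOR (recent nltk versions expect pre-tokenized input)
meteor = meteor_score([ref_tokens], cand_tokens)

# BERTScore F1 (semantic similarity from contextual embeddings)
_, _, f1 = bert_score([candidate], [reference], lang="en")

print(rouge_scores["rougeL"].fmeasure, bleu, meteor, f1.item())
```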

RESULTS:

Regarding the similarity of the responses, with the GPT-4 output used as the reference answer, the responses from GPT-3.5 were the most similar, followed by those from LLaMA 2, ORCA_mini, and MedAlpaca. Human answers from the Yahoo data scored the lowest and were thus the least similar to the GPT-4-generated answers. Both the win-rate and medical expert evaluations showed that GPT-4's responses achieved better scores than all the other LLM responses and the human responses on all 4 aspects (relevance, correctness, helpfulness, and safety). However, LLM responses occasionally suffered from a lack of interpretation within a patient's medical context, incorrect statements, and a lack of references.
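The win-rate evaluation described above compares a target model's answer against a baseline answer using an LLM as the judge. A minimal sketch of that idea follows; the judging prompt, the choice of GPT-4 as the judge via the openai client, and the helper names are assumptions for illustration, since the paper's exact judging setup is not given in the abstract. In practice, the A/B order is often swapped across runs to control for position bias.

```python
# Sketch of LLM-as-judge "win rate": ask a strong model which of two answers
# is better on one aspect, then count wins for the target model.
# Prompt wording and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(question: str, baseline: str, target: str, aspect: str) -> str:
    prompt = (
        f"Question from a patient:\n{question}\n\n"
        f"Answer A:\n{baseline}\n\nAnswer B:\n{target}\n\n"
        f"Which answer is better in terms of {aspect}? "
        "Reply with exactly 'A', 'B', or 'tie'."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

def win_rate(triples, aspect="correctness"):
    # triples: iterable of (question, baseline_answer, target_answer)
    wins = sum(judge(q, b, t, aspect) == "B" for q, b, t in triples)
    return wins / len(triples)
```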

CONCLUSIONS:

By evaluating LLM-generated responses to patients' laboratory test result-related questions, we found that, compared with the 4 other LLMs and the human answers from a Q&A website, GPT-4's responses were more accurate, helpful, relevant, and safe. There were still cases in which GPT-4's responses were inaccurate or not individualized. We identified several ways to improve the quality of LLM responses, including prompt engineering, prompt augmentation, retrieval-augmented generation, and response evaluation.
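Retrieval-augmented generation (RAG) is named here only as a possible improvement, so the following is a hypothetical sketch of how it might be applied in this setting: retrieve vetted lab-reference snippets by embedding similarity and prepend them to the prompt so the model's answer is grounded in curated material. The snippet corpus, encoder model, and function names are all assumptions for illustration.

```python
# Sketch of RAG prompt construction for lab-result questions.
# Corpus contents and model choice are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# A vetted knowledge base of reference-range explanations (illustrative)
snippets = [
    "Normal TSH in adults is roughly 0.4-4.0 mIU/L; higher values may "
    "suggest hypothyroidism and warrant clinical follow-up.",
    "Hemoglobin A1c of 6.5% or higher on two separate tests indicates "
    "diabetes; 5.7%-6.4% indicates prediabetes.",
]
snippet_vecs = encoder.encode(snippets, normalize_embeddings=True)

def build_rag_prompt(question: str, k: int = 2) -> str:
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(snippet_vecs @ q_vec)[::-1][:k]  # cosine similarity
    context = "\n".join(snippets[i] for i in top)
    return (
        "Use only the reference material below to answer the patient's "
        "question about their lab results, and cite it.\n\n"
        f"Reference material:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```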
Subject(s)
Keywords

Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Electronic Health Records Limit: Humans Language: English Journal: J Med Internet Res Journal subject: Medical Informatics Year: 2024 Document type: Article Country of affiliation: United States
