Evaluation of ChatGPT-generated medical responses: A systematic review and meta-analysis.
Wei, Qiuhong; Yao, Zhengxiong; Cui, Ying; Wei, Bo; Jin, Zhezhen; Xu, Ximing.
Affiliations
  • Wei Q; Big Data Center for Children's Medical Care, Children's Hospital of Chongqing Medical University, Chongqing, China; Children Nutrition Research Center, Children's Hospital of Chongqing Medical University, Chongqing, China; National Clinical Research Center for Child Health and Disorders, Ministry of
  • Yao Z; Department of Neurology, Children's Hospital of Chongqing Medical University, Chongqing, China.
  • Cui Y; Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA.
  • Wei B; Department of Global Statistics and Data Science, BeiGene USA Inc., San Mateo, CA, USA.
  • Jin Z; Department of Biostatistics, Mailman School of Public Health, Columbia University, New York, NY, USA.
  • Xu X; Big Data Center for Children's Medical Care, Children's Hospital of Chongqing Medical University, Chongqing, China.
J Biomed Inform. 2024 Mar;151:104620.
Article in English | MEDLINE | ID: mdl-38462064
ABSTRACT

OBJECTIVE:

Large language models (LLMs) such as ChatGPT are increasingly explored in medical domains. However, the absence of standard guidelines for performance evaluation has led to methodological inconsistencies. This study aims to summarize the available evidence on evaluating ChatGPT's performance in answering medical questions and provide direction for future research.

METHODS:

An extensive literature search was conducted on June 15, 2023, across ten medical databases. The keyword used was "ChatGPT," without restrictions on publication type, language, or date. Studies evaluating ChatGPT's performance in answering medical questions were included. Exclusions comprised review articles, comments, patents, non-medical evaluations of ChatGPT, and preprint studies. Data were extracted on general study characteristics, question sources, conversation processes, assessment metrics, and performance of ChatGPT. An evaluation framework for LLMs in medical inquiries was proposed by integrating insights from the selected literature. This study is registered with PROSPERO, CRD42023456327.

RESULTS:

A total of 3520 articles were identified, of which 60 were reviewed and summarized in this paper and 17 were included in the meta-analysis. ChatGPT displayed an overall pooled accuracy of 56% (95% CI 51%-60%, I² = 87%) in addressing medical queries. However, the studies varied in question source, question-asking process, and evaluation metrics. As per our proposed evaluation framework, many studies failed to report methodological details, such as the date of inquiry, the version of ChatGPT, and inter-rater consistency.
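
To make the pooled estimate and I² statistic concrete, the following is a minimal sketch of random-effects pooling of accuracy proportions, assuming logit transformation and a DerSimonian-Laird estimator; the abstract does not specify the exact model used, and the study counts below are hypothetical illustrations, not data from the review.

    import numpy as np

    # (correct_answers, total_questions) per hypothetical study
    studies = [(55, 100), (120, 200), (40, 90), (75, 120)]

    # Logit-transform each proportion; var of logit(p) ~ 1/x + 1/(n - x)
    p = np.array([x / n for x, n in studies])
    y = np.log(p / (1 - p))
    v = np.array([1 / x + 1 / (n - x) for x, n in studies])

    # Fixed-effect weights and Cochran's Q
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)
    k = len(studies)

    # DerSimonian-Laird between-study variance tau^2 and I^2 heterogeneity
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    I2 = max(0.0, (Q - (k - 1)) / Q) * 100  # % of variance from heterogeneity

    # Random-effects pooled estimate and 95% CI on the logit scale
    w_re = 1 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re

    # Back-transform to proportions
    expit = lambda z: 1 / (1 + np.exp(-z))
    print(f"Pooled accuracy: {expit(y_re):.1%} "
          f"(95% CI {expit(lo):.1%}-{expit(hi):.1%}), I2 = {I2:.0f}%")

An I² near 87%, as reported above, indicates that most of the observed variation across studies reflects genuine between-study heterogeneity rather than sampling error, which is why a random-effects model is the natural choice here.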

CONCLUSION:

This review reveals ChatGPT's potential in addressing medical inquiries, but heterogeneity in study designs and insufficient reporting may affect the reliability of the results. Our proposed evaluation framework offers guidance for future study design and transparent reporting of LLMs in responding to medical questions.

Full text: 1 | Database: MEDLINE | Main subject: Artificial Intelligence / Communication | Language: English | Publication year: 2024 | Document type: Article