Assessing unknown potential: quality and limitations of different large language models in the field of otorhinolaryngology.
Buhr, Christoph R; Smith, Harry; Huppertz, Tilman; Bahr-Hamm, Katharina; Matthias, Christoph; Cuny, Clemens; Snijders, Jan Phillipp; Ernst, Benjamin Philipp; Blaikie, Andrew; Kelsey, Tom; Kuhn, Sebastian; Eckrich, Jonas.
Affiliation
  • Buhr CR; Department of Otorhinolaryngology, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany.
  • Smith H; School of Medicine, University of St Andrews, St Andrews, UK.
  • Huppertz T; School of Computer Science, University of St Andrews, St Andrews, UK.
  • Bahr-Hamm K; Department of Otorhinolaryngology, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany.
  • Matthias C; Department of Otorhinolaryngology, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany.
  • Cuny C; Department of Otorhinolaryngology, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany.
  • Snijders JP; Outpatient Clinic, Clemens Cuny, Dieburg, Germany.
  • Ernst BP; Outpatient Clinic, Clemens Cuny, Dieburg, Germany.
  • Blaikie A; Department of Otorhinolaryngology, University Hospital Frankfurt, Frankfurt, Germany.
  • Kelsey T; School of Medicine, University of St Andrews, St Andrews, UK.
  • Kuhn S; School of Computer Science, University of St Andrews, St Andrews, UK.
  • Eckrich J; Institute for Digital Medicine, Philipps-University Marburg, University Hospital of Giessen and Marburg, Marburg, Germany.
Acta Otolaryngol ; 144(3): 237-242, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38781053
ABSTRACT

BACKGROUND:

Large Language Models (LLMs) might offer a solution to the shortage of trained health personnel, particularly in low- and middle-income countries. However, their strengths and weaknesses remain unclear.

AIMS/OBJECTIVES:

Here we benchmark different LLMs (Bard 2023.07.13, Claude 2, ChatGPT 4) against six consultants in otorhinolaryngology (ORL).

MATERIAL AND METHODS:

Case-based questions were extracted from the literature and German state examinations. Answers from Bard 2023.07.13, Claude 2, ChatGPT 4, and six ORL consultants were rated blindly on a 6-point Likert scale for medical adequacy, comprehensibility, coherence, and conciseness. Given answers were compared to validated answers and evaluated for hazards. A modified Turing test was performed and character counts were compared.

RESULTS:

LLMs' answers ranked inferior to consultants' in all categories. Yet, the difference between consultants and LLMs was marginal, with the clearest disparity in conciseness and the smallest in comprehensibility. Among the LLMs, Claude 2 was rated best in medical adequacy and conciseness. Consultants' answers matched the validated solution in 93% (228/246), ChatGPT 4 in 85% (35/41), Claude 2 in 78% (32/41), and Bard 2023.07.13 in 59% (24/41). Answers were rated as potentially hazardous in 10% (24/246) for ChatGPT 4, 14% (34/246) for Claude 2, 19% (46/246) for Bard 2023.07.13, and 6% (71/1230) for consultants.

CONCLUSIONS AND SIGNIFICANCE:

Despite the consultants' superior performance, LLMs show potential for clinical application in ORL. Future studies should assess their performance on a larger scale.
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Otolaryngology Country/Region as subject: Europe Language: English Publication year: 2024 Document type: Article Country of affiliation: Germany