Evaluating large language models on medical, lay language, and self-reported descriptions of genetic conditions.
Flaharty, Kendall A; Hu, Ping; Hanchard, Suzanna Ledgister; Ripper, Molly E; Duong, Dat; Waikel, Rebekah L; Solomon, Benjamin D.
Affiliations
  • Flaharty KA; Medical Genomics Unit, National Human Genome Research Institute, National Institutes of Health, 10 Center Dr, Bethesda, MD 20892, USA. Electronic address: kendall.flaharty@nih.gov.
  • Hu P; Medical Genomics Unit, National Human Genome Research Institute, National Institutes of Health, 10 Center Dr, Bethesda, MD 20892, USA.
  • Hanchard SL; Medical Genomics Unit, National Human Genome Research Institute, National Institutes of Health, 10 Center Dr, Bethesda, MD 20892, USA.
  • Ripper ME; Medical Genomics Unit, National Human Genome Research Institute, National Institutes of Health, 10 Center Dr, Bethesda, MD 20892, USA.
  • Duong D; Medical Genomics Unit, National Human Genome Research Institute, National Institutes of Health, 10 Center Dr, Bethesda, MD 20892, USA.
  • Waikel RL; Medical Genomics Unit, National Human Genome Research Institute, National Institutes of Health, 10 Center Dr, Bethesda, MD 20892, USA.
  • Solomon BD; Medical Genomics Unit, National Human Genome Research Institute, National Institutes of Health, 10 Center Dr, Bethesda, MD 20892, USA. Electronic address: solomonb@mail.nih.gov.
Am J Hum Genet; 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39146935
ABSTRACT
Large language models (LLMs) are generating interest in medical settings. For example, LLMs can respond coherently to medical queries by providing plausible differential diagnoses based on clinical notes. However, many questions remain to be explored, such as differences between open- and closed-source LLMs and LLM performance on queries from both medical and non-medical users. In this study, we assessed multiple LLMs, including Llama-2-chat, Vicuna, Medllama2, Bard/Gemini, Claude, ChatGPT-3.5, and ChatGPT-4, as well as non-LLM approaches (Google search and Phenomizer), on their ability to identify genetic conditions from textbook-like clinician questions and their corresponding layperson translations related to 63 genetic conditions. For open-source LLMs, larger models were more accurate than smaller ones: 7b-, 13b-, and larger-than-33b-parameter models obtained accuracy ranges of 21%-49%, 41%-51%, and 54%-68%, respectively. Closed-source LLMs outperformed open-source LLMs, with ChatGPT-4 performing best (89%-90%). Three of 11 LLMs and Google search had significant performance gaps between clinician and layperson prompts. We also evaluated how in-context prompting and keyword removal affected open-source LLM performance. Models were provided with two types of in-context prompts: list-type prompts, which improved LLM performance, and definition-type prompts, which did not. We further analyzed removal of rare terms from descriptions, which decreased accuracy for 5 of 7 evaluated LLMs. Finally, we observed much lower performance with real individuals' self-reported descriptions; LLMs answered these questions with a maximum of 21% accuracy.
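For context, the sketch below shows how a "list-type" in-context prompt, as evaluated in the abstract, might be issued to an open-source chat LLM via the Hugging Face transformers pipeline. This is a minimal illustration, not the study's actual code: the model checkpoint, condition subset, vignette, and prompt wording are all assumptions invented for demonstration.

```python
# Illustrative sketch only (not the study's materials): querying an
# open-source chat LLM with a "list-type" in-context prompt, i.e.
# prepending the candidate condition names so the model chooses from
# a fixed list rather than answering open-endedly.
from transformers import pipeline

# Hypothetical subset of the 63 conditions; the full study list is not shown here.
CONDITIONS = ["Marfan syndrome", "Noonan syndrome", "Williams syndrome"]

# An invented textbook-like clinician vignette for illustration.
VIGNETTE = (
    "A tall adolescent presents with arachnodactyly, ectopia lentis, "
    "and a dilated aortic root. What is the most likely genetic condition?"
)

def build_list_prompt(vignette: str, conditions: list[str]) -> str:
    """Build a list-type prompt by prepending the candidate conditions."""
    menu = "\n".join(f"- {c}" for c in conditions)
    return (
        "Choose the single most likely diagnosis from this list:\n"
        f"{menu}\n\n{vignette}\nAnswer with the condition name only."
    )

# Assumed checkpoint; Llama-2 weights require gated access on the Hub.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

# Greedy decoding with a short generation budget for a name-only answer.
output = generator(
    build_list_prompt(VIGNETTE, CONDITIONS),
    max_new_tokens=32,
    do_sample=False,
)
print(output[0]["generated_text"])
```

Scoring in this style of evaluation typically amounts to matching the predicted condition name against the ground-truth label and aggregating into an accuracy per model and prompt type.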
Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: En | Journal: Am J Hum Genet | Publication year: 2024 | Document type: Article