Fine-tuning large language models for rare disease concept normalization.
Wang, Andy; Liu, Cong; Yang, Jingye; Weng, Chunhua.
Affiliations
  • Wang A; Peddie School, Hightstown, NJ 08520, United States.
  • Liu C; Department of Biomedical Informatics, Columbia University, New York, NY 10032, United States.
  • Yang J; Department of Mathematics, University of Pennsylvania, Philadelphia, PA 19104, United States.
  • Weng C; Department of Biomedical Informatics, Columbia University, New York, NY 10032, United States.
J Am Med Inform Assoc ; 31(9): 2076-2083, 2024 Sep 01.
Article in En | MEDLINE | ID: mdl-38829731
ABSTRACT

OBJECTIVE:

We aim to develop a novel method for rare disease concept normalization by fine-tuning Llama 2, an open-source large language model (LLM), using a domain-specific corpus sourced from the Human Phenotype Ontology (HPO).

METHODS:

We developed an in-house template-based script to generate two corpora for fine-tuning. The first (NAME) contains standardized HPO names, sourced from the HPO vocabularies, along with their corresponding identifiers. The second (NAME+SYN) additionally includes half of each concept's synonyms, also paired with their identifiers. Subsequently, we fine-tuned Llama 2 (Llama2-7B) on each corpus and evaluated the models using a range of sentence prompts and various phenotype terms.
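The template-based corpus generation described above can be sketched as follows. This is a minimal illustration, not the authors' script: the sample concepts, sentence templates, and function names are hypothetical, and a real run would draw concepts from the full HPO release rather than a hard-coded dictionary.

```python
import random

# Hypothetical sample of HPO concepts; a real corpus would be built from the
# full Human Phenotype Ontology release (e.g., hp.obo or hp.json).
HPO_SAMPLE = {
    "HP:0001250": {"name": "Seizure", "synonyms": ["Epileptic seizure", "Seizures"]},
    "HP:0000365": {"name": "Hearing impairment", "synonyms": ["Deafness", "Hearing defect"]},
}

# Hypothetical sentence templates pairing a phenotype term with its HPO ID.
TEMPLATES = [
    "The HPO identifier for {term} is {hpo_id}.",
    "{term} is normalized to the identifier {hpo_id}.",
]

def build_corpus(concepts, include_synonyms=False, seed=0):
    """Generate fine-tuning sentences.

    NAME corpus: standardized names only (include_synonyms=False).
    NAME+SYN corpus: names plus half of each concept's synonyms.
    """
    rng = random.Random(seed)
    sentences = []
    for hpo_id, entry in concepts.items():
        terms = [entry["name"]]
        if include_synonyms:
            syns = entry["synonyms"]
            # Sample half of the synonyms, as in the NAME+SYN setup.
            terms += rng.sample(syns, k=len(syns) // 2)
        for term in terms:
            template = rng.choice(TEMPLATES)
            sentences.append(template.format(term=term, hpo_id=hpo_id))
    return sentences

if __name__ == "__main__":
    for line in build_corpus(HPO_SAMPLE, include_synonyms=True):
        print(line)
```

Each generated sentence binds one surface term to its HPO ID, so the fine-tuned model learns the term-to-identifier mapping directly from the text.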

RESULTS:

When the phenotype terms for normalization were included in the fine-tuning corpora, both models demonstrated nearly perfect performance, averaging over 99% accuracy. In comparison, ChatGPT-3.5 achieved only ∼20% accuracy in identifying HPO IDs for phenotype terms. When single-character typos were introduced into the phenotype terms, the accuracy of NAME and NAME+SYN dropped to 10.2% and 36.1%, respectively, but increased to 61.8% (NAME+SYN) with additional typo-specific fine-tuning. For terms sourced from the HPO vocabularies as unseen synonyms, the NAME model achieved 11.2% accuracy, while the NAME+SYN model achieved 92.7% accuracy.

CONCLUSION:

Our fine-tuned models demonstrate the ability to normalize phenotype terms unseen in the fine-tuning corpus, including misspellings, synonyms, terms from other ontologies, and lay terms. Our approach provides a solution for using LLMs to identify named medical entities in clinical narratives while normalizing them to standard concepts in a controlled vocabulary.

Full text: 1 Database: MEDLINE Main subject: Phenotype / Natural Language Processing / Controlled Vocabulary / Rare Diseases / Biological Ontologies Language: En Publication year: 2024 Document type: Article