VetLLM: Large Language Model for Predicting Diagnosis from Veterinary Notes.
Pac Symp Biocomput; 29: 120-133, 2024.
Article in English | MEDLINE | ID: mdl-38160274
ABSTRACT
Lack of diagnosis coding is a barrier to leveraging veterinary notes for medical and public health research. Previous work has been limited to developing specialized rule-based or customized supervised learning models to predict diagnosis codes, which is tedious and not easily transferable. In this work, we show that open-source large language models (LLMs) pretrained on general corpora can achieve reasonable performance in a zero-shot setting. Alpaca-7B achieves a zero-shot F1 of 0.538 on CSU test data and 0.389 on PP test data, two standard benchmarks for coding from veterinary notes. Furthermore, with appropriate fine-tuning, the performance of LLMs can be substantially boosted, exceeding that of strong state-of-the-art supervised models. VetLLM, fine-tuned from Alpaca-7B using just 5,000 veterinary notes, achieves an F1 of 0.747 on CSU test data and 0.637 on PP test data. Notably, our fine-tuning is data-efficient: using only 200 notes, it can outperform supervised models trained with more than 100,000 notes. These findings demonstrate the great potential of leveraging LLMs for language processing tasks in medicine, and we advocate this new paradigm for processing clinical text.
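As a rough illustration of the zero-shot setting described above, the sketch below shows how a veterinary note might be wrapped in an Alpaca-style instruction prompt and how the model's free-text answer could be parsed into diagnosis labels. The prompt wording, the semicolon-separated output convention, and the parsing logic are assumptions for illustration, not the authors' actual pipeline; the model call itself is shown only as a comment.

```python
# Hedged sketch of zero-shot diagnosis prediction from a veterinary note
# with an instruction-tuned LLM such as Alpaca-7B. Prompt template and
# output format are illustrative assumptions, not the VetLLM pipeline.

def build_prompt(note: str) -> str:
    """Wrap a clinical note in an Alpaca-style instruction prompt."""
    return (
        "Below is an instruction that describes a task, paired with an "
        "input that provides further context.\n\n"
        "### Instruction:\nList the diagnoses documented in this "
        "veterinary note, separated by semicolons.\n\n"
        f"### Input:\n{note}\n\n### Response:\n"
    )

def parse_diagnoses(response: str) -> list[str]:
    """Split the model's free-text answer into normalized diagnosis labels."""
    return [d.strip().lower() for d in response.split(";") if d.strip()]

# In practice the prompt would be sent to a model, e.g. via the Hugging
# Face transformers text-generation pipeline (checkpoint name is a
# placeholder, not executed here):
#   pipe = pipeline("text-generation", model="<alpaca-7b checkpoint>")
#   response = pipe(build_prompt(note))[0]["generated_text"]
response = "Otitis externa; Dental disease"  # stand-in model output
print(parse_diagnoses(response))             # ['otitis externa', 'dental disease']
```

Zero-shot F1 can then be computed by comparing the parsed labels against gold diagnosis codes; fine-tuning (as done for VetLLM) uses the same prompt format with the gold answer filled in after "### Response:".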
Collections: 01-internacional
Database: MEDLINE
Main subject: Camelids, New World
Limits: Animals / Humans
Language: En
Journal: Pac Symp Biocomput
Journal subject: BIOTECHNOLOGY / MEDICAL INFORMATICS
Publication year: 2024
Document type: Article
Affiliation country: United States