Enhancing Clinical Relevance of Pretrained Language Models Through Integration of External Knowledge: Case Study on Cardiovascular Diagnosis From Electronic Health Records.
Lu, Qiuhao; Wen, Andrew; Nguyen, Thien; Liu, Hongfang.
Affiliation
  • Lu Q; McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, United States.
  • Wen A; Department of AI and Informatics, Mayo Clinic, Rochester, MN, United States.
  • Nguyen T; Department of Computer Science, University of Oregon, Eugene, OR, United States.
  • Liu H; McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, United States.
JMIR AI; 3: e56932, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39106099
ABSTRACT

BACKGROUND:

Despite their growing use in health care, pretrained language models (PLMs) often lack clinical relevance due to insufficient domain expertise and poor interpretability. A key strategy to overcome these challenges is integrating external knowledge into PLMs, enhancing their adaptability and clinical usefulness. Current biomedical knowledge graphs like UMLS (Unified Medical Language System), SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms), and HPO (Human Phenotype Ontology), while comprehensive, fail to effectively connect general biomedical knowledge with physician insights. There is an equally important need for a model that integrates diverse knowledge in a way that is both unified and compartmentalized. This approach not only addresses the heterogeneous nature of domain knowledge but also recognizes the unique data and knowledge repositories of individual health care institutions, necessitating careful and respectful management of proprietary information.

OBJECTIVE:

This study aimed to enhance the clinical relevance and interpretability of PLMs by integrating external knowledge in a manner that respects the diversity and proprietary nature of health care data. We hypothesize that domain knowledge, when captured and distributed as stand-alone modules, can be effectively reintegrated into PLMs to significantly improve their adaptability and utility in clinical settings.

METHODS:

We demonstrate that adapters, small and lightweight neural network modules that integrate extra information without full fine-tuning of the model, can inject diverse sources of external domain knowledge into language models and improve overall performance with an increased level of interpretability. As a practical application of this methodology, we introduce a novel task, structured as a case study, that aims to capture physician knowledge in assigning cardiovascular diagnoses from clinical narratives: we extract diagnosis-comment pairs from electronic health records (EHRs) and cast the problem as text classification.
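The abstract does not give the exact architecture, but the general adapter pattern it builds on (bottleneck modules in the style of Houlsby et al) can be sketched as follows; the class name, dimensions, and activation here are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class BottleneckAdapter(nn.Module):
        """Small bottleneck network inserted inside a frozen transformer layer.

        Only these weights are trained; the PLM itself stays fixed, so each
        knowledge source can be captured and shipped as a stand-alone module.
        NOTE: illustrative sketch; names and sizes are assumptions.
        """
        def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
            super().__init__()
            self.down = nn.Linear(hidden_size, bottleneck)  # project down
            self.act = nn.ReLU()
            self.up = nn.Linear(bottleneck, hidden_size)    # project back up

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # Residual connection: the adapter learns a small correction
            # on top of the frozen layer's output.
            return hidden_states + self.up(self.act(self.down(hidden_states)))

In this setup, one adapter would be trained per knowledge source (eg, UMLS relations or the extracted diagnosis-comment pairs) while the underlying BERT weights remain frozen, which is what makes the modules distributable without sharing institutional data.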

RESULTS:

The study demonstrates that integrating domain knowledge into PLMs significantly improves their performance. While improvements with ClinicalBERT are more modest, likely because it was already pretrained on clinical text, BERT (Bidirectional Encoder Representations from Transformers) equipped with knowledge adapters surprisingly matches or exceeds ClinicalBERT on several metrics. This underscores the effectiveness of knowledge adapters and highlights their potential in settings with strict data privacy constraints. The approach also makes these models more interpretable in a clinical context: it lets us identify and apply the domain knowledge most relevant to a specific task, optimizing performance and tailoring the model to specific clinical needs.
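The interpretability claim plausibly rests on inspecting how the model weighs each knowledge module per input. A minimal sketch of such a fusion step, in the spirit of AdapterFusion (Pfeiffer et al) and with hypothetical names, rather than the authors' confirmed mechanism:

    import torch
    import torch.nn as nn

    class AdapterFusion(nn.Module):
        """Attention over the outputs of several knowledge adapters.

        The learned weights show which knowledge source the model relied
        on for a given input, the basis of the interpretability argument.
        Illustrative sketch, not the authors' exact code.
        """
        def __init__(self, hidden_size: int = 768):
            super().__init__()
            self.query = nn.Linear(hidden_size, hidden_size)
            self.key = nn.Linear(hidden_size, hidden_size)

        def forward(self, hidden, adapter_outs):
            # hidden: (batch, seq, hidden); adapter_outs: list of like-shaped tensors
            stacked = torch.stack(adapter_outs, dim=2)            # (b, s, n, h)
            q = self.query(hidden).unsqueeze(2)                   # (b, s, 1, h)
            k = self.key(stacked)                                 # (b, s, n, h)
            scores = (q * k).sum(-1) / hidden.size(-1) ** 0.5     # (b, s, n)
            weights = scores.softmax(dim=-1)                      # per-token mixture
            fused = (weights.unsqueeze(-1) * stacked).sum(dim=2)  # (b, s, h)
            return fused, weights  # inspect weights to see which knowledge mattered

Averaging the returned weights over a validation set would give a per-task profile of which knowledge source the classifier leans on, matching the abstract's claim about identifying the most relevant domain knowledge for a given task.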

CONCLUSIONS:

This research provides a basis for creating health knowledge graphs infused with physician knowledge, marking a significant step forward for PLMs in health care. Notably, the model balances integrating knowledge both comprehensively and selectively, addressing the heterogeneous nature of medical knowledge and the privacy needs of health care institutions.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: English Journal: JMIR AI Year: 2024 Document type: Article Country of affiliation: United States Country of publication: Canada
