Enriching contextualized language model from knowledge graph for biomedical information extraction.
Brief Bioinform; 22(3), 2021 May 20.
Article in English | MEDLINE | ID: mdl-32591802
Biomedical information extraction (BioIE) is an important task that aims to analyze biomedical texts and extract structured information such as named entities and the semantic relations between them. In recent years, pre-trained language models have substantially improved the performance of BioIE. However, they neglect external structured knowledge, which can provide rich factual information to support the understanding and reasoning that underlie biomedical information extraction. In this paper, we first evaluate current extraction methods, including vanilla neural networks, general language models and pre-trained contextualized language models, on biomedical information extraction tasks such as named entity recognition, relation extraction and event extraction. We then propose to enrich a contextualized language model by integrating large-scale biomedical knowledge graphs (yielding BioKGLM). To encode knowledge effectively, we explore a three-stage training procedure and introduce different fusion strategies to facilitate knowledge injection. Experimental results on multiple tasks show that BioKGLM consistently outperforms state-of-the-art extraction models. Further analysis shows that BioKGLM captures the underlying relations between biomedical knowledge concepts, which are crucial for BioIE.
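The abstract describes fusing contextualized token embeddings with knowledge-graph entity embeddings. As an illustration only, the sketch below shows one generic fusion strategy (concatenate-then-project); the dimensions, the fusion operator and all variable names are assumptions for demonstration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper): contextual token
# embeddings from a language model and entity embeddings from a knowledge graph.
d_tok, d_ent, d_out, seq_len = 8, 4, 8, 5

token_emb = rng.normal(size=(seq_len, d_tok))   # contextualized token vectors
entity_emb = rng.normal(size=(seq_len, d_ent))  # linked KG entity vectors (could be zero if no entity link)

# One simple fusion strategy: concatenate both views of each token and
# project back to the model width with a learned linear map + nonlinearity.
W = rng.normal(size=(d_tok + d_ent, d_out)) * 0.1
b = np.zeros(d_out)

fused = np.tanh(np.concatenate([token_emb, entity_emb], axis=-1) @ W + b)
print(fused.shape)  # (5, 8): one knowledge-enriched vector per token
```

In an actual model the projection would be trained jointly with the downstream extraction objective; this snippet only demonstrates the shape of such a fusion layer.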
Full text: 1
Collections: 01-international
Database: MEDLINE
Main subject: Natural Language Processing / Neural Networks, Computer / Data Mining
Language: English
Journal: Brief Bioinform
Journal subject: Biology / Medical Informatics
Publication year: 2021
Document type: Article