Brief Bioinform ; 23(4), 2022 Jul 18.
Article in English | MEDLINE | ID: mdl-35849019

ABSTRACT

Medical Dialogue Information Extraction (MDIE) is a promising task for modern medical care systems, and it greatly facilitates the development of many real-world applications such as electronic medical record generation and automatic disease diagnosis. Recent methods have achieved considerable performance in Chinese MDIE but still suffer from inherent limitations, such as poor exploitation of the inter-dependencies among multiple utterances and weak discrimination of hard samples. In this paper, we propose a contrastive multi-utterance inference (CMUI) method to address these issues. Specifically, we first use a type-aware encoder to provide an efficient encoding mechanism for the different categories. We then introduce a selective attention mechanism that explicitly captures the dependencies among utterances, thereby enabling multi-utterance inference. Finally, a supervised contrastive learning approach is integrated into our framework to improve the recognition of hard samples. Extensive experiments show that our model achieves state-of-the-art performance on a public Chinese benchmark dataset and delivers significant performance gains on MDIE compared with baselines; specifically, we outperform the previous state-of-the-art results by 2.27% in F1-score, 0.55% in Recall and 3.61% in Precision. (The code that supports the findings of this study is openly available in CMUI at https://github.com/jc4357/CMUI.)
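The authors' implementation is available at the GitHub link above; the abstract does not give the exact formulation of their contrastive objective, so the following is only a minimal, generic sketch of a supervised contrastive loss (assumed here to follow the common Khosla et al., 2020 form) applied to utterance embeddings, not the paper's own code:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Generic supervised contrastive loss: for each anchor, pull
    same-label samples closer and push different-label samples away.
    This is an illustrative sketch, not the CMUI authors' implementation."""
    z = F.normalize(embeddings, dim=1)                 # (N, d) unit vectors
    sim = z @ z.t() / temperature                      # (N, N) scaled cosine similarities
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    # Positives share the anchor's label, excluding the anchor itself.
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # Denominator sums over all samples except the anchor itself.
    exp_sim = sim.exp().masked_fill(self_mask, 0.0)
    log_prob = sim - exp_sim.sum(dim=1, keepdim=True).log()

    # Average log-probability of positives for anchors that have at least one.
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos_mask).sum(dim=1)[has_pos] / pos_counts[has_pos]
    return -mean_log_prob_pos.mean()

# Toy usage: four utterance embeddings from two hypothetical label categories.
emb = torch.randn(4, 16, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1])
loss = supervised_contrastive_loss(emb, labels)
loss.backward()
```

In a setup like the one the abstract describes, such a loss would typically be added to the standard extraction objective so that hard samples of the same category are drawn together in embedding space; the weighting and sampling details are not specified in the abstract.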


Subjects
Deep Learning , Information Storage and Retrieval , Benchmarking , China , Electronic Health Records