Results 1-2 of 2
1.
Sci Rep; 12(1): 20349, 2022 Nov 27.
Article in English | MEDLINE | ID: mdl-36437277

ABSTRACT

Current multiturn dialogue models generate human-like responses with pretrained language models conditioned on the dialogue history. However, most existing models simply concatenate the dialogue history, which makes it difficult to maintain consistency throughout the generated text. We speculate that this is because the encoder ignores the hierarchical structure between sentences. In this paper, we propose a novel multiturn dialogue generation model that captures contextual information at both the sentence level and the discourse level during encoding. Context semantics are modeled dynamically through a difference-aware module, and a sentence order prediction training task learns representations by reconstructing the order of shuffled sentences with a learning-to-rank algorithm. Experiments on the multiturn dialogue dataset DailyDialog demonstrate that our model substantially outperforms the baseline on both automatic and human evaluation metrics, generating more fluent and informative responses.
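The paper's exact model is not reproduced in the abstract; as a rough illustration, the sentence order prediction objective can be sketched as a pairwise learning-to-rank loss over per-sentence position scores (the encoder producing those scores, and the hinge form of the loss, are assumptions for this sketch):

```python
def pairwise_ranking_loss(scores, true_order, margin=1.0):
    """Pairwise hinge loss for sentence order prediction.

    scores[i]  : hypothetical model score for shuffled sentence i
                 (higher score = earlier in the original dialogue).
    true_order : true_order[i] is the original position of shuffled
                 sentence i (0 = first).
    For every pair (i, j) where sentence i originally preceded j, the
    loss penalises the model unless scores[i] exceeds scores[j] by at
    least `margin` (a standard pairwise learning-to-rank objective).
    """
    loss, pairs = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if true_order[i] < true_order[j]:  # i should rank before j
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

def reconstruct_order(scores):
    """Recover a predicted ordering by sorting sentences by score."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# Toy example: 4 shuffled sentences with known original positions.
true_order = [2, 0, 3, 1]
good_scores = [1.0, 3.0, 0.0, 2.0]  # consistent with true_order
assert reconstruct_order(good_scores) == [1, 3, 0, 2]
assert pairwise_ranking_loss(good_scores, true_order) == 0.0
```

In the paper the scores would come from the dialogue encoder, so minimizing this loss pushes the encoder to represent the discourse-level ordering of the history.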


Subjects
Language, Semantics, Humans, Learning, Algorithms
2.
Sci Rep; 11(1): 22750, 2021 Nov 23.
Article in English | MEDLINE | ID: mdl-34815423

ABSTRACT

Generating fluent, coherent, and informative text from structured data is called table-to-text generation. Copying words from the table is a common way to handle the out-of-vocabulary problem, but accurate copying is difficult to achieve. To overcome this problem, we propose an autoregressive Transformer-based framework that combines a copy mechanism with language modeling to generate target texts. First, to help the model learn the semantic relevance between table and text, we apply a word-transformation method that incorporates field and position information into the target text, so the model learns where to copy from. We then propose two auxiliary learning objectives: a table-text constraint loss, which models the table inputs effectively, and a copy loss, which encourages precise copying of word fragments from the table. Furthermore, we improve the text-search strategy to reduce the probability of generating incoherent and repetitive sentences. Experiments on two datasets show that the model outperforms the baseline. On WIKIBIO, BLEU improves from 45.47 to 46.87 and ROUGE from 41.54 to 42.28; on ROTOWIRE, the CO metric improves by 4.29% and BLEU by 1.93 points.
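The abstract does not spell out the copy mechanism's arithmetic; a minimal sketch, assuming a pointer-generator-style mixture (the toy probabilities, token names, and the scalar `p_gen` gate are illustrative, not the paper's values), shows how an out-of-vocabulary table entry can still be emitted by copying:

```python
def copy_mixture(p_vocab, attn, src_tokens, p_gen):
    """Mix generation and copying distributions, pointer-generator style.

    p_vocab    : dict token -> probability from the language-model softmax.
    attn       : attention weights over the source (table) tokens.
    src_tokens : source tokens, aligned with attn.
    p_gen      : scalar in [0, 1]; probability of generating vs. copying.
    Returns a distribution over the vocabulary plus any source tokens,
    so out-of-vocabulary table entries can be produced via the copy path.
    """
    out = {w: p_gen * p for w, p in p_vocab.items()}
    for w, a in zip(src_tokens, attn):
        out[w] = out.get(w, 0.0) + (1.0 - p_gen) * a
    return out

# Toy decoding step: "Paris" is absent from the generator vocabulary
# but present in the table, so only the copy path can produce it.
p_vocab = {"the": 0.6, "capital": 0.3, "is": 0.1}
dist = copy_mixture(p_vocab, attn=[0.9, 0.1],
                    src_tokens=["Paris", "France"], p_gen=0.5)
assert abs(dist["Paris"] - 0.45) < 1e-9   # 0.5 * 0.9 via copying
assert abs(sum(dist.values()) - 1.0) < 1e-9
```

The paper's copy loss would then directly supervise the copy-path probability mass on the table fragments that appear in the reference text; that supervision is what the auxiliary objective adds beyond this basic mixture.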
