Confusion2Vec 2.0: Enriching ambiguous spoken language representations with subwords.
Gurunath Shivakumar, Prashanth; Georgiou, Panayiotis; Narayanan, Shrikanth.
Affiliations
  • Gurunath Shivakumar P; Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California, United States of America.
  • Georgiou P; Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California, United States of America.
  • Narayanan S; Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California, United States of America.
PLoS One; 17(3): e0264488, 2022.
Article in En | MEDLINE | ID: mdl-35245327
Word vector representations enable machines to encode human language for spoken language understanding and processing. Confusion2vec, motivated by human speech production and perception, is a word vector representation that encodes the ambiguities present in human spoken language in addition to semantic and syntactic information. Confusion2vec provides a robust spoken language representation by considering inherent human language ambiguities. In this paper, we propose a novel word vector space estimation by unsupervised learning on lattices output by an automatic speech recognition (ASR) system. We encode each word in the Confusion2vec vector space by its constituent subword character n-grams. We show that the subword encoding helps better represent the acoustic perceptual ambiguities in human spoken language via information modeled on lattice-structured ASR output. The usefulness of the proposed Confusion2vec representation is evaluated using analogy and word similarity tasks designed to assess semantic, syntactic and acoustic word relations. We also show the benefits of subword modeling for acoustic ambiguity representation on the task of spoken language intent detection. The results significantly outperform existing word vector representations when evaluated on erroneous ASR outputs, providing relative improvements of up to 13.12% over the previous state of the art in intent detection on the ATIS benchmark dataset. We demonstrate that Confusion2vec subword modeling eliminates the need for retraining/adapting natural language understanding models on ASR transcripts.
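The abstract describes encoding each word by its constituent subword character n-grams, so that acoustically confusable or out-of-vocabulary words still receive usable vectors. The sketch below is a minimal fastText-style illustration of that composition idea, not the authors' implementation: the function names, the 3-6 n-gram range, and the 300-dimensional embedding table are assumptions for illustration only, and training on ASR lattices is not shown.

import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Extract character n-grams of a word, with boundary markers."""
    padded = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

def word_vector(word, ngram_table, dim=300):
    """Compose a word embedding by summing its subword n-gram vectors.

    Unseen n-grams are skipped, so out-of-vocabulary words and words that
    share acoustically confusable subword structure still map into the
    same vector space.
    """
    vec = np.zeros(dim)
    for gram in char_ngrams(word):
        if gram in ngram_table:
            vec += ngram_table[gram]
    return vec

# Hypothetical usage: in the paper, an n-gram embedding table would come
# from unsupervised training on lattice-structured ASR output; here we
# just assume such a table exists as a dict of n-gram -> vector.
# vec = word_vector("flight", ngram_table)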
Subjects

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Speech Perception / Language Limits: Humans Language: En Publication year: 2022 Document type: Article