Utterance Clustering Using Stereo Audio Channels.
Dong, Yingjun; MacLaren, Neil G; Cao, Yiding; Yammarino, Francis J; Dionne, Shelley D; Mumford, Michael D; Connelly, Shane; Sayama, Hiroki; Ruark, Gregory A.
Affiliations
  • Dong Y; Center for Collective Dynamics of Complex Systems, Binghamton University, State University of New York, Binghamton, NY 13902-6000, USA.
  • MacLaren NG; Department of Systems Science and Industrial Engineering, Binghamton University, State University of New York, Binghamton, NY 13902-6000, USA.
  • Cao Y; Center for Collective Dynamics of Complex Systems, Binghamton University, State University of New York, Binghamton, NY 13902-6000, USA.
  • Yammarino FJ; Bernard M. and Ruth R. Bass Center for Leadership Studies, School of Management, Binghamton University, State University of New York, Binghamton, NY, USA.
  • Dionne SD; Center for Collective Dynamics of Complex Systems, Binghamton University, State University of New York, Binghamton, NY 13902-6000, USA.
  • Mumford MD; Department of Systems Science and Industrial Engineering, Binghamton University, State University of New York, Binghamton, NY 13902-6000, USA.
  • Connelly S; Center for Collective Dynamics of Complex Systems, Binghamton University, State University of New York, Binghamton, NY 13902-6000, USA.
  • Sayama H; Bernard M. and Ruth R. Bass Center for Leadership Studies, School of Management, Binghamton University, State University of New York, Binghamton, NY, USA.
  • Ruark GA; Center for Collective Dynamics of Complex Systems, Binghamton University, State University of New York, Binghamton, NY 13902-6000, USA.
Comput Intell Neurosci ; 2021: 6151651, 2021.
Article in En | MEDLINE | ID: mdl-34616446
ABSTRACT
Utterance clustering is an actively researched topic in audio signal processing and machine learning. This study aims to improve the performance of utterance clustering by processing multichannel (stereo) audio signals. Processed audio signals were generated by combining the left- and right-channel audio signals in several different ways and then extracting embedded features (also called d-vectors) from the processed signals. The study applied Gaussian mixture models for supervised utterance clustering: in the training phase, a parameter-sharing Gaussian mixture model was trained for each speaker, and in the testing phase, the speaker with the maximum likelihood was selected as the detected speaker. Experiments with real audio recordings of multiperson discussion sessions showed that the proposed method using multichannel audio signals achieved significantly better performance than a conventional method with mono audio signals under more complicated conditions.
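
A minimal sketch of the pipeline described in the abstract, assuming d-vectors are extracted from the combined stereo channels with a pretrained speaker-embedding model. The channel-combination strategies and the embedding function below are illustrative placeholders, not the paper's exact choices, and the parameter-sharing aspect of the paper's Gaussian mixture model is not reproduced here.

import numpy as np
from sklearn.mixture import GaussianMixture

def combine_channels(left, right, mode="average"):
    # Combine left- and right-channel signals; "average" and "difference"
    # are two simple illustrative strategies (assumptions, not the paper's).
    if mode == "average":
        return 0.5 * (left + right)
    if mode == "difference":
        return left - right
    raise ValueError(f"unknown mode: {mode}")

def extract_dvector(signal, sample_rate):
    # Hypothetical placeholder: plug in a pretrained speaker-embedding
    # network that maps an utterance to a fixed-length d-vector.
    raise NotImplementedError

def train_speaker_gmms(dvectors_by_speaker, n_components=2, seed=0):
    # Fit one Gaussian mixture model per known speaker on that speaker's
    # training d-vectors (supervised setting, as in the abstract).
    gmms = {}
    for speaker, vectors in dvectors_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=seed)
        gmm.fit(np.vstack(vectors))
        gmms[speaker] = gmm
    return gmms

def detect_speaker(gmms, dvector):
    # Assign a test utterance to the speaker whose model yields the
    # maximum log-likelihood for its d-vector.
    x = np.asarray(dvector).reshape(1, -1)
    return max(gmms, key=lambda s: gmms[s].score(x))

In use, each test utterance would be channel-combined, embedded into a d-vector, and passed to detect_speaker, which returns the maximum-likelihood speaker label.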
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Cluster Analysis Language: En Publication year: 2021 Document type: Article