Results 1 - 2 of 2
1.
Biol Cybern; 116(4): 389-406, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35727351

ABSTRACT

In Bourlard and Kamp (Biol Cybern 59(4):291-294, 1988), it was theoretically proven that autoencoders (AE) with a single hidden layer (previously called "auto-associative multilayer perceptrons") were, in the best case, implementing singular value decomposition (SVD) Golub and Reinsch (Linear algebra, Singular value decomposition and least squares solutions, pp 134-151. Springer, 1971), equivalent to principal component analysis (PCA) Hotelling (J Educ Psychol 24(6/7):417-441, 1933); Jolliffe (Principal component analysis, Springer series in statistics, 2nd edn. Springer, New York). That is, AEs are able to derive the eigenvalues that represent the amount of variance covered by each component, even in the presence of nonlinear functions (sigmoid-like or otherwise) on their hidden units. Today, with the renewed interest in "deep neural networks" (DNN), multiple types of (deep) AEs are being investigated as an alternative to manifold learning Cayton (Univ California San Diego Tech Rep 12(1-17):1, 2005) for conducting nonlinear feature extraction or fusion, each with its own specific (expected) properties. Many of those AEs are currently being developed as powerful, nonlinear encoder-decoder models, or used to generate reduced and discriminant feature sets that are more amenable to different modeling and classification tasks. In this paper, we start by recalling and further clarifying the main conclusions of Bourlard and Kamp (Biol Cybern 59(4):291-294, 1988), supporting them with extensive empirical evidence that could not be provided previously (in 1988) due to dataset and processing limitations. Building on a full understanding of the underlying mechanisms, we show that it remains hard (although feasible) to go beyond the state-of-the-art PCA/SVD techniques for auto-association. Finally, we present a brief overview of the different autoencoder models that are mainly in use today and discuss their rationale, relations, and application areas.
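As an illustrative aside (not code from the cited paper), the NumPy sketch below trains a linear single-hidden-layer autoencoder by gradient descent and checks numerically that its decoder spans the same subspace as the top principal components obtained via SVD; all sizes, the learning rate, and the iteration count are assumptions chosen for the example.

```python
# Minimal sketch: linear single-hidden-layer autoencoder vs. PCA/SVD.
# All dimensions and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_hidden = 500, 20, 5

# Synthetic data with a dominant 5-dimensional linear structure plus noise.
latent = rng.normal(size=(n_samples, n_hidden))
mixing = rng.normal(size=(n_hidden, n_features))
X = latent @ mixing + 0.05 * rng.normal(size=(n_samples, n_features))
X -= X.mean(axis=0)                              # center, as PCA assumes

# PCA via SVD: the top right-singular vectors span the principal subspace.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_basis = Vt[:n_hidden].T                      # (n_features, n_hidden)

# Linear autoencoder x_hat = (x W_enc) W_dec trained by gradient descent
# on the mean squared reconstruction error.
W_enc = 0.1 * rng.normal(size=(n_features, n_hidden))
W_dec = 0.1 * rng.normal(size=(n_hidden, n_features))
lr = 5e-3
for _ in range(4000):
    H = X @ W_enc                                # hidden code
    err = H @ W_dec - X                          # reconstruction error
    grad_dec = H.T @ err / n_samples
    grad_enc = X.T @ (err @ W_dec.T) / n_samples
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Compare subspaces via principal angles: cosines close to 1 mean the AE
# decoder and the PCA basis span (up to rotation) the same subspace.
ae_basis, _ = np.linalg.qr(W_dec.T)
cosines = np.linalg.svd(pca_basis.T @ ae_basis, compute_uv=False)
print("reconstruction MSE:", float((err ** 2).mean()))
print("principal-angle cosines:", np.round(cosines, 3))
```

The abstract's point is that adding sigmoid-like nonlinearities on the hidden units does not improve on this best-case SVD/PCA solution, which is the conclusion the paper revisits empirically.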


Subjects
Neural Networks (Computer), Nonlinear Dynamics, Principal Component Analysis
2.
J Acoust Soc Am; 116(3): 1781-92, 2004 Sep.
Article in English | MEDLINE | ID: mdl-15478445

ABSTRACT

Numerous attempts have been made to find low-dimensional, formant-related representations of speech signals that are suitable for automatic speech recognition. However, it is often not known how these features behave in comparison with true formants. The purpose of this study was to compare two sets of automatically extracted formant-like features, i.e., robust formants and HMM2 features, to hand-labeled formants. The robust formant features were derived by means of the split Levinson algorithm, while the HMM2 features correspond to the frequency segmentation of speech signals obtained by two-dimensional hidden Markov models. Mel-frequency cepstral coefficients (MFCCs) were also included in the investigation as an example of state-of-the-art automatic speech recognition features. The feature sets were compared in terms of their performance on a vowel classification task. The speech data and hand-labeled formants that were used in this study are a subset of the American English vowels database presented in Hillenbrand et al. [J. Acoust. Soc. Am. 97, 3099-3111 (1995)]. Classification performance was measured on the original, clean data and in noisy acoustic conditions. When using clean data, the classification performance of the formant-like features compared very well to the performance of the hand-labeled formants in a gender-dependent experiment, but was inferior to the hand-labeled formants in a gender-independent experiment. The results that were obtained in noisy acoustic conditions indicated that the formant-like features used in this study are not inherently noise robust. For clean and noisy data, as well as for the gender-dependent and gender-independent experiments, the MFCCs achieved results equal or superior to those of the formant features, but at the price of a much higher feature dimensionality.
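As an illustrative aside (not the split Levinson or HMM2 procedures used in this study), the sketch below estimates formant-like frequencies from a synthetic vowel frame with the standard LPC root-finding method; the sampling rate, LPC order, and nominal formant frequencies/bandwidths are assumptions made for the example.

```python
# Hedged sketch: generic LPC root-finding formant estimation on a
# synthetic vowel-like frame. Parameter values are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

sr = 8000                                   # sampling rate in Hz
n = int(0.040 * sr)                         # one 40 ms analysis frame

# Crude /a/-like vowel: an impulse train driven through three resonators.
f0 = 120.0
x = (np.arange(n) % int(sr / f0) == 0).astype(float)
for fc, bw in [(730, 80), (1090, 90), (2440, 120)]:
    r = np.exp(-np.pi * bw / sr)
    theta = 2 * np.pi * fc / sr
    x = lfilter([1.0], [1.0, -2 * r * np.cos(theta), r * r], x)

# Pre-emphasis and windowing, as is standard before LPC analysis.
frame = np.append(x[0], x[1:] - 0.97 * x[:-1]) * np.hamming(n)

# Autocorrelation-method LPC: solve the Toeplitz normal equations.
order = 10
ac = np.correlate(frame, frame, mode="full")[n - 1:]
a = solve_toeplitz(ac[:order], ac[1:order + 1])   # predictor coefficients

# Formant candidates are the angles of the complex roots of A(z).
roots = np.roots(np.concatenate(([1.0], -a)))
roots = roots[np.imag(roots) > 0]                 # one root per conjugate pair
freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
print("formant candidates (Hz):", np.round(freqs).astype(int))
```

In practice such raw LPC roots include spurious candidates from spectral tilt, which is part of why robust formant tracking (and the noise behavior studied above) is nontrivial.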


Subjects
Phonetics, Speech Acoustics, Algorithms, Databases (Factual), Discriminant Analysis, Female, Humans, Male, Markov Chains, Models (Biological), Noise, Sex Factors