Results 1 - 3 of 3
1.
Entropy (Basel) ; 22(11)2020 Nov 13.
Article in English | MEDLINE | ID: mdl-33287062

ABSTRACT

This study examined the extreme learning machine (ELM) applied to the Wald test statistic for the model specification of the conditional mean, which we call the WELM testing procedure. The omnibus test statistics available in the literature weakly converge to a Gaussian stochastic process under the null that the model is correct, which makes their application inconvenient. By contrast, the WELM testing procedure is straightforwardly applicable for detecting model misspecification. We applied the WELM testing procedure to the sequential testing procedure formed by a set of polynomial models and estimated an approximate conditional expectation. We then conducted extensive Monte Carlo experiments to evaluate the performance of the sequential WELM testing procedure and verified that it consistently estimates the most parsimonious conditional mean when the set of polynomial models contains a correctly specified model. Otherwise, it consistently rejects all the models in the set.
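The ELM at the core of this procedure is standard: the hidden-layer weights are drawn at random and held fixed, and only the output weights are estimated by least squares. A minimal sketch (not the authors' implementation; the activation, hidden-layer size, and toy data are illustrative assumptions):

```python
import numpy as np

def elm_fit(X, y, n_hidden=30, seed=None):
    """Extreme learning machine: random fixed hidden layer,
    least-squares output weights."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(d, n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)        # random biases (never trained)
    H = np.tanh(X @ W + b)               # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only beta is estimated
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy usage: approximate a nonlinear conditional mean
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
W, b, beta = elm_fit(X, y, n_hidden=30, seed=1)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

Because only `beta` is estimated, the fitted model is linear in its parameters, which is what makes a classical Wald statistic on those coefficients tractable.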

2.
Neural Comput ; 24(1): 273-87, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22023196

ABSTRACT

We illustrate the need to use higher-order (specifically sixth-order) expansions in order to properly determine the asymptotic distribution of a standard artificial neural network test for neglected nonlinearity. The test statistic is a quasi-likelihood ratio (QLR) statistic designed to test whether the mean square prediction error improves by including an additional hidden unit with an activation function violating the no-zero condition in Cho, Ishida, and White (2011). This statistic is also shown to be asymptotically equivalent under the null to the Lagrange multiplier (LM) statistic of Luukkonen, Saikkonen, and Teräsvirta (1988) and Teräsvirta (1994). In addition, we compare the power properties of our QLR test to one satisfying the no-zero condition and find that the latter is not consistent for detecting a DGP with neglected nonlinearity violating an analogous no-zero condition, whereas our QLR test is consistent.
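The basic shape of a QLR-type statistic for neglected nonlinearity can be sketched as follows. This is an illustrative stand-in, not the paper's exact statistic: it compares the sum of squared residuals of the linear model against a model augmented with one tanh hidden unit, profiling the (unidentified under the null) hidden-unit weights over random draws; the DGPs and all tuning choices are assumptions:

```python
import numpy as np

def qlr_stat(X, y, n_draws=200, seed=0):
    """Sketch of a QLR-type statistic: n * log(SSR0 / SSR1), where SSR1
    adds one tanh hidden unit, profiled over random weight draws."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Z0 = np.column_stack([np.ones(n), X])          # linear (null) regressors
    b0, *_ = np.linalg.lstsq(Z0, y, rcond=None)
    ssr0 = np.sum((y - Z0 @ b0) ** 2)
    best = ssr0
    for _ in range(n_draws):
        gamma = rng.normal(size=d + 1)             # candidate hidden-unit weights
        h = np.tanh(Z0 @ gamma)                    # one extra hidden unit
        Z1 = np.column_stack([Z0, h])
        b1, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        best = min(best, np.sum((y - Z1 @ b1) ** 2))
    return n * np.log(ssr0 / best)

# toy check: the statistic should be larger under a nonlinear DGP
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 1))
y_lin = 1.0 + 0.5 * X[:, 0] + rng.normal(size=400)
y_nl = 1.0 + 0.5 * X[:, 0] + np.tanh(2 * X[:, 0]) + 0.5 * rng.normal(size=400)
s_lin, s_nl = qlr_stat(X, y_lin), qlr_stat(X, y_nl)
```

The paper's point is that the asymptotic null distribution of such a statistic is delicate: because the hidden-unit weights are unidentified under the null, naive chi-squared critical values do not apply, and a sixth-order expansion is needed to characterize the limit.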


Subjects
Algorithms; Neural Networks, Computer; Nonlinear Dynamics; Computer Simulation; Likelihood Functions; Models, Statistical; Models, Theoretical; Monte Carlo Method
3.
Neural Comput ; 23(5): 1133-86, 2011 May.
Article in English | MEDLINE | ID: mdl-21299425

ABSTRACT

Tests for regression neglected nonlinearity based on artificial neural networks (ANNs) have so far been studied by separately analyzing the two ways in which the null of regression linearity can hold. This implies that the asymptotic behavior of general ANN-based tests for neglected nonlinearity is still an open question. Here we analyze a convenient ANN-based quasi-likelihood ratio statistic for testing neglected nonlinearity, paying careful attention to both components of the null. We derive the asymptotic null distribution under each component separately and analyze their interaction. Somewhat remarkably, it turns out that the previously known asymptotic null distribution for the type 1 case still applies, but under somewhat stronger conditions than previously recognized. We present Monte Carlo experiments corroborating our theoretical results and showing that standard methods can yield misleading inference when our new, stronger regularity conditions are violated.
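The "two ways" the null of linearity can hold are easy to make concrete for a one-hidden-unit model. In this hedged sketch (parameter values are illustrative assumptions), the conditional mean is a + b*x + delta*tanh(g0 + g1*x); linearity holds either when delta = 0 (type 1, leaving g0 and g1 unidentified) or when g1 = 0 (type 2, the hidden unit collapses to a constant absorbed by the intercept, leaving delta unidentified):

```python
import numpy as np

def cond_mean(x, a, b, delta, g0, g1):
    """One-hidden-unit ANN conditional mean."""
    return a + b * x + delta * np.tanh(g0 + g1 * x)

x = np.linspace(-3, 3, 7)
lin = 1.0 + 0.5 * x                                   # target linear mean

# type 1 null: delta = 0, so (g0, g1) drop out entirely
m1 = cond_mean(x, 1.0, 0.5, 0.0, 2.0, -1.0)

# type 2 null: g1 = 0, so the unit is the constant tanh(g0),
# absorbed into the intercept (here a is shifted to compensate)
m2 = cond_mean(x, 1.0 - 0.3 * np.tanh(0.7), 0.5, 0.3, 0.7, 0.0)
```

Both parameterizations reproduce the same linear mean, which is why a general test must control the asymptotic behavior under each component of the null and under their interaction.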


Subjects
Artificial Intelligence; Neural Networks, Computer; Nonlinear Dynamics; Algorithms; Computer Simulation/standards; Mathematical Concepts; Models, Theoretical; Monte Carlo Method; Regression Analysis