A Universal Approximation Theorem for Mixture-of-Experts Models.
Nguyen, Hien D; Lloyd-Jones, Luke R; McLachlan, Geoffrey J.
Affiliation
  • Nguyen HD; School of Mathematics and Physics, University of Queensland, Brisbane, Queensland 4072, Australia hien1988@gmail.com.
  • Lloyd-Jones LR; Centre for Neurogenetics and Statistical Genetics, Queensland Brain Institute, University of Queensland, Brisbane, Queensland 4072, Australia l.lloydjones@uq.edu.au.
  • McLachlan GJ; School of Mathematics and Physics, University of Queensland, Brisbane, Queensland 4072, Australia g.mclachlan@uq.edu.au.
Neural Comput; 28(12): 2585-2593, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27626962
The mixture-of-experts (MoE) model is a popular neural network architecture for nonlinear regression and classification. The class of MoE mean functions is known to uniformly approximate any unknown target function, provided that the target function belongs to a sufficiently differentiable Sobolev space and that the domain of estimation is a compact unit hypercube. We provide an alternative result, which shows that the class of MoE mean functions is dense in the class of all continuous functions over arbitrary compact domains of estimation. Our result can be viewed as a universal approximation theorem for MoE models. The theorem we present allows MoE users to be confident in applying such models for estimation when data arise from nonlinear and nondifferentiable generative processes.
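The following is a minimal sketch of the kind of MoE mean function the abstract refers to, assuming softmax gating functions and affine expert means (standard modelling choices, not details taken from the paper). It fits a small MoE by gradient descent on squared error to the continuous but nondifferentiable target |x| on the compact domain [-1, 1], the sort of target covered by the density result rather than the earlier Sobolev-space result.

```python
# Illustrative sketch only: softmax gates, affine experts, plain gradient descent.
# None of these choices are prescribed by the paper; they simply instantiate the
# MoE mean-function class over a compact domain of estimation.
import numpy as np

rng = np.random.default_rng(0)

K = 8                                        # number of experts
x = np.linspace(-1.0, 1.0, 400)[:, None]     # compact domain of estimation
y = np.abs(x).ravel()                        # continuous, nondifferentiable target

# Gating parameters (softmax over K experts) and affine expert-mean parameters.
wg = rng.normal(size=(1, K)); bg = rng.normal(size=K)
we = rng.normal(size=(1, K)); be = rng.normal(size=K)

def moe_mean(x, wg, bg, we, be):
    """MoE mean function: sum_k gate_k(x) * (we_k * x + be_k)."""
    logits = x @ wg + bg
    logits -= logits.max(axis=1, keepdims=True)   # numerically stable softmax
    gates = np.exp(logits)
    gates /= gates.sum(axis=1, keepdims=True)
    experts = x @ we + be
    return (gates * experts).sum(axis=1), gates, experts

lr = 0.05
for step in range(5000):
    pred, gates, experts = moe_mean(x, wg, bg, we, be)
    err = pred - y                                # derivative of 0.5 * squared error
    # Gradients with respect to the expert-mean parameters.
    d_experts = gates * err[:, None]
    we -= lr * (x.T @ d_experts) / len(x)
    be -= lr * d_experts.mean(axis=0)
    # Gradients with respect to the gating parameters (softmax chain rule).
    d_logits = gates * (experts - pred[:, None]) * err[:, None]
    wg -= lr * (x.T @ d_logits) / len(x)
    bg -= lr * d_logits.mean(axis=0)

pred, _, _ = moe_mean(x, wg, bg, we, be)
print("max abs error on [-1, 1]:", np.abs(pred - y).max())
```

With more experts (larger K) the fit to such nonsmooth targets can be made tighter, which is the informal content of the density statement above; the sketch is only meant to show the model class, not the proof.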
Databases: MEDLINE Language: English Journal: Neural Comput Journal subject: Medical Informatics Year: 2016 Document type: Article Country of affiliation: Australia