Optimal Randomness for Stochastic Configuration Network (SCN) with Heavy-Tailed Distributions.
Niu, Haoyu; Wei, Jiamin; Chen, YangQuan.
Affiliation
  • Niu H; Electrical Engineering and Computer Science Department, University of California, Merced, CA 95340, USA.
  • Wei J; School of Telecommunications Engineering, Xidian University, No.2, Taibai Road, Xi'an 710071, Shaanxi, China.
  • Chen Y; Electrical Engineering and Computer Science Department, University of California, Merced, CA 95340, USA.
Entropy (Basel); 23(1); 2020 Dec 31.
Article in English | MEDLINE | ID: mdl-33396383
ABSTRACT
The Stochastic Configuration Network (SCN) has a powerful capability for regression and classification analysis. Traditionally, it is quite challenging to correctly determine an appropriate architecture for a neural network so that the trained model achieves excellent performance in both learning and generalization. Compared with known randomized learning algorithms for single-hidden-layer feed-forward neural networks, such as Randomized Radial Basis Function (RBF) Networks and the Random Vector Functional-Link (RVFL) network, the SCN randomly assigns the input weights and biases of the hidden nodes under a supervisory mechanism. Since the parameters in the hidden layer are randomly generated from a uniform distribution, it is hypothesized that there exists an optimal form of randomness. Heavy-tailed distributions have been shown to provide optimal randomness when searching for targets in an unknown environment. Therefore, in this research, the authors used heavy-tailed distributions to randomly initialize the weights and biases, to see whether the new SCN models can achieve better performance than the original SCN. Heavy-tailed distributions such as the Lévy distribution, the Cauchy distribution, and the Weibull distribution were used. Since some mixed distributions exhibit heavy-tailed properties, a mixture of Gaussian and Laplace distributions was also studied in this work. Experimental results showed improved performance for SCN with heavy-tailed distributions. For the regression model, SCN-Lévy, SCN-Mixture, SCN-Cauchy, and SCN-Weibull used fewer hidden nodes to achieve performance similar to the original SCN. For the classification model, SCN-Mixture, SCN-Lévy, and SCN-Cauchy achieved higher test accuracies of 91.5%, 91.7%, and 92.4%, respectively, all higher than the test accuracy of the original SCN.
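As an illustration only (this record contains no code), the following Python sketch shows one way candidate hidden-node weights and biases of an SCN-style randomized learner might be drawn from the heavy-tailed distributions named in the abstract instead of from a uniform distribution. The distribution parameters, the sign-symmetrization of the one-sided Lévy and Weibull draws, and the 50/50 Gaussian-Laplace mixture weighting are assumptions for demonstration, not values taken from the paper.

```python
# Minimal sketch (not the authors' code): drawing candidate hidden-node
# parameters from heavy-tailed distributions for an SCN-style learner.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_weights(dist: str, size: tuple) -> np.ndarray:
    """Draw random input weights/biases for one batch of candidate hidden nodes."""
    if dist == "uniform":          # baseline SCN: U(-1, 1)
        return rng.uniform(-1.0, 1.0, size)
    if dist == "cauchy":           # standard Cauchy, already symmetric
        return rng.standard_cauchy(size)
    if dist == "levy":             # Levy is one-sided; attach random signs (assumed scale)
        draws = stats.levy.rvs(loc=0.0, scale=0.01, size=size, random_state=rng)
        return draws * rng.choice([-1.0, 1.0], size)
    if dist == "weibull":          # Weibull(k=1.5, assumed) is one-sided; attach random signs
        return rng.weibull(1.5, size) * rng.choice([-1.0, 1.0], size)
    if dist == "mixture":          # assumed 50/50 Gaussian-Laplace mixture
        gauss = rng.normal(0.0, 1.0, size)
        laplace = rng.laplace(0.0, 1.0, size)
        pick = rng.random(size) < 0.5
        return np.where(pick, gauss, laplace)
    raise ValueError(f"unknown distribution: {dist}")

# Example: candidate weights and biases for 10 hidden nodes on 5-dimensional input.
W = sample_weights("levy", (10, 5))
b = sample_weights("levy", (10, 1))
# In an SCN, these candidates would then be screened by the supervisory
# (inequality) constraint before being added to the network.
```

In the SCN framework the sampling step above only produces candidates; the supervisory mechanism still decides which candidate nodes are accepted, so swapping the sampling distribution leaves the rest of the training procedure unchanged.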
Full text: 1 Collection: 01-international Database: MEDLINE Study type: Clinical_trials / Prognostic_studies Language: English Journal: Entropy (Basel) Year: 2020 Document type: Article Country of affiliation: United States