Results 1 - 4 of 4
1.
Sci Rep; 12(1): 21870, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36536058

ABSTRACT

The power spectrum of brain activity is composed of peaks at characteristic frequencies superimposed on a background that decays as a power law of the frequency, $1/f^{\beta}$, with an exponent $\beta$ close to 1 (pink noise). This exponent is predicted to be connected with the exponent $\gamma$ related to the scaling of the average size with the duration of avalanches of activity. "Mean field" models of neural dynamics, including the simple branching model and the fully connected stochastic Wilson-Cowan model, predict exponents $\beta$ and $\gamma$ equal or close to 2 at criticality (brown noise). Here we show that a 2D version of the stochastic Wilson-Cowan model, in which neuron connections decay exponentially with distance, is characterized by exponents $\beta$ and $\gamma$ markedly different from the mean-field values, around 1 and 1.3 respectively. The exponents $\tau$ and $\tau_t$ of the avalanche size and duration distributions, equal to 1.5 and 2 in mean field, decrease respectively to values near 1.4 and 1.5. This suggests that the model in finite dimension may belong to a different universality class.
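
In standard avalanche notation (an assumption, since the listing lost every inline formula above as "[Formula: see text]"), the exponents are defined and linked by the crackling-noise scaling relation:

\[
  S(f) \propto f^{-\beta}, \qquad
  P(S) \sim S^{-\tau}, \qquad
  P(T) \sim T^{-\tau_t}, \qquad
  \langle S \rangle(T) \sim T^{\gamma}, \qquad
  \gamma = \frac{\tau_t - 1}{\tau - 1}.
\]

The quoted values are mutually consistent under this relation: in mean field $(2 - 1)/(1.5 - 1) = 2$, while the reconstructed 2D values give $(1.5 - 1)/(1.4 - 1) = 1.25$, in line with the quoted $\gamma \approx 1.3$.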

2.
Neural Netw; 13(6): 651-65, 2000 Jul.
Article in English | MEDLINE | ID: mdl-10987518

ABSTRACT

This paper describes a neural network for approximating continuous and discontinuous mappings. The activation functions of the hidden nodes are Radial Basis Functions (RBFs) whose variances are learned by means of an evolutionary optimization strategy. A new incremental learning strategy is used to improve the network's performance. The strategy saves computational time because the network structure grows selectively and the learning algorithm keeps the effects of the activation functions local; furthermore, it requires no higher-order derivatives. An analysis of the learning capabilities and a comparison with other approaches reported in the literature have been performed. The resulting network is shown to improve on the approximation results reported both for continuous mappings and for mappings exhibiting a finite number of discontinuities.
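
As a concrete illustration of the selective-growth idea, the sketch below grows a one-dimensional Gaussian RBF approximator one hidden unit at a time, placing each new center at the worst-fit point. The paper's evolutionary variance optimization is replaced here by a crude random width search, and every name and setting is an illustrative assumption, not the authors' method.

    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian(x, c, s):
        # Local Gaussian basis function centered at c with width s.
        return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

    def fit(x, y, centers, widths):
        # Least-squares output weights for the current hidden layer plus a bias.
        cols = [gaussian(x, c, s) for c, s in zip(centers, widths)]
        Phi = np.column_stack(cols + [np.ones_like(x)])
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return Phi @ w

    # Target mapping with a jump discontinuity at x = 0.
    x = np.linspace(-1.0, 1.0, 200)
    y = np.sin(3.0 * x) + (x > 0.0)

    centers, widths, pred = [], [], np.zeros_like(y)
    for _ in range(12):                              # grow the net selectively
        c_new = x[int(np.argmax(np.abs(y - pred)))]  # worst-fit location
        best = (np.inf, None, None)
        for s in rng.uniform(0.02, 0.5, size=16):    # crude width search
            p = fit(x, y, centers + [c_new], widths + [s])
            mse = float(np.mean((y - p) ** 2))
            if mse < best[0]:
                best = (mse, s, p)
        mse, s, pred = best
        centers.append(c_new)
        widths.append(s)
        print(f"hidden units = {len(centers):2d}, mse = {mse:.5f}")

Keeping the widths small keeps each unit's effect local, which is what lets such a network place units right at a discontinuity without disturbing the fit elsewhere.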


Subject(s)
Algorithms; Brain Mapping; Neural Networks, Computer; Models, Neurological
3.
Neural Netw; 13(7): 719-29, 2000 Sep.
Article in English | MEDLINE | ID: mdl-11152204

ABSTRACT

The on-line learning of Radial Basis Function neural networks (RBFNs) is analyzed. Our approach makes use of a master equation that describes the dynamics of the probability density over weight space. An approximate solution of the master equation is obtained in the limit of a small learning rate. In this limit, the on-line learning dynamics is analyzed, and it is shown that, since fluctuations are small, the dynamics is well described by the evolution of the mean. This allows us to analyze the learning process of RBFNs in which the number of hidden nodes K is larger than the typically small number of input nodes N. The work complements previous analyses of on-line RBFNs (Phys. Rev. E 56 (1997) 907; Neural Comput. 9 (1997) 1601), in which RBFNs with N >> K were analyzed. The generalization error equation and the equations of motion of the weights are derived for generic RBF architectures and are numerically integrated in specific cases. The analytical results are confirmed by numerical simulations. Unlike the case N >> K, we find that the dynamics in the case N < K is not affected by the problems of symmetric phases and subsequent symmetry breaking.
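
A minimal teacher-student simulation can illustrate the small-learning-rate regime analyzed here, where a single noisy trajectory stays close to its mean. In the sketch below only the output weights of a student RBFN with more hidden nodes than inputs (K > N) are trained; this simplification and all parameter values are assumptions made for illustration, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(1)
    N, K = 2, 8                          # few inputs, more hidden nodes (K > N)
    eta, steps = 0.02, 20000             # small learning rate, on-line updates

    centers = rng.normal(size=(K, N))    # fixed Gaussian centers shared by both nets
    w_teacher = rng.normal(size=K)       # teacher's output weights (the target rule)
    w_student = np.zeros(K)

    def phi(x):
        # Gaussian activations of the K hidden nodes for one input pattern x.
        return np.exp(-0.5 * np.sum((centers - x) ** 2, axis=1))

    trace = np.empty(steps)
    for t in range(steps):
        x = rng.normal(size=N)           # a fresh example at every step (on-line)
        h = phi(x)
        delta = (w_student - w_teacher) @ h
        w_student -= eta * delta * h     # plain on-line gradient step
        trace[t] = 0.5 * delta ** 2      # instantaneous squared error

    # For small eta the fluctuations are small and the error trace decays
    # smoothly, matching the description in terms of the mean dynamics.
    print("mean error over first 1000 steps:", trace[:1000].mean())
    print("mean error over last 1000 steps: ", trace[-1000:].mean())

Rerunning with a larger eta makes the trace visibly noisier; that is the regime where the mean description stops being a good summary of a single trajectory.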


Subject(s)
Algorithms; Neural Networks, Computer; Online Systems; Stochastic Processes
4.
Article in English | MEDLINE | ID: mdl-11970491

ABSTRACT

The dynamics of on-line learning is investigated for structurally unrealizable tasks in the context of two-layer neural networks with an arbitrary number of hidden neurons. Within a statistical mechanics framework, a closed set of differential equations describing the learning dynamics is derived for the general case of unrealizable isotropic tasks. In the asymptotic regime, the dynamics can be solved analytically in the limit of a large number of hidden neurons, yielding analytical expressions for the residual generalization error, the optimal and critical asymptotic training parameters, and the corresponding prefactor of the generalization error decay.
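
In the notation standard in this literature (an assumption; the paper derives the exact quantities for isotropic unrealizable tasks), annealing the learning rate as $\eta(\alpha) = \eta_0/\alpha$, with $\alpha$ the number of examples per input weight, gives the asymptotic decay

\[
  \epsilon_g(\alpha) - \epsilon_{\min} \simeq \frac{c(\eta_0)}{\alpha},
\]

where $\epsilon_{\min}$ is the residual generalization error set by the unrealizable part of the task, the $1/\alpha$ decay sets in only for $\eta_0$ above a critical value, and an optimal $\eta_0$ minimizes the prefactor $c(\eta_0)$.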
