Results 1 - 5 of 5
1.
Phys Rev Lett ; 130(18): 187401, 2023 May 05.
Article in English | MEDLINE | ID: mdl-37204901

ABSTRACT

Topological signals, i.e., dynamical variables defined on the nodes, links, triangles, etc., of higher-order networks, are attracting increasing attention. However, the investigation of their collective phenomena is still in its infancy. Here we combine topology and nonlinear dynamics to determine the conditions for global synchronization of topological signals defined on simplicial or cell complexes. On simplicial complexes we show that a topological obstruction prevents odd-dimensional signals from globally synchronizing. We show, however, that cell complexes can overcome this obstruction, and that in some structures signals of any dimension can achieve global synchronization.
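A brief sketch may help fix ideas. The purely illustrative numpy snippet below builds the boundary matrices of a toy simplicial complex and the Hodge Laplacian acting on link signals; its kernel (harmonic signals) is the kind of topological obstruction alluded to above. The complex and all names are assumptions made for illustration, not taken from the paper.

```python
import numpy as np

# Toy simplicial complex: a filled triangle (nodes 0,1,2) plus a dangling node 3.
# Oriented boundary matrices: B1 maps links -> nodes, B2 maps triangles -> links.
# Links: (0,1), (1,2), (0,2), (2,3); triangle: (0,1,2).
B1 = np.array([
    [-1,  0, -1,  0],   # node 0
    [ 1, -1,  0,  0],   # node 1
    [ 0,  1,  1, -1],   # node 2
    [ 0,  0,  0,  1],   # node 3
])
B2 = np.array([
    [ 1],   # link (0,1)
    [ 1],   # link (1,2)
    [-1],   # link (0,2)
    [ 0],   # link (2,3)
])

# Hodge Laplacian acting on link signals (1-dimensional topological signals).
L1 = B1.T @ B1 + B2 @ B2.T

# Harmonic link signals span the kernel of L1; a nontrivial kernel is the kind of
# topological obstruction to global synchronization the abstract refers to.
eigvals = np.linalg.eigvalsh(L1)
n_harmonic = np.sum(np.isclose(eigvals, 0.0))
print("dimension of the harmonic space on links:", n_harmonic)
```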

2.
Phys Rev E ; 106(6-1): 064314, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36671168

ABSTRACT

The study of reaction-diffusion systems on networks is of paramount relevance for understanding nonlinear processes in systems where the topology is intrinsically discrete, such as the brain. Until now, reaction-diffusion systems have been studied only when species are defined on the nodes of a network. However, in a number of real systems, including, e.g., the brain and the climate, dynamical variables are defined not only on nodes but also on links, faces, and higher-dimensional cells of simplicial or cell complexes, leading to topological signals. In this work, we study reaction-diffusion processes of topological signals coupled through the Dirac operator. The Dirac operator allows topological signals of different dimensions to interact or cross-diffuse, as it projects the topological signals defined on simplices or cells of a given dimension onto simplices or cells one dimension up or one dimension down. Focusing on the framework involving nodes and links, we establish the conditions for the emergence of Turing patterns and show that the latter are never localized only on the nodes or only on the links of the network. Moreover, when the topological signals display a Turing pattern, their projection does as well. We validate the theory developed here on a benchmark network model and on square lattices with periodic boundary conditions.
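For concreteness, here is a minimal numpy sketch of a Dirac operator on a toy graph carrying signals on nodes and links (the graph and all names are illustrative assumptions, not the paper's setup): the off-diagonal blocks project node signals onto links and vice versa, and the square of the operator recovers the Laplacians that govern diffusion in each dimension.

```python
import numpy as np

# Toy graph: 4 nodes on a cycle, links (0,1), (1,2), (2,3), (3,0).
# B1 is the oriented incidence matrix mapping link signals to node signals.
B1 = np.array([
    [-1,  0,  0,  1],
    [ 1, -1,  0,  0],
    [ 0,  1, -1,  0],
    [ 0,  0,  1, -1],
])
n_nodes, n_links = B1.shape

# Dirac operator: the off-diagonal blocks couple node and link signals, so applying
# D projects a topological signal of one dimension onto the other.
D = np.block([
    [np.zeros((n_nodes, n_nodes)), B1],
    [B1.T, np.zeros((n_links, n_links))],
])

# Its square is block diagonal and recovers the node Laplacian and the link Laplacian,
# i.e. the diffusion operators of each dimension.
D2 = D @ D
print(np.allclose(D2[:n_nodes, :n_nodes], B1 @ B1.T))   # node Laplacian
print(np.allclose(D2[n_nodes:, n_nodes:], B1.T @ B1))   # link Laplacian
```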


Subject(s)
Diffusion, Nonlinear Dynamics
3.
Sci Rep ; 12(1): 11201, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35778586

ABSTRACT

Training of neural networks can be reformulated in spectral space by allowing the eigenvalues and eigenvectors of the network to act as the target of the optimization instead of the individual weights. Working in this setting, we show that the eigenvalues can be used to rank the importance of nodes within the ensemble. Indeed, we prove that sorting the nodes by their associated eigenvalues enables effective pre- and post-processing pruning strategies that yield massively compacted networks (in terms of the number of constituent neurons) with virtually unchanged performance. The proposed methods are tested on different architectures, with a single or multiple hidden layers, and against distinct classification tasks of general interest.
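As an illustration of the pruning idea, the hedged sketch below attaches a scalar eigenvalue to each hidden neuron and keeps only the neurons with the largest magnitudes; the parameterization and the ranking rule are assumptions for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectrally trained hidden layer: each hidden neuron j carries
# a scalar eigenvalue lam[j] that measures its contribution to the ensemble.
n_hidden = 100
lam = rng.normal(size=n_hidden)            # eigenvalues attached to hidden neurons
W = rng.normal(size=(n_hidden, 20))        # hypothetical hidden->output weights

# Rank neurons by eigenvalue magnitude and keep only the top-k.
k = 20
keep = np.argsort(np.abs(lam))[-k:]

# Compacted network: only the k highest-ranked hidden neurons survive.
W_pruned = W[keep, :]
print(W.shape, "->", W_pruned.shape)
```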


Subject(s)
Neural Networks, Computer
4.
Nat Commun ; 12(1): 1330, 2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33637729

ABSTRACT

Deep neural networks are usually trained in the space of the nodes, by adjusting the weights of existing links via suitable optimization protocols. Here we propose a radically new approach that anchors the learning process to reciprocal space. Specifically, the training acts on the spectral domain and seeks to modify the eigenvalues and eigenvectors of transfer operators in direct space. The proposed method is flexible and can be tailored to return either linear or nonlinear classifiers. Adjusting the eigenvalues while freezing the eigenvector entries yields performance superior to that attained with standard methods restricted to an identical number of free parameters. To recover a feed-forward architecture in direct space, we postulate a nested indentation of the eigenvectors. Different non-orthogonal bases could be employed to export spectral learning to other frameworks, e.g., reservoir computing.
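To make the spectral parameterization concrete, here is a minimal sketch loosely following the abstract's description; the nested-indentation structure, the dimensions, and all names below are illustrative assumptions rather than the paper's exact construction. The free parameters are the eigenvalues on the diagonal, while the eigenvector matrix is held fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two adjacent layers of an assumed feed-forward network.
n_in, n_out = 8, 4
N = n_in + n_out

# Eigenvector matrix: identity plus a lower-left block of eigenvector entries
# (a "nested indentation" of the basis); with this structure Phi_inv = 2*I - Phi.
Phi = np.eye(N)
Phi[n_in:, :n_in] = rng.normal(scale=0.1, size=(n_out, n_in))
Phi_inv = 2 * np.eye(N) - Phi
assert np.allclose(Phi @ Phi_inv, np.eye(N))

# Trainable eigenvalues: the free parameters of the spectral layer.
lam = rng.normal(size=N)
A = Phi @ np.diag(lam) @ Phi_inv

# The block mapping the input layer onto the output layer plays the role of the
# usual weight matrix in direct space.
W_effective = A[n_in:, :n_in]
x = rng.normal(size=n_in)
print(W_effective @ x)
```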

5.
Phys Rev E ; 104(5-1): 054312, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34942751

ABSTRACT

Deep neural networks can be trained in reciprocal space by acting on the eigenvalues and eigenvectors of suitable transfer operators in direct space. Adjusting the eigenvalues while freezing the eigenvectors yields a substantial compression of the parameter space, which then scales, by definition, with the number of computing neurons. The classification scores, as measured by accuracy, are however inferior to those attained when learning is carried out in direct space for an identical architecture with the full set of trainable parameters (which depends quadratically on the size of neighboring layers). In this paper, we propose a variant of the spectral learning method of Giambagli et al. [Nat. Commun. 12, 1330 (2021)] that leverages two sets of eigenvalues for each mapping between adjacent layers. The eigenvalues act as veritable knobs that can be freely tuned to (1) enhance, or alternatively silence, the contribution of the input nodes and (2) modulate the excitability of the receiving nodes, a mechanism we interpret as the artificial analog of homeostatic plasticity. The number of trainable parameters is still a linear function of the network size, but the performance of the trained device gets much closer to that obtained via conventional algorithms, which, however, come at a considerably heavier computational cost. The residual gap between conventional and spectral training can eventually be closed by employing a suitable decomposition of the nontrivial block of the eigenvector matrix. Each spectral parameter reflects back on the whole set of inter-node weights, an attribute we exploit to yield sparse networks with remarkable classification abilities as compared to their homologs trained by conventional means.
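A hedged sketch of the two-eigenvalue-set idea follows; it is one plausible reading of the abstract, and the exact functional form used in the paper may differ. One eigenvalue per input node gates its contribution, one per receiving node modulates its excitability, and a frozen eigenvector block combines them into effective weights, keeping the number of trainable parameters linear in the layer sizes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed adjacent-layer sizes and a frozen block of eigenvector entries.
n_in, n_out = 8, 4
phi = rng.normal(scale=0.1, size=(n_out, n_in))

# Two trainable eigenvalue sets per mapping:
lam_in = rng.normal(size=n_in)     # enhance or silence the input nodes
lam_out = rng.normal(size=n_out)   # excitability of the receiving nodes

# Effective weights: an outer difference of the two eigenvalue sets modulating phi.
# Trainable parameters: n_in + n_out (linear in network size) instead of
# n_in * n_out for a dense layer trained in direct space.
W = (lam_in[None, :] - lam_out[:, None]) * phi

x = rng.normal(size=n_in)
print(W @ x)
```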
