Results 1 - 4 of 4
1.
Cell Rep; 42(8): 112908, 2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37516963

ABSTRACT

Fear responses are functionally adaptive behaviors that are strengthened as memories. Detailed knowledge of the neural circuitry modulating fear memory could therefore be a turning point in understanding this emotion and its pathological states. A comprehensive understanding of the circuits mediating memory encoding, consolidation, and retrieval poses a fundamental technological challenge: analyzing activity across the entire brain at single-neuron resolution. In this context, we develop the brain-wide neuron quantification toolkit (BRANT) for mapping whole-brain neuronal activation at micron-scale resolution, combining tissue clearing, high-resolution light-sheet microscopy, and automated image analysis. The robustness and scalability of this method allow us to quantify the evolution of activity patterns across multiple phases of memory in mice. The approach reveals a strong sexual dimorphism in the recruited circuits that has no counterpart in behavior. The methodology presented here paves the way for a comprehensive characterization of the evolution of fear memory.
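
As a rough illustration of the automated-counting stage described above (a minimal sketch under our own assumptions, not the published BRANT pipeline), the following Python snippet detects putative activated-neuron components in a 3D light-sheet volume and tallies them per atlas region; the function name and thresholding scheme are hypothetical.

    import numpy as np
    from scipy import ndimage

    def count_active_neurons(volume, atlas_labels, threshold):
        """Toy stand-in for a brain-wide counting stage (illustrative only).

        volume: 3D array of fluorescence intensities.
        atlas_labels: 3D int array, same shape, region ID per voxel.
        threshold: intensity above which a voxel may belong to an active cell.
        """
        # Binarize and split into connected components, one per putative neuron.
        mask = volume > threshold
        labeled, n_cells = ndimage.label(mask)
        # Centroid of each component, rounded to voxel coordinates.
        centroids = ndimage.center_of_mass(mask, labeled, range(1, n_cells + 1))
        counts = {}
        for z, y, x in centroids:
            region = int(atlas_labels[int(z), int(y), int(x)])
            counts[region] = counts.get(region, 0) + 1
        return counts

    # Usage on synthetic data:
    rng = np.random.default_rng(0)
    vol = rng.random((64, 64, 64))
    atlas = np.zeros((64, 64, 64), dtype=int)
    atlas[32:] = 1  # two toy "regions"
    print(count_active_neurons(vol, atlas, threshold=0.999))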


Subjects
Brain, Sex Characteristics, Mice, Animals, Brain/physiology, Fear/physiology, Neurons/physiology
2.
Sci Rep; 12(1): 11201, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35778586

ABSTRACT

Training of neural networks can be reformulated in spectral space by letting the eigenvalues and eigenvectors of the network, rather than the individual weights, act as the targets of the optimization. Working in this setting, we show that the eigenvalues can be used to rank the importance of the nodes within the ensemble. Specifically, we prove that sorting the nodes by their associated eigenvalues enables effective pre- and post-processing pruning strategies, yielding massively compacted networks (in terms of the number of constituent neurons) with virtually unchanged performance. The proposed methods are tested on different architectures, with a single hidden layer or several, and on distinct classification tasks of general interest.
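
As a schematic rendering of the post-processing pruning step (a toy sketch; the names and shapes are our own assumptions, not the authors' code), suppose each hidden node carries a trained eigenvalue: pruning then reduces to keeping the nodes with the largest eigenvalue magnitudes and slicing the surrounding weight matrices accordingly.

    import numpy as np

    def spectral_prune(W_in, W_out, eigenvalues, keep_fraction=0.2):
        """Keep only the hidden nodes with the largest |eigenvalue|.

        W_in:  (hidden, input) weights into the hidden layer.
        W_out: (output, hidden) weights out of the hidden layer.
        eigenvalues: (hidden,) trained spectral parameters, one per node.
        """
        n_keep = max(1, int(keep_fraction * eigenvalues.size))
        # Rank nodes by eigenvalue magnitude, as in the abstract's sorting step.
        keep = np.argsort(np.abs(eigenvalues))[-n_keep:]
        return W_in[keep, :], W_out[:, keep], eigenvalues[keep]

    # Toy example: 100 hidden nodes, keep the 20 "most important" ones.
    rng = np.random.default_rng(1)
    W_in, W_out = rng.normal(size=(100, 784)), rng.normal(size=(10, 100))
    lam = rng.normal(size=100)
    W_in_p, W_out_p, lam_p = spectral_prune(W_in, W_out, lam)
    print(W_in_p.shape, W_out_p.shape)  # (20, 784) (10, 20)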


Subjects
Neural Networks, Computer
3.
Phys Rev E; 104(5-1): 054312, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34942751

ABSTRACT

Deep neural networks can be trained in reciprocal space by acting on the eigenvalues and eigenvectors of suitable transfer operators in direct space. Adjusting the eigenvalues while freezing the eigenvectors yields a substantial compression of the parameter space, which by definition scales with the number of computing neurons. The classification scores, as measured by the displayed accuracy, are however inferior to those attained when learning is carried out in direct space for an identical architecture with the full set of trainable parameters (whose number depends quadratically on the size of neighboring layers). In this paper, we propose a variant of the spectral learning method of Giambagli et al. [Nat. Commun. 12, 1330 (2021), doi: 10.1038/s41467-021-21481-0] that leverages two sets of eigenvalues for each mapping between adjacent layers. The eigenvalues act as veritable knobs that can be freely tuned to (1) enhance, or alternatively silence, the contribution of the input nodes and (2) modulate the excitability of the receiving nodes, a mechanism we interpret as the artificial analog of homeostatic plasticity. The number of trainable parameters is still a linear function of the network size, but the performance of the trained device gets much closer to that obtained via conventional algorithms, which, however, carry a considerably heavier computational cost. The residual gap between conventional and spectral training can eventually be closed by employing a suitable decomposition of the nontrivial block of the eigenvector matrix. Each spectral parameter reflects back on the whole set of inter-node weights, an attribute we effectively exploit to yield sparse networks with stunning classification abilities compared to their homologs trained by conventional means.
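
One plausible toy rendering of the two-eigenvalue parametrization described above (our own minimal reading, with invented names; see the paper for the exact operator construction) writes each inter-layer weight as w[i, j] = (lam_in[j] - lam_out[i]) * phi[i, j], where lam_in gates the contribution of the input nodes, lam_out modulates the excitability of the receiving nodes, and phi holds frozen eigenvector entries, so the trainable-parameter count grows linearly with layer width.

    import numpy as np

    class SpectralLayer:
        """Toy spectral parametrization of a dense layer (illustrative only).

        Weights are generated as w[i, j] = (lam_in[j] - lam_out[i]) * phi[i, j]:
        lam_in scales each input node's contribution, lam_out modulates the
        excitability of each receiving node, and phi holds the frozen
        eigenvector entries. Trainable parameters: n_in + n_out, i.e. linear
        in the layer size instead of quadratic.
        """

        def __init__(self, n_in, n_out, rng):
            self.phi = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)  # frozen
            self.lam_in = np.ones(n_in)     # trainable "input" eigenvalues
            self.lam_out = np.zeros(n_out)  # trainable "output" eigenvalues

        def weights(self):
            return (self.lam_in[None, :] - self.lam_out[:, None]) * self.phi

        def forward(self, x):
            return np.maximum(self.weights() @ x, 0.0)  # ReLU activation

    layer = SpectralLayer(784, 128, np.random.default_rng(2))
    y = layer.forward(np.random.default_rng(3).random(784))
    print(y.shape)  # (128,)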

4.
J Comput Neurosci; 49(2): 159-174, 2021 May.
Article in English | MEDLINE | ID: mdl-33826050

ABSTRACT

An inverse procedure is developed and tested to recover functional and structural information from global signals of brain activity. The method assumes a leaky integrate-and-fire model with excitatory and inhibitory neurons coupled via a directed network. Neurons are endowed with heterogeneous current values, which set their associated dynamical regimes. Making use of a heterogeneous mean-field approximation, the method seeks to reconstruct from global activity patterns the distribution of incoming degrees, for both excitatory and inhibitory neurons, as well as the distribution of the assigned currents. The proposed inverse scheme is first validated against synthetic data. Time-lapse acquisitions of a zebrafish larva recorded with a two-photon light-sheet microscope are then used as input to the reconstruction algorithm. A power-law distribution of the incoming connectivity of the excitatory neurons is found. Local degree distributions are also computed by segmenting the whole brain into sub-regions traced from an annotated atlas.
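
For concreteness, here is a bare-bones forward model of the kind assumed by the inverse procedure (a toy leaky integrate-and-fire simulation under our own parameter choices, not the authors' reconstruction code): excitatory and inhibitory neurons coupled through a directed random matrix, each driven by a heterogeneous current that fixes its dynamical regime.

    import numpy as np

    def simulate_lif(A, I, T=2000, dt=0.1, v_th=1.0, v_reset=0.0, tau=20.0):
        """Bare-bones leaky integrate-and-fire network (forward model only).

        A: (N, N) signed, directed coupling matrix; positive entries are
           excitatory synapses, negative entries inhibitory.
        I: (N,) heterogeneous input currents; strongly driven neurons fire
           tonically, the others only when pushed by the network.
        Returns the global activity trace (fraction of neurons spiking per step).
        """
        N = A.shape[0]
        v = np.zeros(N)
        activity = np.zeros(T)
        for t in range(T):
            spikes = v >= v_th
            v[spikes] = v_reset
            # Leak + heterogeneous drive + recurrent input from last spikes.
            v += dt * (-v / tau + I) + A @ spikes
            activity[t] = spikes.mean()
        return activity

    rng = np.random.default_rng(4)
    N = 200
    signs = np.where(rng.random(N) < 0.8, 1.0, -1.0)      # 80% excitatory
    A = 0.02 * signs[None, :] * (rng.random((N, N)) < 0.1)
    I = rng.uniform(0.0, 0.08, size=N)                    # heterogeneous currents
    print(simulate_lif(A, I).mean())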


Subjects
Models, Neurological; Zebrafish; Algorithms; Animals; Neurons