Results 1 - 12 of 12
1.
Proc Natl Acad Sci U S A ; 119(40): e2201854119, 2022 10 04.
Article in English | MEDLINE | ID: mdl-36161906

ABSTRACT

Exploiting data invariances is crucial for efficient learning in both artificial and biological neural circuits. Understanding how neural networks can discover appropriate representations capable of harnessing the underlying symmetries of their inputs is thus crucial in machine learning and neuroscience. Convolutional neural networks, for example, were designed to exploit translation symmetry, and their capabilities triggered the first wave of deep learning successes. However, learning convolutions directly from translation-invariant data with a fully connected network has so far proven elusive. Here we show how initially fully connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs, resulting in localized, space-tiling receptive fields. These receptive fields match the filters of a convolutional network trained on the same task. By carefully designing data models for the visual scene, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs, which has long been recognized as the hallmark of natural images. We provide an analytical and numerical characterization of the pattern formation mechanism responsible for this phenomenon in a simple model and find an unexpected link between receptive field formation and tensor decomposition of higher-order input correlations. These results provide a perspective on the development of low-level feature detectors in various sensory modalities and pave the way for studying the impact of higher-order statistics on learning in neural networks.
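The non-Gaussian, higher-order local structure invoked here can be probed with a one-line statistic. Below is a minimal sketch (the spike-and-slab input and all parameters are illustrative stand-ins, not the paper's data model) comparing the excess kurtosis of Gaussian inputs with that of a sparse, heavy-tailed input:

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x):
    """Excess kurtosis: zero for Gaussian data, positive for heavy-tailed data."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**4) - 3.0)

# Gaussian inputs carry no structure beyond their covariance.
gaussian = rng.normal(size=100_000)

# A toy sparse (spike-and-slab) input, a common stand-in for the heavy-tailed
# local statistics of natural images; scaled to unit variance.
mask = rng.random(100_000) < 0.1
sparse = np.where(mask, rng.normal(size=100_000) / np.sqrt(0.1), 0.0)

print(excess_kurtosis(gaussian))  # close to 0
print(excess_kurtosis(sparse))    # strongly positive
```

Only the second kind of input carries the higher-order correlations that, per the abstract, trigger receptive-field formation.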


Subject(s)
Machine Learning; Neural Networks, Computer; Neurosciences
2.
Proc Natl Acad Sci U S A ; 118(32)2021 08 10.
Article in English | MEDLINE | ID: mdl-34312253

ABSTRACT

Contact tracing is an essential tool to mitigate the impact of a pandemic, such as the COVID-19 pandemic. In order to achieve efficient and scalable contact tracing in real time, digital devices can play an important role. While a lot of attention has been paid to analyzing the privacy and ethical risks of the associated mobile applications, so far much less research has been devoted to optimizing their performance and assessing their impact on the mitigation of the epidemic. We develop Bayesian inference methods to estimate the risk that an individual is infected. This inference is based on the list of their recent contacts and the contacts' own risk levels, as well as personal information such as test results or the presence of symptoms. We propose to use probabilistic risk estimation to optimize testing and quarantining strategies for the control of an epidemic. Our results show that in some range of epidemic spreading (typically when the manual tracing of all contacts of infected people becomes practically impossible but before the fraction of infected people reaches the scale where a lockdown becomes unavoidable), this inference of individuals at risk could be an efficient way to mitigate the epidemic. Our approaches translate into fully distributed algorithms that only require communication between individuals who have recently been in contact. Such communication may be encrypted and anonymized and is thus compatible with privacy-preserving standards. We conclude that probabilistic risk estimation is capable of enhancing the performance of digital contact tracing and should be considered in mobile applications.
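As a rough illustration of propagating infection risk over recent contacts, here is a heavily simplified sketch; the update rule, `tau`, and the toy contact lists are illustrative assumptions, not the paper's Bayesian message-passing algorithm:

```python
from math import prod

def update_risks(prior, contacts, tau, n_iter=20):
    """Iteratively combine each individual's prior infection risk with the
    current risk estimates of their recent contacts.
    tau is an assumed per-contact transmission probability."""
    risk = dict(prior)
    for _ in range(n_iter):
        risk = {
            i: 1.0 - (1.0 - prior[i])
               * prod(1.0 - tau * risk[j] for j in contacts.get(i, ()))
            for i in prior
        }
    return risk

# Toy example: "a" has prior risk 0.5; "b" met "a"; "c" met "b".
risk = update_risks({"a": 0.5, "b": 0.0, "c": 0.0},
                    {"b": ["a"], "c": ["b"]}, tau=0.4)
print(risk)  # approximately a: 0.5, b: 0.2, c: 0.08
```

Each update only needs the risks of an individual's own contacts, which is the sense in which such schemes can run in a fully distributed way.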


Subject(s)
Contact Tracing/methods; Epidemics/prevention & control; Algorithms; Bayes Theorem; COVID-19/epidemiology; COVID-19/prevention & control; Contact Tracing/statistics & numerical data; Humans; Mobile Applications; Privacy; Risk Assessment; SARS-CoV-2
3.
PLoS Comput Biol ; 18(12): e1010590, 2022 12.
Article in English | MEDLINE | ID: mdl-36469504

ABSTRACT

Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.
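The internally generated chaotic variability described here can be seen in a minimal firing-rate simulation (network size, gain, and integration scheme below are illustrative choices, not the paper's model details): two copies of the same network, differing by a tiny perturbation, separate over time.

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, steps = 200, 2.0, 0.1, 600
# Strong random coupling (g > 1) puts the rate network in the chaotic regime.
J = g * rng.normal(size=(N, N)) / np.sqrt(N)

def step(x):
    # Euler integration of the firing-rate dynamics dx/dt = -x + J @ tanh(x).
    return x + dt * (-x + J @ np.tanh(x))

x1 = rng.normal(size=N)
x2 = x1 + 1e-8 * rng.normal(size=N)  # near-identical copy of the same state
for _ in range(steps):
    x1, x2 = step(x1), step(x2)

divergence = np.linalg.norm(x1 - x2)
print(divergence)  # far larger than the initial ~1e-7 separation
```

The paper's question is how external drive, common versus independent across neurons, tames exactly this kind of trajectory divergence.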


Subject(s)
Models, Neurological; Nerve Net; Action Potentials/physiology; Nerve Net/physiology; Neurons/physiology; Learning
4.
PLoS Comput Biol ; 16(12): e1008536, 2020 12.
Article in English | MEDLINE | ID: mdl-33370266

ABSTRACT

Characterizing the relation between weight structure and input/output statistics is fundamental for understanding the computational capabilities of neural circuits. In this work, I study the problem of storing associations between analog signals in the presence of correlations, using methods from statistical mechanics. I characterize the typical learning performance in terms of the power spectrum of random input and output processes. I show that optimal synaptic weight configurations reach a capacity of 0.5 for any fraction of excitatory to inhibitory weights and have a peculiar synaptic distribution with a finite fraction of silent synapses. I further provide a link between typical learning performance and principal components analysis in single cases. These results may shed light on the synaptic profile of brain circuits, such as cerebellar structures, that are thought to engage in processing time-dependent signals and performing on-line prediction.


Subject(s)
Learning; Synapses/physiology; Animals; Models, Neurological; Nerve Net
5.
Proc Natl Acad Sci U S A ; 113(48): E7655-E7662, 2016 11 29.
Article in English | MEDLINE | ID: mdl-27856745

ABSTRACT

In artificial neural networks, learning from data is a computationally demanding task in which a large number of connection weights are iteratively tuned through stochastic-gradient-based heuristic processes over a cost function. It is not well understood how learning occurs in these systems, in particular how they avoid getting trapped in configurations with poor computational performance. Here, we study the difficult case of networks with discrete weights, where the optimization landscape is very rough even for simple architectures, and provide theoretical and numerical evidence of the existence of rare, but extremely dense and accessible, regions of configurations in the network weight space. We define a measure, the robust ensemble (RE), which suppresses trapping by isolated configurations and amplifies the role of these dense regions. We analytically compute the RE in some exactly solvable models and also provide a general algorithmic scheme that is straightforward to implement: define a cost function given by a sum of a finite number of replicas of the original cost function, with a constraint centering the replicas around a driving assignment. To illustrate this, we derive several powerful algorithms, ranging from Markov chains to message passing to gradient descent processes, where the algorithms target the robust dense states, resulting in substantial improvements in performance. The weak dependence on the number of precision bits of the weights leads us to conjecture that very similar reasoning applies to more conventional neural networks. Analogous algorithmic schemes can also be applied to other optimization problems.
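The general scheme, replicas of the cost function centered around a common assignment, can be sketched on a toy one-dimensional landscape. The landscape, the coupling `gamma`, and all parameters below are illustrative assumptions, not the paper's models:

```python
import numpy as np

def f(w):
    # Narrow-but-deep minimum at w = 0, wide-but-shallower minimum at w = 2.
    return -2.0 * np.exp(-100.0 * w**2) - np.exp(-((w - 2.0) ** 2))

def grad(w):
    return (400.0 * w * np.exp(-100.0 * w**2)
            + 2.0 * (w - 2.0) * np.exp(-((w - 2.0) ** 2)))

# Plain gradient descent started near the narrow basin falls straight into it.
single = 0.05
for _ in range(2000):
    single -= 0.001 * grad(single)  # small step: the narrow well is very stiff

# Replicated descent: several coupled copies of the weight, each pulled toward
# their common center, collectively settle around the wide basin.
replicas = np.linspace(0.5, 3.5, 7)
gamma, lr = 0.5, 0.05
for _ in range(2000):
    center = replicas.mean()
    replicas -= lr * (grad(replicas) + 2.0 * gamma * (replicas - center))

print(single, replicas.mean())
```

In this toy run the coupled replicas end up around the wide minimum at w = 2 even though the narrow minimum at w = 0 is deeper, which is the qualitative behavior the robust-ensemble measure is designed to favor.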

6.
Phys Rev Lett ; 115(12): 128101, 2015 Sep 18.
Article in English | MEDLINE | ID: mdl-26431018

ABSTRACT

We show that discrete synaptic weights can be efficiently used for learning in large-scale neural systems, and lead to unanticipated computational performance. We focus on the representative case of learning random patterns with binary synapses in single-layer networks. The standard statistical analysis shows that this problem is exponentially dominated by isolated solutions that are extremely hard to find algorithmically. Here, we introduce a novel method that allows us to find analytical evidence for the existence of subdominant and extremely dense regions of solutions. Numerical experiments confirm these findings. We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions. These outcomes extend to synapses with multiple states and to deeper neural architectures. The large deviation measure also suggests how to design novel algorithmic schemes for optimization based on local entropy maximization.

7.
Phys Rev E ; 109(1-1): 014132, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38366483

ABSTRACT

The cost of information processing in physical systems calls for a trade-off between performance and energetic expenditure. Here we formulate and study a computation-dissipation bottleneck in mesoscopic systems used as input-output devices. Using both real data sets and synthetic tasks, we show how nonequilibrium leads to enhanced performance. Our framework sheds light on a crucial compromise between information compression, input-output computation and dynamic irreversibility induced by nonreciprocal interactions.

8.
Neuron ; 108(6): 1181-1193.e8, 2020 12 23.
Article in English | MEDLINE | ID: mdl-33301712

ABSTRACT

Context guides perception by influencing stimulus saliency. Accordingly, in visual cortex, responses to a stimulus are modulated by context, the visual scene surrounding the stimulus. Responses are suppressed when stimulus and surround are similar but not when they differ. The underlying mechanisms remain unclear. Here, we use optical recordings, manipulations, and computational modeling to show that disinhibitory circuits consisting of vasoactive intestinal peptide (VIP)-expressing and somatostatin (SOM)-expressing inhibitory neurons modulate responses in mouse visual cortex depending on similarity between stimulus and surround, primarily by modulating recurrent excitation. When stimulus and surround are similar, VIP neurons are inactive, and activity of SOM neurons leads to suppression of excitatory neurons. However, when stimulus and surround differ, VIP neurons are active, inhibiting SOM neurons, which leads to relief of excitatory neurons from suppression. We have identified a canonical cortical disinhibitory circuit that contributes to contextual modulation and may regulate perceptual saliency.
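A toy rate model can illustrate the proposed disinhibitory logic, VIP silencing SOM, which otherwise suppresses the excitatory population. The weights, drives, and update scheme below are illustrative assumptions, not quantities fitted to the recordings:

```python
def relu(x):
    return max(0.0, x)

def circuit_response(stim, surround, mismatch, n_iter=200):
    """Steady state of a toy rate circuit: VIP inhibits SOM, SOM inhibits the
    excitatory (E) population, and E recurrently excites itself."""
    w_ee, w_se, w_vs = 0.5, 0.8, 2.0  # illustrative weights
    E = 0.0
    for _ in range(n_iter):
        VIP = relu(mismatch)               # VIP driven by stimulus/surround mismatch
        SOM = relu(surround - w_vs * VIP)  # SOM silenced when VIP is active
        E = relu(stim + w_ee * E - w_se * SOM)
    return E

matched = circuit_response(stim=1.0, surround=1.0, mismatch=0.0)
different = circuit_response(stim=1.0, surround=1.0, mismatch=1.0)
print(matched, different)  # a matched surround suppresses E; mismatch disinhibits it
```

With a matched surround, SOM activity leaves E partially suppressed; when stimulus and surround differ, active VIP shuts down SOM and the recurrent excitation recovers.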


Subject(s)
Neural Inhibition/physiology; Neurons/metabolism; Visual Cortex/physiology; Visual Pathways/physiology; Visual Perception/physiology; Animals; Calcium/metabolism; Mice; Models, Neurological; Photic Stimulation; Somatostatin/metabolism; Vasoactive Intestinal Peptide/metabolism; Visual Cortex/metabolism; Visual Pathways/metabolism
9.
PLoS One ; 14(8): e0220547, 2019.
Article in English | MEDLINE | ID: mdl-31393909

ABSTRACT

The construction of biologically plausible models of neural circuits is crucial for understanding the computational properties of the nervous system. Constructing functional networks composed of separate excitatory and inhibitory neurons obeying Dale's law presents a number of challenges. We show how a target-based approach, when combined with a fast online constrained optimization technique, is capable of building functional models of rate and spiking recurrent neural networks in which excitation and inhibition are balanced. Balanced networks can be trained to produce complicated temporal patterns and to solve input-output tasks while retaining biologically desirable features such as Dale's law and response variability.
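The sign constraints imposed by Dale's law can be illustrated with a simple projected-gradient sketch. This is not the paper's target-based method; the teacher setup, sizes, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_samples = 50, 200
# Fixed excitatory/inhibitory identity of each input synapse (Dale's law).
signs = np.where(np.arange(n_in) < n_in // 2, 1.0, -1.0)

X = rng.normal(size=(n_samples, n_in))
w_target = signs * np.abs(rng.normal(size=n_in))  # a sign-respecting teacher
y = X @ w_target

w = 0.1 * signs
initial_loss = np.mean((X @ w - y) ** 2)
for _ in range(500):
    grad = 2.0 * X.T @ (X @ w - y) / n_samples
    w -= 0.05 * grad
    # Project back onto Dale's law: each weight keeps its sign or goes silent.
    w = np.where(signs > 0, np.maximum(w, 0.0), np.minimum(w, 0.0))
final_loss = np.mean((X @ w - y) ** 2)
print(initial_loss, final_loss)  # loss drops while every sign stays legal
```

Projecting after each update is the simplest way to keep excitatory weights non-negative and inhibitory weights non-positive throughout training; fast online constrained optimizers like the one used in the paper refine this basic idea.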


Subject(s)
Algorithms; Computer Simulation; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Animals; Humans
10.
J R Soc Interface ; 16(151): 20180844, 2019 02 28.
Article in English | MEDLINE | ID: mdl-30958195

ABSTRACT

Accessing the network through which a propagation dynamics diffuses is essential for understanding and controlling it. In a few cases, such information is available through direct experiments or thanks to the very nature of propagation data. In the majority of cases, however, available information about the network is indirect and comes from partial observations of the dynamics, rendering network reconstruction a fundamental inverse problem. Here we show that it is possible to reconstruct the whole structure of an interaction network and to simultaneously infer the complete time course of activation spreading, relying just on single-epoch (i.e., snapshot) or time-scattered observations of a small number of activity cascades. The method that we present is built on a belief propagation approximation, which has shown impressive accuracy in a wide variety of relevant cases, and is able to infer interactions in the presence of incomplete time-series data by providing a detailed modelling of the posterior distribution of trajectories conditioned on the observations. Furthermore, we show by experiments that the information content of full cascades is relatively smaller than that of sparse observations or single snapshots.


Subject(s)
Algorithms; Computational Biology; Infections/epidemiology; Models, Biological
11.
Interface Focus ; 8(6): 20180033, 2018 Dec 06.
Article in English | MEDLINE | ID: mdl-30443331

ABSTRACT

Stochastic neural networks are a prototypical computational device able to build a probabilistic representation of an ensemble of external stimuli. Building on the relationship between inference and learning, we derive a synaptic plasticity rule that relies only on delayed activity correlations, and that shows a number of remarkable features. Our delayed-correlations matching (DCM) rule satisfies some basic requirements for biological feasibility: finite and noisy afferent signals, Dale's principle and asymmetry of synaptic connections, locality of the weight update computations. Nevertheless, the DCM rule is capable of storing a large, extensive number of patterns as attractors in a stochastic recurrent neural network, under general scenarios without requiring any modification: it can deal with correlated patterns, a broad range of architectures (with or without hidden neuronal states), one-shot learning with the palimpsest property, all the while avoiding the proliferation of spurious attractors. When hidden units are present, our learning rule can be employed to construct Boltzmann machine-like generative models, exploiting the addition of hidden neurons in feature extraction and classification tasks.
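The DCM rule itself is beyond a short snippet, but the central notion of storing patterns as attractors of a recurrent network can be illustrated with the classic Hebbian correlation rule, used here purely as a stand-in for, not an implementation of, the delayed-correlations matching rule:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 200, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Classic Hebbian correlation rule (illustrative stand-in for the DCM rule).
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

# Recall: start from a corrupted copy of pattern 0 and iterate the dynamics.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1.0  # corrupt 10% of the units

s = cue
for _ in range(10):
    s = np.sign(W @ s)

overlap = float(np.mean(s * patterns[0]))
print(overlap)  # close to 1: the corrupted cue falls back into the attractor
```

The noisy cue relaxes onto the stored pattern, which is the attractor behavior that the DCM rule achieves under much more biologically constrained conditions (delayed correlations, Dale's principle, asymmetric synapses).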

12.
Sci Rep ; 6: 27538, 2016 06 10.
Article in English | MEDLINE | ID: mdl-27283451

ABSTRACT

Investigating the past history of an epidemic outbreak is a paramount problem in epidemiology. Based on observations about the state of individuals, on knowledge of the network of contacts, and on a mathematical model for the epidemic process, the problem consists in describing some features of the posterior distribution of unobserved past events, such as the source, potential transmissions, and undetected positive cases. Several methods have been proposed for the study of these inference problems on discrete-time, synchronous epidemic models on networks, including naive Bayes, centrality measures, accelerated Monte Carlo approaches, and belief propagation. However, most traced real networks consist of short-time contacts in continuous time. One approach that has been adopted is to discretize the timeline into identical intervals, a method that becomes more and more precise as the length of the intervals vanishes. Unfortunately, the computational time of the inference methods increases with the number of intervals, often making a sufficiently precise inference procedure impractical. We show here an extension of the belief propagation method that is able to deal with a model of continuous-time events, without resorting to time discretization. We also investigate the effect of time discretization on the quality of the inference.
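The discretization trade-off can be illustrated on a single contact: with transmission rate λ acting over a contact of duration T, the continuous-time transmission probability is 1 - e^(-λT), and the discretized approximation converges to it as the interval shrinks. The rate and duration below are illustrative numbers:

```python
from math import exp

rate, duration = 0.5, 2.0  # illustrative contact: transmission rate and duration
exact = 1.0 - exp(-rate * duration)  # continuous-time transmission probability

def discretized(dt):
    """Approximate the same probability with per-interval probability rate*dt."""
    n = round(duration / dt)
    return 1.0 - (1.0 - rate * dt) ** n

for dt in (0.5, 0.1, 0.01):
    print(dt, abs(discretized(dt) - exact))  # the error shrinks as dt -> 0
```

Precision improves only as the number of intervals grows, which is exactly what inflates the computational cost of discrete-time inference and motivates working in continuous time directly.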


Subject(s)
Computational Biology; Disease Outbreaks; Epidemics; Algorithms; Bayes Theorem; Gene Regulatory Networks; Humans; Models, Theoretical; Monte Carlo Method