Results 1 - 20 of 40
1.
Proc Natl Acad Sci U S A ; 119(11): e2100600119, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35263217

ABSTRACT

Significance: In this work, we explore the hypothesis that biological neural networks optimize their architecture, through evolution, for learning. We study early olfactory circuits of mammals and insects, which have relatively similar structure but a huge diversity in size. We approximate these circuits as three-layer networks and estimate, analytically, the scaling of the optimal hidden-layer size with input-layer size. We find that both longevity and information in the genome constrain the hidden-layer size, so a range of allometric scalings is possible. However, the experimentally observed allometric scalings in mammals and insects are consistent with biologically plausible values. This analysis should pave the way for a deeper understanding of both biological and artificial networks.


Subject(s)
Insects , Learning , Mammals , Models, Neurological , Olfactory Pathways , Animals , Biological Evolution , Cell Count , Learning/physiology , Mushroom Bodies/cytology , Neural Networks, Computer , Neurons/cytology , Olfactory Pathways/cytology , Olfactory Pathways/growth & development , Piriform Cortex/cytology
2.
PLoS Comput Biol ; 18(1): e1009808, 2022 01.
Article in English | MEDLINE | ID: mdl-35100264

ABSTRACT

Sensory processing is hard because the variables of interest are encoded in spike trains in a relatively complex way. A major goal in studies of sensory processing is to understand how the brain extracts those variables. Here we revisit a common encoding model in which variables are encoded linearly. Although there are typically more variables than neurons, this problem is still solvable because only a small number of variables appear at any one time (sparse prior). However, previous solutions require all-to-all connectivity, inconsistent with the sparse connectivity seen in the brain. Here we propose an algorithm that provably reaches the MAP (maximum a posteriori) inference solution, but does so using sparse connectivity. Our algorithm is inspired by the circuit of the mouse olfactory bulb, but our approach is general enough to apply to other modalities. In addition, it should be possible to extend it to nonlinear encoding models.
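The paper's contribution is a solver with sparse connectivity, which is not reproduced here. As a non-authoritative point of reference, the underlying MAP problem — a linear Gaussian likelihood with a sparsity-promoting (Laplace) prior, and more variables than neurons — can be solved with conventional dense connectivity by iterative soft-thresholding (ISTA). All matrix sizes and parameters below are illustrative only:

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """MAP inference for y = A x + noise under a Laplace (sparse) prior on x,
    via iterative soft-thresholding. Uses dense (all-to-all) connectivity."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)  # 200 variables, only 50 "neurons"
x_true = np.zeros(200)
x_true[[3, 77, 150]] = [1.5, -2.0, 1.0]           # sparse prior: few variables active
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, y, lam=0.02)                       # recovers the active variables
```

Note the contrast the abstract draws: `A.T @ (...)` above requires every output unit to see every input, which is exactly the all-to-all connectivity the paper's circuit-based algorithm avoids.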


Asunto(s)
Algoritmos , Células Receptoras Sensoriales/fisiología , Potenciales de Acción/fisiología , Animales , Ratones , Dinámicas no Lineales
3.
PLoS Comput Biol ; 13(4): e1005497, 2017 04.
Article in English | MEDLINE | ID: mdl-28419098

ABSTRACT

Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina's performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with "differential correlations", which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can-in some cases-optimize robustness against noise.
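The information-limiting effect of "differential correlations" mentioned above can be sketched with linear Fisher information; the tuning-curve slopes and noise levels below are invented for illustration:

```python
import numpy as np

N = 100
fprime = np.sin(np.linspace(0, np.pi, N))   # tuning-curve slopes f'(s) (made up)
sigma0 = np.eye(N)                          # independent baseline noise

def fisher(fp, cov):
    """Linear Fisher information: I = f'^T Sigma^{-1} f'."""
    return fp @ np.linalg.solve(cov, fp)

I0 = fisher(fprime, sigma0)

# Add "differential correlations": noise aligned with f' itself.
eps = 0.1
sigma_dc = sigma0 + eps * np.outer(fprime, fprime)
I_dc = fisher(fprime, sigma_dc)
# Sherman-Morrison gives I_dc = I0 / (1 + eps * I0): information saturates,
# no matter how many neurons are added.
```

This shows only the encoding side (differential correlations reduce encoded information); the paper's point is that such covariances can nonetheless lie in the family that best protects information during noisy, nonlinear propagation.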


Asunto(s)
Modelos Neurológicos , Red Nerviosa/fisiología , Células Receptoras Sensoriales/fisiología , Biología Computacional , Simulación por Computador
4.
PLoS Comput Biol ; 12(12): e1005110, 2016 12.
Article in English | MEDLINE | ID: mdl-27997544

ABSTRACT

Zipf's law, which states that the probability of an observation is inversely proportional to its rank, has been observed in many domains. While there are models that explain Zipf's law in each of them, those explanations are typically domain specific. Recently, methods from statistical physics were used to show that a fairly broad class of models does provide a general explanation of Zipf's law. This explanation rests on the observation that real world data is often generated from underlying causes, known as latent variables. Those latent variables mix together multiple models that do not obey Zipf's law, giving a model that does. Here we extend that work both theoretically and empirically. Theoretically, we provide a far simpler and more intuitive explanation of Zipf's law, which at the same time considerably extends the class of models to which this explanation can apply. Furthermore, we also give methods for verifying whether this explanation applies to a particular dataset. Empirically, these advances allowed us to extend this explanation to important classes of data, including word frequencies (the first domain in which Zipf's law was discovered), data with variable sequence length, and multi-neuron spiking activity.
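The latent-variable mechanism can be illustrated with a toy simulation (this is a generic demonstration, not the paper's analysis): each conditional distribution is exponential and decidedly non-Zipfian, but mixing over a latent scale drawn uniformly in log-space yields a marginal density close to 1/x, i.e., a power law with exponent near -1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
# latent variable: a rate spanning many orders of magnitude (log-uniform)
lam = np.exp(rng.uniform(np.log(1e-3), np.log(1e3), size=n))
x = rng.exponential(1.0 / lam)   # each sample comes from a non-Zipfian exponential

# estimate the marginal density on log-spaced bins in the mid-range
bins = np.logspace(-1, 1, 21)
hist, edges = np.histogram(x, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
slope = np.polyfit(np.log(centers), np.log(hist), 1)[0]   # should be close to -1
```

Analytically, integrating the exponential conditionals against the log-uniform prior gives p(x) proportional to (e^(-ax) - e^(-bx))/x, which is proportional to 1/x whenever a << 1/x << b.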


Asunto(s)
Modelos Teóricos , Potenciales de Acción , Bases de Datos Factuales , Entropía , Lenguaje , Modelos Neurológicos
5.
PLoS Comput Biol ; 11(10): e1004519, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26517475

ABSTRACT

Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.


Asunto(s)
Teorema de Bayes , Conducta de Elección , Técnicas de Apoyo para la Decisión , Heurística , Modelos Estadísticos , Percepción Visual , Simulación por Computador , Humanos
6.
Nature ; 466(7302): 123-7, 2010 Jul 01.
Article in English | MEDLINE | ID: mdl-20596024

ABSTRACT

It is well known that neural activity exhibits variability, in the sense that identical sensory stimuli produce different responses, but it has been difficult to determine what this variability means. Is it noise, or does it carry important information-about, for example, the internal state of the organism? Here we address this issue from the bottom up, by asking whether small perturbations to activity in cortical networks are amplified. Based on in vivo whole-cell patch-clamp recordings in rat barrel cortex, we find that a perturbation consisting of a single extra spike in one neuron produces approximately 28 additional spikes in its postsynaptic targets. We also show, using simultaneous intra- and extracellular recordings, that a single spike in a neuron produces a detectable increase in firing rate in the local network. Theoretical analysis indicates that this amplification leads to intrinsic, stimulus-independent variations in membrane potential of the order of +/-2.2-4.5 mV-variations that are pure noise, and so carry no information at all. Therefore, for the brain to perform reliable computations, it must either use a rate code, or generate very large, fast depolarizing events, such as those proposed by the theory of synfire chains. However, in our in vivo recordings, we found that such events were very rare. Our findings are thus consistent with the idea that cortex is likely to use primarily a rate code.


Asunto(s)
Corteza Cerebral/fisiología , Modelos Neurológicos , Potenciales de Acción/fisiología , Animales , Artefactos , Corteza Cerebral/citología , Neuronas/metabolismo , Técnicas de Placa-Clamp , Probabilidad , Ratas , Ratas Sprague-Dawley , Procesos Estocásticos
8.
J Comput Neurosci ; 36(3): 469-81, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24091644

ABSTRACT

We use mean field techniques to compute the distribution of excitatory and inhibitory firing rates in large networks of randomly connected spiking quadratic integrate and fire neurons. These techniques are based on the assumption that activity is asynchronous and Poisson. For most parameter settings these assumptions are strongly violated; nevertheless, so long as the networks are not too synchronous, we find good agreement between mean field prediction and network simulations. Thus, much of the intuition developed for randomly connected networks in the asynchronous regime applies to mildly synchronous networks.


Asunto(s)
Potenciales de Acción/fisiología , Simulación por Computador , Modelos Neurológicos , Red Nerviosa/fisiología , Neuronas/fisiología , Inhibición Neural/fisiología , Sinapsis/fisiología
9.
Conscious Cogn ; 26: 13-23, 2014 May.
Article in English | MEDLINE | ID: mdl-24650632

ABSTRACT

In a range of contexts, individuals arrive at collective decisions by sharing confidence in their judgements. This tendency to evaluate the reliability of information by the confidence with which it is expressed has been termed the 'confidence heuristic'. We tested two ways of implementing the confidence heuristic in the context of a collective perceptual decision-making task: either directly, by opting for the judgement made with higher confidence, or indirectly, by opting for the faster judgement, exploiting an inverse correlation between confidence and reaction time. We found that the success of these heuristics depends on how similar individuals are in terms of the reliability of their judgements and, more importantly, that for dissimilar individuals such heuristics are dramatically inferior to interaction. Interaction allows individuals to alleviate, but not fully resolve, differences in the reliability of their judgements. We discuss the implications of these findings for models of confidence and collective decision-making.
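A toy simulation of the comparison (the noise levels and trial counts below are invented, not taken from the study): two observers of unequal reliability judge a binary stimulus; taking the judgement expressed with higher raw confidence is pitted against a precision-weighted combination of the evidence, standing in for interaction:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
s = rng.choice([-1.0, 1.0], size=n)       # stimulus category on each trial
sig1, sig2 = 0.5, 3.0                     # dissimilar observers (made-up noise levels)
x1 = s + sig1 * rng.standard_normal(n)    # each observer's internal evidence
x2 = s + sig2 * rng.standard_normal(n)

# confidence heuristic: adopt whichever judgement has larger raw confidence |x|
heuristic = np.where(np.abs(x1) >= np.abs(x2), np.sign(x1), np.sign(x2))

# interaction proxy: precision-weighted (optimal) combination of the evidence
combined = np.sign(x1 / sig1**2 + x2 / sig2**2)

acc_h = np.mean(heuristic == s)
acc_c = np.mean(combined == s)   # clearly higher when the observers are dissimilar
```

The gap arises because the noisier observer's raw evidence is, on average, larger in magnitude, so an unnormalized confidence heuristic systematically defers to the wrong partner.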


Asunto(s)
Toma de Decisiones/fisiología , Relaciones Interpersonales , Juicio/fisiología , Negociación , Adulto , Humanos , Masculino , Negociación/psicología , Adulto Joven
10.
Neural Comput ; 25(6): 1408-39, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23517097

ABSTRACT

The brain is easily able to process and categorize complex time-varying signals. For example, the two sentences, "It is cold in London this time of year" and "It is hot in London this time of year," have different meanings, even though the words hot and cold appear several seconds before the ends of the two sentences. Any network that can tell these sentences apart must therefore have a long temporal memory. In other words, the current state of the network must depend on events that happened several seconds ago. This is a difficult task, as neurons are dominated by relatively short time constants--tens to hundreds of milliseconds. Nevertheless, it was recently proposed that randomly connected networks could exhibit the long memories necessary for complex temporal processing. This is an attractive idea, both for its simplicity and because little tuning of recurrent synaptic weights is required. However, we show that when connectivity is high, as it is in the mammalian brain, randomly connected networks cannot exhibit temporal memory much longer than the time constants of their constituent neurons.


Asunto(s)
Memoria a Corto Plazo/fisiología , Modelos Neurológicos , Red Nerviosa/fisiología , Neuronas/fisiología , Dinámicas no Lineales , Potenciales de Acción/fisiología , Encéfalo/citología , Encéfalo/fisiología , Humanos , Modelos Lineales , Memoria a Largo Plazo/fisiología , Redes Neurales de la Computación , Probabilidad , Factores de Tiempo
11.
J Neurosci ; 31(43): 15310-9, 2011 Oct 26.
Article in English | MEDLINE | ID: mdl-22031877

ABSTRACT

A wide range of computations performed by the nervous system involves a type of probabilistic inference known as marginalization. This computation comes up in seemingly unrelated tasks, including causal reasoning, odor recognition, motor control, visual tracking, coordinate transformations, visual search, decision making, and object recognition, to name just a few. The question we address here is: how could neural circuits implement such marginalizations? We show that when spike trains exhibit a particular type of statistics--associated with constant Fano factors and gain-invariant tuning curves, as is often reported in vivo--some of the more common marginalizations can be achieved with networks that implement a quadratic nonlinearity and divisive normalization, the latter being a type of nonlinear lateral inhibition that has been widely reported in neural circuits. Previous studies have implicated divisive normalization in contrast gain control and attentional modulation. Our results raise the possibility that it is involved in yet another, highly critical, computation: near optimal marginalization in a remarkably wide range of tasks.


Asunto(s)
Modelos Neurológicos , Red Nerviosa/fisiología , Neuronas/fisiología , Distribución Normal , Potenciales de Acción/fisiología , Simulación por Computador , Humanos , Probabilidad
12.
Proc Natl Acad Sci U S A ; 106(14): 5936-41, 2009 Apr 07.
Article in English | MEDLINE | ID: mdl-19297621

ABSTRACT

The subject of neural coding has generated much debate. A key issue is whether the nervous system uses coarse or fine coding. Each has different strengths and weaknesses and, therefore, different implications for how the brain computes. For example, the strength of coarse coding is that it is robust to fluctuations in spike arrival times; downstream neurons do not have to keep track of the details of the spike train. The weakness, though, is that individual cells cannot carry much information, so downstream neurons have to pool signals across cells and/or time to obtain enough information to represent the sensory world and guide behavior. In contrast, with fine coding, individual cells can carry much more information, but downstream neurons have to resolve spike train structure to obtain it. Here, we set up a strategy to determine which codes are viable, and we apply it to the retina as a model system. We recorded from all the retinal output cells an animal uses to solve a task, evaluated the cells' spike trains for as long as the animal evaluates them, and used optimal, i.e., Bayesian, decoding. This approach makes it possible to obtain an upper bound on the performance of codes and thus eliminate those that are insufficient, that is, those that cannot account for behavioral performance. Our results show that standard coarse coding (spike count coding) is insufficient; finer, more information-rich codes are necessary.


Asunto(s)
Potenciales de Acción/fisiología , Modelos Neurológicos , Retina/fisiología , Transmisión Sináptica/fisiología , Animales , Electrofisiología , Ratones , Dinámicas no Lineales , Factores de Tiempo
13.
Curr Biol ; 18(8): R349-51, 2008 Apr 22.
Article in English | MEDLINE | ID: mdl-18430638

ABSTRACT

The brain exhibits coherent, long-range oscillations, and it now appears that these oscillations play a substantial role in neural coding: they can boost the information contained in action potentials by as much as 50%.


Asunto(s)
Potenciales de Acción/fisiología , Relojes Biológicos/fisiología , Corteza Visual/fisiología , Animales , Macaca
14.
Nat Neurosci ; 24(4): 565-571, 2021 04.
Article in English | MEDLINE | ID: mdl-33707754

ABSTRACT

Learning, especially rapid learning, is critical for survival. However, learning is hard; a large number of synaptic weights must be set based on noisy, often ambiguous, sensory information. In such a high-noise regime, keeping track of probability distributions over weights is the optimal strategy. Here we hypothesize that synapses take that strategy; in essence, when they estimate weights, they include error bars. They then use that uncertainty to adjust their learning rates, with more uncertain weights having higher learning rates. We also make a second, independent, hypothesis: synapses communicate their uncertainty by linking it to variability in postsynaptic potential size, with more uncertainty leading to more variability. These two hypotheses cast synaptic plasticity as a problem of Bayesian inference, and thus provide a normative view of learning. They generalize known learning rules, offer an explanation for the large variability in the size of postsynaptic potentials and make falsifiable experimental predictions.
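Read as a Kalman filter over a single static weight — one simple instantiation of the hypothesis, with Gaussian assumptions that go beyond the abstract — the uncertainty-dependent learning rate looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = 0.8                  # hidden "true" synaptic weight (made up)
mu, var = 0.0, 1.0            # synapse's belief: mean plus error bars (variance)
obs_noise = 0.5 ** 2          # variance of the noisy feedback signal (made up)

mus, vars_ = [], []
for _ in range(200):
    y = w_true + rng.normal(0.0, np.sqrt(obs_noise))  # noisy evidence about w
    k = var / (var + obs_noise)   # learning rate: larger when the weight is uncertain
    mu = mu + k * (y - mu)        # delta rule with an adaptive rate
    var = (1 - k) * var           # uncertainty shrinks as evidence accumulates
    mus.append(mu)
    vars_.append(var)
```

Under the paper's second hypothesis, `var` would additionally be broadcast as trial-to-trial variability in postsynaptic potential size; that mapping is not modeled here.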


Asunto(s)
Encéfalo/fisiología , Aprendizaje/fisiología , Modelos Neurológicos , Plasticidad Neuronal/fisiología , Neuronas/fisiología , Algoritmos , Animales , Teorema de Bayes , Humanos
15.
PLoS Comput Biol ; 5(5): e1000380, 2009 May.
Article in English | MEDLINE | ID: mdl-19424487

ABSTRACT

One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point.


Asunto(s)
Modelos Biológicos , Modelos Estadísticos , Biología de Sistemas/métodos , Algoritmos , Simulación por Computador , Entropía
16.
Nat Neurosci ; 9(11): 1432-8, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17057707

ABSTRACT

Recent psychophysical experiments indicate that humans perform near-optimal Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes' rule. At first sight, it would seem that the high variability in the responses of cortical neurons would make it difficult to implement such optimal statistical inference in cortical circuits. We argue that, in fact, this variability implies that populations of neurons automatically represent probability distributions over the stimulus, a type of code we call probabilistic population codes. Moreover, we demonstrate that the Poisson-like variability observed in cortex reduces a broad class of Bayesian inference to simple linear combinations of populations of neural activity. These results hold for arbitrary probability distributions over the stimulus, for tuning curves of arbitrary shape and for realistic neuronal variability.
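A minimal numerical sketch of a probabilistic population code, assuming independent Poisson neurons with Gaussian tuning curves (all parameters below are invented): with Poisson-like variability the log posterior over the stimulus is a linear function of the spike counts, so decoding reduces to a weighted sum of activity.

```python
import numpy as np

rng = np.random.default_rng(3)
s_grid = np.linspace(-10, 10, 201)   # candidate stimulus values
prefs = np.linspace(-10, 10, 50)     # preferred stimuli of 50 neurons (made up)
gain, width = 10.0, 2.0

# Gaussian tuning curves on the grid: F[j, i] = f_i(s_j)
F = gain * np.exp(-0.5 * (s_grid[:, None] - prefs[None, :]) ** 2 / width ** 2)

s_true = 2.0
r = rng.poisson(gain * np.exp(-0.5 * (s_true - prefs) ** 2 / width ** 2))

# independent Poisson likelihood => log posterior is LINEAR in the counts r:
#   log p(s | r) = sum_i r_i log f_i(s) - sum_i f_i(s) + const
log_post = np.log(F) @ r - F.sum(axis=1)
s_hat = s_grid[np.argmax(log_post)]   # MAP estimate, close to s_true
```

In this scheme, optimally combining two cues encoded by two such populations amounts to simply summing their spike-count vectors before decoding, which is the linearity the abstract refers to.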


Asunto(s)
Teorema de Bayes , Corteza Cerebral/fisiología , Modelos Neurológicos , Modelos Estadísticos , Red Nerviosa/fisiología , Algoritmos , Humanos , Red Nerviosa/citología , Distribución Normal , Distribución de Poisson
17.
Nat Commun ; 11(1): 3845, 2020 07 31.
Article in English | MEDLINE | ID: mdl-32737295

ABSTRACT

Many experimental studies suggest that animals can rapidly learn to identify odors and predict the rewards associated with them. However, the underlying plasticity mechanism remains elusive. In particular, it is not clear how olfactory circuits achieve rapid, data-efficient learning with local synaptic plasticity. Here, we formulate olfactory learning as a Bayesian optimization process, then map the learning rules into a computational model of the mammalian olfactory circuit. The model is capable of odor identification from a small number of observations, while reproducing cellular plasticity commonly observed during development. We extend the framework to reward-based learning, and show that the circuit is able to rapidly learn odor-reward associations with a plausible neural architecture. These results deepen our theoretical understanding of unsupervised learning in the mammalian brain.


Asunto(s)
Condicionamiento Clásico/fisiología , Red Nerviosa , Plasticidad Neuronal/fisiología , Vías Olfatorias/fisiología , Percepción Olfatoria/fisiología , Olfato/fisiología , Animales , Teorema de Bayes , Simulación por Computador , Mamíferos , Neuronas/citología , Neuronas/fisiología , Odorantes/análisis , Bulbo Olfatorio/fisiología , Recompensa
18.
Trends Neurosci ; 43(6): 363-372, 2020 06.
Article in English | MEDLINE | ID: mdl-32459990

ABSTRACT

More often than not, action potentials fail to trigger neurotransmitter release. And even when neurotransmitter is released, the resulting change in synaptic conductance is highly variable. Given the energetic cost of generating and propagating action potentials, and the importance of information transmission across synapses, this seems both wasteful and inefficient. However, synaptic noise arising from variable transmission can improve, in certain restricted conditions, information transmission. Under broader conditions, it can improve information transmission per release, a quantity that is relevant given the energetic constraints on computing in the brain. Here we discuss the role, both positive and negative, synaptic noise plays in information transmission and computation in the brain.


Asunto(s)
Sinapsis , Transmisión Sináptica , Potenciales de Acción , Humanos , Neurotransmisores
19.
Neuron ; 105(1): 165-179.e8, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31753580

ABSTRACT

Inhibitory neurons, which play a critical role in decision-making models, are often simplified as a single pool of non-selective neurons lacking connection specificity. This assumption is supported by observations in the primary visual cortex: inhibitory neurons are broadly tuned in vivo and show non-specific connectivity in slice. The selectivity of excitatory and inhibitory neurons within decision circuits and, hence, the validity of decision-making models are unknown. We simultaneously measured excitatory and inhibitory neurons in the posterior parietal cortex of mice judging multisensory stimuli. Surprisingly, excitatory and inhibitory neurons were equally selective for the animal's choice, both at the single-cell and population level. Further, both cell types exhibited similar changes in selectivity and temporal dynamics during learning, paralleling behavioral improvements. These observations, combined with modeling, argue against circuit architectures assuming non-selective inhibitory neurons. Instead, they argue for selective subnetworks of inhibitory and excitatory neurons that are shaped by experience to support expert decision-making.


Asunto(s)
Toma de Decisiones/fisiología , Aprendizaje/fisiología , Red Nerviosa/fisiología , Neuronas/fisiología , Animales , Glutamato Descarboxilasa/genética , Ratones , Ratones Transgénicos , Modelos Neurológicos , Inhibición Neural/fisiología , Lóbulo Parietal/fisiología
20.
PLoS Comput Biol ; 3(9): 1679-700, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17845070

ABSTRACT

A fundamental problem in neuroscience is understanding how working memory--the ability to store information at intermediate timescales, like tens of seconds--is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.


Asunto(s)
Potenciales de Acción/fisiología , Memoria/fisiología , Modelos Neurológicos , Red Nerviosa/fisiología , Plasticidad Neuronal/fisiología , Neuronas/fisiología , Transmisión Sináptica/fisiología , Simulación por Computador , Almacenamiento y Recuperación de la Información/métodos