1.
bioRxiv ; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39149380

ABSTRACT

Neural circuits construct internal 'world-models' to guide behavior. The predictive processing framework posits that neural activity signaling sensory predictions, and concurrently computing prediction errors, is a signature of those internal models. Here, to understand how the brain generates predictions for complex sensorimotor signals, we investigate the emergence of high-dimensional, multi-modal predictive representations in recurrent networks. We find that robust predictive processing arises in a network with loose excitatory/inhibitory balance. Contrary to previous proposals of functionally specialized cell types, the network exhibits desegregation of stimulus and prediction-error representations. We confirmed these model predictions by experimentally probing predictive-coding circuits with a rich stimulus set designed to violate learned expectations. When constrained by data, our model further makes concrete, testable experimental predictions for the distinct functional roles of excitatory and inhibitory neurons, and of neurons in different layers of a laminar hierarchy, in computing multi-modal predictions. Together, these results imply that in natural conditions, neural representations of internal models are highly distributed, yet structured to allow flexible readout of behaviorally relevant information. The generality of our model advances the understanding of how internal models are computed across species by incorporating different types of predictive computations into a unified framework.
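
A minimal sketch of the core idea (not the authors' network or training procedure; sizes, time constants, the input signal, and the linear readout are all illustrative assumptions): a recurrent rate network driven by a sensory signal supports a readout that predicts the input some time ahead, and subtracting the prediction from the actual input yields a prediction-error signal.

```python
# Sketch only: random recurrent rate network + linear readout trained to
# predict the sensory input 100 ms ahead; the residual is a prediction error.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, tau = 200, 5000, 1e-3, 20e-3
J = rng.normal(0, 1.2 / np.sqrt(N), (N, N))        # recurrent weights near the edge of chaos
w_in = rng.normal(0, 1.0, N)                       # input weights

t = np.arange(T) * dt
s = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)   # toy "sensory" signal

x = np.zeros(N)
R = np.zeros((T, N))
for k in range(T):
    r = np.tanh(x)
    R[k] = r
    x += dt / tau * (-x + J @ r + w_in * s[k])

horizon = 100                                      # predict the input 100 ms ahead
half = T // 2
w_out, *_ = np.linalg.lstsq(R[:half - horizon], s[horizon:half], rcond=None)

pred = R[half:T - horizon] @ w_out                 # predictions on held-out data
err = s[half + horizon:] - pred                    # prediction-error signal
print("held-out error variance / signal variance:",
      np.var(err) / np.var(s[half + horizon:]))
```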

2.
Proc Natl Acad Sci U S A ; 121(21): e2316799121, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38753511

ABSTRACT

The mammalian brain implements sophisticated sensory processing algorithms along multilayered ("deep") neural networks. Strategies that insects use to meet similar computational demands, while relying on smaller nervous systems with shallow architectures, remain elusive. Using Drosophila as a model, we uncover the algorithmic role of odor preprocessing by a shallow network of compartmentalized olfactory receptor neurons. Each compartment operates as a ratiometric unit for specific odor-mixtures. This computation arises from a simple mechanism: electrical coupling between two differently sized neurons. We demonstrate that downstream synaptic connectivity is shaped to optimally leverage amplification of a hedonic value signal in the periphery. Furthermore, peripheral preprocessing is shown to markedly improve novel odor classification in a higher brain center. Together, our work highlights a far-reaching functional role of the sensory periphery for downstream processing. By elucidating the implementation of powerful computations by a shallow network, we provide insights into general principles of efficient sensory processing algorithms.
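
A toy illustration of the ratiometric mechanism described here (assumed conductances, not the measured circuit): two electrically coupled leaky compartments of different size. Because the steady state is linear in the inputs, a contrast between the two membrane potentials depends only on the ratio of the two odor-driven currents, not on their overall intensity.

```python
# Sketch: steady state of two electrically coupled, differently sized neurons.
import numpy as np

g1, g2 = 1.0, 3.0      # leak conductances of the "small" vs "large" neuron (arbitrary units)
gc = 0.8               # electrical coupling conductance

def steady_state(I_A, I_B):
    A = np.array([[g1 + gc, -gc],
                  [-gc, g2 + gc]])
    return np.linalg.solve(A, np.array([I_A, I_B]))

for scale in (0.5, 1.0, 4.0):            # same 2:1 odor ratio at different intensities
    V1, V2 = steady_state(2.0 * scale, 1.0 * scale)
    print(f"intensity x{scale}: contrast = {(V1 - V2) / (V1 + V2):.4f}")
```

The printed contrast is identical across intensities, which is the sense in which the coupled pair acts as a ratiometric unit in this caricature.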


Subject(s)
Odorants , Olfactory Receptor Neurons , Smell , Animals , Odorants/analysis , Olfactory Receptor Neurons/physiology , Smell/physiology , Drosophila melanogaster/physiology , Algorithms , Drosophila/physiology , Olfactory Pathways/physiology , Models, Neurological , Nerve Net/physiology
3.
bioRxiv ; 2023 Jul 25.
Article in English | MEDLINE | ID: mdl-37546820

ABSTRACT

The mammalian brain implements sophisticated sensory processing algorithms along multilayered ('deep') neural networks. Strategies that insects use to meet similar computational demands, while relying on smaller nervous systems with shallow architectures, remain elusive. Using Drosophila as a model, we uncover the algorithmic role of odor preprocessing by a shallow network of compartmentalized olfactory receptor neurons. Each compartment operates as a ratiometric unit for specific odor-mixtures. This computation arises from a simple mechanism: electrical coupling between two differently sized neurons. We demonstrate that downstream synaptic connectivity is shaped to optimally leverage amplification of a hedonic value signal in the periphery. Furthermore, peripheral preprocessing is shown to markedly improve novel odor classification in a higher brain center. Together, our work highlights a far-reaching functional role of the sensory periphery for downstream processing. By elucidating the implementation of powerful computations by a shallow network, we provide insights into general principles of efficient sensory processing algorithms.

4.
Neuron ; 111(12): 1858-1875, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37044087

ABSTRACT

The symmetric, lattice-like spatial pattern of grid-cell activity is thought to provide a neuronal global metric for space. This view is compatible with grid cells recorded in empty boxes but inconsistent with data from more naturalistic settings. We review evidence arguing against the global-metric notion, including the distortion and disintegration of the grid pattern in complex and three-dimensional environments. We argue that deviations from lattice symmetry are key for understanding grid-cell function. We propose three possible functions for grid cells, which treat real-world grid distortions as a feature rather than a bug. First, grid cells may constitute a local metric for proximal space rather than a global metric for all space. Second, grid cells could form a metric for subjective action-relevant space rather than physical space. Third, distortions may represent salient locations. Finally, we discuss mechanisms that can underlie these functions. These ideas may transform our thinking about grid cells.


Subject(s)
Grid Cells , Spatial Navigation , Grid Cells/physiology , Entorhinal Cortex/physiology , Benchmarking , Neurons/physiology , Space Perception/physiology , Models, Neurological
5.
Phys Rev Lett ; 129(6): 068101, 2022 Aug 05.
Article in English | MEDLINE | ID: mdl-36018633

ABSTRACT

Fluctuations of synaptic weights, among many other physical, biological, and ecological quantities, are driven by coincident events of two "parent" processes. We propose a multiplicative shot-noise model that can capture the behaviors of a broad range of such natural phenomena, and analytically derive an approximation that accurately predicts its statistics. We apply our results to study the effects of a multiplicative synaptic plasticity rule that was recently extracted from measurements in physiological conditions. Using mean-field analysis and network simulations, we investigate how this rule shapes the connectivity and dynamics of recurrent spiking neural networks. The multiplicative plasticity rule is shown to support efficient learning of input stimuli, and it gives a stable, unimodal synaptic-weight distribution with a large fraction of strong synapses. The strong synapses remain stable over long times but do not "run away." Our results suggest that the multiplicative shot-noise model offers a new route to understanding the tradeoff between flexibility and stability in neural circuits and other dynamic networks.
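
A minimal simulation of the multiplicative shot-noise idea (event rates, coincidence window, jump size, and relaxation are assumptions, not the fitted plasticity rule): a weight receives multiplicative kicks whenever events of two independent Poisson "parent" processes coincide within a short window, and otherwise relaxes slowly toward baseline.

```python
# Sketch of multiplicative shot noise driven by coincidences of two Poisson processes.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 500_000             # 500 s at 1 ms resolution
rate_pre, rate_post = 5.0, 5.0    # "parent" event rates (Hz)
window = 20                       # coincidence window: +/- 20 ms
tau_relax = 100.0                 # slow relaxation toward baseline (s)
kick = 0.05                       # scale of the multiplicative jumps

pre = rng.random(T) < rate_pre * dt
post = rng.random(T) < rate_post * dt
post_near = np.convolve(post.astype(float), np.ones(2 * window + 1), mode="same") > 0
coinc = pre & post_near           # coincident events of the two parent processes

w = np.empty(T)
w[0] = 1.0
for k in range(1, T):
    w[k] = w[k - 1] + dt / tau_relax * (1.0 - w[k - 1])
    if coinc[k]:
        w[k] *= 1.0 + kick * rng.standard_normal()   # multiplicative shot

print(f"{coinc.sum()} coincidences;  mean weight {w.mean():.3f},  CV {w.std() / w.mean():.3f}")
```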


Subject(s)
Models, Neurological , Plastics , Action Potentials , Neuronal Plasticity , Synapses
6.
Nature ; 609(7925): 119-127, 2022 09.
Article in English | MEDLINE | ID: mdl-36002570

ABSTRACT

Throughout their daily lives, animals and humans often switch between different behaviours. However, neuroscience research typically studies the brain while the animal is performing one behavioural task at a time, and little is known about how brain circuits represent switches between different behaviours. Here we addressed this question using an ethological setting: two bats flew together in a long 135 m tunnel, and switched between navigation when flying alone (solo) and collision avoidance as they flew past each other (cross-over). Bats increased their echolocation click rate before each cross-over, indicating attention to the other bat [1-9]. Hippocampal CA1 neurons represented the bat's own position when flying alone (place coding [10-14]). Notably, during cross-overs, neurons switched rapidly to jointly represent the interbat distance and self-position. This neuronal switch was very fast (as fast as 100 ms) and could be revealed owing to the very rapid natural behavioural switch. The neuronal switch correlated with the attention signal, as indexed by echolocation. Interestingly, the different place fields of the same neuron often exhibited very different tuning to interbat distance, creating a complex non-separable coding of position by distance. Theoretical analysis showed that this complex representation yields more efficient coding. Overall, our results suggest that during dynamic natural behaviour, hippocampal neurons can rapidly switch their core computation to represent the relevant behavioural variables, supporting behavioural flexibility.
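
A small synthetic illustration of what "non-separable coding of position by distance" means (toy tuning curves, not recorded data): if each place field of a model neuron carries a different distance tuning, the joint position x distance rate map cannot be written as f(position) * g(distance), which shows up as more than one significant component in its SVD.

```python
# Sketch: a non-separable joint tuning map built from separable fields with
# field-specific distance preferences (all parameters are assumptions).
import numpy as np

pos = np.linspace(0, 135, 200)            # position along a 135 m tunnel
dist = np.linspace(0, 40, 80)             # inter-bat distance (m)

field_centers = [20.0, 70.0, 115.0]       # place-field centres
field_width = 6.0
dist_prefs = [5.0, 15.0, 30.0]            # each field prefers a different distance
dist_width = 8.0

rate = np.zeros((pos.size, dist.size))
for c, dp in zip(field_centers, dist_prefs):
    place = np.exp(-0.5 * ((pos - c) / field_width) ** 2)
    dtune = np.exp(-0.5 * ((dist - dp) / dist_width) ** 2)
    rate += np.outer(place, dtune)        # each field is separable, but their sum is not

s = np.linalg.svd(rate, compute_uv=False)
print("fraction of variance in the first SVD component:",
      round(float(s[0] ** 2 / (s ** 2).sum()), 3))
```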


Subject(s)
Chiroptera , Echolocation , Flight, Animal , Hippocampus , Animals , CA1 Region, Hippocampal/cytology , CA1 Region, Hippocampal/physiology , Chiroptera/physiology , Echolocation/physiology , Flight, Animal/physiology , Hippocampus/cytology , Hippocampus/physiology , Neurons/physiology , Orientation, Spatial , Spatial Navigation , Spatial Processing
7.
Proc Natl Acad Sci U S A ; 119(5), 2022 02 01.
Article in English | MEDLINE | ID: mdl-35091473

ABSTRACT

A hallmark of complex sensory systems is the organization of neurons into functionally meaningful maps, which allow for comparison and contrast of parallel inputs via lateral inhibition. However, it is unclear whether such a map exists in olfaction. Here, we address this question by determining the organizing principle underlying the stereotyped pairing of olfactory receptor neurons (ORNs) in Drosophila sensory hairs, wherein compartmentalized neurons inhibit each other via ephaptic coupling. Systematic behavioral assays reveal that most paired ORNs antagonistically regulate the same type of behavior. Such valence opponency is relevant in critical behavioral contexts including place preference, egg laying, and courtship. Odor-mixture experiments show that ephaptic inhibition provides a peripheral means for evaluating and shaping countervailing cues relayed to higher brain centers. Furthermore, computational modeling suggests that this organization likely contributes to processing ratio information in odor mixtures. This olfactory valence map may have evolved to swiftly process ethologically meaningful odor blends without involving costly synaptic computation.


Subject(s)
Olfactory Perception/physiology , Olfactory Receptor Neurons/physiology , Animals , Connectome , Drosophila Proteins/metabolism , Drosophila melanogaster/metabolism , Odorants , Olfactory Pathways/physiology , Olfactory Receptor Neurons/metabolism , Sense Organs/physiology , Smell/physiology
8.
Nature ; 596(7872): 404-409, 2021 08.
Article in English | MEDLINE | ID: mdl-34381211

ABSTRACT

As animals navigate on a two-dimensional surface, neurons in the medial entorhinal cortex (MEC) known as grid cells are activated when the animal passes through multiple locations (firing fields) arranged in a hexagonal lattice that tiles the locomotion surface [1]. However, although our world is three-dimensional, it is unclear how the MEC represents 3D space [2]. Here we recorded from MEC cells in freely flying bats and identified several classes of spatial neurons, including 3D border cells, 3D head-direction cells, and neurons with multiple 3D firing fields. Many of these multifield neurons were 3D grid cells, whose neighbouring fields were separated by a characteristic distance, forming a local order, but lacked any global lattice arrangement of the fields. Thus, whereas 2D grid cells form a global lattice (characterized by both local and global order), 3D grid cells exhibited only local order, creating a locally ordered metric for space. We modelled grid cells as emerging from pairwise interactions between fields, which yielded a hexagonal lattice in 2D and local order in 3D, thereby describing both 2D and 3D grid cells using one unifying model. Together, these data and model illuminate the fundamental differences and similarities between neural codes for 3D and 2D space in the mammalian brain.
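
A toy version of a pairwise-interaction model for firing fields (the specific potential and parameters are assumptions, not the paper's fitted model): field centres repel at short range and attract weakly at longer range, so after relaxation the nearest-neighbour distances concentrate around a characteristic spacing, i.e. local order without imposing a global lattice.

```python
# Sketch: relax field centres under a spring-like pair potential 0.5*(r - a)^2 with a cutoff.
import numpy as np

rng = np.random.default_rng(2)
n_fields, box, a = 40, 10.0, 1.5        # number of fields, arena size, preferred spacing
X = rng.uniform(0, box, (n_fields, 2))  # random initial field centres (2D; use (n_fields, 3) for 3D)

def forces(X):
    d = X[:, None, :] - X[None, :, :]                 # pairwise displacement vectors
    r = np.linalg.norm(d, axis=-1) + np.eye(len(X))   # pad the diagonal to avoid divide-by-zero
    w = np.where(r < 2.5 * a, r - a, 0.0)             # derivative of the pair potential, with cutoff
    return (-(w / r)[:, :, None] * d).sum(axis=1)     # self-terms vanish because d_ii = 0

for _ in range(2000):                                 # relax the field centres by gradient descent
    X += 0.01 * forces(X)
    X = np.clip(X, 0, box)                            # keep fields inside the arena

r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
nn = np.sort(r + np.diag(np.full(n_fields, np.inf)), axis=1)[:, 0]
print(f"nearest-neighbour spacing: {nn.mean():.2f} +/- {nn.std():.2f}  (preferred spacing a = {a})")
```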


Subject(s)
Chiroptera/physiology , Depth Perception/physiology , Entorhinal Cortex/cytology , Entorhinal Cortex/physiology , Grid Cells/physiology , Models, Neurological , Animals , Behavior, Animal/physiology , Flight, Animal/physiology , Male
9.
Curr Biol ; 31(18): 4111-4119.e4, 2021 09 27.
Article in English | MEDLINE | ID: mdl-34302743

ABSTRACT

In their pioneering study on dopamine release, Romo and Schultz speculated "...that the amount of dopamine released by unmodulated spontaneous impulse activity exerts a tonic, permissive influence on neuronal processes more actively engaged in preparation of self-initiated movements..." [1]. Motivated by the suggestion of "spontaneous impulses," as well as by the "ramp up" of dopaminergic neuronal activity that occurs when rodents navigate to a reward [2-5], we asked two questions. First, are there spontaneous impulses of dopamine that are released in cortex? Using cell-based optical sensors of extrasynaptic dopamine, [DA]ex [6], we found that spontaneous dopamine impulses in cortex of naive mice occur at a rate of ∼0.01 per second. Next, can mice be trained to change the amplitude and/or timing of dopamine events triggered by internal brain dynamics, much as they can change the amplitude and timing of dopamine impulses based on an external cue [7-9]? Using a reinforcement learning paradigm based solely on rewards that were gated by feedback from real-time measurements of [DA]ex, we found that mice can volitionally modulate their spontaneous [DA]ex. In particular, by only the second session of daily, hour-long training, mice increased the rate of impulses of [DA]ex, increased the amplitude of the impulses, and increased their tonic level of [DA]ex for a reward. Critically, mice learned to reliably elicit [DA]ex impulses prior to receiving a reward. These effects reversed when the reward was removed. We posit that spontaneous dopamine impulses may serve as salient cognitive events in behavioral planning.
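
A schematic of the closed-loop logic only (the synthetic trace, thresholds, and timing parameters are assumptions; this is not the acquisition or sensor code): detect impulse-like transients in a dopamine signal by thresholding a baseline-subtracted trace, and gate reward delivery on each detection.

```python
# Sketch: threshold-crossing detection of spontaneous dopamine impulses and
# reward gating, on a simulated trace with ~0.01 impulses per second.
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.05, 72000                         # 50 ms bins, one hour-long session
t = np.arange(T) * dt

trace = 0.2 * rng.standard_normal(T)        # measurement noise
impulse_times = np.flatnonzero(rng.random(T) < 0.01 * dt)   # ~0.01 impulses per second
kernel = np.exp(-np.arange(0, 5, dt) / 1.5)                 # ~1.5 s decay per transient
for k in impulse_times:
    n = min(kernel.size, T - k)
    trace[k:k + n] += 1.0 * kernel[:n]

win = int(60 / dt)                                          # 60 s running baseline
baseline = np.convolve(trace, np.ones(win) / win, mode="same")
dff = trace - baseline

above = dff > 3 * np.std(dff)
onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1        # threshold crossings

refractory, last, rewards = int(5 / dt), -10**9, []
for k in onsets:
    if k - last > refractory:               # at most one reward per detected impulse
        rewards.append(t[k])
        last = k

print(f"{impulse_times.size} simulated impulses, {len(rewards)} rewards delivered")
```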


Subject(s)
Dopamine , Reward , Animals , Dopamine/physiology , Dopaminergic Neurons/physiology , Learning/physiology , Mice , Reinforcement, Psychology
10.
Curr Opin Neurobiol ; 70: 24-33, 2021 10.
Article in English | MEDLINE | ID: mdl-34175521

ABSTRACT

The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.


Subject(s)
Nerve Net , Neural Networks, Computer , Information Storage and Retrieval , Models, Neurological , Nerve Net/physiology , Neuronal Plasticity/physiology , Synapses
11.
Science ; 372(6545), 2021 05 28.
Article in English | MEDLINE | ID: mdl-34045327

ABSTRACT

Hippocampal place cells encode the animal's location. Place cells were traditionally studied in small environments, and little is known about spatial coding at large, ethologically relevant scales. We wirelessly recorded from hippocampal dorsal CA1 neurons of wild-born bats flying in a long tunnel (200 meters). The size of place fields ranged from 0.6 to 32 meters. Individual place cells exhibited multiple fields and a multiscale representation: Place fields of the same neuron differed up to 20-fold in size. This multiscale coding was observed from the first day of exposure to the environment, and also in laboratory-born bats that never experienced large environments. Theoretical decoding analysis showed that the multiscale code allows representation of very large environments with much higher precision than that of other codes. Together, by increasing the spatial scale, we discovered a neural code that is radically different from classical place codes.
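
A small decoding caricature (assumed tuning curves and spike statistics, not the paper's decoder): decode position in a long tunnel from Poisson spike counts, once with a single-scale place code and once with a multiscale code in which each cell has several fields of very different sizes, and compare the median error.

```python
# Sketch: maximum-likelihood decoding of position from Poisson counts for
# single-scale vs multiscale place codes (illustrative parameters).
import numpy as np

rng = np.random.default_rng(4)
L, dx = 200.0, 0.1
x = np.arange(0, L, dx)                       # candidate positions in a 200 m tunnel
n_cells, peak, dt = 60, 10.0, 0.5             # cells, peak rate (Hz), decoding window (s)

def tuning(centers, widths):
    r = np.zeros_like(x)
    for c, w in zip(np.atleast_1d(centers), np.atleast_1d(widths)):
        r += np.exp(-0.5 * ((x - c) / w) ** 2)
    return 0.1 + peak * r / max(r.max(), 1e-9)    # fixed peak rate, small baseline

single = np.array([tuning(rng.uniform(0, L), 4.0) for _ in range(n_cells)])
multi = np.array([tuning(rng.uniform(0, L, 3), rng.uniform(0.5, 15.0, 3))
                  for _ in range(n_cells)])

def median_error(curves, n_trials=300):
    errs = []
    for _ in range(n_trials):
        true = rng.uniform(0, L)
        counts = rng.poisson(curves[:, np.argmin(np.abs(x - true))] * dt)
        loglik = counts @ np.log(curves * dt) - (curves * dt).sum(axis=0)
        errs.append(abs(x[np.argmax(loglik)] - true))
    return np.median(errs)

print("median error, single-scale code:", median_error(single), "m")
print("median error, multiscale code:  ", median_error(multi), "m")
```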


Subject(s)
CA1 Region, Hippocampal/physiology , Chiroptera/physiology , Flight, Animal , Place Cells/physiology , Pyramidal Cells/physiology , Spatial Navigation , Animals , CA3 Region, Hippocampal/physiology , Entorhinal Cortex/physiology , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology
12.
Proc Natl Acad Sci U S A ; 117(52): 33639-33648, 2020 12 29.
Article in English | MEDLINE | ID: mdl-33328274

ABSTRACT

Spike-timing-dependent plasticity (STDP) is considered a primary mechanism underlying the formation of new memories during learning. Despite the growing interest in activity-dependent plasticity, it is still unclear whether synaptic plasticity rules inferred from in vitro experiments hold under physiological conditions. The abnormally high calcium concentration used in in vitro studies of STDP suggests that in vivo plasticity rules may differ significantly from in vitro experiments, especially since STDP depends strongly on calcium for induction. We therefore studied the influence of extracellular calcium on synaptic plasticity. Using a combination of experimental (patch-clamp recording and Ca2+ imaging at CA3-CA1 synapses) and theoretical approaches, we show that the classic STDP rule, in which pairs of single pre- and postsynaptic action potentials induce synaptic modifications, is not valid in the physiological Ca2+ range. Rather, we found that these pairs of single stimuli are unable to induce any synaptic modification in 1.3 and 1.5 mM calcium and lead to depression in 1.8 mM. Plasticity can only be recovered when bursts of postsynaptic spikes are used, or when neurons fire at sufficiently high frequency. In conclusion, the STDP rule is profoundly altered in physiological Ca2+, but specific activity regimes restore a classical STDP profile.
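
A sketch of a calcium-threshold plasticity model in the spirit of Graupner and Brunel, which is the kind of theoretical approach used in this line of work (all parameter values here are illustrative, not the fitted ones): pre- and postsynaptic spikes add calcium transients; the synapse depresses while calcium exceeds a depression threshold and potentiates while it exceeds a higher potentiation threshold. Scaling the transient amplitudes down, as lower extracellular Ca2+ would, keeps single pre/post pairs largely sub-threshold, while postsynaptic bursts recover potentiation.

```python
# Sketch: calcium-threshold plasticity for a pre/post pair vs a post burst,
# at "in vitro-like" and reduced ("physiological") calcium amplitudes.
import numpy as np

dt, tau_ca = 1e-3, 0.020                      # 1 ms steps, 20 ms calcium decay
theta_d, theta_p = 1.0, 1.3                   # depression / potentiation thresholds
gamma_d, gamma_p = 1.0, 2.0                   # drift rates above each threshold

def weight_change(pre_times, post_times, c_pre, c_post, T=1.0):
    n = int(T / dt)
    ca = np.zeros(n)
    for times, amp in [(pre_times, c_pre), (post_times, c_post)]:
        for t0 in times:
            k = int(t0 / dt)
            ca[k:] += amp * np.exp(-(np.arange(k, n) - k) * dt / tau_ca)
    return dt * (gamma_p * (ca > theta_p) - gamma_d * (ca > theta_d)).sum()

pair = ([0.100], [0.110])                     # single pre->post pair (+10 ms)
burst = ([0.100], [0.110, 0.120, 0.130])      # pre followed by a postsynaptic burst

for label, scale in [("high (in vitro-like) Ca", 1.0), ("reduced (physiological) Ca", 0.55)]:
    dw_pair = weight_change(*pair, c_pre=1.0 * scale, c_post=2.0 * scale)
    dw_burst = weight_change(*burst, c_pre=1.0 * scale, c_post=2.0 * scale)
    print(f"{label:28s}  pair: {dw_pair:+.4f}   burst: {dw_burst:+.4f}")
```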


Subject(s)
Calcium/metabolism , Neuronal Plasticity/physiology , Action Potentials/physiology , Animals , Long-Term Potentiation , Models, Neurological , Nonlinear Dynamics , Rats, Wistar , Time Factors
13.
Elife ; 7, 2018 11 12.
Article in English | MEDLINE | ID: mdl-30418871

ABSTRACT

The cerebellum aids the learning of fast, coordinated movements. According to current consensus, erroneously active parallel fibre synapses are depressed by complex spikes signalling movement errors. However, this theory cannot solve the credit assignment problem of converting a global movement evaluation into multiple cell-specific error signals. We identify a possible implementation of an algorithm solving this problem, whereby spontaneous complex spikes perturb ongoing movements, create eligibility traces and signal error changes guiding plasticity. Error changes are extracted by adaptively cancelling the average error. This framework, stochastic gradient descent with estimated global errors (SGDEGE), predicts synaptic plasticity rules that apparently contradict the current consensus but were supported by plasticity experiments in slices from mice under conditions designed to be physiological, highlighting the sensitivity of plasticity studies to experimental conditions. We analyse the algorithm's convergence and capacity. Finally, we suggest SGDEGE may also operate in the basal ganglia.
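
A toy schematic of the SGDEGE idea as described in the abstract, applied to a linear learner (this is not the authors' cerebellar implementation; the patterns, rates, and perturbation statistics are assumptions): spontaneous events perturb the output, an eligibility trace marks the inputs active at the perturbation, the resulting error is compared with an adaptively estimated average error, and weights are updated in proportion to eligibility times error change.

```python
# Sketch: perturbation-based learning with an adaptively cancelled average error.
import numpy as np

rng = np.random.default_rng(5)
n_in, n_patterns, n_trials = 10, 8, 6000
X = rng.normal(size=(n_patterns, n_in))        # fixed input patterns
w_target = rng.normal(size=n_in)               # weights producing the desired outputs
w = np.zeros(n_in)                             # learned weights
err_avg = np.zeros(n_patterns)                 # running estimate of the average error per pattern
eta, beta, sigma, p_event = 0.02, 0.05, 0.5, 0.3

def mse():
    return np.mean((X @ w - X @ w_target) ** 2)

print("error before learning:", round(mse(), 3))
for trial in range(n_trials):
    k = rng.integers(n_patterns)
    x = X[k]
    perturb = sigma if rng.random() < p_event else 0.0   # spontaneous perturbation event
    y = w @ x + perturb                                  # perturbed output
    error = (y - w_target @ x) ** 2                      # global, scalar movement error

    if perturb:                                          # eligibility: inputs active at the perturbation
        delta_err = error - err_avg[k]                   # error change vs the estimated average
        w -= eta * delta_err * x                         # depress if the perturbation raised the error
    err_avg[k] += beta * (error - err_avg[k])            # adaptively cancel the average error

print("error after learning: ", round(mse(), 3))
```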


Subject(s)
Cerebellum/physiology , Learning , Action Potentials/physiology , Algorithms , Animals , Computer Simulation , Female , Long-Term Potentiation , Mice, Inbred C57BL , Neural Networks, Computer , Neuronal Plasticity/physiology , Purkinje Cells/physiology , Time Factors
14.
Nat Commun ; 9(1): 3590, 2018 09 04.
Article in English | MEDLINE | ID: mdl-30181554

ABSTRACT

Ethologically relevant stimuli are often multidimensional. In many brain systems, neurons with "pure" tuning to one stimulus dimension are found along with "conjunctive" neurons that encode several dimensions, forming an apparently redundant representation. Here we show using theoretical analysis that a mixed-dimensionality code can efficiently represent a stimulus in different behavioral regimes: encoding by conjunctive cells is more robust when the stimulus changes quickly, whereas on long timescales pure cells represent the stimulus more efficiently with fewer neurons. We tested our predictions experimentally in the bat head-direction system and found that many head-direction cells switched their tuning dynamically from pure to conjunctive representation as a function of angular velocity, confirming our theoretical prediction. More broadly, our results suggest that optimal dimensionality depends on population size and on the time available for decoding, which might explain why mixed-dimensionality representations are common in sensory, motor, and higher cognitive systems across species.


Subject(s)
Brain/physiology , Chiroptera/physiology , Neurons/physiology , Animals , Brain/cytology , Head/physiology , Models, Neurological , Orientation/physiology , Time Factors
15.
Neuron ; 91(2): 221-259, 2016 07 20.
Article in English | MEDLINE | ID: mdl-27477016

ABSTRACT

As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods.
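
The model class reviewed here is essentially a linear-nonlinear (GLM-style) spiking model: the stimulus is projected onto one or more feature vectors, spike history feeds back, and a static nonlinearity drives Poisson spiking. A minimal sketch follows (filter shapes, the exponential nonlinearity, and the fitting loop are illustrative assumptions, not a specific method from the review).

```python
# Sketch: simulate a GLM neuron with a stimulus filter and spike-history
# feedback, then recover the stimulus filter by damped Newton ascent of the
# Poisson log-likelihood.
import numpy as np

rng = np.random.default_rng(6)
T, D, H = 30000, 20, 15                       # time bins, stimulus lags, history lags
stim = rng.standard_normal(T)

k_true = 0.8 * np.exp(-np.arange(D) / 6.0) * np.sin(np.arange(D) / 2.5)  # stimulus feature vector
h_true = -1.5 * np.exp(-np.arange(1, H + 1) / 4.0)                       # refractory-like history filter
b_true = np.log(0.05)                                                    # baseline: 0.05 spikes/bin

spikes = np.zeros(T)
for t in range(T):                            # simulate sequentially (history is causal feedback)
    s = stim[max(0, t - D + 1): t + 1][::-1]
    y = spikes[max(0, t - H): t][::-1]
    drive = b_true + k_true[: s.size] @ s + h_true[: y.size] @ y
    spikes[t] = rng.poisson(np.exp(drive))

cols = [np.concatenate([np.zeros(i), stim[: T - i]]) for i in range(D)]          # lagged stimulus
cols += [np.concatenate([np.zeros(j), spikes[: T - j]]) for j in range(1, H + 1)]  # lagged spikes
X = np.column_stack(cols + [np.ones(T)])

theta = np.zeros(X.shape[1])
for _ in range(30):                           # damped Newton steps on the concave log-likelihood
    lam = np.exp(np.clip(X @ theta, -20, 5))
    grad = X.T @ (spikes - lam)
    hess = X.T @ (X * lam[:, None]) + 1e-6 * np.eye(X.shape[1])
    theta += 0.5 * np.linalg.solve(hess, grad)

print("correlation(true filter, recovered filter):",
      round(np.corrcoef(k_true, theta[:D])[0, 1], 3))
```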


Subject(s)
Action Potentials/physiology , Algorithms , Brain/physiology , Models, Neurological , Neurons/physiology , Animals , Humans , Time Factors
16.
Phys Rev E ; 93(2): 022302, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26986347

ABSTRACT

Using a generalized random recurrent neural network model, and by extending our recently developed mean-field approach [J. Aljadeff, M. Stern, and T. Sharpee, Phys. Rev. Lett. 114, 088101 (2015)], we study the relationship between the network connectivity structure and its low-dimensional dynamics. Each connection in the network is a random number with mean 0 and variance that depends on pre- and postsynaptic neurons through a sufficiently smooth function g of their identities. We find that these networks undergo a phase transition from a silent to a chaotic state at a critical point we derive as a function of g. Above the critical point, although unit activation levels are chaotic, their autocorrelation functions are restricted to a low-dimensional subspace. This provides a direct link between the network's structure and some of its functional characteristics. We discuss example applications of the general results to neuroscience where we derive the support of the spectrum of connectivity matrices with heterogeneous and possibly correlated degree distributions, and to ecology where we study the stability of the cascade model for food web structure.
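
A numerical illustration with an assumed smooth gain profile (the analytical location of the transition is the paper's result and is not reproduced here): draw J_ij as Gaussian with standard deviation g(i/N, j/N)/sqrt(N), and scale the overall gain to move the rate dynamics from a silent fixed point to chaotic activity.

```python
# Sketch: silent-to-chaotic transition in a network whose connection variance
# depends smoothly on pre- and postsynaptic identity.
import numpy as np

rng = np.random.default_rng(7)
N, T, dt = 400, 4000, 0.1
u = np.linspace(0, 1, N)

def gain_profile(scale):
    # smooth, neuron-identity-dependent standard-deviation profile (an assumption)
    return scale * (0.5 + np.outer(np.sin(np.pi * u) ** 2, 0.5 + 0.5 * np.cos(np.pi * u)))

for scale in (0.8, 2.5):                              # below vs above the transition
    G = gain_profile(scale)
    J = rng.standard_normal((N, N)) * G / np.sqrt(N)  # element-wise std g(i/N, j/N) / sqrt(N)
    x = 0.1 * rng.standard_normal(N)
    traj = np.empty((T, N))
    for t in range(T):
        x += dt * (-x + J @ np.tanh(x))
        traj[t] = x
    print(f"gain scale {scale}: long-run activity std = {traj[T // 2:].std():.3f}")
```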


Subject(s)
Neural Networks, Computer , Ecological and Environmental Phenomena , Neurons/cytology , Stochastic Processes , Synapses
17.
Nat Commun ; 6: 7842, 2015 Jul 22.
Article in English | MEDLINE | ID: mdl-26198207

ABSTRACT

The stability of ecological systems has been a long-standing focus of ecology. Recently, tools from random matrix theory have identified the main drivers of stability in ecological communities whose network structure is random. However, empirical food webs differ greatly from random graphs. For example, their degree distribution is broader, they contain few trophic cycles, and they are almost interval. Here we derive an approximation for the stability of food webs whose structure is generated by the cascade model, in which 'larger' species consume 'smaller' ones. We predict the stability of these food webs with great accuracy, and our approximation also works well for food webs whose structure is determined empirically or by the niche model. We find that intervality and broad degree distributions tend to stabilize food webs, and that average interaction strength has little influence on stability, compared with the effect of variance and correlation.
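
A numerical sketch of the setting (assumed interaction statistics, not the paper's analytical approximation): build a cascade-model community matrix in which species i can consume species j only if j is "smaller" (j < i), and read off local stability from the leading eigenvalue, comparing against a web with the same interaction magnitudes but unpaired, random signs.

```python
# Sketch: leading eigenvalue of a cascade-model community matrix vs a
# sign-scrambled control with the same interaction magnitudes.
import numpy as np

rng = np.random.default_rng(8)
S, C, d, eff = 250, 0.1, 1.0, 0.3      # species, connectance, self-regulation, conversion efficiency

cascade = -d * np.eye(S)
scrambled = -d * np.eye(S)
for i in range(S):
    for j in range(i):                                    # cascade rule: i may consume j < i
        if rng.random() < 2 * C:
            a = abs(rng.normal(1.0, 0.5))                 # interaction magnitude
            cascade[i, j], cascade[j, i] = eff * a, -a    # paired predator-prey signs
            scrambled[i, j] = eff * a * rng.choice([-1, 1])
            scrambled[j, i] = a * rng.choice([-1, 1])     # same magnitudes, random signs

for name, M in [("cascade (predator-prey)", cascade), ("random signs", scrambled)]:
    print(f"{name:24s} leading Re(eigenvalue): {np.linalg.eigvals(M).real.max():+.2f}")
```

A more negative leading eigenvalue means a more stable community; the paired, antagonistic structure typically pulls it down relative to the sign-scrambled control.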


Subject(s)
Food Chain , Models, Biological , Animals
18.
Phys Rev Lett ; 114(8): 088101, 2015 Feb 27.
Article in English | MEDLINE | ID: mdl-25768781

ABSTRACT

In neural circuits, statistical connectivity rules strongly depend on cell-type identity. We study dynamics of neural networks with cell-type-specific connectivity by extending the dynamic mean-field method and find that these networks exhibit a phase transition between silent and chaotic activity. By analyzing the locus of this transition, we derive a new result in random matrix theory: the spectral radius of a random connectivity matrix with block-structured variances. We apply our results to show how a small group of hyperexcitable neurons within the network can significantly increase the network's computational capacity by bringing it into the chaotic regime.
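
An illustration of the abstract's closing point (parameters are assumptions; the analytical spectral-radius formula is derived in the paper, not here): in a random network with cell-type-specific connection variances, making a small subgroup "hyperexcitable" (larger outgoing weight variance) can push the spectral radius past 1 and tip the rate dynamics from decaying to chaotic.

```python
# Sketch: spectral radius and long-run activity with and without a small
# high-variance ("hyperexcitable") cell group.
import numpy as np

rng = np.random.default_rng(9)
N, f_hyper = 600, 0.1                  # network size, fraction of hyperexcitable cells
n_h = int(f_hyper * N)

def build(g_normal, g_hyper):
    g_out = np.full(N, g_normal)
    g_out[:n_h] = g_hyper              # block structure: variance depends on presynaptic type
    return rng.standard_normal((N, N)) * g_out[None, :] / np.sqrt(N)

def activity_std(J, T=3000, dt=0.1):
    x = 0.1 * rng.standard_normal(N)
    out = np.empty(T)
    for t in range(T):
        x += dt * (-x + J @ np.tanh(x))
        out[t] = x.std()
    return out[T // 2:].mean()

for g_hyper in (0.9, 4.0):             # same gain as the rest vs a hyperexcitable subgroup
    J = build(0.9, g_hyper)
    radius = np.abs(np.linalg.eigvals(J)).max()
    print(f"hyperexcitable gain {g_hyper}: spectral radius = {radius:.2f}, "
          f"long-run activity std = {activity_std(J):.3f}")
```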


Subject(s)
Models, Neurological , Neural Pathways/physiology , Neurons/physiology , Chromosome Pairing/physiology , Nerve Net/physiology , Nonlinear Dynamics
19.
PLoS Comput Biol ; 9(9): e1003206, 2013.
Article in English | MEDLINE | ID: mdl-24039563

ABSTRACT

Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with [Formula: see text] Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons.
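
A schematic of the proposed remedy (toy stimulus statistics and a toy model cell, not the retinal data): with strongly correlated Gaussian inputs, sampling noise along the outstanding covariance mode can swamp the spike-triggered covariance analysis; evaluating candidate dimensions in the subspace orthogonal to that mode makes the relevant dimension recoverable.

```python
# Sketch: STC with one huge stimulus-covariance mode, with and without
# projecting that mode out before the eigen-analysis.
import numpy as np

rng = np.random.default_rng(10)
D, T = 40, 20000
v = np.zeros(D); v[1] = 1.0                     # the neuron's relevant dimension (unit variance)

stim = rng.standard_normal((T, D))
stim[:, 0] *= 40.0                              # long-range correlation -> one outstanding mode

spikes = np.abs(stim @ v) > 1.5                 # toy quadratic-style sensitivity to v only
dC = np.cov(stim[spikes].T) - np.cov(stim.T)    # spike-triggered minus prior covariance

def top_dim(M):
    w, V = np.linalg.eigh(M)
    return V[:, np.argmax(np.abs(w))]

naive = top_dim(dC)                             # full-space STC
outstanding = top_dim(np.cov(stim.T))           # estimated outstanding stimulus mode
P = np.eye(D) - np.outer(outstanding, outstanding)
ortho = top_dim(P @ dC @ P)                     # STC restricted to the orthogonal subspace

print("overlap with true dimension, naive STC:     ", round(abs(naive @ v), 3))
print("overlap with true dimension, mode projected:", round(abs(ortho @ v), 3))
```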


Subject(s)
Action Potentials , Models, Biological , Neurons/physiology