ABSTRACT
The relationship between neuroscience and artificial intelligence (AI) has evolved rapidly over the past decade. These two areas of study influence and stimulate each other. We invited experts to share their perspectives on this exciting intersection, focusing on current achievements, unsolved questions, and future directions.
Subject(s)
Artificial Intelligence, Neurosciences, Humans, Animals
ABSTRACT
The medial entorhinal cortex (MEC) hosts many of the brain's circuit elements for spatial navigation and episodic memory, operations that require neural activity to be organized across long durations of experience1. Whereas location is known to be encoded by spatially tuned cell types in this brain region2,3, little is known about how the activity of entorhinal cells is tied together over time at behaviourally relevant time scales, in the second-to-minute regime. Here we show that MEC neuronal activity has the capacity to be organized into ultraslow oscillations, with periods ranging from tens of seconds to minutes. During these oscillations, the activity is further organized into periodic sequences. Oscillatory sequences manifested while mice ran at free pace on a rotating wheel in darkness, with no change in location or running direction and no scheduled rewards. The sequences involved nearly the entire cell population, and transcended epochs of immobility. Similar sequences were not observed in neighbouring parasubiculum or in visual cortex. Ultraslow oscillatory sequences in MEC may have the potential to couple neurons and circuits across extended time scales and serve as a template for new sequence formation during navigation and episodic memory formation.
Subject(s)
Entorhinal Cortex, Neurons, Periodicity, Animals, Mice, Entorhinal Cortex/cytology, Entorhinal Cortex/physiology, Neurons/physiology, Parahippocampal Gyrus/physiology, Running/physiology, Time Factors, Darkness, Visual Cortex/physiology, Neural Pathways, Spatial Navigation/physiology, Episodic Memory
ABSTRACT
Local circuit architecture facilitates the emergence of feature selectivity in the cerebral cortex1. In the hippocampus, it remains unknown whether local computations supported by specific connectivity motifs2 regulate the spatial receptive fields of pyramidal cells3. Here we developed an in vivo electroporation method for monosynaptic retrograde tracing4 and optogenetics manipulation at single-cell resolution to interrogate the dynamic interaction of place cells with their microcircuitry during navigation. We found a local circuit mechanism in CA1 whereby the spatial tuning of an individual place cell can propagate to a functionally recurrent subnetwork5 to which it belongs. The emergence of place fields in individual neurons led to the development of inverse selectivity in a subset of their presynaptic interneurons, and recruited functionally coupled place cells at that location. Thus, the spatial selectivity of single CA1 neurons is amplified through local circuit plasticity to enable effective multi-neuronal representations that can flexibly scale environmental features locally without degrading the feedforward input structure.
Subject(s)
Hippocampus/cytology, Hippocampus/physiology, Neural Pathways, Spatial Memory/physiology, Spatial Navigation/physiology, Animals, Hippocampal CA1 Region/cytology, Hippocampal CA1 Region/physiology, Cell Lineage, Electroporation, Female, Interneurons/physiology, Male, Mice, Neural Inhibition, Optogenetics, Place Cells/physiology, Presynaptic Terminals/metabolism, Pyramidal Cells/physiology, Single-Cell Analysis
ABSTRACT
Inhibitory interneurons are pivotal components of cortical circuits. Beyond providing inhibition, they have been proposed to coordinate the firing of excitatory neurons within cell assemblies. While the roles of specific interneuron subtypes have been extensively studied, their influence on pyramidal cell synchrony in vivo remains elusive. Employing an all-optical approach in mice, we simultaneously recorded hippocampal interneurons and pyramidal cells and probed the network influence of individual interneurons using optogenetics. We demonstrate that CA1 interneurons form a functionally interconnected network that promotes synchrony through disinhibition during awake immobility, while preserving endogenous cell assemblies. Our network model underscores the importance of both cell assemblies and dense, unspecific interneuron connectivity in explaining our experimental findings, suggesting that interneurons may operate not only via division of labor but also through concerted activity.
Subject(s)
Hippocampus, Interneurons, Optogenetics, Pyramidal Cells, Animals, Interneurons/physiology, Pyramidal Cells/physiology, Mice, Hippocampus/physiology, Nerve Net/physiology, Neural Inhibition/physiology, Hippocampal CA1 Region/physiology, Hippocampal CA1 Region/cytology, Action Potentials/physiology, Male, Inbred C57BL Mice
ABSTRACT
Theoretical models conventionally portray the consolidation of memories as a slow process that unfolds during sleep. According to the classical Complementary Learning Systems theory, the hippocampus (HPC) rapidly changes its connectivity during wakefulness to encode ongoing events and create memory ensembles that are later transferred to the prefrontal cortex (PFC) during sleep. However, recent experimental studies challenge this notion by showing that new information consistent with prior knowledge can be rapidly consolidated in PFC during wakefulness and that PFC lesions disrupt the encoding of congruent events in the HPC. The contributions of the PFC to memory encoding have therefore largely been overlooked. Moreover, most theoretical frameworks assume random and uncorrelated patterns representing memories, disregarding the correlations between our experiences. To address these shortcomings, we developed an HPC-PFC network model that simulates interactions between the HPC and PFC during the encoding of a memory (awake stage) and subsequent consolidation (sleeping stage) to examine the contributions of each region to the consolidation of novel and congruent memories. Our results show that the PFC network uses stored memory "schemas" consolidated during previous experiences to identify inputs that evoke congruent patterns of activity, quickly integrate them into its own network, and gate which components are encoded in the HPC. More specifically, the PFC uses GABAergic long-range projections to inhibit HPC neurons representing input components correlated with a previously stored memory "schema," eliciting sparse hippocampal activity during exposure to congruent events, as has been observed experimentally.
Subject(s)
Hippocampus, Memory, Prefrontal Cortex, Sleep, Prefrontal Cortex/physiology, Hippocampus/physiology, Memory/physiology, Humans, Sleep/physiology, Wakefulness/physiology, Neurological Models, Memory Consolidation/physiology, Animals
ABSTRACT
Neuronal networks with strong recurrent connectivity provide the brain with a powerful means to perform complex computational tasks. However, high-gain excitatory networks are susceptible to instability, which can lead to runaway activity, as manifested in pathological regimes such as epilepsy. Inhibitory stabilization offers a dynamic, fast and flexible compensatory mechanism to balance otherwise unstable networks, thus enabling the brain to operate in its most efficient regimes. Here we review recent experimental evidence for the presence of such inhibition-stabilized dynamics in the brain and discuss their consequences for cortical computation. We show how the study of inhibition-stabilized networks in the brain has been facilitated by recent advances in the technological toolbox and perturbative techniques, as well as a concomitant development of biologically realistic computational models. By outlining future avenues, we suggest that inhibitory stabilization can offer an exemplary case of how experimental neuroscience can progress in tandem with technology and theory to advance our understanding of the brain.
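The paradoxical effect at the heart of inhibition-stabilized networks can be illustrated with a minimal two-population, threshold-linear rate model. All parameter values below (weights, inputs, integration settings) are assumptions chosen for this sketch, not numbers taken from the studies reviewed here:

```python
import numpy as np

def simulate(g_e, g_i, w_ee=2.0, w_ei=1.0, w_ie=2.0, w_ii=0.5,
             dt=0.01, steps=5000):
    """Euler-integrate a threshold-linear E-I rate model to steady state."""
    relu = lambda x: np.maximum(x, 0.0)
    e, i = 0.5, 0.5
    for _ in range(steps):
        de = -e + relu(w_ee * e - w_ei * i + g_e)
        di = -i + relu(w_ie * e - w_ii * i + g_i)
        e, i = e + dt * de, i + dt * di
    return e, i

# With w_ee > 1 the excitatory subnetwork is unstable on its own and must
# be stabilized by inhibition (an ISN). In this regime, extra drive to the
# inhibitory population *lowers* its steady-state rate: the paradoxical effect.
e1, i1 = simulate(g_e=1.0, g_i=1.0)   # baseline
e2, i2 = simulate(g_e=1.0, g_i=1.2)   # stimulate the inhibitory cells
print(i2 < i1)
```

For these parameters the fixed point can be solved by hand (e = 1, i = 2 at baseline), so the simulation doubles as a check on the linear analysis.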
Subject(s)
Brain/physiology, Neurological Models, Nerve Net/physiology, Neural Inhibition/physiology, Neurons/physiology, Action Potentials/physiology, Animals, Computer Simulation, Humans
ABSTRACT
Memories are thought to be stored in neural ensembles known as engrams that are specifically reactivated during memory recall. Recent studies have found that memory engrams of two events that happened close in time tend to overlap in the hippocampus and the amygdala, and these overlaps have been shown to support memory linking. It has been hypothesized that engram overlaps arise from the mechanisms that regulate memory allocation itself, involving neural excitability, but the exact process remains unclear. Indeed, most theoretical studies focus on synaptic plasticity and little is known about the role of intrinsic plasticity, which could be mediated by neural excitability and serve as a complementary mechanism for forming memory engrams. Here, we developed a rate-based recurrent neural network that includes both synaptic plasticity and neural excitability. We obtained structural and functional overlap of memory engrams for contexts that are presented close in time, consistent with experimental and computational studies. We then investigated the role of excitability in memory allocation at the network level and unveiled competitive mechanisms driven by inhibition. This work suggests mechanisms underlying the role of intrinsic excitability in memory allocation and linking, and yields predictions regarding the formation and the overlap of memory engrams.
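The excitability-based allocation idea can be caricatured in a few lines: neurons compete for inclusion in an engram, and a transient excitability boost given to recent winners biases the next allocation toward the same cells, so engrams of events close in time overlap. Population size, engram size, and the boost amplitude below are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 20                          # population size, engram size (assumed)

def encode(excitability, boost=2.0):
    """Allocate an engram: the k neurons with the largest total drive
    (random contextual input + intrinsic excitability) win the
    inhibition-mediated competition, then receive an excitability boost."""
    drive = rng.normal(size=n) + excitability
    winners = set(np.argsort(drive)[-k:])
    new_exc = excitability.copy()
    new_exc[list(winners)] += boost
    return winners, new_exc

engram_a, exc = encode(np.zeros(n))
engram_b_soon, _ = encode(exc)           # context B soon after A: boost persists
engram_b_late, _ = encode(np.zeros(n))   # context B much later: boost has decayed

# Engrams of events close in time overlap far more than chance (k*k/n = 2 cells)
print(len(engram_a & engram_b_soon), len(engram_a & engram_b_late))
```

The full model in the abstract adds synaptic plasticity and recurrent inhibition; this sketch isolates only the excitability-bias mechanism.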
Subject(s)
Memory, Neuronal Plasticity, Humans, Memory/physiology, Neuronal Plasticity/physiology, Neurological Models, Neurons/physiology, Nerve Net/physiology, Animals, Computational Neural Networks, Hippocampus/physiology
ABSTRACT
Filopodia are thin synaptic protrusions that have long been known to play an important role in early development. Recently, they have been found to be more abundant in the adult cortex than previously thought, and more plastic than spines (button-shaped mature synapses). Inspired by these findings, we introduce a new model of synaptic plasticity that jointly describes learning of filopodia and spines. The model assumes that filopodia exhibit strongly competitive learning dynamics, similar to additive spike-timing-dependent plasticity (STDP). At the same time, it proposes that, if filopodia undergo sufficient potentiation, they consolidate into spines. Spines follow weakly competitive learning, classically associated with multiplicative, soft-bounded models of STDP. This makes spines more stable and sensitive to the fine structure of input correlations. We show that our learning rule has a selectivity comparable to additive STDP and captures input correlations as well as multiplicative models of STDP. We also show how it can protect previously formed memories and perform synaptic consolidation. Overall, our results can be seen as a phenomenological description of how filopodia and spines could cooperate to overcome the individual difficulties faced by strong and weak competition mechanisms.
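A minimal sketch of such a two-regime rule, with illustrative (assumed) values for the consolidation threshold THETA, the hard bound W_MAX, and the STDP amplitudes and time constant: below THETA a contact behaves like a filopodium under additive, strongly competitive STDP; at or above THETA its updates become weight-dependent (multiplicative, soft-bounded), as for spines:

```python
import numpy as np

W_MAX, THETA = 1.0, 0.5     # hard bound and consolidation threshold (assumed)

def stdp_update(w, dt_spike, a_plus=0.05, a_minus=0.055, tau=20.0):
    """One pair-based STDP update for a contact of weight w.

    dt_spike = t_post - t_pre in ms. Below THETA the contact is a
    filopodium and follows additive STDP; at or above THETA it is a
    consolidated spine and follows multiplicative, soft-bounded STDP.
    """
    if dt_spike > 0:                    # pre before post: potentiation
        dw = a_plus * np.exp(-dt_spike / tau)
        if w >= THETA:
            dw *= (W_MAX - w)           # spine: LTP scaled by remaining headroom
    else:                               # post before pre: depression
        dw = -a_minus * np.exp(dt_spike / tau)
        if w >= THETA:
            dw *= w                     # spine: LTD scaled by current weight
    return float(np.clip(w + dw, 0.0, W_MAX))

# Repeated potentiation drives a filopodium across THETA, after which it
# consolidates and approaches W_MAX only softly.
w = 0.1
for _ in range(30):
    w = stdp_update(w, dt_spike=5.0)
print(w)
```

The published rule will differ in detail; the point of the sketch is only the hand-off from hard-bounded to weight-dependent dynamics at the consolidation threshold.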
Subject(s)
Dendritic Spines, Learning, Neurological Models, Neuronal Plasticity, Pseudopodia, Pseudopodia/physiology, Neuronal Plasticity/physiology, Dendritic Spines/physiology, Learning/physiology, Animals, Humans, Computational Biology, Synapses/physiology, Neurons/physiology, Action Potentials/physiology
ABSTRACT
Significance: An influential idea in neuroscience is that neural circuits do not merely passively process sensory information but actively compare it with predictions thereof. A core element of this comparison is prediction-error neurons, whose activity changes only upon mismatches between actual and predicted sensory stimuli. While it has been shown that these prediction-error neurons come in different variants, it is largely unresolved how they are simultaneously formed and shaped by highly interconnected neural networks. Using a computational model, we study the circuit-level mechanisms that give rise to different variants of prediction-error neurons. Our results shed light on the formation, refinement, and robustness of prediction-error circuits, an important step toward a better understanding of predictive processing.
Subject(s)
Computational Neural Networks, Neurons, Neurons/physiology
ABSTRACT
The activity of neurons in the visual cortex is often characterized by tuning curves, which are thought to be shaped by Hebbian plasticity during development and sensory experience. This leads to the prediction that neural circuits should be organized such that neurons with similar functional preference are connected with stronger weights. In support of this idea, previous experimental and theoretical work has provided evidence for a model of the visual cortex characterized by such functional subnetworks. A recent experimental study, however, found that the postsynaptic preferred stimulus was defined by the total number of spines activated by a given stimulus, independent of their individual strength. While this result might seem to contradict previous literature, there are many factors that define how a given synaptic input influences postsynaptic selectivity. Here, we designed a computational model in which postsynaptic functional preference is defined by the number of inputs activated by a given stimulus. Using a plasticity rule in which synaptic weights tend to correlate with presynaptic selectivity, independently of the functional similarity between pre- and postsynaptic activity, we find that this model can be used to decode presented stimuli in a manner comparable to maximum likelihood inference.
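The count-based scheme can be sketched as follows; the population sizes, connection probability, and noise level are all assumptions made for illustration. Each cell's preferred stimulus is the one activating the largest number of its inputs, irrespective of synaptic strength, and the presented stimulus is then decoded from which cells respond most:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_inputs, n_cells = 8, 500, 50   # assumed sizes

# Each input (spine) is tuned to one stimulus; each cell samples a random
# subset of the inputs with probability 0.2.
input_pref = rng.integers(n_stim, size=n_inputs)
conn = rng.random((n_cells, n_inputs)) < 0.2

# A cell's response to stimulus s = the NUMBER of its activated inputs;
# individual synaptic strengths play no role in this count-based model.
counts = np.stack([(conn & (input_pref == s)).sum(axis=1)
                   for s in range(n_stim)], axis=1)
cell_pref = counts.argmax(axis=1)        # each cell's preferred stimulus

def decode(s, noise=2.0):
    """Decode a noisy population response by majority vote among the
    preferred stimuli of the 10 most active cells."""
    resp = counts[:, s] + rng.normal(scale=noise, size=n_cells)
    top = np.argsort(resp)[-10:]
    return int(np.bincount(cell_pref[top], minlength=n_stim).argmax())

trials = [(s, decode(s)) for s in range(n_stim) for _ in range(20)]
accuracy = np.mean([s == s_hat for s, s_hat in trials])
print(accuracy)   # well above the 1/8 chance level
```

The majority-vote decoder here is a stand-in, not the paper's maximum-likelihood comparison; it only shows that count-defined preferences carry enough information to decode the stimulus.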
Subject(s)
Neurological Models, Visual Cortex, Neurons/physiology, Visual Cortex/physiology, Neuronal Plasticity/physiology, Synapses/physiology
ABSTRACT
Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained underexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behavior of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely, serial position effects, contiguity and forward asymmetry effects, and the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates, and continuous and/or end-of-list distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example, in the form of weak random stimuli during recall. Finally, we predict that, although the statistics of the encoded memories have a strong effect on the recall capacity, the power laws governing recall capacity may still be expected to hold.
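One core ingredient, an autoassociative network whose recall is destabilized by firing-rate adaptation so that the state wanders from memory to memory, can be caricatured in a toy Hopfield-style simulation. The pattern count, adaptation rate, and gain below are illustrative assumptions, and this sketch omits the heteroassociative and short-term facilitation terms of the full model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 8                         # neurons, stored memories (assumed)
xi = rng.choice([-1.0, 1.0], size=(p, n))
W = (xi.T @ xi) / n                   # standard autoassociative learning rule
np.fill_diagonal(W, 0.0)

s = xi[0].copy()                      # start the recall in memory 0
a = np.zeros(n)                       # firing-rate adaptation variable
visited = []
for t in range(300):
    a += 0.08 * ((s > 0) - a)         # adaptation builds up on active units
    s = np.sign(W @ s - 1.2 * a + 1e-9)
    visited.append(int(np.argmax(np.abs(xi @ s) / n)))

# Adaptation destabilizes the current attractor, so recall wanders through
# a subset of the stored memories rather than staying put.
print(sorted(set(visited)))
```

In the full model the wandering is further shaped by global inhibition and limit cycles, which is what bounds the number of retrievable memories; the sketch only shows the destabilization mechanism.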
Subject(s)
Brain/physiology, Memory/physiology, Neurons/physiology, Information Theory, Biological Models, Time Factors
ABSTRACT
To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations, by modeling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone was sufficient to explain the experimental findings; instead, strong and functionally specific excitatory-inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. Such networks had a higher capacity to encode and decode natural images, and this was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding and paves the way to mapping the perturbome of neuronal networks in future studies.
Subject(s)
Connectome, Neurological Models, Neurons/physiology, Visual Cortex/physiology, Animals, Humans, Perception
ABSTRACT
BACKGROUND: Stress-induced mental illnesses (mediated by neuroinflammation) pose one of the world's most urgent public health challenges. A reliable in vivo chemical biomarker of stress would significantly improve the clinical community's diagnostic and therapeutic approaches to illnesses such as depression. METHODS: Male and female C57BL/6J mice underwent a chronic stress paradigm. We paired innovative in vivo serotonin and histamine voltammetric measurement technologies, behavioral testing, and cutting-edge mathematical methods to correlate chemistry to stress and behavior. RESULTS: Inflammation-induced increases in hypothalamic histamine were co-measured with decreased in vivo extracellular hippocampal serotonin in mice that underwent a chronic stress paradigm, regardless of behavioral phenotype. In animals with depression phenotypes, correlations were found between serotonin and the extent of behavioral indices of depression. We created a high-accuracy algorithm that could predict whether or not animals had been exposed to stress, based solely on the serotonin measurement. We next developed a model of serotonin and histamine modulation, which predicted that stress-induced neuroinflammation increases histaminergic activity, serving to inhibit serotonin. Finally, we created a mathematical index of stress, Si, and predicted that during chronic stress, where Si is high, simultaneously increasing serotonin and decreasing histamine is the most effective chemical strategy for restoring serotonin to pre-stress levels. When we pursued this idea pharmacologically, our experimental results closely matched the model's predictions. CONCLUSIONS: This work shines a light on two biomarkers of chronic stress, histamine and serotonin, and implies that both may be important in our future investigations of the pathology and treatment of inflammation-induced depression.
Subject(s)
Histamine, Serotonin, Animals, Biomarkers, Female, Inflammation, Male, Mice, Inbred C57BL Mice
ABSTRACT
Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
Subject(s)
Learning, Neurological Models, Motor Cortex/physiology, Physiological Adaptation, Animals, Animal Behavior, Brain-Computer Interfaces, Computer Simulation, Feedback, Haplorhini, Motivation, Neuronal Plasticity, Neurons, Normal Distribution, Principal Component Analysis
ABSTRACT
Cell assemblies are thought to be the substrate of memory in the brain. Theoretical studies have previously shown that assemblies can be formed in networks with multiple types of plasticity. But how exactly they are formed and how they encode information is yet to be fully understood. One possibility is that memories are stored in silent assemblies. Here we used a computational model to study the formation of silent assemblies in a network of spiking neurons with excitatory and inhibitory plasticity. We found that even though the formed assemblies were silent in terms of mean firing rate, they had an increased coefficient of variation of inter-spike intervals. We also found that this spiking irregularity could be read out with support of short-term plasticity, and that it could contribute to the longevity of memories.
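The proposed readout quantity, spiking irregularity at an unchanged mean rate, is the coefficient of variation (CV) of inter-spike intervals. A small sketch using gamma-distributed ISIs, where the rate and shape parameters are assumed for illustration (a shape of 1 would give Poisson-like firing with CV = 1):

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of inter-spike intervals."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(0)
rate, n_spikes = 5.0, 2000              # Hz; number of spikes (assumed)

# Two trains with identical mean rate but different regularity:
# gamma-distributed ISIs have CV = 1/sqrt(shape).
regular = np.cumsum(rng.gamma(shape=10.0, scale=1 / (10.0 * rate), size=n_spikes))
bursty = np.cumsum(rng.gamma(shape=0.3, scale=1 / (0.3 * rate), size=n_spikes))

# Same mean firing rate, but the "silent assembly" signature would be
# the elevated CV of the second train.
print(cv_isi(regular), cv_isi(bursty))  # regular well below 1, bursty well above
```

This is only the statistic; in the model the elevated CV arises from assembly membership and is read out via short-term plasticity rather than computed explicitly.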
Subject(s)
Action Potentials, Memory, Animals, Neurological Models, Neural Inhibition/physiology
ABSTRACT
Sequential behaviour is often compositional and organised across multiple time scales: individual elements that develop on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
Subject(s)
Action Potentials/physiology, Learning/physiology, Neurological Models, Neurons/physiology, Animals, Brain/physiology, Computational Biology
ABSTRACT
To survive, animals have to quickly modify their behaviour when the reward changes. The internal representations responsible for this are updated through synaptic weight changes, mediated by certain neuromodulators conveying feedback from the environment. In previous experiments, we discovered a form of hippocampal Spike-Timing-Dependent-Plasticity (STDP) that is sequentially modulated by acetylcholine and dopamine. Acetylcholine facilitates synaptic depression, while dopamine retroactively converts the depression into potentiation. When these experimental findings were implemented as a learning rule in a computational model, our simulations showed that cholinergic-facilitated depression is important for reversal learning. In the present study, we tested the model's prediction by optogenetically inactivating cholinergic neurons in mice during a hippocampus-dependent spatial learning task with changing rewards. We found that reversal learning, but not initial place learning, was impaired, verifying our computational prediction that acetylcholine-modulated plasticity promotes the unlearning of old reward locations. Further, differences in neuromodulator concentrations in the model captured mouse-by-mouse performance variability in the optogenetic experiments. Our line of work sheds light on how neuromodulators enable the learning of new contingencies.
Subject(s)
Animal Behavior, Learning/physiology, Neuronal Plasticity/physiology, Synaptic Transmission/physiology, Animals, Cholinergic Neurons/physiology, Long-Term Potentiation/physiology, Mice, Neurological Models, Neurotransmitters/physiology, Reward
ABSTRACT
In the hippocampus, episodic memories are thought to be encoded by the formation of ensembles of synaptically coupled CA3 pyramidal cells driven by sparse but powerful mossy fiber inputs from dentate gyrus granule cells. The neuromodulators acetylcholine and noradrenaline are separately proposed as saliency signals that dictate memory encoding, but it is not known whether they represent distinct signals with separate mechanisms. Here, we show experimentally that acetylcholine, and to a lesser extent noradrenaline, suppresses feed-forward inhibition and enhances the excitatory-inhibitory ratio in the mossy fiber pathway, but CA3 recurrent network properties are only altered by acetylcholine. We explore the implications of these findings for CA3 ensemble formation using a hierarchy of models. In reconstructions of CA3 pyramidal cells, mossy fiber pathway disinhibition facilitates the postsynaptic dendritic depolarization known to be required for synaptic plasticity at CA3-CA3 recurrent synapses. We further show in a spiking neural network model of CA3 how acetylcholine-specific network alterations can drive rapid overlapping ensemble formation. Thus, through these distinct sets of mechanisms, acetylcholine and noradrenaline facilitate the formation of neuronal ensembles in CA3 that encode salient episodic memories in the hippocampus, but acetylcholine selectively enhances the density of memory storage.
Subject(s)
Acetylcholine/pharmacology, Hippocampal CA3 Region, Memory, Norepinephrine/pharmacology, Animals, Hippocampal CA3 Region/cytology, Hippocampal CA3 Region/drug effects, Hippocampal CA3 Region/physiology, Computational Biology, Memory/drug effects, Memory/physiology, Mice, Inbred C57BL Mice, Neurological Models, Neuronal Plasticity/drug effects, Neurons/drug effects, Pyramidal Cells/drug effects
ABSTRACT
During the exploration of novel environments, place fields are rapidly formed in hippocampal CA1 neurons. Place cell firing rate increases in early stages of exploration of novel environments but returns to baseline levels in familiar environments. Although similar in amplitude and width, place fields in familiar environments are more stable than in novel environments. We propose a computational model of the hippocampal CA1 network, which describes the formation, dynamics and stabilization of place fields. We show that although somatic disinhibition is sufficient to form place fields, dendritic inhibition along with synaptic plasticity is necessary for place field stabilization. Our model suggests that place cell stability can be attributed to strong excitatory synaptic weights and strong dendritic inhibition. We show that the interplay between somatic and dendritic inhibition balances the increased excitatory weights, such that place cells return to their baseline firing rate after exploration. Our model suggests that different types of interneurons are essential to unravel the mechanisms underlying place field plasticity. Finally, we predict that artificially induced dendritic events can shift place fields even after place field stabilization.
Subject(s)
Hippocampal CA1 Region, Dendrites/physiology, Neural Inhibition/physiology, Place Cells/physiology, Action Potentials/physiology, Animals, Hippocampal CA1 Region/cytology, Hippocampal CA1 Region/physiology, Computational Biology, Mice, Neurological Models, Neuronal Plasticity/physiology
ABSTRACT
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. How the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic, biologically plausible learning. Here, we propose a model in which a recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network are trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights onto the read-out neurons, which follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant, and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.