Results 1 - 16 of 16
1.
Cell Rep Methods; 4(1): 100681, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38183979

ABSTRACT

Neuroscience is moving toward a more integrative discipline where understanding brain function requires consolidating the accumulated evidence seen across experiments, species, and measurement techniques. A remaining challenge on that path is integrating such heterogeneous data into analysis workflows such that consistent and comparable conclusions can be distilled as an experimental basis for models and theories. Here, we propose a solution in the context of slow-wave activity (<1 Hz), which occurs during unconscious brain states like sleep and general anesthesia and is observed across diverse experimental approaches. We address the issue of integrating and comparing heterogeneous data by conceptualizing a general pipeline design that is adaptable to a variety of inputs and applications. Furthermore, we present the Collaborative Brain Wave Analysis Pipeline (Cobrawap) as a concrete, reusable software implementation to perform broad, detailed, and rigorous comparisons of slow-wave characteristics across multiple, openly available electrocorticography (ECoG) and calcium imaging datasets.
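To make the pipeline idea concrete, the following Python sketch shows how heterogeneous recordings could be funneled through a chain of interchangeable stages ending in a shared slow-wave analysis step. The `Recording` container, the stage functions, and the crude detection are illustrative assumptions, not the actual Cobrawap interface.

```python
# Illustrative sketch only: stage names and data layout are hypothetical,
# not the Cobrawap API.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Recording:
    signal: np.ndarray      # shape (channels, time)
    sample_rate: float      # Hz
    modality: str           # e.g. "ECoG" or "calcium"

Stage = Callable[[Recording], Recording]

def bandpass_below_1hz(rec: Recording) -> Recording:
    """Keep only the slow-wave band (<1 Hz) via a crude FFT mask."""
    spec = np.fft.rfft(rec.signal, axis=1)
    freqs = np.fft.rfftfreq(rec.signal.shape[1], d=1.0 / rec.sample_rate)
    spec[:, freqs > 1.0] = 0.0
    filtered = np.fft.irfft(spec, n=rec.signal.shape[1], axis=1)
    return Recording(filtered, rec.sample_rate, rec.modality)

def detect_upstate_onsets(rec: Recording) -> np.ndarray:
    """Threshold crossings as a stand-in for wave-front detection."""
    thr = rec.signal.mean(axis=1, keepdims=True)
    return np.diff((rec.signal > thr).astype(int), axis=1) == 1

def run_pipeline(rec: Recording, stages: List[Stage]) -> np.ndarray:
    for stage in stages:    # dataset-specific stages first, shared ones later
        rec = stage(rec)
    return detect_upstate_onsets(rec)
```

The point of the design is that only the early, modality-specific stages change between datasets, while the downstream characterization of slow-wave fronts stays identical, which is what makes the comparisons across ECoG and calcium imaging consistent.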


Subjects
Brain Waves; Software; Brain; Sleep; Brain Mapping/methods
2.
Proc Natl Acad Sci U S A; 120(49): e2220743120, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38019856

ABSTRACT

The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules that improve current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that input segregation (neurons receive sensory information and higher-order feedback in segregated compartments) and nonlinear dendritic computation could support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatiotemporal structure to all the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide natural support for target-based learning, which propagates targets rather than errors. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture supports a burst-dependent learning rule based on the comparison between the target bursting activity triggered by the teaching signal and the bursting caused by the recurrent connections. We show that this framework can be used to efficiently solve spatiotemporal tasks, such as the context-dependent storage and recall of three-dimensional trajectories, and navigation tasks. Finally, we suggest that this neuronal architecture naturally allows for orchestrating "hierarchical imitation learning", enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. We show a possible implementation of this in a two-level network, where the higher-level network produces the contextual signal for the lower-level network.
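As a toy illustration of the burst-based comparison described above, here is a minimal Python sketch: bursting requires coincident basal and apical drive, and weights move in proportion to the mismatch between teacher-driven and recurrently driven bursting. The sigmoid coincidence function, the outer-product update, and all variable names are assumptions made for illustration, not the paper's equations.

```python
# Hedged sketch of a burst-dependent, target-based update.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 50, 10
W = 0.1 * rng.standard_normal((n_out, n_in))   # recurrent/feedforward weights
eta = 1e-3                                     # learning rate

def burst_prob(basal_drive, apical_drive):
    """Bursting requires coincident basal and apical depolarization."""
    return 1.0 / (1.0 + np.exp(-(basal_drive * apical_drive)))

presyn_rate = rng.random(n_in)                 # filtered presynaptic activity
basal = W @ presyn_rate                        # drive from recurrent connections
apical_teacher = rng.random(n_out)             # higher-order feedback (teaching signal)

target_burst = burst_prob(basal, apical_teacher)   # bursting with the teacher present
actual_burst = burst_prob(basal, np.zeros(n_out))  # bursting from recurrence alone

# Weight change proportional to the burst mismatch and presynaptic activity.
W += eta * np.outer(target_burst - actual_burst, presyn_rate)
```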


Subjects
Artificial Intelligence; Neurons; Neurons/physiology; Brain/physiology; Machine Learning; Models, Neurological; Action Potentials/physiology
3.
Commun Biol; 6(1): 266, 2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36914748

ABSTRACT

The development of novel techniques to record wide-field brain activity enables the estimation of data-driven models from thousands of recording channels and hence across large regions of cortex. These in turn improve our understanding of the modulation of brain states and the richness of traveling-wave dynamics. Here, we infer data-driven models from high-resolution in vivo recordings of the mouse brain obtained with wide-field calcium imaging. We then assimilate experimental and simulated data through the characterization of the spatio-temporal features of cortical waves in experimental recordings. Inference proceeds in two steps: an inner loop that optimizes a mean-field model by likelihood maximization, and an outer loop that optimizes a periodic neuromodulation via direct comparison of observables that characterize cortical slow waves. The model reproduces most of the features of the non-stationary and non-linear dynamics present in the high-resolution in vivo recordings of the mouse brain. The proposed approach offers new methods of characterizing and understanding cortical waves for experimental and computational neuroscientists.
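The two-level structure of the inference can be sketched in a few lines of Python. Everything here is a stand-in: the toy simulator, the Gaussian-noise likelihood, and the summary statistics are assumptions used only to show how an inner likelihood fit nests inside an outer loop that matches wave observables.

```python
# Schematic two-level fit with a toy mean-field stand-in.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 10.0, 500)

def simulate_mean_field(theta, modulation):
    """Toy stand-in for the mean-field simulator: a periodically modulated rate."""
    amp, freq = theta
    slow = 1.0 + modulation * np.sin(2 * np.pi * 0.2 * t)
    return amp * np.sin(2 * np.pi * freq * t) * slow

def neg_log_likelihood(theta, data, modulation):
    """Gaussian-noise negative log-likelihood (up to constants)."""
    return np.sum((simulate_mean_field(theta, modulation) - data) ** 2)

def wave_observables(x):
    """Placeholder summary statistics standing in for slow-wave observables."""
    return np.array([x.mean(), x.std()])

def fit(data, modulation_grid, theta0):
    best = None
    for modulation in modulation_grid:                  # outer loop: neuromodulation
        inner = minimize(neg_log_likelihood, theta0,    # inner loop: model parameters
                         args=(data, modulation))
        sim = simulate_mean_field(inner.x, modulation)
        mismatch = np.linalg.norm(wave_observables(sim) - wave_observables(data))
        if best is None or mismatch < best[0]:
            best = (mismatch, inner.x, modulation)
    return best

data = simulate_mean_field(np.array([1.0, 1.3]), 0.5)   # synthetic "recording"
print(fit(data, modulation_grid=[0.0, 0.25, 0.5], theta0=np.array([0.5, 1.0])))
```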


Subjects
Brain Waves; Electroencephalography; Animals; Mice; Electroencephalography/methods; Brain; Models, Neurological; Computer Simulation
4.
PLoS Comput Biol; 18(6): e1010221, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35727852

ABSTRACT

The field of recurrent neural networks is over-populated by a variety of proposed learning rules and protocols. The scope of this work is to define a generalized framework as a step toward unifying this fragmented scenario. In the field of supervised learning, two opposite approaches stand out: error-based and target-based. This duality has given rise to a scientific debate on which learning framework is the most likely to be implemented in biological networks of neurons. Moreover, the existence of spikes raises the question of whether the coding of information is rate-based or spike-based. To address these questions, we propose a learning model with two main parameters, the rank of the feedback learning matrix [Formula: see text] and the tolerance to spike timing τ⋆. We demonstrate that a low (high) rank [Formula: see text] accounts for an error-based (target-based) learning rule, while high (low) tolerance to spike timing promotes rate-based (spike-based) coding. We show that in a store-and-recall task, high ranks allow for lower MSE values, while low ranks enable faster convergence. Our framework naturally lends itself to Behavioral Cloning and allows for efficiently solving relevant closed-loop tasks, investigating what parameters [Formula: see text] are optimal for solving a specific task. We found that a high [Formula: see text] is essential for tasks that require retaining memory for a long time (Button and Food). On the other hand, this is not relevant for a motor task (the 2D Bipedal Walker), where we find that precise spike-based coding enables optimal performance. Finally, we show that our theoretical formulation allows for defining protocols to estimate the rank of the feedback error in biological networks. We release a PyTorch implementation of our model supporting GPU parallelization.
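The rank parameter can be made tangible with a short Python sketch: a feedback matrix of rank k confines the instructive signal to a k-dimensional subspace, so a rank comparable to the output dimension resembles an error-based regime, while a rank comparable to the network size resembles a target-based one. Shapes, names, and the random construction below are assumptions for illustration only.

```python
# Minimal illustration of low- vs high-rank feedback matrices.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_out = 200, 3

def feedback_matrix(n, rank):
    """Random n x n matrix with the requested rank (outer product of factors)."""
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((rank, n))
    return (U @ V) / np.sqrt(rank)

B_error_like  = feedback_matrix(n_neurons, rank=n_out)       # low rank ~ error-based
B_target_like = feedback_matrix(n_neurons, rank=n_neurons)   # full rank ~ target-based

print(np.linalg.matrix_rank(B_error_like),    # 3: feedback lives in a small subspace
      np.linalg.matrix_rank(B_target_like))   # 200: each neuron can get its own target
```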


Subjects
Models, Neurological; Neural Networks, Computer; Action Potentials/physiology; Neurons/physiology
5.
PLoS Comput Biol; 17(6): e1009045, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34181642

ABSTRACT

The brain exhibits capabilities of fast incremental learning from a few noisy examples, as well as the ability to associate similar memories in autonomously created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable to biological measures, we demonstrate the model's capability for fast incremental learning from a few examples, its resilience when presented with noisy perceptions and contextual signals, and an improvement in visual classification after sleep due to induced synaptic homeostasis and the association of similar memories.


Assuntos
Potenciais de Ação , Córtex Cerebral/fisiologia , Modelos Neurológicos , Sono REM/fisiologia , Tálamo/fisiologia , Algoritmos , Córtex Cerebral/citologia , Homeostase , Humanos , Aprendizagem , Neurônios/fisiologia , Sinapses/fisiologia , Tálamo/citologia
6.
Cell Rep; 35(12): 109270, 2021 Jun 22.
Article in English | MEDLINE | ID: mdl-34161772

ABSTRACT

Slow oscillations (≲1 Hz), a hallmark of slow-wave sleep and deep anesthesia across species, arise from spatiotemporal patterns of activity whose complexity increases as wakefulness is approached and cognitive functions emerge. The arousal process constitutes an open window onto the unknown mechanisms underlying the emergence of such dynamical richness in awake cortical networks. Here, we investigate the changes in network dynamics as anesthesia fades out in the rat visual cortex. Starting from deep anesthesia, slow oscillations gradually increase their frequency, eventually expressing maximum regularity. This stage is followed by the abrupt onset of an infra-slow (~0.2 Hz) alternation between sleep-like oscillations and activated states. A population rate model reproduces this transition, driven by an increase in excitability that makes the system periodically cross a critical point. Based on our model, dynamical richness emerges as a competition between two metastable attractor states, a conclusion strongly supported by the data.


Subjects
Anesthesia; Cerebral Cortex/physiology; Wakefulness/physiology; Animals; Arousal/physiology; Computer Simulation; Male; Models, Neurological; Neurons; Rats; Rats, Wistar
7.
PLoS One; 16(2): e0247014, 2021.
Article in English | MEDLINE | ID: mdl-33592040

ABSTRACT

Recurrent spiking neural networks (RSNN) in the brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of energy consumption, and their training requires very few examples. This motivates the search for biologically inspired learning rules for RSNNs, aiming to improve our understanding of brain computation and the efficiency of artificial intelligence. Several spiking models and learning rules have been proposed, but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and that are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle: the maximization of the likelihood for the network to solve a specific task. We propose a novel target-based learning scheme in which the learning rule derived from likelihood maximization is used to mimic a specific spatio-temporal spike pattern that encodes the solution to complex temporal tasks. This method makes learning extremely rapid and precise, outperforming state-of-the-art algorithms for RSNNs. While error-based approaches (e.g., e-prop) optimize, trial after trial, the internal sequence of spikes in order to progressively minimize the MSE, we assume that a signal randomly projected from an external origin (e.g., from other brain areas) directly defines the target sequence. This facilitates the learning procedure, since the network is trained from the beginning to reproduce the desired internal sequence. We propose two versions of our learning rule: spike-dependent and voltage-dependent. We find that the latter provides remarkable benefits in terms of learning speed and robustness to noise. We demonstrate the capacity of our model to tackle several problems, like learning multidimensional trajectories and solving the classical temporal XOR benchmark. Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons. The analytically derived plasticity learning rule is specific to each neuron model and can produce a theoretical prediction for experimental validation.
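For intuition, a likelihood-maximization rule of this kind can be sketched with a textbook Bernoulli spiking model: if the spike probability in a time bin is σ(V), the log-likelihood gradient of a target spike train reduces to the local, voltage-dependent term (target − σ(V)) times the presynaptic trace. The simple neuron model and all constants below are assumptions standing in for the paper's specific model.

```python
# Local gradient-ascent sketch on the log-likelihood of a target spike pattern.
import numpy as np

rng = np.random.default_rng(2)
T, n_in = 1000, 30
dt, tau = 1e-3, 20e-3          # 1 ms steps, 20 ms presynaptic trace
eta = 0.01

w = 0.05 * rng.standard_normal(n_in)
pre_spikes = rng.random((T, n_in)) < 0.02   # presynaptic input spikes
target = rng.random(T) < 0.01               # target spike pattern to mimic

def sigma(v):
    return 1.0 / (1.0 + np.exp(-v))

trace = np.zeros(n_in)                      # low-pass filtered presynaptic activity
for step in range(T):
    trace += dt / tau * (-trace) + pre_spikes[step]
    V = w @ trace                           # membrane drive
    # Voltage-dependent, synapse-local likelihood gradient step:
    w += eta * (target[step] - sigma(V)) * trace
```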


Subjects
Learning/physiology; Models, Neurological; Nerve Net/cytology; Nerve Net/physiology; Neurons/cytology; Action Potentials
8.
Cereb Cortex; 30(6): 3451-3466, 2020 May 18.
Article in English | MEDLINE | ID: mdl-31989160

ABSTRACT

Sleep slow waves are known to participate in memory consolidation, yet slow waves occurring under anesthesia have no positive effect on memory. Here, we shed light on this paradox using a combination of in vivo and in vitro extracellular recordings and computational models. By analyzing the temporal patterns of successive slow-wave events, we find two types of slow waves. The first type is consistently observed in natural slow-wave sleep, while the second is shown to be ubiquitous under anesthesia. Network models of spiking neurons predict that the two slow-wave types emerge due to a different gain on inhibitory versus excitatory cells and that different levels of spike-frequency adaptation in excitatory cells can account for dynamical distinctions between the two types. This prediction was tested in vitro by varying adaptation strength using an agonist of acetylcholine receptors, which demonstrated a neuromodulatory switch between the two types of slow waves. Finally, we show that the first type of slow-wave dynamics is more sensitive to external stimuli, which can explain how slow waves in sleep and anesthesia differentially affect memory consolidation, as well as provide a link between slow-wave dynamics and memory disorders.


Subjects
Cerebral Cortex/physiology; Neurons/physiology; Receptors, Cholinergic/physiology; Sleep, Slow-Wave/physiology; Anesthesia, General; Anesthetics, Dissociative/pharmacology; Anesthetics, Intravenous/pharmacology; Animals; Brain Waves/drug effects; Brain Waves/physiology; Cats; Cerebral Cortex/drug effects; Cholinergic Agonists/pharmacology; Computer Simulation; Entorhinal Cortex/drug effects; Entorhinal Cortex/physiology; Humans; In Vitro Techniques; Ketamine/pharmacology; Macaca; Memory Consolidation; Mice; Motor Cortex/drug effects; Motor Cortex/physiology; Neural Inhibition; Neurons/drug effects; Parietal Lobe/drug effects; Parietal Lobe/physiology; Prefrontal Cortex/drug effects; Prefrontal Cortex/physiology; Primary Visual Cortex/drug effects; Primary Visual Cortex/physiology; Rats; Receptors, Cholinergic/drug effects; Sleep, Slow-Wave/drug effects; Sufentanil/pharmacology; Temporal Lobe/drug effects; Temporal Lobe/physiology
9.
Front Syst Neurosci; 13: 33, 2019.
Article in English | MEDLINE | ID: mdl-31396058

ABSTRACT

Cortical synapse organization supports a range of dynamic states on multiple spatial and temporal scales, from synchronous slow wave activity (SWA), characteristic of deep sleep or anesthesia, to fluctuating, asynchronous activity during wakefulness (AW). Such dynamic diversity poses a challenge for producing efficient large-scale simulations that embody realistic metaphors of short- and long-range synaptic connectivity. In fact, during SWA and AW different spatial extents of the cortical tissue are active in a given timespan and at different firing rates, which implies a wide variety of loads of local computation and communication. A balanced evaluation of simulation performance and robustness should therefore include tests of a variety of cortical dynamic states. Here, we demonstrate performance scaling of our proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and AW for bidimensional grids of neural populations, which reflects the modular organization of the cortex. We explored networks up to 192 × 192 modules, each composed of 1,250 integrate-and-fire neurons with spike-frequency adaptation, and exponentially decaying inter-modular synaptic connectivity with varying spatial decay constant. For the largest networks the total number of synapses was over 70 billion. The execution platform included up to 64 dual-socket nodes, each socket mounting 8 Intel Xeon Haswell processor cores @ 2.40 GHz clock rate. Network initialization time, memory usage, and execution time showed good scaling performances from 1 to 1,024 processes, implemented using the standard Message Passing Interface (MPI) protocol. We achieved simulation speeds of between 2.3 × 10⁹ and 4.1 × 10⁹ synaptic events per second for both cortical states in the explored range of inter-modular interconnections.
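A quick back-of-the-envelope check of the quoted scales, using only the figures reported above; the per-neuron synapse count and the per-process throughput are derived estimates, not numbers stated in the abstract.

```python
# Plain arithmetic on the network sizes and speeds reported above.
modules = 192 * 192                 # bidimensional grid of neural populations
neurons_per_module = 1_250
neurons = modules * neurons_per_module
print(neurons)                      # 46,080,000 integrate-and-fire neurons

total_synapses = 70e9               # "over 70 billion" for the largest networks
print(total_synapses / neurons)     # ~1,500 synapses per neuron on average

events_per_s = 4.1e9                # upper end of the reported simulation speed
processes = 1_024
print(events_per_s / processes)     # ~4.0e6 synaptic events per second per process
```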

10.
Sci Rep; 9(1): 8990, 2019 Jun 20.
Article in English | MEDLINE | ID: mdl-31222151

ABSTRACT

The occurrence of sleep passed through the evolutionary sieve and is widespread in animal species. Sleep is known to be beneficial to cognitive and mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the importance of the phenomenon, a complete understanding of its functions and underlying mechanisms is still lacking. In this paper, we show interesting effects of deep-sleep-like slow oscillation activity on a simplified thalamo-cortical model which is trained to encode, retrieve and classify images of handwritten digits. During slow oscillations, spike-timing-dependent plasticity (STDP) produces a differential homeostatic process. It is characterized by both a specific unsupervised enhancement of connections among groups of neurons associated with instances of the same class (digit) and a simultaneous down-regulation of stronger synapses created by the training. This hierarchical organization of post-sleep internal representations favours higher performance in retrieval and classification tasks. The mechanism is based on the interaction between top-down cortico-thalamic predictions and bottom-up thalamo-cortical projections during deep-sleep-like slow oscillations. Indeed, when learned patterns are replayed during sleep, cortico-thalamo-cortical connections favour the activation of other neurons coding for similar thalamic inputs, promoting their association. This mechanism hints at possible applications to artificial learning systems.


Subjects
Cerebral Cortex/physiology; Homeostasis; Memory; Pattern Recognition, Visual; Sleep/physiology; Synapses/physiology; Thalamus/physiology; Algorithms; Humans; Models, Biological; Neurons; Photic Stimulation
11.
Neural Comput; 31(4): 653-680, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30764741

ABSTRACT

Accurate population models are needed to build very large-scale neural models, but their derivation is difficult for realistic networks of neurons, in particular when nonlinear properties are involved, such as conductance-based interactions and spike-frequency adaptation. Here, we consider such models based on networks of adaptive exponential integrate-and-fire excitatory and inhibitory neurons. Using a master equation formalism, we derive a mean-field model of such networks and compare it to the full network dynamics. The mean-field model is capable of correctly predicting the average spontaneous activity levels in asynchronous irregular regimes similar to in vivo activity. It also captures the transient temporal response of the network to complex external inputs. Finally, the mean-field model is also able to quantitatively describe regimes where high- and low-activity states alternate (up-down state dynamics), leading to slow oscillations. We conclude that such mean-field models are biologically realistic in the sense that they can capture both spontaneous and evoked activity, and they naturally appear as candidates to build very large-scale models involving multiple brain areas.
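To show the general shape of such a population model, here is a minimal Python sketch of two coupled rate equations of the form τ dr/dt = F(inputs) − r for an excitatory and an inhibitory population. The transfer function, weights, and drive below are placeholders; the paper derives the actual transfer functions from the AdEx network via a master equation, which this sketch does not reproduce.

```python
# Two-population (E/I) mean-field sketch: tau * dr/dt = F(input) - r.
import numpy as np

T_ms, dt, tau = 2000.0, 0.1, 5.0           # total time, step, time constant (ms)
w_ee, w_ei, w_ie, w_ii = 1.2, -1.5, 1.0, -1.0
ext = 0.5                                  # external drive

def F(x):
    """Placeholder transfer function (would be the AdEx-derived one)."""
    return np.maximum(0.0, np.tanh(x))

r_e, r_i = 0.1, 0.1
rates = []
for _ in range(int(T_ms / dt)):
    in_e = w_ee * r_e + w_ei * r_i + ext   # recurrent + external input to E
    in_i = w_ie * r_e + w_ii * r_i + ext   # recurrent + external input to I
    r_e += dt / tau * (F(in_e) - r_e)
    r_i += dt / tau * (F(in_i) - r_i)
    rates.append((r_e, r_i))
```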


Subjects
Action Potentials; Adaptation, Physiological; Models, Neurological; Neurons/physiology; Animals; Computer Simulation; Neural Inhibition/physiology; Periodicity
12.
Cereb Cortex; 29(1): 319-335, 2019 Jan 01.
Article in English | MEDLINE | ID: mdl-29190336

ABSTRACT

Cortical slow oscillations (SO) of neural activity spontaneously emerge and propagate during deep sleep and anesthesia and are also expressed in isolated brain slices and cortical slabs. We lack full understanding of how SO integrate the different structural levels underlying local excitability of cell assemblies and their mutual interaction. Here, we focus on ongoing slow waves (SWs) in cortical slices reconstructed from a 16-electrode array designed to probe the neuronal activity at multiple spatial scales. In spite of the variable propagation patterns observed, we reproducibly found a smooth strip of loci leading the SW fronts, overlapping cortical layers 4 and 5, along which Up states were the longest and displayed the highest firing rate. Propagation modes were uncorrelated in time, signaling a memoryless generation of SWs. All these features could be modeled by a multimodular large-scale network of spiking neurons with a specific balance between local and intermodular connectivity. Modules work as relaxation oscillators with a weakly stable Down state and a peak of local excitability to model layers 4 and 5. These conditions allow for both optimal sensitivity to the network structure and richness of propagation modes, both of which are potential substrates for dynamic flexibility in more general contexts.


Subjects
Action Potentials/physiology; Brain Waves/physiology; Visual Cortex/cytology; Visual Cortex/physiology; Animals; Ferrets; Male; Neurons/physiology; Organ Culture Techniques
13.
Phys Rev E; 100(6-1): 062413, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31962518

ABSTRACT

Large-scale spiking simulations of cerebral neuronal networks have attracted growing interest in recent years, driven both by the availability of high-performance computers and by increasingly detailed experimental observations. In this context, it is important to understand how population dynamics are generated by the designed parameters of the networks, which is the question addressed by mean-field theories. Although analytic solutions for the mean-field dynamics have already been proposed for current-based (CUBA) neurons, a complete analytic description has not yet been achieved for more realistic neural properties, such as conductance-based (COBA) networks of adaptive exponential (AdEx) neurons. Here, we propose a principled approach to map a COBA network onto a CUBA one. This approach provides a state-dependent approximation capable of reliably predicting the firing-rate properties of an AdEx neuron with noninstantaneous COBA integration. We also applied our theory to population dynamics, predicting the dynamical properties of the network in very different regimes, such as asynchronous irregular and synchronous irregular (slow oscillations). This result shows that a state-dependent approximation can be successfully introduced to account for the subtle effects of COBA integration, yielding a theory capable of correctly predicting the activity in regimes of alternating states like slow oscillations.
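The basic reduction behind a COBA-to-CUBA mapping can be illustrated with a textbook-style calculation: conductance inputs are converted to equivalent currents evaluated at a state-dependent mean voltage, with an effective membrane time constant set by the total conductance. The numbers and the simple formulas below are generic illustrations, not the paper's specific state-dependent approximation.

```python
# Generic conductance-to-current (COBA -> CUBA) reduction at a mean voltage mu_V.
C, g_L, E_L = 200.0, 10.0, -65.0          # capacitance (pF), leak (nS), leak reversal (mV)
E_e, E_i = 0.0, -80.0                     # synaptic reversal potentials (mV)
g_e, g_i = 6.0, 20.0                      # mean excitatory/inhibitory conductances (nS)

g_tot = g_L + g_e + g_i                   # total input conductance
mu_V = (g_L * E_L + g_e * E_e + g_i * E_i) / g_tot   # mean voltage of this state
tau_eff = C / g_tot                       # effective, state-dependent time constant (ms)

# Equivalent CUBA currents evaluated at mu_V (nS * mV = pA):
I_e = g_e * (E_e - mu_V)
I_i = g_i * (E_i - mu_V)
print(mu_V, tau_eff, I_e, I_i)            # -62.5 mV, ~5.6 ms, 375 pA, -350 pA
```

Because g_tot, and hence mu_V and tau_eff, change with the ongoing activity, the equivalent currents must be re-evaluated state by state, which is the sense in which the approximation is state-dependent.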


Subjects
Models, Neurological; Neurons/cytology; Nerve Net/cytology
14.
Sci Rep; 8(1): 17056, 2018 Nov 19.
Article in English | MEDLINE | ID: mdl-30451957

ABSTRACT

Inference methods are widely used to recover effective models from observed data. However, few studies have attempted to investigate the dynamics of inferred models in neuroscience, and none, to our knowledge, at the network level. We introduce a principled modification of a widely used generalized linear model (GLM) and learn its structural and dynamic parameters from in vitro spike data. The spontaneous activity of the new model captures prominent features of the non-stationary and non-linear dynamics displayed by the biological network, where the reference GLM largely fails, and also reflects fine-grained spatio-temporal dynamical features. Two ingredients were key to this success. The first is a saturating transfer function: beyond its biological plausibility, it limits the neuron's information transfer, improving robustness against endogenous and external noise. The second is a super-Poisson spike-generation mechanism; it accounts for the undersampling of the network and allows the model neuron to flexibly incorporate the observed activity fluctuations.
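The two ingredients can be illustrated in a few lines of Python: a rate that saturates instead of growing without bound, and spike counts drawn from a distribution whose variance exceeds the Poisson value. The negative-binomial choice, the Fano factor, and all parameters are assumptions for illustration; the paper's generative mechanism may differ.

```python
# Sketch of a saturating transfer function plus super-Poisson spike counts.
import numpy as np

rng = np.random.default_rng(3)

def saturating_rate(drive, r_max=50.0, gain=1.0):
    """Firing rate that saturates at r_max instead of growing without bound."""
    return r_max / (1.0 + np.exp(-gain * drive))

def sample_counts(rate, dt=0.01, fano=3.0):
    """Spike counts with variance = fano * mean (fano > 1: super-Poisson)."""
    mean = rate * dt
    p = 1.0 / fano                        # negative binomial: var/mean = 1/p
    n = mean * p / (1.0 - p)              # chosen so the mean matches rate * dt
    return rng.negative_binomial(n, p, size=np.shape(mean))

drive = rng.standard_normal(1000)         # stand-in for the GLM's filtered input
counts = sample_counts(saturating_rate(drive))
print(counts.mean(), counts.var())        # variance exceeds the mean
```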


Subjects
Action Potentials; Neural Networks, Computer; Computer Simulation; Poisson Distribution
15.
Sci Rep; 7: 39611, 2017 Jan 03.
Article in English | MEDLINE | ID: mdl-28045036

ABSTRACT

Neural field models are powerful tools to investigate the richness of spatiotemporal activity patterns, like waves and bumps, emerging from the cerebral cortex. Understanding how spontaneous and evoked activity is related to the structure of underlying networks is of central interest to unfold how information is processed by these systems. Here we focus on the interplay between local properties, like the input-output gain function and recurrent synaptic self-excitation of cortical modules, and nonlocal intermodular synaptic couplings, which together define a multiscale neural field. In this framework, we work out analytic expressions for the wave speed and the stochastic diffusion of propagating fronts, uncovering the existence of an optimal balance between local and nonlocal connectivity that minimizes the fluctuations of the activation front propagation. Incorporating an activity-dependent adaptation of local excitability further highlights the independent roles that local and nonlocal connectivity play in modulating the speed of propagation of the activation and silencing wavefronts, respectively. Spatial inhomogeneities in local excitability give rise to a novel hysteresis phenomenon, such that waves traveling in opposite directions display different speeds at the same location. Taken together, these results provide insight into the multiscale organization of the brain slow waves measured during deep sleep and anesthesia.


Subjects
Brain Waves; Cerebral Cortex/physiology; Models, Neurological; Neurons/physiology; Adaptation, Physiological; Animals; Computer Simulation; Humans; Neural Pathways/physiology
16.
PLoS One; 10(3): e0118412, 2015.
Article in English | MEDLINE | ID: mdl-25807389

ABSTRACT

Biological networks display a variety of activity patterns reflecting a web of interactions that is complex both in space and time. Yet inference methods have mainly focused on reconstructing the spatial structure from the network's activity, by assuming equilibrium conditions or, more recently, a probabilistic dynamics with a single arbitrary time step. Here we show that, under this latter assumption, the inference procedure fails to reconstruct the synaptic matrix of a network of integrate-and-fire neurons when the chosen time scale of interaction does not closely match the synaptic delay, or when no single time scale for the interaction can be identified; this failure, moreover, exposes a distinctive bias of the inference method that can lead it to infer as inhibitory those excitatory synapses whose interaction time scales are longer than the model's time step. We therefore introduce a new two-step method that first infers the delay structure of the network through cross-correlation profiles and then reconstructs the synaptic matrix, and we successfully test it on networks with different topologies and in different activity regimes. Although step one accurately recovers the delay structure of the network, thus removing any a priori guess about the time scales of the interaction, the inference method nonetheless introduces an arbitrary time scale: the time bin dt used to binarize the spike trains. We therefore study, analytically and numerically, how the choice of dt affects the inference in our network model, finding that the relationship between the inferred couplings and the real synaptic efficacies, albeit quadratic in both cases, depends critically on dt for the excitatory synapses only, while being basically independent of it for the inhibitory ones.
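Step one of such a two-step scheme can be sketched directly: estimate each pair's interaction delay from the peak of the spike-train cross-correlogram, then fit couplings at that lag. The synthetic spike trains, the fixed 8-bin delay, and the window size below are illustrative assumptions; only the idea of reading the delay off the correlogram peak is taken from the abstract.

```python
# Sketch of delay estimation from the cross-correlogram peak (step one).
import numpy as np

rng = np.random.default_rng(4)
dt, T = 0.001, 60.0                          # 1 ms bins, 60 s of activity
n_bins = int(T / dt)

pre = rng.random(n_bins) < 0.02              # binarized presynaptic spike train
post = np.roll(pre, 8) & (rng.random(n_bins) < 0.9)   # postsynaptic train, ~8 ms delayed

max_lag = 50                                 # search window in bins
lags = np.arange(1, max_lag + 1)
xcorr = np.array([np.sum(pre[:-lag] & post[lag:]) for lag in lags])

delay_bins = lags[np.argmax(xcorr)]          # delay at the correlogram peak
print(delay_bins * dt * 1000, "ms")          # step two would fit couplings at this lag
```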


Subjects
Computer Simulation; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Synapses/physiology; Action Potentials/physiology; Neural Inhibition/physiology