Results 1 - 20 of 36
1.
Elife; 12, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38695551

ABSTRACT

Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
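
A minimal sketch of the sparsification dynamic described above, under loud assumptions: a toy ReLU regression network trained with plain noisy gradient descent stands in for the paper's navigational task and training procedure, and all parameters are illustrative. The point is only to expose the quantity of interest, the fraction of hidden units that stay active long after the error has plateaued; whether and how fast it shrinks depends on the hyperparameters.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (200, 1))       # toy 1D "positions" (hypothetical task)
    Y = np.sin(np.pi * X)                  # toy spatial readout target

    n_hid, lr, noise = 100, 0.05, 0.005
    W1 = rng.normal(0, 1.0, (1, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(0, 0.1, (n_hid, 1))

    for step in range(20001):
        H = np.maximum(0, X @ W1 + b1)     # hidden activity, ReLU
        err = H @ W2 - Y
        gW2 = H.T @ err / len(X)
        gH = (err @ W2.T) * (H > 0)
        gW1 = X.T @ gH / len(X); gb1 = gH.mean(0)
        # gradient step plus injected noise: the hypothesized driver of slow drift
        W1 -= lr * gW1 + noise * rng.normal(size=W1.shape)
        b1 -= lr * gb1 + noise * rng.normal(size=b1.shape)
        W2 -= lr * gW2 + noise * rng.normal(size=W2.shape)
        if step % 5000 == 0:
            active = (H.max(0) > 1e-3).mean()   # fraction of units with any activity
            print(f"step {step:5d}  mse {np.mean(err**2):.4f}  active {active:.2f}")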


Subject(s)
Neurons; Animals; Neurons/physiology; Machine Learning; Neural Networks, Computer; Learning; CA1 Region, Hippocampal/physiology; CA1 Region, Hippocampal/cytology; Rats
2.
bioRxiv; 2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38370656

ABSTRACT

Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.

3.
PLoS Comput Biol; 20(2): e1011852, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38315736

ABSTRACT

Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or 'frame of reference'. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation, such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: one that generates an oscillation and one that implements a coupling function between the internal oscillation and external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.
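
A sketch of the reduced two-oscillator description quoted above, with an assumed coupling function (the authors extract theirs from trained networks): the phase difference between the internal and reference oscillations obeys a one-dimensional equation with several stable offsets, each usable as one memory.

    import numpy as np

    # The common frequency cancels in the phase difference, so we integrate
    # phi = theta_internal - theta_reference directly.
    K, n_states = 8.0, 3        # coupling gain; number of stable phase offsets (assumed)
    dt, T = 1e-3, 4.0

    def settle(phi0):
        phi = phi0
        for _ in range(int(T / dt)):
            phi += dt * (-K * np.sin(n_states * phi))   # multistable coupling function
        return phi % (2 * np.pi)

    # transient stimuli = different initial kicks; each lands on its own offset
    for phi0 in (0.4, 2.3, 4.5):
        print(f"initial phase {phi0:.2f} rad -> stored offset {settle(phi0):.2f} rad")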


Subject(s)
Brain; Memory, Short-Term; Neural Networks, Computer
4.
PLoS Comput Biol; 20(1): e1011784, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38241417

ABSTRACT

Attractors play a key role in a wide range of processes including learning and memory. Due to recent innovations in recording methods, there is increasing evidence for the existence of attractor dynamics in the brain. Yet, our understanding of how these attractors emerge or disappear in a biological system is lacking. By following the spontaneous network bursts of cultured cortical networks, we are able to define a vocabulary of spatiotemporal patterns and show that they function as discrete attractors in the network dynamics. We show that electrically stimulating specific attractors eliminates them from the spontaneous vocabulary, while they are still robustly evoked by the electrical stimulation. This seemingly paradoxical finding can be explained by a Hebbian-like strengthening of specific pathways into the attractors, at the expense of weakening non-evoked pathways into the same attractors. We verify this hypothesis and provide a mechanistic explanation for the underlying changes supporting this effect.
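
Not the cultured-network data, just the textbook attractor picture the abstract invokes, as a minimal sketch: a Hopfield network with Hebbian weights, in which corrupted cues fall back into one of a discrete vocabulary of stored patterns. The paper's pathway-specific Hebbian reweighting is a different, targeted mechanism; this only makes "discrete attractor" concrete.

    import numpy as np

    rng = np.random.default_rng(1)
    N, P = 100, 3
    patterns = rng.choice([-1, 1], (P, N))
    W = (patterns.T @ patterns) / N            # Hebbian weights
    np.fill_diagonal(W, 0)

    def settle(s, steps=50):
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    for p in range(P):
        cue = patterns[p].copy()
        flip = rng.choice(N, 20, replace=False)  # corrupt 20% of the pattern
        cue[flip] *= -1
        overlap = (settle(cue) @ patterns[p]) / N
        print(f"pattern {p}: overlap after settling = {overlap:.2f}")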


Asunto(s)
Aprendizaje , Neuronas , Neuronas/fisiología , Aprendizaje/fisiología , Encéfalo
5.
Neuron; 111(15): 2348-2356.e5, 2023 Aug 02.
Article in English | MEDLINE | ID: mdl-37315557

ABSTRACT

Memories of past events can be recalled long after the event, indicating stability. But new experiences are also integrated into existing memories, indicating plasticity. In the hippocampus, spatial representations are known to remain stable but have also been shown to drift over long periods of time. We hypothesized that experience, more than the passage of time, is the driving force behind representational drift. We compared the within-day stability of place cells' representations in dorsal CA1 of the hippocampus of mice traversing two similar, familiar tracks for different durations. We found that the more time the animals spent actively traversing the environment, the greater the representational drift, regardless of the total elapsed time between visits. Our results suggest that spatial representation is a dynamic process, tied to the ongoing experience within a specific context, and reflects memory updating rather than passive forgetting.
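
A toy reading of this conclusion, under an assumed random-walk model (not the paper's analysis): if place-field centers jitter a little on each traversal, the expected shift grows with the amount of active experience rather than with elapsed time.

    import numpy as np

    rng = np.random.default_rng(2)
    n_cells, sigma = 50, 0.02          # per-traversal jitter of field centers (assumed)

    def mean_shift(n_traversals):
        centers = rng.uniform(0, 1, n_cells)
        start = centers.copy()
        for _ in range(n_traversals):
            centers = (centers + sigma * rng.normal(size=n_cells)) % 1.0
        d = np.abs(centers - start)
        return np.minimum(d, 1.0 - d).mean()   # circular-track distance

    # same elapsed session length, different amounts of active traversal
    for n in (10, 50, 200):
        print(f"{n:3d} traversals -> mean field shift {mean_shift(n):.3f} (track fraction)")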


Asunto(s)
Hipocampo , Células de Lugar , Ratones , Animales , Recuerdo Mental , Gravitación
6.
Curr Opin Neurobiol; 80: 102721, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37043892

ABSTRACT

Learning is a multi-faceted phenomenon of critical importance and has therefore attracted a great deal of research, both experimental and theoretical. In this review, we consider some paradigmatic examples of learning and discuss common themes in theoretical learning research, such as levels of modeling and their relation to experimental observations, as well as mathematical ideas common to different types of learning.


Asunto(s)
Aprendizaje , Modelos Teóricos , Matemática
7.
Proc Natl Acad Sci U S A; 120(12): e2216805120, 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36920920

ABSTRACT

Homeostasis, the ability to maintain a relatively constant internal environment in the face of perturbations, is a hallmark of biological systems. It is believed that this constancy is achieved through multiple internal regulation and control processes. Given observations of a system, or even a detailed model of one, it is both valuable and extremely challenging to extract the control objectives of the homeostatic mechanisms. In this work, we develop a robust data-driven method to identify these objectives, namely to understand: "what does the system care about?". We propose an algorithm, Identifying Regulation with Adversarial Surrogates (IRAS), that receives an array of temporal measurements of the system and outputs a candidate for the control objective, expressed as a combination of observed variables. IRAS is an iterative algorithm consisting of two competing players. The first player, realized by an artificial deep neural network, aims to minimize a measure of invariance we refer to as the coefficient of regulation. The second player aims to render the task of the first player more difficult by forcing it to extract information about the temporal structure of the data, which is absent from similar "surrogate" data. We test the algorithm on four synthetic and one natural data set, demonstrating excellent empirical results. Interestingly, our approach can also be used to extract conserved quantities, e.g., energy and momentum, in purely physical systems, as we demonstrate empirically.
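
A deliberately stripped-down caricature of the IRAS objective, under stated assumptions: the published algorithm uses a deep network and an iterative two-player scheme, while this sketch brute-forces a direction in a 2D toy system and uses independently column-permuted data as the "surrogate". Only the core idea survives: a combination of variables that is far more invariant in the real time series than in its surrogate is a candidate regulated quantity.

    import numpy as np

    rng = np.random.default_rng(3)
    # synthetic system in which x0 + x1 is the (hidden) regulated quantity
    t = np.linspace(0, 20, 500)
    x0 = np.sin(t) + 0.02 * rng.normal(size=t.size)
    x1 = 1.0 - np.sin(t) + 0.02 * rng.normal(size=t.size)
    X = np.stack([x0, x1], axis=1)

    def coefficient_of_regulation(w, X):
        real = X @ w
        # surrogate: permute each variable's time axis, destroying covariation
        surr = np.stack([rng.permutation(X[:, j]) for j in range(X.shape[1])], 1) @ w
        return np.var(real) / (np.var(surr) + 1e-12)

    best_w, best_c = None, np.inf
    for theta in np.linspace(0, np.pi, 181):      # brute force over unit directions
        w = np.array([np.cos(theta), np.sin(theta)])
        c = coefficient_of_regulation(w, X)
        if c < best_c:
            best_w, best_c = w, c
    print("candidate invariant direction:", np.round(best_w, 2), " CoR:", round(best_c, 4))

Run on this toy data, the search should recover a direction close to (1, 1)/sqrt(2), i.e. the conserved sum.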


Subject(s)
Algorithms; Homeostasis
8.
Science; 376(6590): 267-275, 2022 Apr 15.
Article in English | MEDLINE | ID: mdl-35420959

ABSTRACT

Tuft dendrites of layer 5 pyramidal neurons form specialized compartments important for motor learning and performance, yet their computational capabilities remain unclear. Structural-functional mapping of the tuft tree from the motor cortex during motor tasks revealed two morphologically distinct populations of layer 5 pyramidal tract neurons (PTNs) that exhibit specific tuft computational properties. Early bifurcating and large nexus PTNs showed marked tuft functional compartmentalization, representing different motor variable combinations within and between their two tuft hemi-trees. By contrast, late bifurcating and smaller nexus PTNs showed synchronous tuft activation. Dendritic structure and dynamic recruitment of the N-methyl-d-aspartate (NMDA)-spiking mechanism explained the differential compartmentalization patterns. Our findings support a morphologically dependent framework for motor computations, in which independent amplification units can be combinatorially recruited to represent different motor sequences within the same tree.


Subject(s)
Dendrites; Motor Cortex; Action Potentials/physiology; Dendrites/physiology; Neurons; Pyramidal Cells/physiology
9.
iScience; 25(3): 103924, 2022 Mar 18.
Article in English | MEDLINE | ID: mdl-35265809

ABSTRACT

Drug resistance and metastasis, the major complications in cancer, both entail adaptation of cancer cells to stress, whether a drug or a lethal new environment. Intriguingly, these adaptive processes share similar features that cannot be explained by a pure Darwinian scheme, including dormancy, increased heterogeneity, and stress-induced plasticity. Here, we propose that learning theory offers a framework to explain these features and may shed light on these two intricate processes. In this framework, learning is performed at the single-cell level, by stress-driven exploratory trial-and-error. Such a process is not contingent on pre-existing pathways but on a random search for a state that diminishes the stress. We review underlying mechanisms that may support this search, and show by using a learning model that such exploratory learning is feasible in a high-dimensional system such as the cell. At the population level, we view the tissue as a network of exploring agents that communicate, restraining cancer formation in the healthy state. In this view, disease results from a breakdown of the balance between cellular exploratory drive and tissue homeostasis.

10.
Isr Med Assoc J; 23(7): 401-407, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34251120

ABSTRACT

BACKGROUND: The coronavirus disease-2019 (COVID-19) pandemic forced drastic changes in all layers of life. Social distancing and lockdown drove the educational system into uncharted territory at an accelerated pace, leaving educators little time to adjust. OBJECTIVES: To describe changes in teaching during the first phase of the COVID-19 pandemic. METHODS: We describe the steps implemented at the Technion-Israel Institute of Technology Faculty of Medicine during the initial 4 months of the COVID-19 pandemic to preserve teaching and the academic ecosystem. RESULTS: Several established methodologies, such as the flipped classroom and active learning, demonstrated effectiveness. In addition, we used creative methods to teach clinical medicine during the ban on bedside teaching and modified community engagement activities to meet COVID-19-induced community needs. CONCLUSIONS: The challenges of, and lessons learned from, teaching during the COVID-19 pandemic prompted us to adjust our curriculum and teaching methods, using multiple online formats and promoting self-learning. The experience also provided invaluable insights into our pedagogy and the future teaching of medicine, with emphasis on students and faculty being part of the changes and adjustments in curriculum and teaching methods. However, personal interactions remain essential to medical school education, as are laboratories, group simulations, and bedside teaching.


Subject(s)
COVID-19; Education, Distance; Education, Medical; Physical Distancing; COVID-19/epidemiology; COVID-19/prevention & control; Communicable Disease Control/methods; Education, Distance/methods; Education, Distance/organization & administration; Education, Medical/organization & administration; Education, Medical/trends; Humans; Needs Assessment; Organizational Innovation; Outcome Assessment, Health Care; SARS-CoV-2; Schools, Medical; Teaching/trends
11.
Neural Comput; 33(3): 827-852, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33513322

ABSTRACT

Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity: the neural engineering framework. We analytically solve the framework for the classic ring model, a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.
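
A minimal sketch of the framework's setup for the static case, under assumed parameters (random preferred angles, rectified cosine tuning, least-squares decoders); this is the generic neural engineering framework recipe, not the paper's analytical solution.

    import numpy as np

    rng = np.random.default_rng(4)
    N = 200
    enc = rng.uniform(0, 2 * np.pi, N)                   # preferred angles
    angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    x = np.stack([np.cos(angles), np.sin(angles)], 1)    # 2D embedding of the ring

    def rates(x):
        drive = x @ np.stack([np.cos(enc), np.sin(enc)])  # cosine tuning
        return np.maximum(0, drive)                       # rectified firing rates

    A = rates(x)
    D, *_ = np.linalg.lstsq(A, x, rcond=None)             # NEF-style linear decoders
    x_hat = A @ D
    err = np.abs(np.arctan2(x_hat[:, 1], x_hat[:, 0]) - angles)
    err = np.minimum(err, 2 * np.pi - err)                # circular decoding error
    print(f"mean decoding error: {err.mean():.4f} rad")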


Subject(s)
Neural Networks, Computer
12.
PLoS One; 15(9): e0238433, 2020.
Article in English | MEDLINE | ID: mdl-32881964

ABSTRACT

Phenotypic switches are associated with alterations in the cell's gene expression profile and are vital to many aspects of biology. Previous studies have identified local motifs of the genetic regulatory network that could underlie such switches. Recent advancements allowed the study of networks at the global, many-gene, level; however, the relationship between the local and global scales in giving rise to phenotypic switches remains elusive. In this work, we studied the epithelial-mesenchymal transition (EMT) using a gene regulatory network model. This model supports two clusters of stable steady states, identified with the epithelial and mesenchymal phenotypes, and a range of less stable intermediate hybrid states, whose importance in cancer has recently been highlighted. Using an array of network perturbations and quantifying the resulting landscape, we investigated how features of the network at different levels give rise to these landscape properties. We found that local connectivity patterns affect the landscape in a mostly incremental manner; in particular, a specific previously identified double-negative feedback motif is not required when embedded in the full network, because the landscape is maintained at a global level. Nevertheless, despite the distributed nature of the switch, it is possible to find combinations of a few local changes that disrupt it. At the level of network architecture, we identified a crucial role for peripheral genes that act as incoming signals to the network in creating clusters of states. Such incoming signals are a signature of modularity and are expected to appear also in other biological networks. Hybrid states between the epithelial and mesenchymal phenotypes arise in the model due to barriers in the interaction between genes, causing hysteresis at all connections. Our results suggest that emergent switches can neither be pinpointed to local motifs, nor do they arise as typical properties of random network ensembles. Rather, they arise through an interplay between the nature of local interactions and the core-periphery structure induced by the modularity of the cell.
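
For concreteness, here is the double-negative feedback motif mentioned above in its textbook two-gene form, with illustrative Hill-function parameters (not the paper's fitted network): different initial conditions settle into one of two stable expression states, the toy analogue of the epithelial and mesenchymal clusters. The paper's point is precisely that the full network maintains the landscape even without this motif.

    import numpy as np

    def hill_repress(y, K=1.0, n=4):
        return 1.0 / (1.0 + (y / K) ** n)

    def settle(a, b, dt=0.01, steps=20000, deg=1.0, beta=3.0):
        for _ in range(steps):
            a += dt * (beta * hill_repress(b) - deg * a)   # gene A, repressed by B
            b += dt * (beta * hill_repress(a) - deg * b)   # gene B, repressed by A
        return a, b

    for a0, b0 in [(0.1, 2.0), (2.0, 0.1), (1.4, 1.5)]:
        a, b = settle(a0, b0)
        state = "A-high" if a > b else "B-high"
        print(f"start ({a0}, {b0}) -> ({a:.2f}, {b:.2f})  {state}")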


Asunto(s)
Variación Biológica Poblacional/genética , Transición Epitelial-Mesenquimal/genética , Redes Reguladoras de Genes/genética , Retroalimentación Fisiológica/fisiología , Humanos , Modelos Biológicos , Modelos Genéticos , Modelos Estadísticos , Fenotipo
13.
Neuron; 107(5): 954-971.e9, 2020 Sep 09.
Article in English | MEDLINE | ID: mdl-32589878

ABSTRACT

Adaptive movements are critical for animal survival. To guide future actions, the brain monitors various outcomes, including achievement of movement and appetitive goals. The nature of these outcome signals and their neuronal and network realization in the motor cortex (M1), which directs skilled movements, is largely unknown. Using a dexterity task, calcium imaging, optogenetic perturbations, and behavioral manipulations, we studied outcome signals in the murine forelimb M1. We found two populations of layer 2-3 neurons, termed success- and failure-related neurons, that develop with training and report the end results of trials. In these neurons, prolonged responses were recorded after success or failure trials, independent of reward and kinematics. In addition, the initial state of layer 5 pyramidal tract neurons contained a memory trace of the previous trial's outcome. Intertrial cortical activity was needed to learn new task requirements. These M1 layer-specific performance outcome signals may support reinforcement motor learning of skilled behavior.


Asunto(s)
Aprendizaje/fisiología , Corteza Motora/citología , Corteza Motora/fisiología , Destreza Motora/fisiología , Células Piramidales/citología , Células Piramidales/fisiología , Animales , Masculino , Ratones , Ratones Endogámicos C57BL
14.
PLoS Comput Biol; 16(5): e1007825, 2020 May.
Article in English | MEDLINE | ID: mdl-32392249

ABSTRACT

Biological networks are often heterogeneous in their connectivity pattern, with degree distributions featuring a heavy tail of highly connected hubs. The implications of this heterogeneity for dynamical properties are a topic of much interest. Here we show that interpreting topology as a feedback circuit can provide novel insights into dynamics. Based on the observation that in finite networks a small number of hubs have a disproportionate effect on the entire system, we construct an approximation by lumping these nodes into a single effective hub, which acts as a feedback loop with the rest of the nodes. We use this approximation to study the dynamics of networks with scale-free degree distributions, focusing on their probability of convergence to fixed points. We find that the approximation preserves convergence statistics over a wide range of settings. Our mapping provides a parametrization of scale-free topology which is predictive at the ensemble level and also retains properties of individual realizations. Specifically, outgoing hubs have an organizing role that can drive the network to convergence, in analogy to the suppression of chaos by an external drive. In contrast, incoming hubs have no such property, resulting in a marked difference between the behavior of networks with outgoing vs. incoming scale-free degree distributions. Combining feedback analysis with mean field theory predicts a transition between convergent and divergent dynamics, which is corroborated by numerical simulations. Furthermore, these results highlight the effect of a handful of outlying hubs, rather than of the connectivity distribution law as a whole, on network dynamics.
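
An illustration in the spirit of the outgoing-hub result, with assumed rate dynamics (tanh units, Euler integration) rather than the paper's ensembles: a random recurrent network is run with and without a single strong outgoing node. The late-time residual is a crude convergence proxy, and the effect size depends on the gain and hub strength chosen here.

    import numpy as np

    rng = np.random.default_rng(5)
    N, g = 200, 1.6                     # g > 1: chaotic regime without extra structure
    W = rng.normal(0, g / np.sqrt(N), (N, N))

    def late_residual(W, steps=3000, dt=0.1):
        x = rng.normal(0, 1, N)
        for _ in range(steps):
            x = x + dt * (-x + W @ np.tanh(x))
        return np.linalg.norm(-x + W @ np.tanh(x))   # ~0 only at a fixed point

    W_hub = W.copy()
    W_hub[:, 0] = -4.0     # node 0 becomes a strong outgoing (inhibitory) hub
    print(f"no hub residual:   {late_residual(W):.3f}")
    print(f"with hub residual: {late_residual(W_hub):.3f}")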


Asunto(s)
Biología Computacional/métodos , Retroalimentación , Redes Reguladoras de Genes/fisiología , Modelos Estadísticos , Modelos Teóricos , Simulación de Dinámica Molecular , Probabilidad , Análisis de Sistemas
15.
Nat Commun; 10(1): 4441, 2019 Sep 30.
Article in English | MEDLINE | ID: mdl-31570719

ABSTRACT

What is the physiological basis of long-term memory? The prevailing view in neuroscience attributes memory acquisition to changes in synaptic efficacy, implying that stable memories correspond to stable connectivity patterns. However, an increasing body of experimental evidence points to significant, activity-independent fluctuations in synaptic strengths. How memories can survive these fluctuations and the accompanying stabilizing homeostatic mechanisms is a fundamental open question. Here we explore the possibility of memory storage within a global component of network connectivity, while individual connections fluctuate. We find that homeostatic stabilization of fluctuations differentially affects different aspects of network connectivity. Specifically, memories stored as time-varying attractors of neural dynamics are more resilient to erosion than fixed points. Such dynamic attractors can be learned by biologically plausible learning rules and support associative retrieval. Our results suggest a link between the properties of learning rules and those of network-level memory representations, and point at experimentally measurable signatures.
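
A toy construction, not the paper's mechanism: here the persistent low-rank component and the fluctuating residual are put in by hand, simply to make concrete the dissociation between synapse-level instability and network-level stability that the abstract describes.

    import numpy as np

    rng = np.random.default_rng(6)
    N, tau, sigma = 100, 50.0, 0.02
    u = rng.normal(size=N); u /= np.linalg.norm(u)
    v = rng.normal(size=N); v /= np.linalg.norm(v)
    structure = np.outer(u, v)               # global, memory-bearing component
    resid = sigma * rng.normal(size=(N, N))  # individual-synapse part
    W0 = structure + resid

    for t in range(500):                     # Ornstein-Uhlenbeck synaptic noise
        resid += -resid / tau + sigma * np.sqrt(2.0 / tau) * rng.normal(size=(N, N))
    W = structure + resid

    corr = np.corrcoef(W.ravel(), W0.ravel())[0, 1]
    u_now = np.linalg.svd(W)[0][:, 0]
    print(f"synapse-by-synapse correlation with t=0: {corr:.2f}")
    print(f"overlap of leading singular vector with u: {abs(u_now @ u):.2f}")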


Asunto(s)
Memoria/fisiología , Modelos Neurológicos , Red Nerviosa/fisiología , Redes Neurales de la Computación , Sinapsis/fisiología , Algoritmos , Simulación por Computador , Homeostasis , Aprendizaje , Memoria a Largo Plazo/fisiología , Plasticidad Neuronal/fisiología , Neuronas/fisiología , Dinámicas no Lineales , Programas Informáticos
16.
Neural Comput; 31(10): 1985-2003, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31393826

ABSTRACT

Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network: a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning, one trial at a time, has major consequences for the learning trajectory and its final outcome. Specifically, we show that least mean squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning in a learning-rule-dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.
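
A minimal sketch of the two readout updates contrasted above, in generic single-readout form (the network context and task structure are stripped away, and the correlated "trials" are synthetic): LMS steps along the raw activity vector, while the recursive-least-squares step used by FORCE whitens the step by a running inverse correlation matrix, which is the ingredient that damps interference between correlated trials.

    import numpy as np

    rng = np.random.default_rng(7)
    N = 50
    shared = rng.normal(size=N)
    r1 = shared + 0.2 * rng.normal(size=N)   # two highly correlated trial types
    r2 = shared + 0.2 * rng.normal(size=N)
    trials = [(r1, 1.0), (r2, -1.0)]

    w_lms = np.zeros(N); w_rls = np.zeros(N); P = np.eye(N)
    for t in range(400):
        r, y = trials[t % 2]
        # LMS: plain gradient step along r
        w_lms -= 0.01 * (w_lms @ r - y) * r
        # FORCE-style recursive least squares: whitened step
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w_rls -= (w_rls @ r - y) * k

    for name, w in (("LMS", w_lms), ("RLS", w_rls)):
        errs = [abs(w @ r - y) for r, y in trials]
        print(name, "final errors on the two trial types:", [round(e, 3) for e in errs])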


Subject(s)
Models, Neurological; Neural Networks, Computer; Animals
17.
Curr Biol; 27(15): 2337-2343.e3, 2017 Aug 07.
Article in English | MEDLINE | ID: mdl-28756950

ABSTRACT

The brain has an extraordinary ability to create an internal spatial map of the external world [1]. This map-like representation of environmental surroundings is encoded through specific types of neurons, located within the hippocampus and entorhinal cortex, which exhibit spatially tuned firing patterns [2, 3]. In addition to encoding space, these neurons are believed to be related to contextual information and memory [4-7]. One class of such cells is the grid cells, which are located within the entorhinal cortex, presubiculum, and parasubiculum [3, 8]. Grid cell firing forms a hexagonal array of firing fields, a pattern that is largely thought to reflect the operation of intrinsic self-motion-related computations [9-12]. If this is the case, then fields should be relatively uniform in size, number of spikes, and peak firing rate. However, it has been suggested that this is not in fact the case [3, 13]. The possibility exists that local spatial information also influences grid cells, which, if true, would greatly change the way in which grid cells are thought to contribute to place coding. Accordingly, we asked how discriminable the individual fields of a given grid cell are by looking at the distribution of field firing rates and the reproducibility of this distribution across trials. Grid fields were less uniform in intensity than expected, and the pattern of strong and weak fields was spatially stable and recurred across trials. The distribution remained unchanged even after arena rescaling, but not after remapping. This suggests that additional local information is being overlaid onto the global hexagonal pattern of grid cells.


Asunto(s)
Corteza Entorrinal/fisiología , Células de Red/fisiología , Percepción Espacial/fisiología , Animales , Masculino , Ratas , Ratas Long-Evans , Sinapsis/fisiología
18.
Phys Rev Lett; 118(25): 258101, 2017 Jun 23.
Article in English | MEDLINE | ID: mdl-28696758

ABSTRACT

Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

19.
Curr Opin Neurobiol; 46: 1-6, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28668365

ABSTRACT

Recurrent neural networks (RNNs) are a class of computational models that are often used as a tool to explain neurobiological phenomena, considering anatomical, electrophysiological and computational constraints. RNNs can either be designed to implement a certain dynamical principle, or they can be trained by input-output examples. Recently, there has been substantial progress in utilizing trained RNNs both for computational tasks and as explanations of neural phenomena. I will review how combining trained RNNs with reverse engineering can provide an alternative framework for modeling in neuroscience, potentially serving as a powerful hypothesis-generation tool. Despite the recent progress and potential benefits, many fundamental gaps remain on the way towards a theory of these networks. I will discuss these challenges and possible methods to attack them.


Subject(s)
Models, Neurological; Neural Networks, Computer; Animals; Humans; Neurosciences/methods; Neurosciences/trends
20.
J Neurosci; 37(17): 4508-4524, 2017 Apr 26.
Article in English | MEDLINE | ID: mdl-28348138

ABSTRACT

Action potentials, taking place over milliseconds, are the basis of neural computation. However, the dynamics of excitability over longer, behaviorally relevant timescales remain underexplored. A recent experiment used long-term recordings from single neurons to reveal fluctuations over multiple timescales in response to constant stimuli, along with more reliable responses to variable stimuli. Here, we demonstrate that this apparent paradox is resolved if neurons operate in a marginally stable dynamic regime, which we reveal using a novel inference method. Excitability in this regime is characterized by large fluctuations while retaining high sensitivity to external varying stimuli. A new model with a dynamic recovery timescale that interacts with excitability captures this dynamic regime and predicts the neurons' response with high accuracy. The model explains most experimental observations under several stimulus statistics. The compact structure of our model permits further exploration at the network level.

SIGNIFICANCE STATEMENT: Excitability is the basis for all neural computations, and its long-term dynamics reveal a complex combination of many timescales. We discovered that neural excitability operates in a marginally stable regime, in which the system is dominated by internal fluctuations while retaining high sensitivity to externally varying stimuli. We offer a novel approach to modeling excitability dynamics by assuming that the recovery timescale is itself a dynamic variable. Our model is able to capture a wide range of experimental phenomena using few parameters, with significantly higher predictive power than previous models.
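
A generic sketch of the modeling idea, with assumed equations and parameters (not the authors' fitted model): a threshold-like spiking probability is driven by an excitability variable whose recovery timescale is itself dynamic, slowing after activity, so that activity statistics fluctuate over timescales much longer than a spike.

    import numpy as np

    rng = np.random.default_rng(8)
    T = 20000                        # time steps (~ms)
    E, tau = 1.0, 100.0              # excitability; its recovery timescale (dynamic)
    spikes = np.zeros(T, dtype=bool)

    for t in range(T):
        p = 1.0 / (1.0 + np.exp(-(E - 0.8) / 0.05))  # constant stimulus, gain on E
        if rng.random() < 0.1 * p:                   # stochastic spiking
            spikes[t] = True
            E -= 0.05                # a spike depresses excitability...
            tau += 20.0              # ...and slows subsequent recovery
        E += (1.0 - E) / tau         # recovery toward baseline at the dynamic rate
        tau += (100.0 - tau) / 1000.0  # the timescale itself slowly relaxes

    half = T // 2
    print(f"rate early: {spikes[:half].mean():.3f}  late: {spikes[half:].mean():.3f}")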


Subject(s)
Action Potentials/physiology; Electrophysiological Phenomena/physiology; Neurons/physiology; Algorithms; Humans; Models, Neurological; Nerve Net/physiology; Nonlinear Dynamics