Results 1 - 20 of 36
1.
Proc Natl Acad Sci U S A ; 120(12): e2216805120, 2023 03 21.
Article in English | MEDLINE | ID: mdl-36920920

ABSTRACT

Homeostasis, the ability to maintain a relatively constant internal environment in the face of perturbations, is a hallmark of biological systems. It is believed that this constancy is achieved through multiple internal regulation and control processes. Given observations of a system, or even a detailed model of one, it is both valuable and extremely challenging to extract the control objectives of the homeostatic mechanisms. In this work, we develop a robust data-driven method to identify these objectives, namely to understand: "what does the system care about?". We propose an algorithm, Identifying Regulation with Adversarial Surrogates (IRAS), that receives an array of temporal measurements of the system and outputs a candidate for the control objective, expressed as a combination of observed variables. IRAS is an iterative algorithm consisting of two competing players. The first player, realized by an artificial deep neural network, aims to minimize a measure of invariance we refer to as the coefficient of regulation. The second player aims to render the task of the first player more difficult by forcing it to extract information about the temporal structure of the data, which is absent from similar "surrogate" data. We test the algorithm on four synthetic and one natural data set, demonstrating excellent empirical results. Interestingly, our approach can also be used to extract conserved quantities, e.g., energy and momentum, in purely physical systems, as we demonstrate empirically.
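
A minimal sketch of the IRAS idea (not the authors' implementation): restrict the candidate objective to a linear combination g(x) = w.x and take the coefficient of regulation to be the variance of g's one-step increments on real data relative to time-shuffled surrogates. Under these assumptions the two-player game collapses to a generalized eigenvalue problem; the paper instead uses a deep network and an iterative adversarial scheme, so the toy system and all names below are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy system: x0 + x1 is regulated (changes slowly); each coordinate drifts.
T = 5000
drift = np.cumsum(rng.normal(size=T))
x = np.column_stack([drift + 0.05 * rng.normal(size=T),
                     -drift + 0.05 * rng.normal(size=T)])

def increment_cov(data):
    """Covariance of one-step increments: how much w.x varies in time."""
    d = np.diff(data, axis=0)
    return d.T @ d / len(d)

A = increment_cov(x)                        # temporal variation, real data
B = increment_cov(x[rng.permutation(T)])    # same, on shuffled surrogates

# Coefficient of regulation for w: (w'Aw)/(w'Bw). Its minimizer is the
# smallest generalized eigenvector, i.e., the most regulated combination.
vals, vecs = eigh(A, B)
w = vecs[:, 0] / np.linalg.norm(vecs[:, 0])
print("candidate conserved combination:", w)  # ~ (1, 1)/sqrt(2), up to sign
```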


Subjects
Algorithms , Homeostasis
2.
PLoS Comput Biol ; 20(1): e1011784, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38241417

ABSTRACT

Attractors play a key role in a wide range of processes including learning and memory. Due to recent innovations in recording methods, there is increasing evidence for the existence of attractor dynamics in the brain. Yet, our understanding of how these attractors emerge or disappear in a biological system is lacking. By following the spontaneous network bursts of cultured cortical networks, we are able to define a vocabulary of spatiotemporal patterns and show that they function as discrete attractors in the network dynamics. We show that electrically stimulating specific attractors eliminates them from the spontaneous vocabulary, while they are still robustly evoked by the electrical stimulation. This seemingly paradoxical finding can be explained by a Hebbian-like strengthening of specific pathways into the attractors, at the expense of weakening non-evoked pathways into the same attractors. We verify this hypothesis and provide a mechanistic explanation for the underlying changes supporting this effect.
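
A generic illustration of discrete attractor dynamics of the kind invoked here (a textbook Hopfield network, not the paper's cultured-network data or model): binary patterns stored with a Hebbian rule become fixed points, and corrupted cues converge back to the nearest stored pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))   # the stored "vocabulary"

# Hebbian weights with zero self-coupling.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def converge(state, steps=50):
    """Iterate the network dynamics toward its attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt 15% of one stored pattern; the dynamics restore it.
cue = patterns[0].copy()
cue[rng.choice(N, size=15, replace=False)] *= -1
recovered = converge(cue)
print("overlap with stored pattern:", recovered @ patterns[0] / N)  # ~1.0
```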


Subjects
Learning , Neurons , Neurons/physiology , Learning/physiology , Brain
3.
PLoS Comput Biol ; 20(2): e1011852, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38315736

ABSTRACT

Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or 'frame of reference'. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation, such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: One that generates an oscillation and one that implements a coupling function between the internal oscillation and external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.
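
A sketch of the reduced two-oscillator picture (an illustrative phase model under our assumptions, not the trained networks themselves): if the phase difference phi between the internal oscillation and the external reference obeys dphi/dt = -K sin(n*phi), the coupling function has n stable lockings, and each locking can hold one phase-coded memory.

```python
import numpy as np

K, n, dt = 1.0, 3, 0.01   # n = 3 stable phase offsets: 0, 2*pi/3, 4*pi/3

def settle(phi, steps=5000):
    """Integrate the phase-difference dynamics to its attractor."""
    for _ in range(steps):
        phi += dt * (-K * np.sin(n * phi))
    return phi % (2 * np.pi)

# Different transient "stimuli" (initial phases) end in different lockings.
for phi0 in (0.3, 2.0, 4.5):
    print(f"initial phase {phi0:.1f} -> locked at {settle(phi0):.3f}")
# Prints values near 0.000, 2.094 (2*pi/3), and 4.189 (4*pi/3).
```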


Subjects
Brain , Memory, Short-Term , Neural Networks, Computer
4.
Neural Comput ; 33(3): 827-852, 2021 03.
Article in English | MEDLINE | ID: mdl-33513322

ABSTRACT

Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity: the neural engineering framework. We analytically solve the framework for the classic ring model, a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.
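
A sketch of the neural engineering framework applied to a static angle (illustrative parameters, not the paper's analytical solution): neurons have preferred directions on the ring and rectified cosine tuning, and a linear readout is fit by least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
enc = rng.uniform(0, 2 * np.pi, N)       # preferred angles (encoders)
gain = rng.uniform(0.5, 1.5, N)
bias = rng.uniform(-0.5, 0.5, N)

def rates(theta):
    """Rectified cosine tuning: population rates for angle(s) theta."""
    drive = gain * np.cos(np.subtract.outer(theta, enc)) + bias
    return np.maximum(drive, 0.0)

# Fit linear decoders for (cos, sin) of the angle over sample points.
thetas = np.linspace(0, 2 * np.pi, 400, endpoint=False)
A = rates(thetas)                                        # (400, N)
target = np.column_stack([np.cos(thetas), np.sin(thetas)])
D, *_ = np.linalg.lstsq(A, target, rcond=None)

# Decode a held-out angle from the population response.
xy = rates(np.array([1.0])) @ D
print("decoded angle:", np.arctan2(xy[0, 1], xy[0, 0]))  # ~1.0
```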


Subjects
Neural Networks, Computer
5.
PLoS Comput Biol ; 16(5): e1007825, 2020 05.
Article in English | MEDLINE | ID: mdl-32392249

ABSTRACT

Biological networks are often heterogeneous in their connectivity pattern, with degree distributions featuring a heavy tail of highly connected hubs. The implications of this heterogeneity on dynamical properties are a topic of much interest. Here we show that interpreting topology as a feedback circuit can provide novel insights on dynamics. Based on the observation that in finite networks a small number of hubs have a disproportionate effect on the entire system, we construct an approximation by lumping these nodes into a single effective hub, which acts as a feedback loop with the rest of the nodes. We use this approximation to study dynamics of networks with scale-free degree distributions, focusing on their probability of convergence to fixed points. We find that the approximation preserves convergence statistics over a wide range of settings. Our mapping provides a parametrization of scale-free topology which is predictive at the ensemble level and also retains properties of individual realizations. Specifically, outgoing hubs have an organizing role that can drive the network to convergence, in analogy to suppression of chaos by an external drive. In contrast, incoming hubs have no such property, resulting in a marked difference between the behavior of networks with outgoing vs. incoming scale-free degree distributions. Combining feedback analysis with mean field theory predicts a transition between convergent and divergent dynamics, which is corroborated by numerical simulations. Furthermore, these results highlight the effect of a handful of outlying hubs, rather than of the connectivity distribution law as a whole, on network dynamics.
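
A schematic sketch of the lumping step (an illustrative construction under our assumptions, not the paper's exact derivation): the strongest outgoing hubs are merged into one effective node whose outgoing weights are summed, turning the network into "bulk + feedback hub", and convergence of the two systems can then be compared over an ensemble.

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 300, 5

# Toy heterogeneous network x -> tanh(Wx); W[i, j] is the weight from j to i.
# Heavy-tailed column scales create a few strong outgoing hubs.
scale = rng.pareto(a=2.0, size=N) + 1.0
W = rng.normal(size=(N, N)) * scale[None, :] / np.sqrt(N)

hubs = np.argsort(np.linalg.norm(W, axis=0))[-k:]   # strongest out-hubs
rest = np.setdiff1d(np.arange(N), hubs)

# Reduced network: one effective hub, outgoing weights summed over the
# lumped hubs, incoming weights averaged over them.
n = len(rest)
W_red = np.zeros((n + 1, n + 1))
W_red[:n, :n] = W[np.ix_(rest, rest)]
W_red[:n, n] = W[rest][:, hubs].sum(axis=1)         # effective hub -> bulk
W_red[n, :n] = W[hubs][:, rest].mean(axis=0)        # bulk -> effective hub

def converges(M, steps=2000, tol=1e-6):
    """Run x -> tanh(Mx) from a random state; True if a fixed point is reached."""
    x = rng.normal(size=M.shape[0])
    for _ in range(steps):
        x_new = np.tanh(M @ x)
        if np.linalg.norm(x_new - x) < tol:
            return True
        x = x_new
    return False

print("full network converges:  ", converges(W))
print("lumped network converges:", converges(W_red))
```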


Subjects
Computational Biology/methods , Feedback , Gene Regulatory Networks/physiology , Models, Statistical , Models, Theoretical , Molecular Dynamics Simulation , Probability , Systems Analysis
6.
Isr Med Assoc J ; 23(7): 401-407, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34251120

ABSTRACT

BACKGROUND: The coronavirus disease-2019 (COVID-19) pandemic forced drastic changes in all layers of life. Social distancing and lockdown drove the educational system to uncharted territories at an accelerated pace, leaving educators little time to adjust. OBJECTIVES: To describe changes in teaching during the first phase of the COVID-19 pandemic. METHODS: We described the steps implemented at the Technion-Israel Institute of Technology Faculty of Medicine during the initial 4 months of the COVID-19 pandemic to preserve teaching and the academic ecosystem. RESULTS: Several established methodologies, such as the flipped classroom and active learning, demonstrated effectiveness. In addition, we used creative methods to teach clinical medicine during the ban on bedside teaching and modified community engagement activities to meet COVID-19-induced community needs. CONCLUSIONS: The challenges and the lessons learned from teaching during the COVID-19 pandemic prompted us to adjust our teaching methods and curriculum using multiple online teaching methods and promoting self-learning. The experience also provided invaluable insights into our pedagogy and the teaching of medicine in the future, with emphasis on students and faculty being part of the changes and adjustments in curriculum and teaching methods. However, personal interactions are essential to medical school education, as are laboratories, group simulations, and bedside teaching.


Subjects
COVID-19 , Education, Distance , Education, Medical , Physical Distancing , COVID-19/epidemiology , COVID-19/prevention & control , Communicable Disease Control/methods , Education, Distance/methods , Education, Distance/organization & administration , Education, Medical/organization & administration , Education, Medical/trends , Humans , Needs Assessment , Organizational Innovation , Outcome Assessment, Health Care , SARS-CoV-2 , Schools, Medical , Teaching/trends
7.
Neural Comput ; 31(10): 1985-2003, 2019 10.
Article in English | MEDLINE | ID: mdl-31393826

ABSTRACT

Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network: a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning, one trial at a time, has major consequences for the learning trajectory and its final outcome. Specifically, we show that least mean squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning in a learning-rule-dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.
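
For concreteness, here are the two readout update rules being compared, in the standard reservoir-readout setting (illustrative dimensions and a toy target; the paper trains full recurrent networks): FORCE uses recursive least squares, whose inverse-correlation matrix P accounts for the statistics of past activity, while LMS takes a plain gradient step and can overwrite earlier trials.

```python
import numpy as np

def force_step(w, P, r, error):
    """One FORCE (recursive-least-squares) update of readout w."""
    Pr = P @ r
    c = 1.0 / (1.0 + r @ Pr)
    P -= c * np.outer(Pr, Pr)     # running inverse correlation of states
    w -= c * error * Pr
    return w, P

def lms_step(w, r, error, lr=1e-3):
    """One least-mean-squares (plain gradient descent) update."""
    return w - lr * error * r

rng = np.random.default_rng(4)
N, T = 100, 2000
w_force, P, w_lms = np.zeros(N), np.eye(N), np.zeros(N)

for _ in range(T):
    r = np.tanh(rng.normal(size=N))   # stand-in for a reservoir state
    target = r[:10].sum()             # arbitrary learnable target
    w_force, P = force_step(w_force, P, r, w_force @ r - target)
    w_lms = lms_step(w_lms, r, w_lms @ r - target)

r_test = np.tanh(rng.normal(size=N))
print("FORCE test error:", abs(w_force @ r_test - r_test[:10].sum()))
print("LMS test error:  ", abs(w_lms @ r_test - r_test[:10].sum()))
```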


Subjects
Models, Neurological , Neural Networks, Computer , Animals
8.
Nature ; 497(7451): 585-90, 2013 May 30.
Article in English | MEDLINE | ID: mdl-23685452

ABSTRACT

Single-neuron activity in the prefrontal cortex (PFC) is tuned to mixtures of multiple task-related aspects. Such mixed selectivity is highly heterogeneous, seemingly disordered and therefore difficult to interpret. We analysed the neural activity recorded in monkeys during an object sequence memory task to identify a role of mixed selectivity in subserving the cognitive functions ascribed to the PFC. We show that mixed selectivity neurons encode distributed information about all task-relevant aspects. Each aspect can be decoded from the population of neurons even when single-cell selectivity to that aspect is eliminated. Moreover, mixed selectivity offers a significant computational advantage over specialized responses in terms of the repertoire of input-output functions implementable by readout neurons. This advantage originates from the highly diverse nonlinear selectivity to mixtures of task-relevant variables, a signature of high-dimensional neural representations. Crucially, this dimensionality is predictive of animal behaviour as it collapses in error trials. Our findings recommend a shift of focus for future studies from neurons that have easily interpretable response tuning to the widely observed, but rarely analysed, mixed selectivity neurons.
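
A generic demonstration of the computational advantage described here (a toy reproduction of the principle, not the paper's analyses of monkey recordings): XOR of two task variables is not linearly decodable from the variables themselves, but becomes linearly separable after random nonlinear mixing expands the dimensionality of the representation.

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])   # XOR in +/-1 coding; not linearly separable in X

# Mixed-selectivity layer: random projections + threshold nonlinearity.
M = 50
proj = rng.normal(size=(2, M))
thresh = rng.uniform(-0.5, 0.5, M)
R = np.maximum(X @ proj - thresh, 0.0)   # (4, M) population response

# A linear readout of the mixed representation now solves the task.
w, *_ = np.linalg.lstsq(R, y, rcond=None)
print("readout:", np.sign(R @ w))        # [-1.  1.  1. -1.]
```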


Subjects
Cognition/physiology , Haplorhini/physiology , Models, Neurological , Neurons/physiology , Prefrontal Cortex/cytology , Prefrontal Cortex/physiology , Animals , Behavior, Animal/physiology , Memory/physiology , Single-Cell Analysis
9.
J Neurosci ; 37(17): 4508-4524, 2017 04 26.
Article in English | MEDLINE | ID: mdl-28348138

ABSTRACT

Action potentials, taking place over milliseconds, are the basis of neural computation. However, the dynamics of excitability over longer, behaviorally relevant timescales remain underexplored. A recent experiment used long-term recordings from single neurons to reveal multiple timescale fluctuations in response to constant stimuli, along with more reliable responses to variable stimuli. Here, we demonstrate that this apparent paradox is resolved if neurons operate in a marginally stable dynamic regime, which we reveal using a novel inference method. Excitability in this regime is characterized by large fluctuations while retaining high sensitivity to external varying stimuli. A new model with a dynamic recovery timescale that interacts with excitability captures this dynamic regime and predicts the neurons' response with high accuracy. The model explains most experimental observations under several stimulus statistics. The compact structure of our model permits further exploration on the network level.

SIGNIFICANCE STATEMENT: Excitability is the basis for all neural computations and its long-term dynamics reveal a complex combination of many timescales. We discovered that neural excitability operates under a marginally stable regime in which the system is dominated by internal fluctuation while retaining high sensitivity to externally varying stimuli. We offer a novel approach to modeling excitability dynamics by assuming that the recovery timescale is itself a dynamic variable. Our model is able to capture a wide range of experimental phenomena using few parameters with significantly higher predictive power than previous models.
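
A schematic toy of the modeling idea (our illustrative equations, not the paper's fitted model): excitability recovers toward baseline with a timescale that is itself a slow dynamic variable, lengthening with activity, so firing under constant stimulation fluctuates over timescales far longer than a single spike.

```python
import numpy as np

dt, T = 1.0, 20000            # ms, number of steps
e, tau = 1.0, 50.0            # excitability; dynamic recovery timescale (ms)
theta = 0.9                   # firing threshold
spikes = []

for t in range(T):
    if e > theta:             # constant suprathreshold stimulation
        spikes.append(t)
        e -= 0.1              # use-dependent drop in excitability
        tau += 5.0            # activity slows subsequent recovery
    e += dt * (1.0 - e) / tau             # recovery toward baseline
    tau += dt * (50.0 - tau) / 5000.0     # tau itself relaxes, very slowly

isis = np.diff(spikes)
print("spikes:", len(spikes), "| first/last inter-spike intervals:",
      isis[0], isis[-1])      # intervals stretch as tau builds up
```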


Subjects
Action Potentials/physiology , Electrophysiological Phenomena/physiology , Neurons/physiology , Algorithms , Humans , Models, Neurological , Nerve Net/physiology , Nonlinear Dynamics
10.
PLoS Comput Biol ; 13(1): e1005338, 2017 01.
Article in English | MEDLINE | ID: mdl-28099436

ABSTRACT

Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus to its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing-fundamental stimulus with resolved or unresolved harmonics, or low- and high-amplitude stimuli with the same spectral content, all give rise to the same percept of pitch. In contrast, the AN representations of these different stimuli are not invariant to these effects. In fact, due to the saturation and nonlinearity of both cochlear and inner hair cell responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how the pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC): the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and unresolved harmonics, missing-fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments.
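
A sketch of the sparse-coding step in isolation (an illustrative harmonic dictionary and greedy matching pursuit; the paper's model adds a cochlear front end, a spatiotemporal dictionary, and a harmonic sieve): a missing-fundamental stimulus is explained by the atom of the absent fundamental.

```python
import numpy as np

fs, dur = 8000, 0.05
t = np.arange(int(fs * dur)) / fs

# Overcomplete dictionary: harmonic-complex atoms, f0 from 100 to 300 Hz.
f0s = np.arange(100, 301, 5)
def atom(f0):
    s = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))
    return s / np.linalg.norm(s)
D = np.column_stack([atom(f) for f in f0s])

# Missing-fundamental stimulus: harmonics 3-5 of 200 Hz, no energy at 200 Hz.
x = sum(np.sin(2 * np.pi * k * 200 * t) for k in range(3, 6))

# Matching pursuit: greedily select the few atoms explaining the signal.
residual, chosen = x.copy(), []
for _ in range(3):
    corr = D.T @ residual
    i = int(np.argmax(np.abs(corr)))
    chosen.append(f0s[i])
    residual -= corr[i] * D[:, i]
print("selected fundamentals (Hz):", chosen)  # led by 200 despite missing f0
```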


Subjects
Cochlear Nerve/physiology , Models, Neurological , Nerve Fibers/physiology , Pitch Perception/physiology , Acoustic Stimulation , Computational Biology , Humans , Music
11.
Phys Rev Lett ; 118(25): 258101, 2017 Jun 23.
Article in English | MEDLINE | ID: mdl-28696758

ABSTRACT

Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

12.
Hippocampus ; 25(12): 1599-613, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26105192

ABSTRACT

Navigation requires integration of external and internal inputs to form a representation of location. Part of this integration is considered to be carried out by the grid-cell network in the medial entorhinal cortex (MEC). However, the structure of this neural network is unknown. To shed light on this structure, we measured noise correlations between 508 pairs of simultaneously recorded grid cells from previously published recordings. We differentiated between pure grid and conjunctive cells (pure grid in Layers II, III, and VI vs. conjunctive in Layers III and V; only Layer III was bimodal), and devised a new method to classify cell pairs as belonging or not belonging to the same module. We found that pairs from the same module show significantly more correlations than pairs from different modules. The correlations between pure grid cells decreased in strength as their relative spatial phase increased. However, correlations were mostly at 0 time-lag, suggesting that the source of correlations was not only synaptic, but rather resulted mostly from common input. Given our measured correlations, the two functional groups of grid cells (pure vs. conjunctive), and the known disorganized recurrent connections within Layer II, we propose the following model: conjunctive cells in deep layers form an attractor network whose activity is governed by velocity-controlled signals. A second manifold in Layer II receives organized feedforward projections from the deep layers, giving rise to pure grid cells. Numerical simulations indicate that organized projections induce correlations such as those we measure in superficial layers. Our results provide new evidence for the functional anatomy of the entorhinal circuit, suggesting that strong phase-organized feedforward projections support grid fields in the superficial layers.
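
A sketch of the zero-lag analysis (a generic noise-correlation recipe, not the paper's full module-classification pipeline): when two cells share instantaneous common input, the cross-correlation of their mean-subtracted spike counts peaks at lag 0.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 10000
common = rng.poisson(2.0, T)          # shared drive
a = rng.poisson(1.0, T) + common      # cell 1, counts per time bin
b = rng.poisson(1.0, T) + common      # cell 2, counts per time bin

def crosscorr(x, y, max_lag=20):
    """Correlation of standardized counts at each time lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return {lag: np.mean(x[max(0, lag):T + min(0, lag)] *
                         y[max(0, -lag):T - max(0, lag)])
            for lag in range(-max_lag, max_lag + 1)}

cc = crosscorr(a, b)
print("peak at lag:", max(cc, key=cc.get), "| corr at 0:", round(cc[0], 3))
```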


Subjects
Entorhinal Cortex/cytology , Entorhinal Cortex/physiology , Neurons/cytology , Neurons/physiology , Spatial Navigation/physiology , Action Potentials , Animals , Cluster Analysis , Computer Simulation , Datasets as Topic , Exploratory Behavior/physiology , Head Movements/physiology , Internet , Microelectrodes , Models, Neurological , Neural Pathways/cytology , Neural Pathways/physiology , Rats , Signal Processing, Computer-Assisted , Theta Rhythm
13.
J Neurosci ; 33(9): 3844-56, 2013 Feb 27.
Article in English | MEDLINE | ID: mdl-23447596

ABSTRACT

Intelligent behavior requires integrating several sources of information in a meaningful fashion, be it context with stimulus or shape with color and size. This requires the underlying neural mechanism to respond in a different manner to similar inputs (discrimination), while maintaining a consistent response for noisy variations of the same input (generalization). We show that neurons that mix information sources via random connectivity can form an easy-to-read representation of input combinations. Using analytical and numerical tools, we show that the coding level, or sparseness, of these neurons' activity controls a trade-off between generalization and discrimination, with the optimal level depending on the task at hand. In all realistic situations that we analyzed, the optimal fraction of inputs to which a neuron responds is close to 0.1. Finally, we predict a relation between a measurable property of the neural representation and task performance.
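
A sketch of the trade-off (an illustrative setup, not the paper's analytical calculation): random mixing neurons fire only for the top fraction f of their drives, and varying the coding level f moves the representation between better discrimination of different inputs and better generalization across noisy versions of the same input.

```python
import numpy as np

rng = np.random.default_rng(8)
N, M = 50, 2000                  # input dimension, mixing neurons
J = rng.normal(size=(M, N))

def represent(x, f):
    """Binary response: only the top-f fraction of driven neurons fire."""
    h = J @ x
    return (h > np.quantile(h, 1 - f)).astype(float)

x1, x2 = rng.normal(size=N), rng.normal(size=N)
x1_noisy = x1 + 0.5 * rng.normal(size=N)

for f in (0.01, 0.1, 0.5):
    r1, r2, r1n = represent(x1, f), represent(x2, f), represent(x1_noisy, f)
    same = r1 @ r1n / (f * M)    # generalization: overlap for same input
    diff = r1 @ r2 / (f * M)     # discrimination: overlap for different inputs
    print(f"f={f}: same-input overlap {same:.2f}, different-input {diff:.2f}")
```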


Subjects
Discrimination, Psychological/physiology , Generalization, Psychological/physiology , Models, Neurological , Neurons/physiology , Action Potentials/physiology , Animals , Color Perception , Humans , Nerve Net/physiology , Neurons/classification , Photic Stimulation , Size Perception
14.
Elife ; 12, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38695551

ABSTRACT

Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) Fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.


Subjects
Neurons , Animals , Neurons/physiology , Machine Learning , Neural Networks, Computer , Learning , CA1 Region, Hippocampal/physiology , CA1 Region, Hippocampal/cytology , Rats
15.
bioRxiv ; 2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38370656

ABSTRACT

Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) Fast familiarity with the environment; (ii) slow implicit regularization; (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.

16.
Neural Comput ; 25(3): 626-49, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23272922

ABSTRACT

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three high-dimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.
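
The core of the technique is compact enough to sketch directly; here it is applied to an untrained random network for illustration (the paper applies it to trained RNNs): minimize the scalar "speed" q(x) = 0.5 * ||F(x) - x||^2 from many initial conditions, then linearize around the minima.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
N = 20
W = 1.5 * rng.normal(size=(N, N)) / np.sqrt(N)    # random recurrent weights

F = lambda x: np.tanh(W @ x)                      # one-step map of the RNN
q = lambda x: 0.5 * np.sum((F(x) - x) ** 2)       # "speed" of the dynamics

# Minimize q from many starting states; q ~ 0 marks fixed points,
# small-but-nonzero minima mark slow points.
candidates = [minimize(q, rng.normal(size=N), method="L-BFGS-B")
              for _ in range(20)]
candidates.sort(key=lambda res: res.fun)
print("smallest q values:", [round(res.fun, 8) for res in candidates[:5]])

# Linearize around the best candidate: Jacobian eigenvalues give local
# stability (|lambda| < 1 means a stable direction of the map).
x_star = candidates[0].x
J = (1 - np.tanh(W @ x_star) ** 2)[:, None] * W
print("max |eigenvalue| at candidate:", np.abs(np.linalg.eigvals(J)).max())
```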


Subjects
Artificial Intelligence , Neural Networks, Computer
17.
Curr Opin Neurobiol ; 80: 102721, 2023 06.
Article in English | MEDLINE | ID: mdl-37043892

ABSTRACT

Learning is a multi-faceted phenomenon of critical importance and has hence attracted a great deal of research, both experimental and theoretical. In this review, we consider some of the paradigmatic examples of learning and discuss common themes in theoretical learning research, such as levels of modeling and their relation to experimental observations, and mathematical ideas common to different types of learning.


Subjects
Learning , Models, Theoretical , Mathematics
18.
Neuron ; 111(15): 2348-2356.e5, 2023 08 02.
Article in English | MEDLINE | ID: mdl-37315557

ABSTRACT

Memories of past events can be recalled long after the event, indicating stability. But new experiences are also integrated into existing memories, indicating plasticity. In the hippocampus, spatial representations are known to remain stable but have also been shown to drift over long periods of time. We hypothesized that experience, more than the passage of time, is the driving force behind representational drift. We compared the within-day stability of place cells' representations in dorsal CA1 of the hippocampus of mice traversing two similar, familiar tracks for different durations. We found that the more time the animals spent actively traversing the environment, the greater the representational drift, regardless of the total elapsed time between visits. Our results suggest that spatial representation is a dynamic process, related to the ongoing experiences within a specific context, and is related to memory update rather than to passive forgetting.


Subjects
Hippocampus , Place Cells , Mice , Animals , Mental Recall , Gravitation
19.
iScience ; 25(3): 103924, 2022 Mar 18.
Article in English | MEDLINE | ID: mdl-35265809

ABSTRACT

Drug resistance and metastasis, the major complications in cancer, both entail adaptation of cancer cells to stress, whether a drug or a lethal new environment. Intriguingly, these adaptive processes share similar features that cannot be explained by a pure Darwinian scheme, including dormancy, increased heterogeneity, and stress-induced plasticity. Here, we propose that learning theory offers a framework to explain these features and may shed light on these two intricate processes. In this framework, learning is performed at the single-cell level, by stress-driven exploratory trial-and-error. Such a process is not contingent on pre-existing pathways but on a random search for a state that diminishes the stress. We review underlying mechanisms that may support this search, and show by using a learning model that such exploratory learning is feasible in a high-dimensional system such as the cell. At the population level, we view the tissue as a network of exploring agents that communicate, restraining cancer formation in health. In this view, disease results from the breakdown of the balance between cellular exploratory drive and tissue homeostasis.

20.
Science ; 376(6590): 267-275, 2022 04 15.
Article in English | MEDLINE | ID: mdl-35420959

ABSTRACT

Tuft dendrites of layer 5 pyramidal neurons form specialized compartments important for motor learning and performance, yet their computational capabilities remain unclear. Structural-functional mapping of the tuft tree from the motor cortex during motor tasks revealed two morphologically distinct populations of layer 5 pyramidal tract neurons (PTNs) that exhibit specific tuft computational properties. Early bifurcating and large nexus PTNs showed marked tuft functional compartmentalization, representing different motor variable combinations within and between their two tuft hemi-trees. By contrast, late bifurcating and smaller nexus PTNs showed synchronous tuft activation. Dendritic structure and dynamic recruitment of the N-methyl-d-aspartate (NMDA)-spiking mechanism explained the differential compartmentalization patterns. Our findings support a morphologically dependent framework for motor computations, in which independent amplification units can be combinatorically recruited to represent different motor sequences within the same tree.


Subjects
Dendrites , Motor Cortex , Action Potentials/physiology , Dendrites/physiology , Neurons , Pyramidal Cells/physiology