Results 1 - 18 of 18
1.
Proc Natl Acad Sci U S A ; 120(12): e2216805120, 2023 03 21.
Article in English | MEDLINE | ID: mdl-36920920

ABSTRACT

Homeostasis, the ability to maintain a relatively constant internal environment in the face of perturbations, is a hallmark of biological systems. It is believed that this constancy is achieved through multiple internal regulation and control processes. Given observations of a system, or even a detailed model of one, it is both valuable and extremely challenging to extract the control objectives of the homeostatic mechanisms. In this work, we develop a robust data-driven method to identify these objectives, namely to understand: "what does the system care about?". We propose an algorithm, Identifying Regulation with Adversarial Surrogates (IRAS), that receives an array of temporal measurements of the system and outputs a candidate for the control objective, expressed as a combination of observed variables. IRAS is an iterative algorithm consisting of two competing players. The first player, realized by an artificial deep neural network, aims to minimize a measure of invariance we refer to as the coefficient of regulation. The second player aims to render the task of the first player more difficult by forcing it to extract information about the temporal structure of the data, which is absent from similar "surrogate" data. We test the algorithm on four synthetic and one natural data set, demonstrating excellent empirical results. Interestingly, our approach can also be used to extract conserved quantities, e.g., energy and momentum, in purely physical systems, as we demonstrate empirically.


Subjects
Algorithms; Homeostasis
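The adversarial-surrogate idea above can be illustrated with a deliberately small stand-in: a toy system that conserves x + y within each trajectory, a variance-ratio "coefficient of regulation", and a grid search over linear readouts in place of the deep-network player. Everything here (the conserved quantity, the parameters, the search) is an illustrative sketch, not the paper's IRAS implementation.

```python
import math
import random

random.seed(0)

# Toy system: each trajectory conserves E = x + y (a stand-in control
# objective), while E itself differs across trajectories.
def make_trajectories(n_traj=50, n_steps=100):
    trajs = []
    for _ in range(n_traj):
        E = random.uniform(1.0, 5.0)           # conserved value for this run
        x = random.uniform(0.0, E)
        traj = []
        for _ in range(n_steps):
            x = min(max(x + random.gauss(0.0, 0.05), 0.0), E)
            traj.append((x, E - x))            # y = E - x keeps x + y = E
        trajs.append(traj)
    return trajs

def coefficient_of_regulation(w, trajs):
    # variance of the readout w.(x, y) within trajectories, divided by its
    # variance over pooled "surrogate" data that ignores temporal structure
    within, pooled = [], []
    for traj in trajs:
        vals = [w[0] * x + w[1] * y for x, y in traj]
        m = sum(vals) / len(vals)
        within.append(sum((v - m) ** 2 for v in vals) / len(vals))
        pooled.extend(vals)
    mp = sum(pooled) / len(pooled)
    var_surr = sum((v - mp) ** 2 for v in pooled) / len(pooled)
    return (sum(within) / len(within)) / var_surr

trajs = make_trajectories()
# crude stand-in for the adversarial game: grid search over unit readouts
angles = [i * math.pi / 180 for i in range(180)]
best = min((coefficient_of_regulation((math.cos(a), math.sin(a)), trajs),
            (math.cos(a), math.sin(a))) for a in angles)
cor, w = best
```

The minimizing readout recovers the conserved direction (1, 1) with a near-zero coefficient of regulation, which is the kind of "what the system cares about" answer the algorithm outputs.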
2.
Neuron ; 107(5): 954-971.e9, 2020 09 09.
Article in English | MEDLINE | ID: mdl-32589878

ABSTRACT

Adaptive movements are critical for animal survival. To guide future actions, the brain monitors various outcomes, including achievement of movement and appetitive goals. The nature of these outcome signals and their neuronal and network realization in the motor cortex (M1), which directs skilled movements, is largely unknown. Using a dexterity task, calcium imaging, optogenetic perturbations, and behavioral manipulations, we studied outcome signals in the murine forelimb M1. We found two populations of layer 2-3 neurons, termed success- and failure-related neurons, that develop with training, and report end results of trials. In these neurons, prolonged responses were recorded after success or failure trials independent of reward and kinematics. In addition, the initial state of layer 5 pyramidal tract neurons contained a memory trace of the previous trial's outcome. Intertrial cortical activity was needed to learn new task requirements. These M1 layer-specific performance outcome signals may support reinforcement motor learning of skilled behavior.


Subjects
Learning/physiology; Motor Cortex/cytology; Motor Cortex/physiology; Motor Skills/physiology; Pyramidal Cells/cytology; Pyramidal Cells/physiology; Animals; Male; Mice; Mice, Inbred C57BL
3.
Neural Comput ; 32(4): 794-828, 2020 04.
Article in English | MEDLINE | ID: mdl-32069175

ABSTRACT

Optimality principles have been useful in explaining many aspects of biological systems. In the context of neural encoding in sensory areas, optimality is naturally formulated in a Bayesian setting as neural tuning which minimizes mean decoding error. Many works optimize Fisher information, which approximates the minimum mean square error (MMSE) of the optimal decoder for long encoding times but may be misleading for short encoding times. We study MMSE-optimal neural encoding of a multivariate stimulus by uniform populations of spiking neurons, under firing rate constraints for each neuron as well as for the entire population. We show that the population-level constraint is essential for the formulation of a well-posed problem with finite optimal tuning widths, and that optimal tuning aligns with the principal components of the prior distribution. Numerical evaluation of the two-dimensional case shows that encoding only the dimension with higher variance is optimal for short encoding times. We also compare direct MMSE optimization to optimization of several proxies to MMSE: Fisher information, maximum likelihood estimation error, and the Bayesian Cramér-Rao bound. We find that optimization of these measures yields qualitatively misleading results regarding MMSE-optimal tuning and its dependence on encoding time and energy constraints.


Subjects
Action Potentials/physiology; Models, Neurological; Neurons/physiology; Bayes Theorem; Humans
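The abstract's warning about Fisher information can be made concrete with a small numerical sketch: for a uniform population of Gaussian tuning curves with independent Poisson spiking, total Fisher information grows as tuning narrows, regardless of encoding time. The population model below is a generic textbook setup with illustrative amplitude and spacing, not the paper's exact formulation.

```python
import math

def fisher_info(sigma, amp=10.0, spacing=0.1, s=0.0, span=5.0):
    # Total Fisher information at stimulus s for a uniform population of
    # Gaussian tuning curves f_i with independent Poisson spiking:
    #   I(s) = sum_i f_i'(s)**2 / f_i(s)
    total = 0.0
    n = int(2 * span / spacing) + 1
    for i in range(n):
        c = -span + i * spacing                       # preferred stimulus
        f = amp * math.exp(-(s - c) ** 2 / (2 * sigma ** 2))
        if f > 1e-12:
            fprime = f * (c - s) / sigma ** 2
            total += fprime ** 2 / f
    return total

narrow, wide = fisher_info(0.2), fisher_info(1.0)
```

For a dense population this sum approaches amp * sqrt(2*pi) / (spacing * sigma), so Fisher information always rewards narrower tuning -- exactly the proxy behavior that can mislead about MMSE-optimal widths at short encoding times.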
4.
Neural Comput ; 30(8): 2056-2112, 2018 08.
Article in English | MEDLINE | ID: mdl-29949463

ABSTRACT

Neural decoding may be formulated as dynamic state estimation (filtering) based on point-process observations, a generally intractable problem. Numerical sampling techniques are often practically useful for the decoding of real neural data. However, they are less useful as theoretical tools for modeling and understanding sensory neural systems, since they lead to limited conceptual insight into optimal encoding and decoding strategies. We consider sensory neural populations characterized by a distribution over neuron parameters. We develop an analytically tractable Bayesian approximation to optimal filtering based on the observation of spiking activity that greatly facilitates the analysis of optimal encoding in situations deviating from common assumptions of uniform coding. Continuous distributions are used to approximate large populations with few parameters, resulting in a filter whose complexity does not grow with population size and allowing optimization of population parameters rather than individual tuning functions. Numerical comparison with particle filtering demonstrates the quality of the approximation. The analytic framework leads to insights that are difficult to obtain from numerical algorithms and is consistent with biological observations about the distribution of sensory cells' preferred stimuli.

5.
Elife ; 5: e10094, 2016 Mar 08.
Article in English | MEDLINE | ID: mdl-26952211

ABSTRACT

Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed out the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a single-layer neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights are learned via a generalized Hebbian rule. The architecture of this network highly resembles neural networks used to perform Principal Component Analysis (PCA). Both numerical results and analytic considerations indicate that if the components of the feedforward neural network are non-negative, the output converges to a hexagonal lattice. Without the non-negativity constraint, the output converges to a square lattice. Consistent with experiments, the grid spacing ratio between the first two consecutive modules is ~1.4. Our results express a possible linkage between place-cell-to-grid-cell interactions and PCA.


Subjects
Grid Cells/physiology; Hippocampus/physiology; Nerve Net; Place Cells/physiology; Space Perception; Computer Simulation; Principal Component Analysis
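A minimal sketch of the learning scheme described above: a generalized Hebbian (Oja) rule on two correlated inputs, with an optional non-negativity projection on the weights. The two-unit setup and all parameters are illustrative; the paper's spatial place-cell inputs and lattice outputs are not reproduced here, only the PCA-extracting rule.

```python
import math
import random

random.seed(1)

# correlated 2D "place-like" inputs whose principal axis lies along (1, 1)
def sample():
    shared = random.gauss(0.0, 1.0)
    return (shared + random.gauss(0.0, 0.3), shared + random.gauss(0.0, 0.3))

# generalized Hebbian (Oja) rule with an optional non-negativity projection,
# as a toy stand-in for the place-to-grid learning described above
def train(nonneg, eta=0.01, steps=20000):
    w = [0.6, 0.1]
    for _ in range(steps):
        x = sample()
        y = w[0] * x[0] + w[1] * x[1]            # output activity
        for i in range(2):
            w[i] += eta * y * (x[i] - y * w[i])  # Oja update
            if nonneg:
                w[i] = max(w[i], 0.0)            # clip to non-negative
    return w

w = train(nonneg=True)
norm = math.hypot(w[0], w[1])
```

The weights converge to the (unit-norm) principal eigenvector of the input covariance, here the symmetric direction (1, 1) / sqrt(2); in the paper's full spatial setting, the analogous non-negative principal components form the hexagonal grid.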
6.
Article in English | MEDLINE | ID: mdl-24744724

ABSTRACT

Long term temporal correlations frequently appear at many levels of neural activity. We show that when such correlations appear in isolated neurons, they indicate the existence of slow underlying processes and lead to explicit conditions on the dynamics of these processes. Moreover, although these slow processes can potentially store information for long times, we demonstrate that this does not imply that the neuron possesses a long memory of its input, even if these processes are bidirectionally coupled with neuronal response. We derive these results for a broad class of biophysical neuron models, and then fit a specific model to recent experiments. The model reproduces the experimental results, exhibiting long term (days-long) correlations due to the interaction between slow variables and internal fluctuations. However, its memory of the input decays on a timescale of minutes. We suggest experiments to test these predictions directly.

7.
Article in English | MEDLINE | ID: mdl-24765073

ABSTRACT

Many biological systems are modulated by unknown slow processes. This can severely hinder analysis - especially in excitable neurons, which are highly non-linear and stochastic systems. We show the analysis simplifies considerably if the input matches the sparse "spiky" nature of the output. In this case, a linearized spiking Input-Output (I/O) relation can be derived semi-analytically, relating input spike trains to output spikes based on known biophysical properties. Using this I/O relation we obtain closed-form expressions for all second order statistics (input - internal state - output correlations and spectra), construct optimal linear estimators for the neuronal response and internal state and perform parameter identification. These results are guaranteed to hold, for a general stochastic biophysical neuron model, with only a few assumptions (mainly, timescale separation). We numerically test the resulting expressions for various models, and show that they hold well, even in cases where our assumptions fail to hold. In a companion paper we demonstrate how this approach enables us to fit a biophysical neuron model so it reproduces experimentally observed temporal firing statistics on days-long experiments.

8.
Article in English | MEDLINE | ID: mdl-22355288

ABSTRACT

In recent experiments, synaptically isolated neurons from rat cortical culture were stimulated with periodic extracellular fixed-amplitude current pulses for extended durations of days. The neuron's response depended on its own history, as well as on the history of the input, and was classified into several modes. Interestingly, in one of the modes the neuron behaved intermittently, exhibiting irregular firing patterns changing in a complex and variable manner over the entire range of experimental timescales, from seconds to days. With the aim of developing a minimal biophysical explanation for these results, we propose a general scheme that, given a few assumptions (mainly, a timescale separation in kinetics), closely describes the response of deterministic conductance-based neuron models under pulse stimulation, using a discrete time piecewise linear mapping, which is amenable to detailed mathematical analysis. Using this method we reproduce the basic modes exhibited by the neuron experimentally, as well as the mean response in each mode. Specifically, we derive precise closed-form input-output expressions for the transient timescale and firing rates, which are expressed in terms of experimentally measurable variables, and conform with the experimental results. However, the mathematical analysis shows that the resulting firing patterns in these deterministic models are always regular and repeatable (i.e., no chaos), in contrast to the irregular and variable behavior displayed by the neuron in certain regimes. This fact, and the sensitive near-threshold dynamics of the model, indicate that intrinsic ion channel noise has a significant impact on the neuronal response, and may help reproduce the experimentally observed variability, as we also demonstrate numerically. In a companion paper, we extend our analysis to stochastic conductance-based models, and show how these can be used to reproduce the details of the observed irregular and variable neuronal response.

9.
Biol Cybern ; 105(1): 41-53, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21796403

ABSTRACT

Biological motor control provides highly effective solutions to difficult control problems in spite of the complexity of the plant and the significant delays in sensory feedback. Such delays are expected to lead to nontrivial stability issues and lack of robustness of control solutions. However, such difficulties are not observed in biological systems under normal operating conditions. Based on early suggestions in the control literature, a possible resolution of this conundrum is that the motor system contains within itself a forward model of the plant (e.g., the arm), which allows the system to 'simulate' and predict the effect of applying a control signal. In this work, we formally define the notion of a forward model for deterministic control problems, and provide simple conditions that imply its existence for tasks involving delayed feedback control. As opposed to previous work which dealt mostly with linear plants and quadratic cost functions, our results apply to rather generic control systems, showing that any controller (biological or otherwise) which solves a set of tasks must contain within itself a forward plant model. We suggest that our results provide strong theoretical support for the necessity of forward models in many delayed control problems, implying that they are not only useful, but rather, mandatory, under general conditions.


Assuntos
Retroalimentação , Modelos Biológicos , Movimento , Algoritmos , Animais , Braço/anatomia & histologia , Braço/fisiologia , Simulação por Computador , Humanos
10.
Front Comput Neurosci ; 4: 130, 2010.
Article in English | MEDLINE | ID: mdl-21079749

ABSTRACT

Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.

11.
Article in English | MEDLINE | ID: mdl-20725633

ABSTRACT

Recent experiments have demonstrated that the timescale of adaptation of single neurons and ion channel populations to stimuli slows down as the length of stimulation increases; in fact, no upper bound on the timescales seems to exist in such systems. Furthermore, patch clamp experiments on single ion channels have hinted at the existence of large, mostly unobservable, inactivation state spaces within a single ion channel. This raises the question of the relation between this multitude of inactivation states and the observed behavior. In this work we propose a minimal model for ion channel dynamics which does not assume any specific structure of the inactivation state space. The model is simple enough to render an analytical study possible. This leads to a clear and concise explanation of the experimentally observed exponential history-dependent relaxation in sodium channels in a voltage clamp setting, and shows that their recovery rate from slow inactivation must be voltage dependent. Furthermore, we predict that history-dependent relaxation cannot be created by overly sparse spiking activity. While the model was created with ion channel populations in mind, its simplicity and generality render it a good starting point for modeling similar effects in other systems, and for scaling up to higher levels such as single neurons, which are also known to exhibit multiple time scales.

12.
J Neurosci ; 30(28): 9588-96, 2010 Jul 14.
Article in English | MEDLINE | ID: mdl-20631187

ABSTRACT

Neural representation is pivotal in neuroscience. Yet, the large number and variance of underlying determinants make it difficult to distinguish general physiologic constraints on representation. Here we offer a general approach to the issue, enabling a systematic and well-controlled experimental analysis of constraints and tradeoffs, imposed by the physiology of neuronal populations, on plausible representation schemes. Using in vitro networks of rat cortical neurons as a model system, we compared the efficacy of different kinds of "neural codes" to represent both spatial and temporal input features. Two rate-based representation schemes and two time-based representation schemes were considered. Our results indicate that, by and large, all representation schemes perform well in the various discrimination tasks tested, indicating the inherent redundancy in neural population activity. Nevertheless, differences in representation efficacy are identified when unique aspects of input features are considered. We discuss these differences in the context of neural population dynamics.


Assuntos
Córtex Cerebral/fisiologia , Rede Nervosa/fisiologia , Neurônios/fisiologia , Potenciais de Ação/fisiologia , Animais , Animais Recém-Nascidos , Células Cultivadas , Estimulação Elétrica , Eletrofisiologia , Modelos Neurológicos , Ratos , Ratos Sprague-Dawley
13.
Article in English | MEDLINE | ID: mdl-19503751

ABSTRACT

In this perspective we provide an example of the limits of reverse engineering in neuroscience. We demonstrate that applying reverse engineering to the study of the design principle of a functional neuro-system with a known mechanism may result in a perfectly valid but wrong induction of the system's design principle. If it is difficult to induce a design principle even in the very simple setup presented here (static environment, primitive task, and practically unlimited access to every piece of relevant information), what are our chances of exposing biological design principles when more realistic conditions are examined? Implications for the way we do biology are discussed.

14.
Neural Comput ; 21(4): 1100-24, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19018702

ABSTRACT

Oscillations are a ubiquitous feature of many neural systems, spanning many orders of magnitude in frequency. One of the most prominent oscillatory patterns, with possible functional implications, is that occurring in the mammalian thalamocortical system during sleep. This system is characterized by relatively long delays (reaching up to 40 msec) and gives rise to low-frequency oscillatory waves. Motivated by these phenomena, we study networks of excitatory and inhibitory integrate-and-fire neurons within a Fokker-Planck delay partial differential equation formalism and establish explicit conditions for the emergence of oscillatory solutions, and for the amplitude and period of the ensuing oscillations, for relatively large values of the delays. When a two-timescale analysis is employed, the full partial differential equation is replaced in this limit by a discrete time iterative map, leading to a relatively simple dynamic interpretation. This asymptotic result is shown numerically to hold, to a good approximation, over a wide range of parameter values, leading to an accurate characterization of the behavior in terms of the underlying physical parameters. Our results provide a simple mechanistic explanation for one type of slow oscillation based on delayed inhibition, which may play an important role in the slow spindle oscillations occurring during sleep. Moreover, they are consistent with experimental findings related to human motor behavior with visual feedback.


Subjects
Action Potentials/physiology; Biological Clocks/physiology; Models, Neurological; Neurons/physiology; Animals; Neural Inhibition/physiology; Reaction Time/physiology; Time Factors
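The delay-driven oscillation mechanism can be caricatured by a single threshold-linear rate unit with delayed inhibitory feedback: strong enough inhibition produces an oscillation whose period is twice the delay. This scalar toy (with illustrative drive, gain, and delay) only echoes the qualitative mechanism; it is not the paper's Fokker-Planck network model.

```python
# threshold-linear rate unit with delayed inhibitory feedback:
#   r[t] = max(0, I - w * r[t - d])
# strong delayed inhibition produces an oscillation of period 2 * d
I, w, d = 1.0, 2.0, 10          # drive, feedback gain, delay (illustrative)
r = [0.0] * d                   # quiescent history
for t in range(d, 400):
    r.append(max(0.0, I - w * r[t - d]))

# after the transient, activity alternates between high and silent epochs
period_ok = all(abs(r[t] - r[t - 2 * d]) < 1e-9 for t in range(100, 400))
amplitude = max(r[200:]) - min(r[200:])
```

Here the rate sits at I for d steps, is then suppressed to zero for d steps, and so on, giving a slow oscillation of period 2 * d from a fast unit -- the same qualitative dependence of period on delay that the two-timescale analysis in the paper makes precise.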
15.
Neural Comput ; 21(5): 1277-320, 2009 May.
Article in English | MEDLINE | ID: mdl-19018706

ABSTRACT

A key requirement facing organisms acting in uncertain dynamic environments is the real-time estimation and prediction of environmental states, based on which effective actions can be selected. While it is becoming evident that organisms employ exact or approximate Bayesian statistical calculations for these purposes, it is far less clear how these putative computations are implemented by neural networks in a strictly dynamic setting. In this work, we make use of rigorous mathematical results from the theory of continuous time point process filtering and show how optimal real-time state estimation and prediction may be implemented in a general setting using simple recurrent neural networks. The framework is applicable to many situations of common interest, including noisy observations, non-Poisson spike trains (incorporating adaptation), multisensory integration, and state prediction. The optimal network properties are shown to relate to the statistical structure of the environment, and the benefits of adaptation are studied and explicitly demonstrated. Finally, we recover several existing results as appropriate limits of our general setting.


Subjects
Action Potentials/physiology; Adaptation, Physiological/physiology; Bayes Theorem; Neural Networks, Computer; Noise; Sensory Receptor Cells/physiology; Animals; Computer Simulation; Markov Chains; Models, Neurological; Models, Statistical; Neural Pathways/physiology
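A discrete-time sketch of point-process filtering in its simplest instance: a two-state Markov environment observed through state-dependent Poisson spiking, with the posterior updated by a predict step (prior dynamics) and a Bayes step (spike / no-spike likelihood) in each bin. The rates and switching parameters are illustrative, and the paper's recurrent-network implementation is not reproduced here.

```python
import random

random.seed(2)

# two-state environment (x = 0 or 1) observed through Poisson spikes whose
# rate depends on the state; p tracks the posterior P(x = 1 | spikes so far)
dt, T = 0.001, 20.0
q01, q10 = 0.5, 0.5             # state switching rates (illustrative)
lam = (5.0, 50.0)               # firing rate in each state (illustrative)

x, p = 0, 0.5                   # true state and posterior
hits = total = 0
t = 0.0
while t < T:
    if random.random() < (q01 if x == 0 else q10) * dt:
        x = 1 - x               # latent state switches
    spike = random.random() < lam[x] * dt
    # predict: prior Markov dynamics of the posterior
    p += (q01 * (1 - p) - q10 * p) * dt
    # update: Bayes rule for the spike / no-spike observation in this bin
    if spike:
        l1, l0 = lam[1] * dt, lam[0] * dt
    else:
        l1, l0 = 1 - lam[1] * dt, 1 - lam[0] * dt
    p = p * l1 / (p * l1 + (1 - p) * l0)
    hits += (p > 0.5) == (x == 1)
    total += 1
    t += dt
accuracy = hits / total
```

Thresholding the posterior tracks the hidden state well above chance; in the paper, the corresponding continuous-time update is shown to be realizable by a simple recurrent network whose parameters reflect the environment's statistics.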
16.
PLoS Comput Biol ; 4(2): e29, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18282084

ABSTRACT

Biological systems often change their responsiveness when subject to persistent stimulation, a phenomenon termed adaptation. In neural systems, this process is often selective, allowing the system to adapt to one stimulus while preserving its sensitivity to another. In some studies, it has been shown that adaptation to a frequent stimulus increases the system's sensitivity to rare stimuli. These phenomena were explained in previous work as a result of complex interactions between the various subpopulations of the network. A formal description and analysis of neuronal systems, however, is hindered by the network's heterogeneity and by the multitude of processes taking place at different time-scales. Viewing neural networks as populations of interacting elements, we develop a framework that facilitates a formal analysis of complex, structured, heterogeneous networks. The formulation developed is based on an analysis of the availability of activity dependent resources, and their effects on network responsiveness. This approach offers a simple mechanistic explanation for selective adaptation, and leads to several predictions that were corroborated in both computer simulations and in cultures of cortical neurons developing in vitro. The framework is sufficiently general to apply to different biological systems, and was demonstrated in two different cases.


Subjects
Action Potentials/physiology; Models, Neurological; Nerve Net/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Synaptic Transmission/physiology; Adaptation, Physiological/physiology; Animals; Computer Simulation; Humans
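The availability-based framework can be caricatured with a single activity-dependent resource pool per input channel: use depletes the pool, rest replenishes it, and the response scales with what remains. A frequently stimulated channel therefore adapts while a rarely stimulated one keeps its sensitivity -- selective adaptation in miniature. The depletion fraction u and recovery constant tau below are illustrative, not fitted values from the paper.

```python
# toy activity-dependent resource model: r is the available resource
# (1 = full), depleted by use and recovering between pulses
def stimulate(schedule, u=0.3, tau=20.0):
    # schedule: list of 0/1 per time step marking stimulation pulses
    r, responses = 1.0, []
    for s in schedule:
        r += (1.0 - r) / tau            # recovery toward full availability
        if s:
            responses.append(u * r)     # response scales with resources
            r -= u * r                  # use depletes the pool
    return responses

frequent = stimulate([1] * 200)                          # pulse every step
rare = stimulate([1 if t % 50 == 0 else 0 for t in range(200)])
```

The frequent channel's response collapses toward a depressed steady state while the rare channel's stays near its initial value, which is the mechanistic core of the selective adaptation the framework analyzes at the network level.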
17.
Neural Comput ; 19(8): 2245-79, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17571943

ABSTRACT

Learning agents, whether natural or artificial, must update their internal parameters in order to improve their behavior over time. In reinforcement learning, this plasticity is influenced by an environmental signal, termed a reward, that directs the changes in appropriate directions. We apply a recently introduced policy learning algorithm from machine learning to networks of spiking neurons and derive a spike-time-dependent plasticity rule that ensures convergence to a local optimum of the expected average reward. The approach is applicable to a broad class of neuronal models, including the Hodgkin-Huxley model. We demonstrate the effectiveness of the derived rule in several toy problems. Finally, through statistical analysis, we show that the synaptic plasticity rule established is closely related to the widely used BCM rule, for which good biological evidence exists.


Subjects
Action Potentials/physiology; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Reinforcement, Psychology; Synapses/physiology; Animals; Neural Networks, Computer; Time Factors
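The policy-gradient principle behind the derived plasticity rule can be shown at its smallest scale: a single stochastic binary neuron trained with a likelihood-ratio (REINFORCE) update, where the weight moves along the gradient of expected reward. This toy omits spike timing and the Hodgkin-Huxley dynamics treated in the paper; the reward scheme and learning rate are illustrative.

```python
import math
import random

random.seed(3)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# single stochastic binary "neuron": spikes with probability sigmoid(w);
# reward arrives only when it spikes, so the rewarded action should
# become more probable over trials
w, eta = 0.0, 0.1
rewards = []
for trial in range(2000):
    p = sigmoid(w)
    spike = random.random() < p
    reward = 1.0 if spike else 0.0
    # likelihood-ratio (REINFORCE) update: d log P(spike) / dw = spike - p
    w += eta * reward * ((1.0 if spike else 0.0) - p)
    rewards.append(reward)

early = sum(rewards[:200]) / 200    # mean reward at the start of training
late = sum(rewards[-200:]) / 200    # mean reward after training
```

Mean reward climbs from near chance toward one, illustrating convergence toward a local optimum of expected reward -- the same guarantee the paper establishes for the spike-time-dependent rule in full spiking networks.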
18.
J Neurophysiol ; 97(5): 3736-50, 2007 May.
Article in English | MEDLINE | ID: mdl-17360816

ABSTRACT

What determines the specific pattern of activation of primary motor cortex (M1) neurons in the context of a given motor task? We present a systems level physiological model describing the transformation from the neural activity in M1, through the muscle control signal, into joint torques and down to endpoint forces and movements. The redundancy of the system is resolved by biologically plausible optimization criteria. The model explains neural activity at both the population and single-neuron levels. Due to the model's relative simplicity and analytic tractability, it provides intuition as to the most salient features of the system as well as a possible causal explanation of how these determine the overall behavior. Moreover, it explains a large number of recent observations, including the temporal patterns of single-neuron and population firing rates during isometric and movement tasks, narrow tuning curves, non-cosine tuning curves, changes of preferred directions during a task, and changes of preferred directions due to different experimental conditions.


Subjects
Extremities/physiology; Models, Biological; Motor Cortex/cytology; Neurons/physiology; Spinal Cord/physiology; Action Potentials/physiology; Animals; Biomechanical Phenomena; Humans; Models, Neurological; Movement/physiology; Muscle Contraction/physiology