Results 1-20 of 55
1.
Eur J Neurosci; 59(11): 3093-3116, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38616566

ABSTRACT

The amygdala (AMY) is widely implicated in fear learning and fear behaviour, but it remains unclear how the many biological components present within AMY interact to achieve these abilities. Building on previous work, we hypothesize that individual AMY nuclei represent different quantities and that fear conditioning arises from error-driven learning on the synapses between AMY nuclei. We present a computational model of AMY that (a) recreates the divisions and connections between AMY nuclei and their constituent pyramidal and inhibitory neurons; (b) accommodates scalable high-dimensional representations of external stimuli; (c) learns to associate complex stimuli with the presence (or absence) of an aversive stimulus; (d) preserves feature information when mapping inputs to salience estimates, such that these estimates generalize to similar stimuli; and (e) induces a diverse profile of neural responses within each nucleus. Our model predicts (1) defensive responses and neural activities in several experimental conditions, (2) the consequence of artificially ablating particular nuclei and (3) the tendency to generalize defensive responses to novel stimuli. We test these predictions by comparing model outputs to neural and behavioural data from animals and humans. Despite the relative simplicity of our model, we find significant overlap between simulated and empirical data, which supports our claim that the model captures many of the neural mechanisms that support fear conditioning. We conclude by comparing our model to other computational models and by characterizing the theoretical relationship between pattern separation and fear generalization in healthy versus anxious individuals.
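
The error-driven learning at the heart of this model can be illustrated with a minimal Rescorla-Wagner-style delta rule (a sketch only; the dimensionality, learning rate, and population structure below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Minimal sketch of error-driven fear association, in the spirit of the
# model's inter-nucleus learning (not the paper's actual implementation).
# A stimulus vector is mapped to a salience estimate; the weight update
# is driven by the US prediction error.

rng = np.random.default_rng(0)
d = 64                      # stimulus dimensionality (hypothetical)
w = np.zeros(d)             # synaptic weights between "nuclei"
lr = 0.1                    # learning rate

cs = rng.normal(size=d) / np.sqrt(d)   # conditioned-stimulus representation

for trial in range(50):
    us = 1.0                           # aversive outcome present
    prediction = w @ cs                # model's salience estimate
    error = us - prediction            # prediction error drives learning
    w += lr * error * cs               # delta-rule update on the synapses

# Generalization: a similar stimulus inherits part of the learned salience
similar = cs + 0.3 * rng.normal(size=d) / np.sqrt(d)
print(w @ cs, w @ similar)             # learned response and its spillover
```

Because the learned weights lie along the stimulus representation, similar inputs inherit part of the acquired salience, which is the generalization effect the abstract describes.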


Subjects
Amygdala; Extinction, Psychological; Fear; Generalization, Psychological; Models, Neurological; Fear/physiology; Amygdala/physiology; Extinction, Psychological/physiology; Humans; Animals; Generalization, Psychological/physiology; Conditioning, Classical/physiology; Neurons/physiology; Action Potentials/physiology
2.
PLoS Comput Biol; 18(9): e1010461, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36074765

ABSTRACT

Improving biological plausibility and functional capacity are two important goals for brain models that connect low-level neural details to high-level behavioral phenomena. We develop a method called "oracle-supervised Neural Engineering Framework" (osNEF) to train biologically detailed spiking neural networks that realize a variety of cognitively relevant dynamical systems. Specifically, we train networks to perform computations that are commonly found in cognitive systems (communication, multiplication, harmonic oscillation, and gated working memory) using four distinct neuron models (leaky-integrate-and-fire neurons, Izhikevich neurons, 4-dimensional nonlinear point neurons, and 4-compartment, 6-ion-channel layer-V pyramidal cell reconstructions) connected with various synaptic models (current-based synapses, conductance-based synapses, and voltage-gated synapses). We show that osNEF networks exhibit the target dynamics by accounting for nonlinearities present within the neuron models: performance is comparable across all four systems and all four neuron models, with variance proportional to task and neuron model complexity. We also apply osNEF to build a model of working memory that performs a delayed response task using a combination of pyramidal cells and inhibitory interneurons connected with NMDA and GABA synapses. The baseline performance and forgetting rate of the model are consistent with animal data from delayed match-to-sample tasks (DMTST): we observe a baseline performance of 95% and exponential forgetting with time constant τ = 8.5 s, while a recent meta-analysis of DMTST performance across species observed baseline performances of 58-99% and exponential forgetting with time constants of τ = 2.4-71 s. These results demonstrate that osNEF can train functional brain models using biologically detailed components and open new avenues for investigating the relationship between biophysical mechanisms and functional capabilities.
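
The "oracle-supervised" idea can be sketched as an online, error-driven decoder update in which an oracle supplies the target value the network should represent (the tuning curves, learning rate, and target function below are assumptions for illustration; osNEF itself trains weights for detailed neuron and synapse models):

```python
import numpy as np

# Sketch of oracle-supervised decoder training (assumptions, not osNEF's
# actual procedure): an oracle supplies the target value the population
# should decode, and a delta rule nudges the decoders toward it.

rng = np.random.default_rng(1)
n, steps, lr = 100, 5000, 1e-3
gains = rng.uniform(0.5, 2.0, n)
biases = rng.uniform(-1.0, 1.0, n)
encoders = rng.choice([-1.0, 1.0], n)

def rates(x):
    # Rectified-linear tuning curves stand in for detailed neuron models
    return np.maximum(0.0, gains * (encoders * x) + biases)

decoders = np.zeros(n)
for _ in range(steps):
    x = rng.uniform(-1, 1)
    target = x * x                 # oracle: network should compute x**2
    a = rates(x)
    err = target - decoders @ a
    decoders += lr * err * a       # error-driven update of the decoders

xs = np.linspace(-1, 1, 5)
print([round(decoders @ rates(x), 3) for x in xs])   # approximately x**2
```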


Subjects
Models, Neurological; Neurons; Action Potentials/physiology; Animals; Neurons/physiology; Pyramidal Cells/physiology; Synapses/physiology
3.
Neural Comput; 33(1): 96-128, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33080158

ABSTRACT

Nonlinear interactions in the dendritic tree play a key role in neural computation. Nevertheless, modeling frameworks aimed at the construction of large-scale, functional spiking neural networks, such as the Neural Engineering Framework, tend to assume a linear superposition of postsynaptic currents. In this letter, we present a series of extensions to the Neural Engineering Framework that facilitate the construction of networks incorporating Dale's principle and nonlinear conductance-based synapses. We apply these extensions to a two-compartment LIF neuron that can be seen as a simple model of passive dendritic computation. We show that it is possible to incorporate neuron models with input-dependent nonlinearities into the Neural Engineering Framework without compromising high-level function and that nonlinear postsynaptic currents can be systematically exploited to compute a wide variety of multivariate, band-limited functions, including the Euclidean norm, controlled shunting, and nonnegative multiplication. By avoiding an additional source of spike noise, the function approximation accuracy of a single layer of two-compartment LIF neurons is on a par with or even surpasses that of two-layer spiking neural networks up to a certain target function bandwidth.
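
One standard way to respect Dale's principle in an NEF-style weight solve is to constrain decoders to be nonnegative, for example with nonnegative least squares; the sketch below shows only that constraint (the paper's method additionally handles conductance-based synapses and split excitatory/inhibitory channels):

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch: solve nonnegative decoders for a target function so that
# all weights leaving a presynaptic population share one sign (Dale's
# principle). Illustrative tuning curves and targets, not the paper's setup.

rng = np.random.default_rng(2)
n = 80
xs = np.linspace(-1, 1, 201)
gains = rng.uniform(0.5, 2.0, n)
biases = rng.uniform(-0.5, 1.0, n)
encoders = rng.choice([-1.0, 1.0], n)

A = np.maximum(0.0, np.outer(xs, encoders) * gains + biases)  # (201, n) rates
target = 0.5 * (xs + 1.0)          # nonnegative target function

d, residual = nnls(A, target)      # decoders constrained to be >= 0
print(residual, np.all(d >= 0))    # fit error; constraint satisfied
```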


Subjects
Action Potentials; Dendrites; Models, Neurological; Neural Networks, Computer; Nonlinear Dynamics; Action Potentials/physiology; Dendrites/physiology; Humans
4.
Neural Comput; 33(8): 2033-2067, 2021 Jul 26.
Article in English | MEDLINE | ID: mdl-34310679

ABSTRACT

While neural networks are highly effective at learning task-relevant representations from data, they typically do not learn representations with the kind of symbolic structure that is hypothesized to support high-level cognitive processes, nor do they naturally model such structures within problem domains that are continuous in space and time. To fill these gaps, this work exploits a method for defining vector representations that bind discrete (symbol-like) entities to points in continuous topological spaces in order to simulate and predict the behavior of a range of dynamical systems. These vector representations are spatial semantic pointers (SSPs), and we demonstrate that they can (1) be used to model dynamical systems involving multiple objects represented in a symbol-like manner and (2) be integrated with deep neural networks to predict the future of physical trajectories. These results help unify what have traditionally appeared to be disparate approaches in machine learning.
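
The binding of discrete symbols to continuous coordinates can be sketched with fractional binding: raising the Fourier coefficients of a fixed unitary base vector to a real power x (dimensions and seeds below are illustrative, not the paper's code):

```python
import numpy as np

# Sketch of a spatial semantic pointer via fractional binding: raise the
# Fourier coefficients of a fixed unitary base vector to the power x.

rng = np.random.default_rng(3)
d = 256

# Unitary base vector: unit-magnitude Fourier coefficients, made
# conjugate-symmetric so the inverse FFT is real.
phases = rng.uniform(-np.pi, np.pi, d // 2 - 1)
F = np.ones(d, dtype=complex)
F[1:d // 2] = np.exp(1j * phases)
F[d // 2 + 1:] = np.conj(F[1:d // 2][::-1])

def ssp(x):
    return np.fft.ifft(F ** x).real   # fractional binding: phases scale by x

# Nearby coordinates yield similar vectors; distant ones are dissimilar
print(ssp(1.0) @ ssp(1.1), ssp(1.0) @ ssp(5.0))
```

The similarity structure (high for nearby x, near zero for distant x) is what lets symbol-like entities be placed at, and retrieved from, points in a continuous space.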

5.
Neural Comput; 31(5): 849-869, 2019 May.
Article in English | MEDLINE | ID: mdl-30883282

ABSTRACT

We present a new binding operation, vector-derived transformation binding (VTB), for use in vector symbolic architectures (VSAs). The performance of VTB is compared to circular convolution, used in holographic reduced representations (HRRs), in terms of list and stack encoding capacity. A special focus is given to the possibility of a neural implementation by means of the Neural Engineering Framework (NEF). While the scaling of required neural resources is slightly worse for VTB, it is found to be on par with circular convolution for list encoding and better for encoding of stacks. Furthermore, VTB influences the vector length less, which also benefits a neural implementation. Consequently, we argue that VTB is an improvement over HRRs for neurally implemented VSAs.
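
The baseline operation VTB is compared against, circular convolution as used in HRRs, is straightforward to demonstrate (a sketch; VTB itself instead builds a binding matrix from one of the two vectors, which is not shown here):

```python
import numpy as np

# Circular-convolution binding as used by holographic reduced
# representations: bind two random vectors, then recover one of them by
# binding with the approximate inverse (involution) of the other.

rng = np.random.default_rng(4)
d = 512
a = rng.normal(0, 1 / np.sqrt(d), d)
b = rng.normal(0, 1 / np.sqrt(d), d)

def bind(x, y):                      # circular convolution via FFT
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

def unbind(bound, y):                # convolve with the approximate inverse
    inv = np.concatenate(([y[0]], y[:0:-1]))
    return bind(bound, inv)

bound = bind(a, b)
recovered = unbind(bound, b)
cos = recovered @ a / (np.linalg.norm(recovered) * np.linalg.norm(a))
print(cos)                           # close to 1: a is recovered from the pair
```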


Subjects
Neural Networks, Computer
6.
Neural Comput; 30(3): 569-609, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29220306

ABSTRACT

Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks. For completeness, we provide characterizations for both continuous-time (i.e., analog) and discrete-time (i.e., digital) simulations. We demonstrate the utility of these extensions by mapping an optimal delay line onto various spiking dynamical networks using higher-order models of the synapse. We show that these networks nonlinearly encode rolling windows of input history, using a scale invariant representation, with accuracy depending on the frequency content of the input signal. Finally, we reveal that these methods provide a novel explanation of time cell responses during a delay task, which have been observed throughout hippocampus, striatum, and cortex.
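
The textbook first-order case that this work generalizes: for an exponential synapse, the desired linear dynamics are obtained by remapping the recurrent and input matrices (the standard NEF result; the paper extends this to a broad class of higher-order synapse models):

```latex
% To realize \dot{x} = A x + B u through an exponential synaptic filter
% h(t) = \tau^{-1} e^{-t/\tau}, use the remapped matrices
\begin{aligned}
  A' &= \tau A + I, \\
  B' &= \tau B,
\end{aligned}
% so that the recurrently filtered system x = h * (A' x + B' u)
% reproduces the target dynamics.
```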


Subjects
Action Potentials; Models, Neurological; Neurons/physiology; Synapses/physiology; Action Potentials/physiology; Animals; Biomimetics; Brain/physiology; Computer Simulation; Neural Networks, Computer; Nonlinear Dynamics; Time Factors
7.
Proc Biol Sci; 283(1843), 2016 Nov 30.
Article in English | MEDLINE | ID: mdl-27903878

ABSTRACT

We present a spiking neuron model of the motor cortices and cerebellum of the motor control system. The model consists of anatomically organized spiking neurons encompassing premotor, primary motor, and cerebellar cortices. The model proposes novel neural computations within these areas to control a nonlinear three-link arm model that can adapt to unknown changes in arm dynamics and kinematic structure. We demonstrate the mathematical stability of both forms of adaptation, suggesting that this is a robust approach for common biological problems of changing body size (e.g. during growth), and unexpected dynamic perturbations (e.g. when moving through different media, such as water or mud). To demonstrate the plausibility of the proposed neural mechanisms, we show that the model accounts for data across 19 studies of the motor control system. These data include a mix of behavioural and neural spiking activity, across subjects performing adaptive and static tasks. Given this proposed characterization of the biological processes involved in motor control of the arm, we provide several experimentally testable predictions that distinguish our model from previous work.


Subjects
Arm/physiology; Cerebellum/physiology; Models, Neurological; Motor Cortex/physiology; Humans; Neurons/physiology; Nonlinear Dynamics
8.
J Neurosci; 34(5): 1892-1902, 2014 Jan 29.
Article in English | MEDLINE | ID: mdl-24478368

ABSTRACT

Subjects performing simple reaction-time tasks can improve reaction times by learning the expected timing of action-imperative stimuli and preparing movements in advance. Success or failure on the previous trial is often an important factor for determining whether a subject will attempt to time the stimulus or wait for it to occur before initiating action. The medial prefrontal cortex (mPFC) has been implicated in enabling the top-down control of action depending on the outcome of the previous trial. Analysis of spike activity from the rat mPFC suggests that neural integration is a key mechanism for adaptive control in precisely timed tasks. We show through simulation that a spiking neural network consisting of coupled neural integrators captures the neural dynamics of the experimentally recorded mPFC. Errors lead to deviations in the normal dynamics of the system, a process that could enable learning from past mistakes. We expand on this coupled integrator network to construct a spiking neural network that performs a reaction-time task by following either a cue-response or timing strategy, and show that it performs the task with similar reaction times as experimental subjects while maintaining the same spiking dynamics as the experimentally recorded mPFC.


Subjects
Action Potentials/physiology; Adaptation, Physiological/physiology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Prefrontal Cortex/cytology; Acoustic Stimulation; Animals; Computer Simulation; Conditioning, Operant; Male; Predictive Value of Tests; Principal Component Analysis; Rats; Rats, Long-Evans; Reaction Time/physiology; Reward
9.
PLoS Comput Biol; 10(6): e1003577, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24921249

ABSTRACT

Visuospatial attention produces myriad effects on the activity and selectivity of cortical neurons. Spiking neuron models capable of reproducing a wide variety of these effects remain elusive. We present a model called the Attentional Routing Circuit (ARC) that provides a mechanistic description of selective attentional processing in cortex. The model is described mathematically and implemented at the level of individual spiking neurons, with the computations for performing selective attentional processing being mapped to specific neuron types and laminar circuitry. The model is used to simulate three studies of attention in macaque, and is shown to quantitatively match several observed forms of attentional modulation. Specifically, ARC demonstrates that with shifts of spatial attention, neurons may exhibit shifting and shrinking of receptive fields; increases in responses without changes in selectivity for non-spatial features (i.e., response gain); and that the effect on contrast-response functions is better explained as a response-gain effect than as contrast-gain. Unlike past models, ARC embodies a single mechanism that unifies the above forms of attentional modulation, is consistent with a wide array of available data, and makes several specific and quantifiable predictions.


Subjects
Attention/physiology; Models, Neurological; Neurons/physiology; Action Potentials; Animals; Biofeedback, Psychology; Cerebral Cortex/physiology; Computational Biology; Macaca; Neural Pathways/physiology; Photic Stimulation
10.
Exp Brain Res; 233(3): 751-766, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25430546

ABSTRACT

Damage to the right parietal cortex often leads to a syndrome known as unilateral neglect in which the patient fails to attend or respond to stimuli in left space. Recent work attempting to rehabilitate the disorder has made use of rightward-shifting prisms that displace visual input further rightward. After a brief period of adaptation to prisms, many of the symptoms of neglect show improvements that can last for hours or longer, depending on the adaptation procedure. Recent work has shown, however, that differential effects of prisms can be observed on actions (which are typically improved) and perceptual biases (which often remain unchanged). Here, we present a computational model capable of explaining some basic symptoms of neglect (line bisection behaviour), the effects of prism adaptation in both healthy controls and neglect patients and the observed dissociation between action and perception following prisms. The results of our simulations support recent contentions that prisms primarily influence behaviours normally thought to be controlled by the dorsal stream.


Subjects
Adaptation, Physiological/physiology; Models, Neurological; Perceptual Disorders/rehabilitation; Visual Perception/physiology; Attention; Humans; Perceptual Disorders/physiopathology; Photic Stimulation; Space Perception
11.
Neural Comput; 26(8): 1600-1623, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24877735

ABSTRACT

Noise and heterogeneity are both known to benefit neural coding. Stochastic resonance describes how noise, in the form of random fluctuations in a neuron's membrane voltage, can improve neural representations of an input signal. Neuronal heterogeneity refers to variation in any one of a number of neuron parameters and is also known to increase the information content of a population. We explore the interaction between noise and heterogeneity and find that their benefits to neural coding are not independent. Specifically, a neuronal population better represents an input signal when either noise or heterogeneity is added, but adding both does not always improve representation further. To explain this phenomenon, we propose that noise and heterogeneity operate using two shared mechanisms: (1) temporally desynchronizing the firing of neurons in the population and (2) linearizing the response of a population to a stimulus. We first characterize the effects of noise and heterogeneity on the information content of populations of either leaky integrate-and-fire or FitzHugh-Nagumo neurons. We then examine how the mechanisms of desynchronization and linearization produce these effects, and find that they work to distribute information equally across all neurons in the population in terms of both signal timing (desynchronization) and signal amplitude (linearization). Without noise or heterogeneity, all neurons encode the same aspects of the input signal; adding noise or heterogeneity allows neurons to encode complementary aspects of the input signal, thereby increasing information content. The simulations detailed in this letter highlight the importance of heterogeneity and noise in population coding, demonstrate their complex interactions in terms of the information content of neurons, and explain these effects in terms of underlying mechanisms.
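
The linearization mechanism can be seen even in a rate-based sketch: a population of identical LIF neurons has a sharply nonlinear summed response, while heterogeneous gains and biases smooth it (the paper's analysis uses spiking LIF and FitzHugh-Nagumo neurons; everything below is an illustrative simplification):

```python
import numpy as np

# Rate-level illustration of the linearization mechanism: compare the
# summed response of identical vs. heterogeneous LIF rate neurons.

def lif_rate(J, tau_ref=0.002, tau_m=0.02):
    # Firing rate of a LIF neuron for normalized drive J (threshold at 1)
    J = np.asarray(J, dtype=float)
    out = np.zeros_like(J)
    ok = J > 1.0
    out[ok] = 1.0 / (tau_ref - tau_m * np.log1p(-1.0 / J[ok]))
    return out

rng = np.random.default_rng(5)
n = 200
x = np.linspace(-1, 1, 401)

# Identical neurons: one shared gain/bias -> sharply nonlinear sum
resp_same = n * lif_rate(1.5 * x + 1.2)

# Heterogeneous neurons: varied gains/biases -> much more linear sum
gains = rng.uniform(0.5, 3.0, n)
biases = rng.uniform(0.0, 2.0, n)
resp_het = lif_rate(np.outer(x, gains) + biases).sum(axis=1)

# Compare deviation from the best linear fit in each case
for resp in (resp_same, resp_het):
    slope, icept = np.polyfit(x, resp, 1)
    print(np.max(np.abs(resp - (slope * x + icept))) / resp.max())
```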


Subjects
Action Potentials/physiology; Models, Neurological; Neurons/physiology; Computer Simulation; Information Theory; Linear Models; Stochastic Processes; Time Factors
12.
Behav Brain Sci; 36(3): 223-224, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23663446

ABSTRACT

The predictive processing framework lacks many of the architectural and implementational details needed to fully investigate or evaluate the ideas it presents. One way to begin to fill in these details is by turning to standard control-theoretic descriptions of these types of systems (e.g., Kalman filters), and by building complex, unified computational models in biologically realistic neural simulations.
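
As a concrete instance of the control-theoretic descriptions the commentary points to, a minimal scalar Kalman filter shows the predict/update structure (illustrative only; no model from the target article is implemented):

```python
import numpy as np

# Minimal scalar Kalman filter tracking a drifting latent state from
# noisy observations: predict (uncertainty grows), then update
# (weight the innovation by the Kalman gain).

rng = np.random.default_rng(6)
q, r = 1e-4, 1e-2          # process and measurement noise variances
x_true, x_est, p = 0.0, 0.0, 1.0

for t in range(100):
    x_true += rng.normal(0, np.sqrt(q))        # latent state drifts
    z = x_true + rng.normal(0, np.sqrt(r))     # noisy observation
    p = p + q                                  # predict: uncertainty grows
    k = p / (p + r)                            # Kalman gain
    x_est = x_est + k * (z - x_est)            # update with innovation
    p = (1 - k) * p                            # uncertainty shrinks

print(abs(x_true - x_est), p)
```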


Subjects
Attention/physiology; Brain/physiology; Cognition/physiology; Cognitive Science/trends; Perception/physiology; Humans
13.
Behav Brain Sci; 36(3): 307-308, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23673054

ABSTRACT

Quantum probability (QP) theory can be seen as a type of vector symbolic architecture (VSA): mental states are vectors storing structured information and manipulated using algebraic operations. Furthermore, the operations needed by QP match those in other VSAs. This allows existing biologically realistic neural models to be adapted to provide a mechanistic explanation of the cognitive phenomena described in the target article by Pothos & Busemeyer (P&B).


Subjects
Cognition; Models, Psychological; Probability Theory; Quantum Theory; Humans
14.
Front Neurosci; 17: 1190515, 2023.
Article in English | MEDLINE | ID: mdl-37476829

ABSTRACT

To navigate in new environments, an animal must be able to keep track of its position while simultaneously creating and updating an internal map of features in the environment, a problem formulated as simultaneous localization and mapping (SLAM) in the field of robotics. This requires integrating information from different domains, including self-motion cues and sensory and semantic information. Several specialized neuron classes have been identified in the mammalian brain as being involved in solving SLAM. While biology has inspired a whole class of SLAM algorithms, the use of semantic information has not been explored in such work. We present a novel, biologically plausible SLAM model called SSP-SLAM, a spiking neural network designed using tools for large-scale cognitive modeling. Our model uses a vector representation of continuous spatial maps, which can be encoded via spiking neural activity and bound with other features (continuous and discrete) to create compressed structures containing semantic information from multiple domains (e.g., spatial, temporal, visual, conceptual). We demonstrate that the dynamics of these representations can be implemented with a hybrid oscillatory-interference and continuous attractor network of head direction cells. The estimated self-position from this network is used to learn an associative memory between semantically encoded landmarks and their positions, i.e., an environment map, which is used for loop closure. Our experiments demonstrate that environment maps can be learned accurately and that their use greatly improves self-position estimation. Furthermore, grid cells, place cells, and object vector cells emerge in this model. We also run our path integrator network on the NengoLoihi neuromorphic emulator to demonstrate feasibility for a fully neuromorphic implementation of energy-efficient SLAM.
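
The path-integration step can be sketched directly in SSP form: moving by v·dt multiplies each Fourier phase of the position vector, which is the same as binding with SSP(v·dt) (an idealized, noise-free sketch mirroring the fractional-binding construction shown earlier, not the paper's spiking implementation):

```python
import numpy as np

# Sketch of SSP path integration: the frequency-domain position estimate
# is rotated by the velocity at each step, accumulating SSP(x_true).

rng = np.random.default_rng(7)
d = 256
phases = rng.uniform(-np.pi, np.pi, d // 2 - 1)
F = np.ones(d, dtype=complex)
F[1:d // 2] = np.exp(1j * phases)
F[d // 2 + 1:] = np.conj(F[1:d // 2][::-1])

def ssp(x):
    return np.fft.ifft(F ** x).real

pos_hat = np.fft.fft(ssp(0.0))          # frequency-domain position estimate
x_true, dt = 0.0, 0.01
for step in range(500):
    v = np.sin(0.01 * step)             # some velocity signal
    x_true += v * dt
    pos_hat *= F ** (v * dt)            # phase rotation = path integration

# Decode by comparing against SSPs for candidate positions
xs = np.linspace(-2, 2, 801)
est = np.fft.ifft(pos_hat).real
sims = np.array([est @ ssp(x) for x in xs])
print(x_true, xs[np.argmax(sims)])      # estimate tracks the true position
```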

15.
Brain Sci; 13(2), 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36831788

ABSTRACT

The Neural Engineering Framework (Eliasmith & Anderson, 2003) is a long-standing method for implementing high-level algorithms constrained by low-level neurobiological details. In recent years, this method has been expanded to incorporate more biological details and applied to new tasks. This paper brings together these ongoing research strands, presenting them in a common framework. We expand on the NEF's core principles of (a) specifying the desired tuning curves of neurons in different parts of the model, (b) defining the computational relationships between the values represented by the neurons in different parts of the model, and (c) finding the synaptic connection weights that will cause those computations and tuning curves. In particular, we show how to extend this to include complex spatiotemporal tuning curves, and then apply this approach to produce functional computational models of grid cells, time cells, path integration, sparse representations, probabilistic representations, and symbolic representations in the brain.
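
Principle (c) is typically a regularized least-squares solve for decoders over sampled tuning curves; here is a minimal sketch with toy rectified-linear tuning curves (all parameters below are illustrative):

```python
import numpy as np

# Minimal sketch of NEF principle (c): solve for decoders that map
# sampled tuning-curve activities onto a target function, via
# regularized least squares.

rng = np.random.default_rng(8)
n, reg = 100, 0.1
xs = np.linspace(-1, 1, 500)
gains = rng.uniform(0.5, 2.0, n)
biases = rng.uniform(-1.0, 1.0, n)
encoders = rng.choice([-1.0, 1.0], n)

A = np.maximum(0.0, np.outer(xs, encoders) * gains + biases)  # activities
target = np.sin(np.pi * xs)                                   # f(x) to decode

G = A.T @ A + reg * len(xs) * np.eye(n)   # regularized Gram matrix
d = np.linalg.solve(G, A.T @ target)      # decoders
print(np.sqrt(np.mean((A @ d - target) ** 2)))   # small RMSE

# Full connection weights to a downstream population then factor through
# its encoders and gains, which is how principles (a)-(c) link together.
```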

16.
Biol Cybern; 104(4-5): 251-262, 2011 May.
Article in English | MEDLINE | ID: mdl-21573688

ABSTRACT

Recently, there have been a number of proposals regarding how biologically plausible neural networks might perform probabilistic inference (Rao, Neural Computation, 16(1):1-38, 2004; Eliasmith and Anderson, Neural engineering: computation, representation and dynamics in neurobiological systems, 2003; Ma et al., Nature Neuroscience, 9(11):1432-1438, 2006; Sahani and Dayan, Neural Computation, 15(10):2255-2279, 2003). To be able to repeatedly perform such inference, it is essential that the represented distributions be appropriately normalized. Past approaches have considered normalization mechanisms independently of inference, often leaving them unexplored, or appealing to a notion of divisive normalization that requires pooling across many neurons. Here, we demonstrate how normalization and inference can be combined into an appropriate connection matrix, eliminating the need for pooling or a division-like operation. We algebraically demonstrate that such a solution is available regardless of the inference being performed. We show that such a solution is relevant to neural computation by implementing it in a recurrent spiking neural network.
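
As an algebraic illustration of how a normalization-preserving correction can be folded into a connection matrix (consistent with the abstract's claim, though not necessarily the paper's exact construction):

```latex
% Let M implement the inference step on a discretized distribution p
% with \mathbf{1}^\top p = 1. Define
M' \;=\; M \;+\; \tfrac{1}{n}\,\mathbf{1}\left(\mathbf{1}^\top - \mathbf{1}^\top M\right),
% where \mathbf{1} is the length-n ones vector. Then for any normalized p,
\mathbf{1}^\top M' p
  \;=\; \mathbf{1}^\top M p + \tfrac{\mathbf{1}^\top \mathbf{1}}{n}
        \left(\mathbf{1}^\top p - \mathbf{1}^\top M p\right)
  \;=\; 1,
% so the output stays normalized with no divisive operation or pooling.
```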


Subjects
Neurons/physiology; Probability; Action Potentials; Models, Theoretical
17.
Psychol Rev; 128(1): 104-124, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32816508

ABSTRACT

We present the context-unified encoding (CUE) model, a large-scale spiking neural network model of human memory. It combines and integrates activity-based short-term memory (STM) with weight-based long-term memory. The implementation with spiking neurons ensures biological plausibility and allows for predictions on the neural level. At the same time, the model produces behavioral outputs that have been matched to human data from serial and free recall experiments. In particular, well-known results such as primacy, recency, transposition error gradients, and forward recall bias have been reproduced with good quantitative matches. Additionally, the model accounts for the Hebb repetition effect. The CUE model combines and extends the ordinal serial encoding model, a spiking neuron model of STM, and the temporal context model, a mathematical memory model matching free recall data. To implement the modification of the required association matrices, a novel learning rule, the association matrix learning rule, is derived that allows for one-shot learning without catastrophic forgetting. Its biological plausibility is discussed and it is shown that it accounts for changes in neural firing observed in human recordings from an association learning experiment.
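
A one-shot outer-product update in the spirit of the association matrix learning rule shows how a new pair can be stored without disturbing earlier ones (the exact AML rule is given in the paper; the form below is an illustrative stand-in):

```python
import numpy as np

# One-shot associative update (illustrative form, not the paper's AML
# rule): for near-orthogonal keys, each update stores a new key-value
# pair without disturbing earlier ones.

rng = np.random.default_rng(9)
d = 256
A = np.zeros((d, d))

def learn(A, key, value):
    key = key / np.linalg.norm(key)
    return A + np.outer(value - A @ key, key)   # one-shot, error-corrected

keys = [rng.normal(0, 1 / np.sqrt(d), d) for _ in range(5)]
vals = [rng.normal(0, 1 / np.sqrt(d), d) for _ in range(5)]
for k, v in zip(keys, vals):
    A = learn(A, k, v)

# Earlier pairs survive later updates (no catastrophic forgetting here)
for k, v in zip(keys, vals):
    out = A @ (k / np.linalg.norm(k))
    print(round(out @ v / (np.linalg.norm(out) * np.linalg.norm(v)), 3))
```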


Subjects
Action Potentials; Memory, Long-Term; Memory, Short-Term; Models, Neurological; Neurons; Humans; Mental Recall; Nerve Net
18.
Top Cogn Sci; 13(3): 515-533, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34146453

ABSTRACT

Neurophysiology and neuroanatomy constrain the set of possible computations that can be performed in a brain circuit. While detailed data on brain microcircuits is sometimes available, cognitive modelers are seldom in a position to take these constraints into account. One reason for this is the intrinsic complexity of accounting for biological mechanisms when describing cognitive function. In this paper, we present multiple extensions to the neural engineering framework (NEF), which simplify the integration of low-level constraints such as Dale's principle and spatially constrained connectivity into high-level, functional models. We focus on a model of eyeblink conditioning in the cerebellum, and, in particular, on systematically constructing temporal representations in the recurrent granule-Golgi microcircuit. We analyze how biological constraints impact these representations and demonstrate that our overall model is capable of reproducing key properties of eyeblink conditioning. Furthermore, since our techniques facilitate variation of neurophysiological parameters, we gain insights into why certain neurophysiological parameters may be as observed in nature. While eyeblink conditioning is a somewhat primitive form of learning, we argue that the same methods apply for more cognitive models as well. We implemented our extensions to the NEF in an open-source software library named "NengoBio" and hope that this work inspires similar attempts to bridge low-level biological detail and high-level function.


Subjects
Blinking; Cerebellum; Cognition; Humans; Learning; Nerve Net
19.
Neural Comput; 22(3): 621-659, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19922294

ABSTRACT

Temporal derivatives are computed by a wide variety of neural circuits, but the problem of performing this computation accurately has received little theoretical study. Here we systematically compare the performance of diverse networks that calculate derivatives using cell-intrinsic adaptation and synaptic depression dynamics, feedforward network dynamics, and recurrent network dynamics. Examples of each type of network are compared by quantifying the errors they introduce into the calculation and their rejection of high-frequency input noise. This comparison is based on both analytical methods and numerical simulations with spiking leaky-integrate-and-fire (LIF) neurons. Both adapting and feedforward-network circuits provide good performance for signals with frequency bands that are well matched to the time constants of postsynaptic current decay and adaptation, respectively. The synaptic depression circuit performs similarly to the adaptation circuit, although strictly speaking, precisely linear differentiation based on synaptic depression is not possible, because depression scales synaptic weights multiplicatively. Feedback circuits introduce greater errors than functionally equivalent feedforward circuits, but they have the useful property that their dynamics are determined by feedback strength. For this reason, these circuits are better suited for calculating the derivatives of signals that evolve on timescales outside the range of membrane dynamics and, possibly, for providing the wide range of timescales needed for precise fractional-order differentiation.
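
A standard feedforward construction of the type compared in the paper can be summarized in transfer-function form (a textbook sketch using an exponential synapse):

```latex
% Subtract a synaptically filtered copy of the signal from the signal
% itself. With an exponential synapse h(s) = \frac{1}{\tau s + 1},
y(t) \;=\; \frac{x(t) - (h * x)(t)}{\tau}
\quad\Longleftrightarrow\quad
\frac{Y(s)}{X(s)} \;=\; \frac{1}{\tau}\left(1 - \frac{1}{\tau s + 1}\right)
\;=\; \frac{s}{\tau s + 1},
% which approximates the ideal derivative s for frequencies
% \omega \ll 1/\tau and attenuates input noise above that band.
```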


Subjects
Neural Networks, Computer; Time Perception; Action Potentials; Algorithms; Computer Simulation; Discrimination, Psychological/physiology; Humans; Membrane Potentials/physiology; Neurons/physiology; Synapses/physiology; Time Factors; Time Perception/physiology
20.
Stud Hist Philos Sci; 41(3): 313-320, 2010 Sep.
Article in English | MEDLINE | ID: mdl-21466123

ABSTRACT

I argue that of the four kinds of quantitative description relevant for understanding brain function, a control theoretic approach is most appealing. This argument proceeds by comparing computational, dynamical, statistical and control theoretic approaches, and identifying criteria for a good description of brain function. These criteria include providing useful decompositions, simple state mappings, and the ability to account for variability. The criteria are justified by their importance in providing unified accounts of multi-level mechanisms that support intervention. Evaluation of the four kinds of description with respect to these criteria supports the claim that control theoretic characterizations of brain function are the kind of quantitative description we ought to provide.


Subjects
Brain/physiology; Mental Processes/physiology; Humans; Models, Neurological