Results 1 - 20 of 112
1.
Annu Rev Neurosci ; 46: 17-37, 2023 07 10.
Article in English | MEDLINE | ID: mdl-37428604

ABSTRACT

How neurons detect the direction of motion is a prime example of neural computation: Motion vision is found in the visual systems of virtually all sighted animals, it is important for survival, and it requires interesting computations with well-defined linear and nonlinear processing steps, yet the whole process is of moderate complexity. The genetic methods available in the fruit fly Drosophila and the charting of a connectome of its visual system have led to rapid progress and unprecedented detail in our understanding of how neurons compute the direction of motion in this organism. The picture that emerged incorporates not only the identity, morphology, and synaptic connectivity of each neuron involved but also its neurotransmitters, its receptors, and their subcellular localization. Together with the neurons' membrane potential responses to visual stimulation, this information provides the basis for a biophysically realistic model of the circuit that computes the direction of visual motion.
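As a hedged illustration of the delay-and-correlate computation this review centers on, the sketch below implements a Hassenstein-Reichardt-style detector; the two-point stimulus, low-pass time constant, and phase lag are illustrative assumptions, not values from the review.

```python
import numpy as np

def reichardt_response(luminance, dt=1e-3, tau=0.05):
    """Minimal delay-and-correlate motion detector: each arm low-pass filters (delays)
    one input and multiplies it with the undelayed neighbour; opponent subtraction
    gives a signed direction estimate."""
    left, right = luminance[:, 0], luminance[:, 1]
    alpha = dt / tau
    delayed_left = np.zeros_like(left)
    delayed_right = np.zeros_like(right)
    for t in range(1, len(left)):          # first-order low-pass filter acts as the delay
        delayed_left[t] = delayed_left[t - 1] + alpha * (left[t] - delayed_left[t - 1])
        delayed_right[t] = delayed_right[t - 1] + alpha * (right[t] - delayed_right[t - 1])
    return float(np.mean(delayed_left * right - delayed_right * left))

# Illustrative stimulus: a 2 Hz sinusoid that reaches the right-hand input slightly later.
t = np.arange(0, 1, 1e-3)
rightward = np.stack([np.sin(2 * np.pi * 2 * t),
                      np.sin(2 * np.pi * 2 * t - 0.5)], axis=1)
print(reichardt_response(rightward))           # positive: rightward motion
print(reichardt_response(rightward[:, ::-1]))  # negative: the reversed stimulus
```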


Subjects
Motion Perception, Animals, Motion Perception/physiology, Visual Pathways/physiology, Drosophila/physiology, Ocular Vision, Neurons/physiology, Photic Stimulation
2.
Annu Rev Neurosci ; 46: 233-258, 2023 07 10.
Article in English | MEDLINE | ID: mdl-36972611

ABSTRACT

Flexible behavior requires the creation, updating, and expression of memories to depend on context. While the neural underpinnings of each of these processes have been intensively studied, recent advances in computational modeling revealed a key challenge in context-dependent learning that had been largely ignored previously: Under naturalistic conditions, context is typically uncertain, necessitating contextual inference. We review a theoretical approach to formalizing context-dependent learning in the face of contextual uncertainty and the core computations it requires. We show how this approach begins to organize a large body of disparate experimental observations, from multiple levels of brain organization (including circuits, systems, and behavior) and multiple brain regions (most prominently the prefrontal cortex, the hippocampus, and motor cortices), into a coherent framework. We argue that contextual inference may also be key to understanding continual learning in the brain. This theory-driven perspective places contextual inference as a core component of learning.
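As a minimal sketch of the core computation the review calls contextual inference (not the authors' model), a posterior over discrete contexts can be updated from noisy observations and used to weight context-specific predictions; the context means, noise level, and observations below are made up.

```python
import numpy as np

def update_context_posterior(prior, observation, context_means, obs_sd=1.0):
    """Bayesian update of beliefs over discrete contexts from one noisy observation."""
    likelihood = np.exp(-0.5 * ((observation - context_means) / obs_sd) ** 2)
    posterior = prior * likelihood
    return posterior / posterior.sum()

context_means = np.array([-2.0, 0.0, 2.0])   # hypothetical context-specific predictions
belief = np.ones(3) / 3                      # start maximally uncertain about the context
for obs in (1.8, 2.1, 1.9):                  # observations consistent with the third context
    belief = update_context_posterior(belief, obs, context_means)
print(belief)                    # posterior mass concentrates on the third context
print(belief @ context_means)    # context-averaged prediction that could guide behavior
```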


Subjects
Brain, Learning, Hippocampus, Prefrontal Cortex, Computer Simulation
3.
Annu Rev Neurosci ; 44: 403-424, 2021 07 08.
Article in English | MEDLINE | ID: mdl-33863252

ABSTRACT

Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout this review, we emphasize a geometrical picture of how noise correlations impact the neural code.
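A hedged two-neuron illustration of why the structure of noise correlations matters: for a Gaussian population code the linear Fisher information is f'^T C^{-1} f', so the same correlation coefficient can help or hurt depending on how it aligns with the signal direction. The tuning slopes and noise levels below are arbitrary.

```python
import numpy as np

def linear_fisher_information(tuning_slopes, covariance):
    """Linear Fisher information f'^T C^{-1} f' for a Gaussian population code."""
    return tuning_slopes @ np.linalg.solve(covariance, tuning_slopes)

slopes_same = np.array([1.0, 1.0])       # both neurons' rates increase with the stimulus
slopes_opposite = np.array([1.0, -1.0])  # opposite tuning slopes
for rho in (0.0, 0.5):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    print(rho,
          linear_fisher_information(slopes_same, cov),      # positive correlations hurt here
          linear_fisher_information(slopes_opposite, cov))  # the same correlations help here
```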


Subjects
Brain, Neurons, Action Potentials, Neurological Models
4.
Annu Rev Neurosci ; 43: 249-275, 2020 07 08.
Article in English | MEDLINE | ID: mdl-32640928

ABSTRACT

Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
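As a minimal sketch of the dynamical-systems viewpoint (not an analysis from the review), population activity can be treated as a low-dimensional linear system x_{t+1} = A x_t whose eigenvalues determine whether trajectories decay, persist, or rotate; the matrix below is an arbitrary example with damped rotational dynamics.

```python
import numpy as np

A = np.array([[0.95, -0.20],     # arbitrary example: damped rotation in a 2D state space
              [0.20,  0.95]])
print(np.abs(np.linalg.eigvals(A)))   # moduli < 1: activity spirals back to baseline

x = np.array([1.0, 0.0])          # initial condition set by an input pulse
trajectory = [x]
for _ in range(50):               # autonomous evolution after input withdrawal
    x = A @ x
    trajectory.append(x)
print(np.array(trajectory)[-1])   # decays toward the fixed point at the origin
```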


Assuntos
Encéfalo/fisiologia , Biologia Computacional , Aprendizado Profundo , Rede Nervosa/fisiologia , Animais , Biologia Computacional/métodos , Humanos , Neurônios/fisiologia , Dinâmica Populacional
5.
Proc Natl Acad Sci U S A ; 121(41): e2302730121, 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39352933

ABSTRACT

The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory processing (e.g., sensitivity to input) can be optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient coding. We consider a spike-coding network of leaky integrate-and-fire neurons with synaptic transmission delays. Previously, it was shown that the performance of such networks varies nonmonotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibit some signatures of criticality, namely scale-free spiking dynamics and the presence of a crackling-noise relation. Our work suggests that two influential and previously disparate theories of neural processing optimization (efficient coding and criticality) may be intimately related.
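A hedged sketch of the model class referenced above: a single leaky integrate-and-fire neuron with additive noise, showing how spiking output depends on noise amplitude when the mean drive is subthreshold. This is only the building block, not the spike-coding network with delays studied in the paper, and all parameters are arbitrary.

```python
import numpy as np

def lif_firing_rate(noise_sd, duration=2.0, dt=1e-4, tau=0.02,
                    v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065, drive=0.010):
    """Euler simulation of one leaky integrate-and-fire neuron with additive noise.
    The mean drive alone (10 mV) is subthreshold, so spiking depends on the noise."""
    rng = np.random.default_rng(0)
    v, spikes = v_rest, 0
    for _ in range(int(duration / dt)):
        v += dt / tau * (v_rest - v + drive) + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if v >= v_thresh:
            v = v_reset
            spikes += 1
    return spikes / duration

for sigma in (0.0, 0.04, 0.08):     # arbitrary noise amplitudes
    print(sigma, lif_firing_rate(sigma), "spikes/s")   # firing emerges and grows with noise
```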


Assuntos
Potenciais de Ação , Modelos Neurológicos , Rede Nervosa , Neurônios , Transmissão Sináptica , Neurônios/fisiologia , Rede Nervosa/fisiologia , Transmissão Sináptica/fisiologia , Potenciais de Ação/fisiologia , Encéfalo/fisiologia , Humanos , Animais
6.
Proc Natl Acad Sci U S A ; 121(38): e2409160121, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39264740

ABSTRACT

Animals are born with extensive innate behavioral capabilities, which arise from neural circuits encoded in the genome. However, the information capacity of the genome is orders of magnitude smaller than that needed to specify the connectivity of an arbitrary brain circuit, indicating that the rules encoding circuit formation must fit through a "genomic bottleneck" as they pass from one generation to the next. Here, we formulate the problem of innate behavioral capacity in the context of artificial neural networks in terms of lossy compression of the weight matrix. We find that several standard network architectures can be compressed by several orders of magnitude, yielding pretraining performance that can approach that of the fully trained network. Interestingly, for complex but not for simple test problems, the genomic bottleneck algorithm also captures essential features of the circuit, leading to enhanced transfer learning to novel tasks and datasets. Our results suggest that compressing a neural circuit through the genomic bottleneck serves as a regularizer, enabling evolution to select simple circuits that can be readily adapted to important real-world tasks. The genomic bottleneck also suggests how innate priors can complement conventional approaches to learning in designing algorithms for AI.
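As a hedged sketch of what "lossy compression of the weight matrix" can mean, the example below uses a truncated SVD as the compressor; the paper's genomic-bottleneck algorithm is different, and the matrix size and ranks are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))          # stand-in for a trained weight matrix

def compress(W, rank):
    """Lossy low-rank compression: keep only the top singular vectors."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

for rank in (4, 16, 64):
    W_hat = compress(W, rank)
    params_compressed = rank * (W.shape[0] + W.shape[1] + 1)
    error = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
    print(f"rank {rank}: {W.size / params_compressed:.0f}x fewer parameters, "
          f"relative error {error:.2f}")
```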


Assuntos
Algoritmos , Redes Neurais de Computação , Animais , Genômica/métodos , Genoma , Humanos
7.
Proc Natl Acad Sci U S A ; 121(45): e2318837121, 2024 Nov 05.
Article in English | MEDLINE | ID: mdl-39485801

ABSTRACT

The relationship between neurons' input and spiking output is central to brain computation. Studies in vitro and in anesthetized animals suggest that nonlinearities emerge in cells' input-output (IO; activation) functions as network activity increases, yet how neurons transform inputs in vivo has been unclear. Here, we characterize cortical principal neurons' activation functions in awake mice using two-photon optogenetics. We deliver fixed inputs at the soma while neurons' activity varies with sensory stimuli. We find that responses to fixed optogenetic input are nearly unchanged as neurons are excited, reflecting a linear response regime above neurons' resting point. In contrast, responses are dramatically attenuated by suppression. This attenuation is a powerful means to filter inputs arriving to suppressed cells, privileging other inputs arriving to excited neurons. These results have two major implications. First, somatic neural activation functions in vivo accord with the activation functions used in recent machine learning systems. Second, neurons' IO functions can filter sensory inputs-not only do sensory stimuli change neurons' spiking outputs, but these changes also affect responses to input, attenuating responses to some inputs while leaving others unchanged.
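A hedged toy version of the reported asymmetry: if the somatic activation function is roughly threshold-linear, a fixed input adds a constant increment when the neuron sits above its resting point but a shrinking one as it is suppressed below threshold. The numbers are arbitrary, and the rectified-linear form is an assumption consistent with, not taken from, the study.

```python
import numpy as np

def threshold_linear(x, threshold=0.0, gain=1.0):
    """Rectified-linear activation: zero below threshold, linear above."""
    return gain * np.maximum(x - threshold, 0.0)

fixed_input = 1.0                               # stand-in for the fixed optogenetic drive
for baseline in (3.0, 1.0, -0.5, -3.0):         # sensory stimuli shift the operating point
    response = threshold_linear(baseline + fixed_input) - threshold_linear(baseline)
    print(baseline, response)   # constant above rest, attenuated as the cell is suppressed
```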


Assuntos
Neurônios , Optogenética , Córtex Visual , Animais , Optogenética/métodos , Córtex Visual/fisiologia , Córtex Visual/citologia , Neurônios/fisiologia , Camundongos , Estimulação Luminosa/métodos , Potenciais de Ação/fisiologia
8.
Proc Natl Acad Sci U S A ; 121(3): e2311885121, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38198531

ABSTRACT

The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does this neural heterogeneity influence macroscopic neural dynamics, and how might it contribute to neural computation? In this work, we use a mean-field model to investigate computation in heterogeneous neural networks, by studying how the heterogeneity of cell spiking thresholds affects three key computational functions of a neural population: the gating, encoding, and decoding of neural signals. Our results suggest that heterogeneity serves different computational functions in different cell types. In inhibitory interneurons, varying the degree of spike threshold heterogeneity allows them to gate the propagation of neural signals in a reciprocally coupled excitatory population. Whereas homogeneous interneurons impose synchronized dynamics that narrow the dynamic repertoire of the excitatory neurons, heterogeneous interneurons act as an inhibitory offset while preserving excitatory neuron function. Spike threshold heterogeneity also controls the entrainment properties of neural networks to periodic input, thus affecting the temporal gating of synaptic inputs. Among excitatory neurons, heterogeneity increases the dimensionality of neural dynamics, improving the network's capacity to perform decoding tasks. Conversely, homogeneous networks suffer in their capacity for function generation, but excel at encoding signals via multistable dynamic regimes. Drawing from these findings, we propose intra-cell-type heterogeneity as a mechanism for sculpting the computational properties of local circuits of excitatory and inhibitory spiking neurons, permitting the same canonical microcircuit to be tuned for diverse computational tasks.
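A hedged sketch of one way spike-threshold heterogeneity reshapes a population's response (not the paper's mean-field model): with identical thresholds the population recruits in an all-or-none fashion, whereas a spread of thresholds yields a graded response to the same common drive.

```python
import numpy as np

def population_activation(drive, thresholds):
    """Fraction of neurons pushed above their own spike threshold by a common drive."""
    return np.mean(drive > thresholds)

rng = np.random.default_rng(0)
homogeneous = np.zeros(1000)                     # identical thresholds
heterogeneous = rng.normal(0.0, 1.0, size=1000)  # spread of thresholds across cells

for drive in (-1.0, -0.2, 0.2, 1.0):
    print(drive,
          population_activation(drive, homogeneous),    # all-or-none, step-like recruitment
          population_activation(drive, heterogeneous))  # graded, smoother response
```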


Assuntos
Interneurônios , Neurônios , Encéfalo , Redes Neurais de Computação , Reprodução
9.
Proc Natl Acad Sci U S A ; 119(46): e2121744119, 2022 11 16.
Article in English | MEDLINE | ID: mdl-36343230

ABSTRACT

The mammalian retina engages a broad array of linear and nonlinear circuit mechanisms to convert natural scenes into retinal ganglion cell (RGC) spike outputs. Although many individual integration mechanisms are well understood, we know less about how multiple mechanisms interact to encode the complex spatial features present in natural inputs. Here, we identified key spatial features in natural scenes that shape encoding by primate parasol RGCs. Our approach identified simplifications in the spatial structure of natural scenes that minimally altered RGC spike responses. We observed that reducing natural movies into 16 linearly integrated regions described ∼80% of the structure of parasol RGC spike responses; this performance depended on the number of regions but not their precise spatial locations. We used simplified stimuli to design high-dimensional metamers that recapitulated responses to naturalistic movies. Finally, we modeled the retinal computations that convert flashed natural images into one-dimensional spike counts.
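As a hedged sketch of the stimulus simplification described above (not the authors' fitting pipeline), each frame can be reduced to the mean luminance of a small number of regions; the 4 x 4 grid mirrors the 16-region reduction, and the random patch stands in for a natural image.

```python
import numpy as np

def regionally_averaged(image, n=4):
    """Replace each of n x n regions by its mean luminance (the 16-region simplification)."""
    h, w = image.shape
    rh, rw = h // n, w // n
    out = np.empty_like(image)
    for i in range(n):
        for j in range(n):
            block = image[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            out[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw] = block.mean()
    return out

rng = np.random.default_rng(0)
patch = rng.standard_normal((64, 64))      # stand-in for one natural-movie frame
simplified = regionally_averaged(patch)    # 16 numbers now describe the whole frame
print(patch.size, "pixels reduced to", 16, "region means")
```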


Assuntos
Retina , Células Ganglionares da Retina , Animais , Células Ganglionares da Retina/fisiologia , Estimulação Luminosa/métodos , Retina/fisiologia , Mamíferos
10.
J Comput Neurosci ; 52(1): 39-71, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38381252

ABSTRACT

The computational resources of a neuromorphic network model introduced earlier are investigated in the context of such hierarchical systems as the mammalian visual cortex. It is argued that a form of ubiquitous spontaneous local convolution, driven by spontaneously arising wave-like activity, which itself promotes local Hebbian modulation, enables logical gate-like neural motifs to form into hierarchical feed-forward structures of the Hubel-Wiesel type. Extra-synaptic effects are shown to play a significant rôle in these processes. The type of logic that emerges is not Boolean, confirming and extending earlier findings on the logic of schizophrenia.


Assuntos
Modelos Neurológicos , Córtex Visual , Animais , Rede Nervosa , Mamíferos
11.
J Comput Neurosci ; 52(3): 223-243, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39083150

ABSTRACT

The computational resources of a neuromorphic network model introduced earlier were investigated in the first paper of this series. It was argued that a form of ubiquitous spontaneous local convolution enabled logical gate-like neural motifs to form into hierarchical feed-forward structures of the Hubel-Wiesel type. Here we investigate concomitant data-like structures and their dynamic rôle in memory formation, retrieval, and replay. The mechanisms give rise to the need for general inhibitory sculpting and to the simulation of the replay of episodic memories, which is well known in humans and has recently been observed in rats. Other consequences include explanations of such findings as the directional flows of neural waves in memory formation and retrieval, visual anomalies and memory deficits in schizophrenia, and the operation of GABA agonist drugs in suppressing episodic memories. We put forward the hypothesis that all neural logical operations and feature extractions are of the convolutional hierarchical type described here and in the earlier paper, and exemplified by the Hubel-Wiesel model of the visual cortex, but that in more general cases the precise geometric layering might be obscured and so far undetected.


Assuntos
Memória Episódica , Modelos Neurológicos , Redes Neurais de Computação , Humanos , Animais , Neurônios/fisiologia , Simulação por Computador , Rede Nervosa/fisiologia , Dinâmica não Linear
12.
Proc Natl Acad Sci U S A ; 118(8)2021 02 23.
Article in English | MEDLINE | ID: mdl-33593894

ABSTRACT

Neural circuits are structured with layers of converging and diverging connectivity and selectivity-inducing nonlinearities at neurons and synapses. These components have the potential to hamper an accurate encoding of the circuit inputs. Past computational studies have optimized the nonlinearities of single neurons, or connection weights in networks, to maximize encoded information, but have not grappled with the simultaneous impact of convergent circuit structure and nonlinear response functions for efficient coding. Our approach is to compare model circuits with different combinations of convergence, divergence, and nonlinear neurons to discover how interactions between these components affect coding efficiency. We find that a convergent circuit with divergent parallel pathways can encode more information with nonlinear subunits than with linear subunits, despite the compressive loss induced by the convergence and the nonlinearities when considered separately.
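A hedged toy example of why rectifying subunits before convergence can preserve information that linear summation destroys: a zero-mean contrast pattern and a uniform field are indistinguishable after linear convergence but not after rectification. The two-subunit circuit and stimuli are contrived for illustration and are not the model circuits studied in the paper.

```python
import numpy as np

def converge(inputs, rectify):
    """Two parallel subunits converging onto one output unit."""
    subunit_out = np.maximum(inputs, 0.0) if rectify else np.asarray(inputs, float)
    return subunit_out.sum()

uniform_gray = np.array([0.0, 0.0])        # no contrast anywhere
balanced_grating = np.array([1.0, -1.0])   # bright and dark halves, zero mean

for rectify in (False, True):
    label = "rectified subunits" if rectify else "linear subunits"
    print(label, converge(uniform_gray, rectify), converge(balanced_grating, rectify))
# Linear subunits: 0 vs 0 -- convergence erases the contrast pattern.
# Rectified subunits: 0 vs 1 -- the pattern survives the compressive convergence.
```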


Assuntos
Modelos Neurológicos , Dinâmica não Linear , Retina/fisiologia , Sinapses/fisiologia , Transmissão Sináptica , Vias Visuais/fisiologia , Humanos
13.
Proc Natl Acad Sci U S A ; 118(18)2021 05 04.
Article in English | MEDLINE | ID: mdl-33906943

ABSTRACT

Darwinian evolution tends to produce energy-efficient outcomes. On the other hand, energy limits computation, be it neural and probabilistic or digital and logical. Taking a particular energy-efficient viewpoint, we define neural computation and make use of an energy-constrained computational function. This function can be optimized over a variable that is proportional to the number of synapses per neuron. This function also implies a specific distinction between adenosine triphosphate (ATP)-consuming processes, especially computation per se vs. the communication processes of action potentials and transmitter release. Thus, to apply this mathematical function requires an energy audit with a particular partitioning of energy consumption that differs from earlier work. The audit points out that, rather than the oft-quoted 20 W of glucose available to the human brain, the fraction partitioned to cortical computation is only 0.1 W of ATP [L. Sokoloff, Handb. Physiol. Sect. I Neurophysiol. 3, 1843-1864 (1960)] and [J. Sawada, D. S. Modha, "Synapse: Scalable energy-efficient neurosynaptic computing" in Application of Concurrency to System Design (ACSD) (2013), pp. 14-15]. On the other hand, long-distance communication costs are 35-fold greater, 3.5 W. Other findings include 1) a [Formula: see text]-fold discrepancy between biological and lowest possible values of a neuron's computational efficiency and 2) two predictions of N, the number of synaptic transmissions needed to fire a neuron (2,500 vs. 2,000).
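A quick check of the ratios quoted above, restating only the abstract's own numbers (0.1 W for cortical computation, 3.5 W for long-distance communication, against the oft-quoted 20 W whole-brain budget):

```python
computation_watts = 0.1       # ATP devoted to cortical computation per se (abstract's audit)
communication_watts = 3.5     # action potentials and transmitter release
whole_brain_watts = 20.0      # oft-quoted whole-brain glucose budget

print(communication_watts / computation_watts)                        # 35-fold, as stated
print((computation_watts + communication_watts) / whole_brain_watts)  # ~18% of the 20 W
```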


Assuntos
Metabolismo Energético/fisiologia , Rede Nervosa/metabolismo , Neurônios/metabolismo , Sinapses/metabolismo , Potenciais de Ação/fisiologia , Encéfalo/metabolismo , Encéfalo/fisiologia , Córtex Cerebelar/metabolismo , Córtex Cerebelar/fisiologia , Humanos , Neurônios/fisiologia , Fenômenos Físicos , Sinapses/fisiologia
14.
Phys Biol ; 21(1)2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38078366

ABSTRACT

Neuronal populations in the cerebral cortex engage in probabilistic coding, effectively encoding the state of the surrounding environment with high accuracy and extraordinary energy efficiency. A new approach models the inherently probabilistic nature of cortical neuron signaling outcomes as a thermodynamic process of non-deterministic computation. A mean field approach is used, with the trial Hamiltonian maximizing available free energy and minimizing the net quantity of entropy, compared with a reference Hamiltonian. Thermodynamic quantities are always conserved during the computation; free energy must be expended to produce information, and free energy is released during information compression, as correlations are identified between the encoding system and its surrounding environment. Due to the relationship between the Gibbs free energy equation and the Nernst equation, any increase in free energy is paired with a local decrease in membrane potential. As a result, this process of thermodynamic computation adjusts the likelihood of each neuron firing an action potential. This model shows that non-deterministic signaling outcomes can be achieved by noisy cortical neurons, through an energy-efficient computational process that involves optimally redistributing a Hamiltonian over some time evolution. Calculations demonstrate that the energy efficiency of the human brain is consistent with this model of non-deterministic computation, with net entropy production far too low to retain the assumptions of a classical system.
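For reference, the textbook relations the abstract links, restated here (this is not the paper's derivation or notation): the reaction free energy, its relation to a potential difference, and the Nernst equilibrium potential for an ion X. With ΔG = -zFΔE, an increase in free energy corresponds to a local decrease in potential, as stated above.

```latex
% Standard thermodynamic and electrochemical relations (not the paper's notation):
\Delta G \;=\; \Delta G^{\circ} + RT \ln Q,
\qquad
\Delta G \;=\; -\,z F \,\Delta E,
\qquad
E_{X} \;=\; \frac{RT}{zF}\,\ln\frac{[X]_{\mathrm{out}}}{[X]_{\mathrm{in}}}.
```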


Assuntos
Redes Neurais de Computação , Neurônios , Humanos , Neurônios/fisiologia , Potenciais de Ação/fisiologia , Potenciais da Membrana , Córtex Cerebral
15.
Cereb Cortex ; 32(19): 4141-4155, 2022 09 19.
Article in English | MEDLINE | ID: mdl-35024797

ABSTRACT

Human decision-making requires the brain to carry out neural computations of benefit and risk and, on that basis, a selection between options. It remains unclear how value-based neural computation and subsequent brain activity evolve to achieve a final decision and which process is modulated by irrational factors. We adopted a sequential risk-taking task that asked participants, in each eight-box trial, to successively decide whether or not to open a box carrying a potential reward or punishment. With time-resolved multivariate pattern analyses, we decoded electroencephalography and magnetoencephalography responses to two successive low- and high-risk boxes before the open-box action. Taking the decoding-accuracy peak as a marker of the completion of first-stage processing, we used it as a demarcation point and dissociated the neural time course of decision-making into valuation and selection stages. Behavioral hierarchical drift diffusion modeling confirmed that different information is processed in the two stages: the valuation stage was related to the drift rate of evidence accumulation, while the selection stage was related to the nondecision time spent producing the response. We further observed that the medial orbitofrontal cortex participated in the valuation stage, while the superior frontal gyrus engaged in the selection stage of irrational open-box decisions. Finally, we found that irrational factors influenced decision-making through the selection stage rather than the valuation stage.
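A hedged sketch of the drift-diffusion components referred to above: the drift rate governs evidence accumulation (the valuation stage), while nondecision time is added after the bound is reached (the selection/response stage). Parameters are arbitrary and not fitted to the study's data.

```python
import numpy as np

def simulate_ddm(drift, nondecision=0.3, threshold=1.0, noise=1.0, dt=1e-3, seed=0):
    """One drift-diffusion trial: evidence accumulates to a bound, then nondecision time is added."""
    rng = np.random.default_rng(seed)
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("upper" if evidence > 0 else "lower", t + nondecision)

# Valuation-stage changes map onto drift rate; selection-stage changes onto nondecision time.
print(simulate_ddm(drift=1.5))                   # fast accumulation
print(simulate_ddm(drift=0.3))                   # slower, noisier decisions
print(simulate_ddm(drift=1.5, nondecision=0.5))  # same evidence, longer response production
```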


Assuntos
Tomada de Decisões , Imageamento por Ressonância Magnética , Encéfalo/fisiologia , Mapeamento Encefálico , Tomada de Decisões/fisiologia , Humanos , Recompensa
16.
J Biol Phys ; 48(1): 55-78, 2022 03.
Article in English | MEDLINE | ID: mdl-35089468

ABSTRACT

The original computers were people using algorithms to get mathematical results such as rocket trajectories. Since the invention of the digital computer, brains have been widely understood through analogies with computers and, more recently, artificial neural networks, analogies that have both strengths and drawbacks. We define and examine a new kind of computation better adapted to biological systems, called biological computation, a natural adaptation of mechanistic physical computation. Nervous systems are of course biological computers, and we focus on some edge cases of biological computing: hearts and flytraps. The heart has about the computing power of a slug, and much of its computing happens outside of its forty thousand neurons. The flytrap has about the computing power of a lobster ganglion. This account advances fundamental debates in neuroscience by illustrating ways in which classical computability theory can miss the complexities of biology. By reframing computation in this way, we make way for resolving the disconnect between human and machine learning.


Assuntos
Sarraceniaceae , Algoritmos , Computadores , Humanos , Redes Neurais de Computação , Neurônios/fisiologia
17.
Proc Biol Sci ; 288(1947): 20210276, 2021 03 31.
Article in English | MEDLINE | ID: mdl-33757352

ABSTRACT

Sensorimotor coordination is thought to rely on cerebellar-based internal models for state estimation, but the underlying neural mechanisms and the specific contribution of the cerebellar components are unknown. A central aspect of any inferential process is the representation of the uncertainty, or conversely the precision, characterizing the ensuing estimates. Here, we discuss the possible contribution of inhibition to the encoding of precision of neural representations in the granular layer of the cerebellar cortex. Within this layer, Golgi cells influence excitatory granule cells, and their action is critical in shaping information transmission downstream to Purkinje cells. In this review, we equate the ensuing excitation-inhibition balance in the granular layer with the outcome of a precision-weighted inferential process, and we highlight the physiological characteristics of Golgi cell inhibition that are consistent with such computations.
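As a hedged illustration of a precision-weighted inference (the generic inverse-variance combination rule, not a claim about Golgi-cell biophysics), independent estimates are weighted by their precisions when combined; the numbers below are arbitrary.

```python
def precision_weighted_estimate(means, variances):
    """Combine independent Gaussian estimates, weighting each by its precision (1/variance)."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    mean = sum(p * m for p, m in zip(precisions, means)) / total
    return mean, 1.0 / total   # combined mean and combined variance

# A reliable prediction (variance 0.5) dominates an unreliable sensory sample (variance 2.0).
print(precision_weighted_estimate(means=[10.0, 14.0], variances=[0.5, 2.0]))
```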


Assuntos
Cerebelo , Inibição Neural , Córtex Cerebelar , Neurônios , Incerteza
18.
Cereb Cortex ; 30(5): 3228-3239, 2020 05 14.
Article in English | MEDLINE | ID: mdl-31813989

ABSTRACT

A major function of sensory processing is to achieve neural representations of objects that are stable across changes in context and perspective. Small changes in exploratory behavior can lead to large changes in signals at the sensory periphery, thus resulting in ambiguous neural representations of objects. Overcoming this ambiguity is a hallmark of human object recognition across sensory modalities. Here, we investigate how the perception of tactile texture remains stable across exploratory movements of the hand, including changes in scanning speed, despite the concomitant changes in afferent responses. To this end, we scanned a wide range of everyday textures across the fingertips of rhesus macaques at multiple speeds and recorded the responses evoked in tactile nerve fibers and somatosensory cortical neurons (from Brodmann areas 3b, 1, and 2). We found that individual cortical neurons exhibit a wider range of speed-sensitivities than do nerve fibers. The resulting representations of speed and texture in cortex are more independent than are their counterparts in the nerve and account for speed-invariant perception of texture. We demonstrate that this separation of speed and texture information is a natural consequence of previously described cortical computations.


Assuntos
Potenciais Somatossensoriais Evocados/fisiologia , Córtex Somatossensorial/fisiologia , Percepção do Tato/fisiologia , Animais , Macaca
19.
Cereb Cortex ; 30(6): 3590-3607, 2020 05 18.
Article in English | MEDLINE | ID: mdl-32055848

ABSTRACT

Auditory cortex (AC) is necessary for the detection of brief gaps in ongoing sounds, but not for the detection of longer gaps or other stimuli such as tones or noise. It remains unclear why this is so, and what is special about brief gaps in particular. Here, we used both optogenetic suppression and conventional lesions to show that the cortical dependence of brief gap detection hinges specifically on gap termination. We then identified a cortico-collicular gap detection circuit that amplifies cortical gap termination responses before projecting to inferior colliculus (IC) to impact behavior. We found that gaps evoked off-responses and on-responses in cortical neurons, which temporally overlapped for brief gaps, but not long gaps. This overlap specifically enhanced cortical responses to brief gaps, whereas IC neurons preferred longer gaps. Optogenetic suppression of AC reduced collicular responses specifically to brief gaps, indicating that under normal conditions, the enhanced cortical representation of brief gaps amplifies collicular gap responses. Together these mechanisms explain how and why AC contributes to the behavioral detection of brief gaps, which are critical cues for speech perception, perceptual grouping, and auditory scene analysis.


Assuntos
Córtex Auditivo/fisiologia , Vias Auditivas/fisiologia , Percepção Auditiva/fisiologia , Colículos Inferiores/fisiologia , Neurônios/fisiologia , Percepção do Tempo/fisiologia , Estimulação Acústica , Animais , Córtex Auditivo/citologia , Colículos Inferiores/citologia , Camundongos , Vias Neurais , Optogenética , Detecção de Sinal Psicológico
20.
Proc Natl Acad Sci U S A ; 115(13): 3464-3469, 2018 03 27.
Article in English | MEDLINE | ID: mdl-29531035

ABSTRACT

A hallmark of cortical circuits is their versatility. They can perform multiple fundamental computations such as normalization, memory storage, and rhythm generation. Yet it is far from clear how such versatility can be achieved in a single circuit, given that specialized models are often needed to replicate each computation. Here, we show that the stabilized supralinear network (SSN) model, which was originally proposed for sensory integration phenomena such as contrast invariance, normalization, and surround suppression, can give rise to dynamic cortical features of working memory, persistent activity, and rhythm generation. We study the SSN model analytically and uncover regimes where it can provide a substrate for working memory by supporting two stable steady states. Furthermore, we prove that the SSN model can sustain finite firing rates following input withdrawal and present an exact connectivity condition for such persistent activity. In addition, we show that the SSN model can undergo a supercritical Hopf bifurcation and generate global oscillations. Based on the SSN model, we outline the synaptic and neuronal mechanisms underlying computational versatility of cortical circuits. Our work shows that the SSN is an exactly solvable nonlinear recurrent neural network model that could pave the way for a unified theory of cortical function.
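A hedged minimal simulation of the SSN's rate dynamics, tau dr/dt = -r + k [W r + h]_+^n, for one excitatory and one inhibitory population; the parameter values below are common illustrative choices for this model class, not the ones analyzed in the paper.

```python
import numpy as np

# Two-population (E, I) stabilized supralinear network.
k, n = 0.04, 2.0                        # supralinear (squared) single-unit transfer function
W = np.array([[1.25, -0.65],            # illustrative connectivity: E->E, I->E
              [1.20, -0.50]])           #                            E->I, I->I
tau = np.array([0.020, 0.010])          # membrane time constants (s)
h = np.array([10.0, 10.0])              # external input to E and I

r = np.zeros(2)
dt = 1e-4
for _ in range(int(0.5 / dt)):          # Euler integration for 0.5 s
    drive = np.maximum(W @ r + h, 0.0)
    r = r + dt / tau * (-r + k * drive ** n)
print(r)   # settles to a stable, inhibition-stabilized fixed point despite the supralinear gain
```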


Assuntos
Potenciais de Ação/fisiologia , Córtex Cerebral/fisiologia , Modelos Neurológicos , Rede Nervosa/fisiologia , Redes Neurais de Computação , Humanos