Results 1 - 20 of 67
1.
PLoS Comput Biol ; 20(6): e1012047, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38865345

ABSTRACT

A fundamental function of cortical circuits is the integration of information from different sources to form a reliable basis for behavior. While animals behave as if they optimally integrate information according to Bayesian probability theory, the implementation of the required computations in the biological substrate remains unclear. We propose a novel, Bayesian view on the dynamics of conductance-based neurons and synapses which suggests that they are naturally equipped to optimally perform information integration. In our approach apical dendrites represent prior expectations over somatic potentials, while basal dendrites represent likelihoods of somatic potentials. These are parametrized by local quantities, the effective reversal potentials and membrane conductances. We formally demonstrate that under these assumptions the somatic compartment naturally computes the corresponding posterior. We derive a gradient-based plasticity rule, allowing neurons to learn desired target distributions and weight synaptic inputs by their relative reliabilities. Our theory explains various experimental findings on the system and single-cell level related to multi-sensory integration, which we illustrate with simulations. Furthermore, we make experimentally testable predictions on Bayesian dendritic integration and synaptic plasticity.
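Under the Gaussian assumptions common in such models, the abstract's central claim, that conductances weight inputs by their reliabilities, can be sketched as precision-weighted fusion. The function below is our illustration, not the paper's actual derivation; the variable names (`E_apical`, `g_apical`, etc.) are ours.

```python
# Illustrative sketch (assumption: Gaussian messages): conductances g act as
# precisions and effective reversal potentials E as means. The somatic
# "posterior" is then the precision-weighted combination of the apical prior
# and the basal likelihood.

def somatic_posterior(E_apical, g_apical, E_basal, g_basal):
    """Fuse an apical 'prior' and a basal 'likelihood' over the somatic potential."""
    g_post = g_apical + g_basal  # precisions add
    E_post = (g_apical * E_apical + g_basal * E_basal) / g_post  # weighted mean
    return E_post, g_post

E, g = somatic_posterior(E_apical=-70.0, g_apical=1.0, E_basal=-50.0, g_basal=3.0)
print(E, g)  # posterior mean lies closer to the more reliable (higher-g) input
```

The same arithmetic explains why a synapse that increases its conductance effectively claims more weight in the somatic estimate.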


Subjects
Bayes Theorem , Dendrites , Models, Neurological , Neuronal Plasticity , Synapses , Dendrites/physiology , Animals , Neuronal Plasticity/physiology , Synapses/physiology , Computer Simulation , Cues , Computational Biology , Neurons/physiology , Action Potentials/physiology
2.
Neuron ; 112(10): 1531-1552, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38447578

ABSTRACT

How is conscious experience related to material brain processes? A variety of theories aiming to answer this age-old question have emerged from the recent surge in consciousness research, and some are now hotly debated. Although most researchers have so far focused on the development and validation of their preferred theory in relative isolation, this article, written by a group of scientists representing different theories, takes an alternative approach. Noting that various theories often try to explain different aspects or mechanistic levels of consciousness, we argue that the theories do not necessarily contradict each other. Instead, several of them may converge on fundamental neuronal mechanisms and be partly compatible and complementary, so that multiple theories can simultaneously contribute to our understanding. Here, we consider unifying, integration-oriented approaches that have so far been largely neglected, seeking to combine valuable elements from various theories.


Subjects
Brain , Consciousness , Consciousness/physiology , Humans , Brain/physiology , Models, Neurological , Neurons/physiology , Animals
3.
Neurosci Biobehav Rev ; 157: 105508, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38097096

ABSTRACT

Semantic representations in higher sensory cortices form the basis for robust, yet flexible behavior. These representations are acquired over the course of development in an unsupervised fashion and continuously maintained over an organism's lifespan. Predictive processing theories propose that these representations emerge from predicting or reconstructing sensory inputs. However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs. Here, we suggest that virtual experiences may be just as relevant as actual sensory inputs in shaping cortical representations. In particular, we discuss two complementary learning principles that organize representations through the generation of virtual experiences. First, "adversarial dreaming" proposes that creative dreams support a cortical implementation of adversarial learning in which feedback and feedforward pathways engage in a productive game of trying to fool each other. Second, "contrastive dreaming" proposes that the invariance of neuronal representations to irrelevant factors of variation is acquired by trying to map similar virtual experiences together via a contrastive learning process. These principles are compatible with known cortical structure and dynamics, as well as the phenomenology of sleep, thus providing promising directions to explain cortical learning beyond the classical predictive processing paradigm.
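The "contrastive dreaming" principle can be sketched with a generic InfoNCE-style contrastive loss. This is our toy illustration of contrastive learning in general, not the specific cortical model proposed in the review: representations of two virtual experiences generated from the same source are pulled together, different sources are pushed apart.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: (n, d) arrays; row i of z1 and z2 form a positive pair."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # unit-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # pairwise cosine similarities
    # cross-entropy: each row should identify its own positive on the diagonal
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logprob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# "similar virtual experiences": same latent plus small perturbation
loss_aligned = contrastive_loss(z, z + 0.01 * rng.normal(size=z.shape))
# unrelated experiences: independent random latents
loss_random = contrastive_loss(z, rng.normal(size=(8, 16)))
print(loss_aligned, loss_random)  # aligned pairs yield the lower loss
```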


Subjects
Dreams , Imagination , Humans , Dreams/physiology , Imagination/physiology , Sleep , Brain , Sensation
4.
Proc Natl Acad Sci U S A ; 120(32): e2300558120, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37523562

ABSTRACT

While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.


Subjects
Models, Neurological , N-Methylaspartate , Learning/physiology , Neurons/physiology , Perception
5.
J Sleep Res ; 32(4): e13846, 2023 08.
Article in English | MEDLINE | ID: mdl-36806335

ABSTRACT

Slow-wave sleep (SWS) is a fundamental physiological process, and its modulation is of interest for basic science and clinical applications. However, automatised protocols for the suppression of SWS are lacking. We describe the development of a novel protocol for the automated detection (based on the whole head topography of frontal slow waves) and suppression of SWS (through closed-loop modulated randomised pulsed noise), and assessed the feasibility, efficacy and functional relevance compared to sham stimulation in 15 healthy young adults in a repeated-measure sleep laboratory study. Auditory compared to sham stimulation resulted in a highly significant reduction of SWS by 30% without affecting total sleep time. The reduction of SWS was associated with an increase in lighter non-rapid eye movement sleep and a shift of slow-wave activity towards the end of the night, indicative of a homeostatic response and functional relevance. Still, cumulative slow-wave activity across the night was significantly reduced by 23%. Undisturbed sleep led to an evening to morning reduction of wake electroencephalographic theta activity, thought to reflect synaptic downscaling during SWS, while suppression of SWS inhibited this dissipation. We provide evidence for the feasibility, efficacy, and functional relevance of a novel fully automated protocol for SWS suppression based on auditory closed-loop stimulation. Future work is needed to further test for functional relevance and potential clinical applications.


Subjects
Slow-Wave Sleep , Young Adult , Humans , Slow-Wave Sleep/physiology , Feasibility Studies , Sleep/physiology , Polysomnography , Electroencephalography/methods , Acoustic Stimulation/methods
6.
Elife ; 11, 2022 04 06.
Article in English | MEDLINE | ID: mdl-35384841

ABSTRACT

Humans and other animals learn to extract general concepts from sensory experience without extensive teaching. This ability is thought to be facilitated by offline states like sleep where previous experiences are systemically replayed. However, the characteristic creative nature of dreams suggests that learning semantic representations may go beyond merely replaying previous experiences. We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs). Learning in our model is organized across three different global brain states mimicking wakefulness, non-rapid eye movement (NREM), and REM sleep, optimizing different, but complementary, objective functions. We train the model on standard datasets of natural images and evaluate the quality of the learned representations. Our results suggest that generating new, virtual sensory inputs via adversarial dreaming during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreaming during NREM sleep improves the robustness of latent representations. The model provides a new computational perspective on sleep states, memory replay, and dreams, and suggests a cortical implementation of GANs.


Subjects
Dreams , Slow-Wave Sleep , Animals , Sleep , Sleep, REM , Wakefulness
7.
Elife ; 11, 2022 04 25.
Article in English | MEDLINE | ID: mdl-35467527

ABSTRACT

In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean-gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural-gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling, and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural-gradient descent.
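The parametrization dependence of Euclidean-gradient descent, and its removal by a metric, can be shown in one dimension. The example below is our minimal sketch, not the paper's derivation: the same loss is expressed in two parametrizations, `w` and `v = 2w`, and the natural-gradient step divides by the (here, trivially computed) metric.

```python
# Minimal illustration: Euclidean gradient steps depend on parametrization,
# natural-gradient steps do not (all numbers below are assumptions for the demo).

def loss(w):
    return 0.5 * (w - 1.0) ** 2

def grad_w(w):
    return w - 1.0

# Reparametrize v = 2w  =>  dL/dv = (dL/dw) * (dw/dv) = grad_w(v/2) * 0.5
def grad_v(v):
    return grad_w(v / 2.0) * 0.5

eta, w0 = 0.1, 0.0

# Euclidean steps disagree once mapped back to w-space:
w_euclid = w0 - eta * grad_w(w0)
v_euclid = (2 * w0) - eta * grad_v(2 * w0)
print(w_euclid, v_euclid / 2.0)  # 0.1 vs 0.025: parametrization-dependent

# Natural gradient: premultiply by the inverse metric G.
# Under v = 2w the metric transforms as G_v = (dw/dv)**2 * G_w = 0.25 * G_w.
G_w, G_v = 1.0, 0.25
w_nat = w0 - eta * grad_w(w0) / G_w
v_nat = (2 * w0) - eta * grad_v(2 * w0) / G_v
print(w_nat, v_nat / 2.0)  # both 0.1: the update is invariant
```

The biological reading in the abstract is that position along the dendrite changes the parametrization (spine size vs. somatic impact), and a natural-gradient rule makes plasticity insensitive to that choice.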


Subjects
Neurons , Synapses , Learning/physiology , Neuronal Plasticity/physiology , Neurons/physiology , Synapses/physiology
8.
PLoS Comput Biol ; 18(3): e1009753, 2022 03.
Article in English | MEDLINE | ID: mdl-35324886

ABSTRACT

Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these "valid" states represent powerful attractors, but between themselves can be very dissimilar, switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
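The tempering idea can be sketched with a plain Metropolis sampler on a double-well energy. This is our illustration of simulated tempering in general, not the paper's spiking model: at a constant low temperature the sampler stays in one mode, while an oscillating temperature, the proposed computational role of cortical oscillations, enables switching between modes.

```python
import numpy as np

def energy(x):
    return 8.0 * (x * x - 1.0) ** 2  # double well with modes at +/-1

def metropolis(temperatures, rng, x0=1.0, step=0.5):
    """Metropolis random walk; the temperature may change at every step."""
    x, samples = x0, []
    for T in temperatures:
        proposal = x + step * rng.normal()
        if rng.random() < np.exp(min(0.0, -(energy(proposal) - energy(x)) / T)):
            x = proposal
        samples.append(x)
    return np.array(samples)

def mode_switches(samples):
    signs = np.sign(samples[np.abs(samples) > 0.5])  # ignore the barrier region
    return int(np.sum(signs[1:] != signs[:-1]))

n = 5000
rng = np.random.default_rng(1)
cold = metropolis(np.full(n, 0.2), rng)  # constant low temperature
oscillating = metropolis(
    0.2 + 1.8 * (0.5 + 0.5 * np.sin(2 * np.pi * np.arange(n) / 500)), rng)

print(mode_switches(cold), mode_switches(oscillating))
```

With these (assumed) parameters the cold chain essentially never crosses the barrier, whereas the rhythmically heated chain visits both modes.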


Subjects
Models, Neurological , Neurons , Action Potentials , Brain , Computer Simulation , Neural Networks, Computer
9.
Elife ; 10, 2021 10 28.
Article in English | MEDLINE | ID: mdl-34709176

ABSTRACT

Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called 'plasticity rules', is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.
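A drastically simplified version of the approach can be sketched as follows. The authors evolve compact symbolic expressions; here we only evolve four coefficients of a fixed rule template, `dw = lr * (a*pre*post + b*pre + c*post + d*w)`, with elitist mutation-selection, and score rules by whether they selectively potentiate a signal-correlated input channel. All task details and parameters below are our assumptions.

```python
import numpy as np

data_rng = np.random.default_rng(42)
Y = data_rng.normal(size=200)                         # postsynaptic teacher signal
X = np.stack([Y + 0.1 * data_rng.normal(size=200),    # channel 0: correlated
              data_rng.normal(size=200)], axis=1)     # channel 1: pure noise

def fitness(coeffs, lr=0.01):
    """Run the candidate plasticity rule on the fixed task and score selectivity."""
    a, b, c, d = coeffs
    w = np.zeros(2)
    for y, x in zip(Y, X):
        w += lr * (a * x * y + b * x + c * y + d * w)
        w = np.clip(w, -100.0, 100.0)                 # keep runaway rules bounded
    return w[0] - w[1] - 0.1 * np.abs(w).sum()        # reward selective potentiation

evo_rng = np.random.default_rng(0)
pop = [evo_rng.normal(size=4) for _ in range(8)]
best = []
for gen in range(20):                                 # elitist (2+6) evolution
    pop.sort(key=fitness, reverse=True)
    best.append(fitness(pop[0]))
    pop = pop[:2] + [p + 0.3 * evo_rng.normal(size=4)
                     for p in pop[:2] for _ in range(3)]

print(best[0], best[-1])  # elitism: best fitness never decreases across generations
```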


Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal was guided by human intuition, that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural (and reasonable) tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on 'evolutionary algorithms'. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner, an example of reinforcement learning. Finally, in the third 'supervised learning' scenario, the computer was told exactly how much its behavior deviated from the desired behavior.
For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully. Using evolutionary algorithms to study how computers 'learn' will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.


Subjects
Nerve Net , Neuronal Plasticity , Neurons/physiology , Animals , Humans , Models, Neurological
10.
Sci Rep ; 11(1): 6795, 2021 03 24.
Article in English | MEDLINE | ID: mdl-33762640

ABSTRACT

Olfactory learning and conditioning in the fruit fly is typically modelled by correlation-based associative synaptic plasticity. It was shown that the conditioning of an odor-evoked response by a shock depends on the connections from Kenyon cells (KC) to mushroom body output neurons (MBONs). Although on the behavioral level conditioning is recognized to be predictive, it remains unclear how MBONs form predictions of aversive or appetitive values (valences) of odors on the circuit level. We present behavioral experiments that are not well explained by associative plasticity between conditioned and unconditioned stimuli, and we suggest two alternative models for how predictions can be formed. In error-driven predictive plasticity, dopaminergic neurons (DANs) represent the error between the predictive odor value and the shock strength. In target-driven predictive plasticity, the DANs represent the target for the predictive MBON activity. Predictive plasticity in KC-to-MBON synapses can also explain trace-conditioning, the valence-dependent sign switch in plasticity, and the observed novelty-familiarity representation. The model offers a framework to dissect MBON circuits and interpret DAN activity during olfactory learning.
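The "error-driven predictive plasticity" variant described in the abstract reduces, in its simplest rate-based form, to a delta rule: the KC-to-MBON weight learns to predict shock strength, and the DAN carries the prediction error. The code below is our simplification, with all numbers assumed for illustration.

```python
def train(shock=1.0, lr=0.2, trials=30):
    """Delta-rule sketch of error-driven predictive plasticity (our toy model)."""
    w = 0.0        # KC -> MBON synaptic weight
    kc = 1.0       # odor-evoked Kenyon-cell activity
    for _ in range(trials):
        prediction = w * kc        # MBON's predicted odor value
        dan = shock - prediction   # DAN encodes the prediction error
        w += lr * dan * kc         # error-modulated Hebbian update
    return w

w = train()
print(w)  # approaches the shock strength (1.0), i.e. the odor predicts the shock
```

In the target-driven variant the DAN would instead carry the target MBON activity, with the error computed locally at the synapse.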


Subjects
Avoidance Learning/physiology , Drosophila/physiology , Smell/physiology , Animals , Dopaminergic Neurons/physiology , Models, Biological , Mushroom Bodies/physiology , Neuronal Plasticity , Stochastic Processes , Synapses/physiology
11.
Elife ; 10, 2021 01 26.
Article in English | MEDLINE | ID: mdl-33494860

ABSTRACT

Dendrites shape information flow in neurons. Yet, there is little consensus on the level of spatial complexity at which they operate. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models at any level of complexity. We show that (back-propagating) action potentials, Ca2+ spikes, and N-methyl-D-aspartate spikes can all be reproduced with few compartments. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Furthermore, our methodology fits reduced models directly from experimental data, without requiring morphological reconstructions. We provide software that automatizes the simplification, eliminating a common hurdle toward including dendritic computations in network models.
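The least-squares fitting idea can be illustrated at its most reduced: recovering the parameters of a one-compartment model from noisy steady-state V-I data. This is only in the spirit of the abstract; the authors fit multi-compartment impedance matrices, and the ground-truth values below are assumptions.

```python
import numpy as np

# One-compartment toy: recover input resistance R and resting potential E
# from noisy steady-state recordings via ordinary least squares.
rng = np.random.default_rng(0)
R_true, E_true = 120.0, -65.0                 # MOhm, mV (assumed ground truth)
I = np.linspace(-0.1, 0.1, 21)                # injected currents (nA)
V = E_true + R_true * I + 0.2 * rng.normal(size=I.size)  # noisy "recordings"

# Solve V ~= [I, 1] @ [R, E] in the least-squares sense.
A = np.stack([I, np.ones_like(I)], axis=1)
(R_fit, E_fit), *_ = np.linalg.lstsq(A, V, rcond=None)
print(R_fit, E_fit)  # close to the assumed 120 MOhm and -65 mV
```

The paper's method generalizes this: the unknowns become coupling conductances between reduced compartments, but the fit remains linear in those unknowns and therefore solvable in the least-squares sense.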


Subjects
Action Potentials/physiology , Dendrites/physiology , Synapses/physiology
12.
Naunyn Schmiedebergs Arch Pharmacol ; 394(1): 127-135, 2021 01.
Article in English | MEDLINE | ID: mdl-32894324

ABSTRACT

Various disturbances of social behavior, such as autism, depression, or posttraumatic stress disorder, have been associated with an altered steroid hormone homeostasis and a dysregulation of the hypothalamus-pituitary-adrenal axis. A link between steroid hormone antagonists and the treatment of stress-related conditions has been suggested. We evaluated the effects of stress induction on social behavior in the three-chamber test and its potential reversibility upon specific steroid hormone antagonism in mice. C57BL/6 mice were stressed twice daily for 8 days by chronic swim testing. Social behavior was evaluated by measuring, first, the preference for sociability and, second, the preference for social novelty in the three-chamber approach before and after the chronic swim test. The reversibility of behavior upon stress induction was analyzed after applying steroid hormone antagonists targeting glucocorticoids with etomidate, mineralocorticoids with potassium canrenoate, and androgens with cyproterone acetate and metformin. In the chronic swim test, an increase in floating time from 0.8 ± 0.2 min up to 4.8 ± 0.25 min was detected (p < 0.01). In the three-chamber approach, an increased preference for sociability and a decreased preference for social novelty were detected pre- versus post-stress induction. These alterations of social behavior were barely affected by etomidate and potassium canrenoate, whereas the two androgen antagonists metformin and cyproterone acetate restored social behavior even beyond baseline conditions. The alteration of social behavior was better reversed by the androgen antagonists as compared with the glucocorticoid and mineralocorticoid antagonists. This suggests that social behavior is primarily controlled by androgen rather than by glucocorticoid or mineralocorticoid action. The stress-induced changes in preference for sociability are incompletely explained by steroid hormone action alone. As the best response was related to metformin, an effect via glucose levels might confound the results and should be subject to future research.


Subjects
Androgen Antagonists/pharmacology , Mineralocorticoid Receptor Antagonists/pharmacology , Receptors, Glucocorticoid/antagonists & inhibitors , Social Behavior , Stress, Psychological , Animals , Behavior, Animal/drug effects , Canrenoic Acid/pharmacology , Cyproterone Acetate/pharmacology , Etomidate/pharmacology , Female , Hormones/physiology , Hypnotics and Sedatives/pharmacology , Metformin/pharmacology , Mice, Inbred C57BL
13.
J Neurosci ; 40(46): 8799-8815, 2020 11 11.
Article in English | MEDLINE | ID: mdl-33046549

ABSTRACT

Signal propagation in the dendrites of many neurons, including cortical pyramidal neurons in sensory cortex, is characterized by strong attenuation toward the soma. In contrast, using dual whole-cell recordings from the apical dendrite and soma of layer 5 (L5) pyramidal neurons in the anterior cingulate cortex (ACC) of adult male mice, we found good coupling, particularly of slow subthreshold potentials like NMDA spikes or trains of EPSPs, from dendrite to soma. Only the fastest EPSPs in the ACC were reduced to a similar degree as in primary somatosensory cortex, revealing differential low-pass filtering capabilities. Furthermore, L5 pyramidal neurons in the ACC did not exhibit dendritic Ca2+ spikes as prominently found in the apical dendrite of S1 (somatosensory cortex) pyramidal neurons. Fitting the experimental data to a NEURON model revealed that the specific distribution of Ileak, Iir, Im, and Ih was sufficient to explain the electrotonic dendritic structure, causing a leaky distal dendritic compartment with correspondingly low input resistance and a compact perisomatic region, resulting in a decoupling of distal tuft branches from each other while at the same time efficiently connecting them to the soma. Our results give a biophysically plausible explanation of how a class of prefrontal cortical pyramidal neurons achieve efficient integration of subthreshold distal synaptic inputs compared with the same cell type in sensory cortices.

SIGNIFICANCE STATEMENT: Understanding cortical computation requires the understanding of its fundamental computational subunits. Layer 5 pyramidal neurons are the main output neurons of the cortex, integrating synaptic inputs across different cortical layers. Their elaborate dendritic tree receives, propagates, and transforms synaptic inputs into action potential output. We found good coupling of slow subthreshold potentials like NMDA spikes or trains of EPSPs from the distal apical dendrite to the soma in pyramidal neurons in the ACC, which was significantly better compared with S1. This suggests that frontal pyramidal neurons use a different integration scheme compared with the same cell type in somatosensory cortex, which has important implications for our understanding of information processing across different parts of the neocortex.


Subjects
Dendrites/physiology , Gyrus Cinguli/physiology , Pyramidal Cells/physiology , Somatosensory Cortex/physiology , Action Potentials/physiology , Animals , Electrophysiological Phenomena , Excitatory Postsynaptic Potentials , In Vitro Techniques , Male , Mice , Mice, Inbred C57BL , Optogenetics , Receptors, N-Methyl-D-Aspartate/physiology
14.
Nat Neurosci ; 22(11): 1761-1770, 2019 11.
Article in English | MEDLINE | ID: mdl-31659335

ABSTRACT

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.


Subjects
Artificial Intelligence , Deep Learning , Neural Networks, Computer , Animals , Brain/physiology , Humans
15.
Neural Netw ; 119: 200-213, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31450073

ABSTRACT

An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad-hoc source of well-behaved, explicit noise, either on the input or on the output side of single neuron dynamics, most often assuming an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functional Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between the abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates required to perform probabilistic computing, be they biological or artificial.


Subjects
Bayes Theorem , Brain/physiology , Membrane Potentials/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Computer Simulation , Neuronal Plasticity/physiology
16.
Sci Rep ; 8(1): 10651, 2018 07 13.
Article in English | MEDLINE | ID: mdl-30006554

ABSTRACT

Spiking networks that perform probabilistic inference have been proposed both as models of cortical computation and as candidates for solving problems in machine learning. However, the evidence for spike-based computation being in any way superior to non-spiking alternatives remains scarce. We propose that short-term synaptic plasticity can provide spiking networks with distinct computational advantages compared to their classical counterparts. When learning from high-dimensional, diverse datasets, deep attractors in the energy landscape often cause mixing problems to the sampling process. Classical algorithms solve this problem by employing various tempering techniques, which are both computationally demanding and require global state updates. We demonstrate how similar results can be achieved in spiking networks endowed with local short-term synaptic plasticity. Additionally, we discuss how these networks can even outperform tempering-based approaches when the training data is imbalanced. We thereby uncover a powerful computational property of the biologically inspired, local, spike-triggered synaptic dynamics based simply on a limited pool of synaptic resources, which enables them to deal with complex sensory data.

17.
Sci Rep ; 8(1): 11272, 2018 07 26.
Article in English | MEDLINE | ID: mdl-30050066

ABSTRACT

Organisms use environmental cues for directed navigation. Understanding the basic logic behind navigational decisions critically depends on the complexity of the nervous system. Due to the comparably simple organization of the nervous system of the fruit fly larva, it stands as a powerful model to study decision-making processes that underlie directed navigation. We have quantitatively measured phototaxis in response to well-defined sensory inputs. Subsequently, we have formulated a statistical stochastic model based on biased Markov chains to characterize the behavioural basis of negative phototaxis. Our experiments show that larvae make navigational decisions depending on two independent physical variables: light intensity and its spatial gradient. Furthermore, our statistical model quantifies how larvae balance two potentially contradictory factors: minimizing exposure to light intensity and at the same time maximizing their distance to the light source. We find that the response to the light field is manifestly non-linear, and saturates above an intensity threshold. The model has been validated against our experimental biological data, yielding insight into the strategy that larvae use to achieve their goal with respect to the navigational cue of light, an important piece of information for future work to study the role of the different neuronal components in larval phototaxis.
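The core ingredients, a biased Markov chain over movement directions and a saturating response to light, can be sketched with a one-dimensional biased random walk. This is our toy, not the paper's fitted model; the light field, step size, and saturation constant are all assumptions.

```python
import numpy as np

def light(x):
    return np.maximum(0.0, x)  # intensity grows to the right, dark for x < 0

def p_down_gradient(x, k=2.0):
    # Saturating non-linearity: 0.5 (unbiased) in the dark, approaching 0.9
    # at high intensity, mimicking the reported saturation above a threshold.
    return 0.5 + 0.4 * (1.0 - np.exp(-k * light(x)))

def simulate(n_larvae=500, steps=200, seed=7):
    rng = np.random.default_rng(seed)
    x = np.ones(n_larvae)  # all larvae start inside the lit region
    for _ in range(steps):
        down = rng.random(n_larvae) < p_down_gradient(x)  # biased coin flip
        x += np.where(down, -0.05, 0.05)  # step away from / toward the light
    return x

x_final = simulate()
print(x_final.mean())  # mean position drifts below the start, i.e. away from light
```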


Subjects
Behavior, Animal , Decision Making , Drosophila/physiology , Phototaxis , Animals , Larva/physiology , Light , Models, Statistical , Orientation, Spatial
18.
Cogn Sci ; 41(6): 1533-1554, 2017 Aug.
Article in English | MEDLINE | ID: mdl-27859647

ABSTRACT

Recent research put forward the hypothesis that eye movements are integrated in memory representations and are reactivated when later recalled. However, "looking back to nothing" during recall might be a consequence of spatial memory retrieval. Here, we aimed at distinguishing between the effect of spatial and oculomotor information on perceptual memory. Participants' task was to judge whether a morph looked more like the first or the second of two previously presented faces. Crucially, faces and morphs were presented in a way that the morph reactivated oculomotor and/or spatial information associated with one of the previously encoded faces. Perceptual face memory was largely influenced by these manipulations. We considered a simple computational model with an excellent match (4.3% error) that expresses these biases as a linear combination of recency, saccade, and location. Surprisingly, saccades did not play a role. The results suggest that spatial and temporal rather than oculomotor information biases perceptual face memory.


Subjects
Cognition/physiology , Eye Movements/physiology , Facial Recognition/physiology , Memory/physiology , Space Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Aged , Attention/physiology , Female , Humans , Male , Middle Aged , Models, Theoretical , Photic Stimulation , Spatial Memory/physiology , Young Adult
19.
PLoS Comput Biol ; 12(6): e1005003, 2016 06.
Article in English | MEDLINE | ID: mdl-27341100

ABSTRACT

Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron's firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).
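The connection to TD(λ) noted at the end of the abstract can be made concrete with a minimal tabular example, our illustration rather than the paper's spiking implementation: a neutral cue (state 0) is followed, after a delay, by reward, and its predicted value rises toward the discounted future reward.

```python
def td_lambda(n_states=5, episodes=100, alpha=0.2, gamma=0.9, lam=0.8):
    """TD(lambda) with accumulating eligibility traces on a deterministic chain."""
    V = [0.0] * n_states
    for _ in range(episodes):
        e = [0.0] * n_states  # eligibility traces, reset each episode
        for s in range(n_states - 1):
            s_next = s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0  # reward at the end
            delta = r + gamma * V[s_next] - V[s]        # TD error
            e[s] += 1.0
            for i in range(n_states):
                V[i] += alpha * delta * e[i]            # trace-gated update
                e[i] *= gamma * lam                     # trace decay
    return V

V = td_lambda()
print([round(v, 3) for v in V])  # cue value approaches gamma**3 = 0.729
```

In the abstract's terms, the eligibility trace plays the role of the plasticity window: a presynaptic event tags the synapse so that a later change in firing rate (the TD error) can still credit it, letting second-scale associations form from a millisecond-scale window.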


Subjects
Action Potentials/physiology , Computational Biology/methods , Models, Neurological , Neurons/physiology , Animals , Dendrites/physiology , Macaca , Neuronal Plasticity/physiology
20.
PLoS Comput Biol ; 12(2): e1004638, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26841235

ABSTRACT

In the last decade dendrites of cortical neurons have been shown to nonlinearly combine synaptic inputs by evoking local dendritic spikes. It has been suggested that these nonlinearities raise the computational power of a single neuron, making it comparable to a 2-layer network of point neurons. But how these nonlinearities can be incorporated into the synaptic plasticity to optimally support learning remains unclear. We present a theoretically derived synaptic plasticity rule for supervised and reinforcement learning that depends on the timing of the presynaptic, the dendritic and the postsynaptic spikes. For supervised learning, the rule can be seen as a biological version of the classical error-backpropagation algorithm applied to the dendritic case. When modulated by a delayed reward signal, the same plasticity is shown to maximize the expected reward in reinforcement learning for various coding scenarios. Our framework makes specific experimental predictions and highlights the unique advantage of active dendrites for implementing powerful synaptic plasticity rules that have access to downstream information via backpropagation of action potentials.
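The "neuron as a 2-layer network" picture behind this rule can be sketched with generic rate-based backpropagation, our illustration rather than the spike-timing rule derived in the paper: dendritic branches act as hidden units, and the error reaching each branch gates its synaptic updates.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 64)[:, None]  # presynaptic input
y = x ** 2                           # target somatic output (assumed task)

W1 = 0.5 * rng.normal(size=(1, 8))   # synapse -> dendritic branch weights
b1 = np.zeros(8)
W2 = 0.5 * rng.normal(size=(8, 1))   # branch -> soma weights
lr, losses = 0.1, []

for _ in range(500):
    h = np.tanh(x @ W1 + b1)         # dendritic nonlinearity
    out = h @ W2                     # somatic output
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagated errors: the somatic error, gated by branch sensitivity,
    # reaches each branch (the biological analogue: backpropagating spikes).
    dW2 = h.T @ err / len(x)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2

print(losses[0], losses[-1])  # mean squared error drops during learning
```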


Subjects
Dendrites/physiology , Models, Neurological , Neuronal Plasticity/physiology , Action Potentials/physiology , Algorithms , Computational Biology