1.
Front Psychol ; 14: 1160648, 2023.
Article in English | MEDLINE | ID: mdl-37138984

ABSTRACT

Episodic memory has been studied extensively in the past few decades, but little is understood so far about how it drives future behavior. Here we propose that episodic memory can facilitate learning in two fundamentally different modes: retrieval and replay, where replay is the reinstatement of hippocampal activity patterns during later sleep or awake quiescence. We study their properties by comparing three learning paradigms using computational modeling based on visually driven reinforcement learning. First, episodic memories are retrieved to learn from single experiences (one-shot learning); second, episodic memories are replayed to facilitate learning of statistical regularities (replay learning); and third, learning occurs online as experiences arise, with no access to memories of past experiences (online learning). We found that episodic memory benefits spatial learning in a broad range of conditions, but the performance difference is meaningful only when the task is sufficiently complex and the number of learning trials is limited. Furthermore, the two modes of accessing episodic memory affect spatial learning differently: one-shot learning is typically faster than replay learning, but the latter may reach better asymptotic performance. Finally, we investigated the benefits of sequential replay and found that replaying stochastic sequences results in faster learning than random replay when the number of replays is limited. Understanding how episodic memory drives future behavior is an important step toward elucidating the nature of episodic memory.
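To make the distinction between the three paradigms concrete, here is a minimal tabular Q-learning sketch of the three modes; the state encoding, hyperparameters, and function names are illustrative assumptions, not the paper's actual visually driven model.

```python
# Minimal sketch (assumed encoding: states are hashable, 4 actions).
import random
from collections import defaultdict

ALPHA, GAMMA, N_ACTIONS = 0.1, 0.9, 4

def td_update(Q, s, a, r, s2):
    # One-step Q-learning update.
    best_next = max(Q[(s2, b)] for b in range(N_ACTIONS))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def online_learning(transitions):
    # Learn only as experiences arise; no memory of past experiences.
    Q = defaultdict(float)
    for s, a, r, s2 in transitions:
        td_update(Q, s, a, r, s2)
    return Q

def one_shot_learning(episode):
    # Retrieval mode: learn from a single stored episode in one pass,
    # sweeping backwards so reward information propagates quickly.
    Q = defaultdict(float)
    for s, a, r, s2 in reversed(episode):
        td_update(Q, s, a, r, s2)
    return Q

def replay_learning(transitions, n_replays=500):
    # Replay mode: offline reactivation of stored experiences lets the
    # agent extract statistical regularities through repeated updates.
    Q = defaultdict(float)
    memory = list(transitions)
    for _ in range(n_replays):
        s, a, r, s2 = random.choice(memory)
        td_update(Q, s, a, r, s2)
    return Q
```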

2.
Elife ; 12, 2023 03 14.
Article in English | MEDLINE | ID: mdl-36916899

ABSTRACT

Replay of neuronal sequences in the hippocampus during resting states and sleep plays an important role in learning and memory consolidation. Consistent with these functions, replay sequences have been shown to obey current spatial constraints. Nevertheless, replay does not necessarily reflect previous behavior and can construct never-experienced sequences. Here, we propose a stochastic replay mechanism that prioritizes experiences based on three variables: (1) experience strength, (2) experience similarity, and (3) inhibition of return. Using this prioritized replay mechanism to train reinforcement learning agents leads to far better performance than using random replay. Its performance is close to that of the state-of-the-art, but computationally intensive, algorithm of Mattar & Daw (2018). Importantly, our model reproduces diverse types of replay because of the stochasticity of the replay mechanism and experience-dependent differences between the three variables. In conclusion, a unified replay mechanism generates diverse replay statistics and is efficient in driving spatial learning.
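As a rough illustration of the prioritization idea (not the authors' implementation), the sketch below scores each stored experience by the three variables and draws the next item to replay stochastically; the weights, similarity values, and inhibition dynamics are assumptions.

```python
# Toy prioritized stochastic replay over 10 stored experiences.
import numpy as np

def replay_priorities(strength, similarity, recency_of_replay,
                      w=(1.0, 1.0, 1.0)):
    # strength: how strongly each experience was encoded
    # similarity: similarity of each experience to the last replayed one
    # recency_of_replay: large if just replayed -> inhibition of return
    score = (w[0] * strength
             + w[1] * similarity
             - w[2] * recency_of_replay)
    score = np.maximum(score, 1e-8)      # keep probabilities valid
    return score / score.sum()

rng = np.random.default_rng(0)
strength = rng.random(10)
similarity = rng.random(10)
inhibition = np.zeros(10)
for _ in range(5):
    p = replay_priorities(strength, similarity, inhibition)
    idx = rng.choice(10, p=p)            # stochastic draw, not argmax
    inhibition *= 0.5                    # inhibition decays over time
    inhibition[idx] = 1.0                # suppress immediate re-selection
    print("replayed experience", idx)
```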


Subjects
Memory; Spatial Learning; Memory/physiology; Neurons/physiology; Sleep/physiology; Hippocampus/physiology
3.
Front Neuroinform ; 17: 1134405, 2023.
Article in English | MEDLINE | ID: mdl-36970657

ABSTRACT

Reinforcement learning (RL) has become a popular paradigm for modeling animal behavior, analyzing neuronal representations, and studying their emergence during learning. This development has been fueled by advances in understanding the role of RL in both the brain and artificial intelligence. However, while in machine learning a set of tools and standardized benchmarks facilitates the development of new methods and their comparison to existing ones, the software infrastructure in neuroscience is much more fragmented. Even when they share theoretical principles, computational studies rarely share software frameworks, which impedes the integration and comparison of different results. Machine learning tools are also difficult to port to computational neuroscience, since the experimental requirements are usually not well aligned. To address these challenges, we introduce CoBeL-RL, a closed-loop simulator of complex behavior and learning based on RL and deep neural networks. It provides a neuroscience-oriented framework for efficiently setting up and running simulations. CoBeL-RL offers a set of virtual environments, e.g., the T-maze and the Morris water maze, which can be simulated at different levels of abstraction, e.g., as a simple gridworld or as a 3D environment with complex visual stimuli, and set up using intuitive GUI tools. A range of RL algorithms, e.g., Dyna-Q and deep Q-network algorithms, is provided and can be easily extended. CoBeL-RL provides tools for monitoring and analyzing behavior and unit activity, and allows fine-grained control of the simulation via interfaces to relevant points in its closed loop. In summary, CoBeL-RL fills an important gap in the software toolbox of computational neuroscience.
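CoBeL-RL's own API is not reproduced here; as a generic sketch of the kind of closed loop such a framework runs, the following self-contained Dyna-Q agent interleaves real gridworld steps with simulated steps from a learned transition model. All names, the environment, and the parameters are illustrative assumptions.

```python
# Generic Dyna-Q on a 5x5 gridworld (not CoBeL-RL's actual interface).
import random
from collections import defaultdict

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
GOAL, SIZE = (0, 4), 5
ALPHA, GAMMA, EPS, PLAN_STEPS = 0.1, 0.95, 0.1, 10

def step(s, a):
    # Deterministic grid dynamics clamped to the arena bounds.
    nxt = (min(max(s[0] + a[0], 0), SIZE - 1),
           min(max(s[1] + a[1], 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0)

Q = defaultdict(float)
model = {}                                      # (s, a) -> (r, s')
s = (SIZE - 1, 0)
for t in range(2000):
    # Epsilon-greedy action selection (the "behavior" half of the loop).
    a = (random.randrange(4) if random.random() < EPS
         else max(range(4), key=lambda b: Q[(s, b)]))
    s2, r = step(s, ACTIONS[a])
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in range(4))
                          - Q[(s, a)])
    model[(s, a)] = (r, s2)
    # Planning: replay simulated experience from the learned model.
    for _ in range(PLAN_STEPS):
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, b)]
                                for b in range(4)) - Q[(ps, pa)])
    s = (SIZE - 1, 0) if s2 == GOAL else s2     # reset episode at goal
```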

4.
Neuroimage ; 253: 119080, 2022 06.
Article in English | MEDLINE | ID: mdl-35276369

ABSTRACT

The cerebellum is involved in the acquisition and consolidation of learned fear responses. Knowledge about its contribution to extinction learning, however, is sparse. Extinction processes likely involve erasure of memories, but there is ample evidence that at least part of the original memory remains. We asked whether memory persists within the cerebellum following extinction training. The renewal effect, that is, the reoccurrence of the extinguished fear memory during recall in a context different from the extinction context, is one of the phenomena indicating that memory of extinguished learned fear responses is not fully erased during extinction training. We performed a differential AB-A/B fear conditioning paradigm in a 7-Tesla (7T) MRI system in 31 young and healthy men. On day 1, fear acquisition training was performed in context A and extinction training in context B. On day 2, recall was tested in contexts A and B. As expected, participants learned during fear acquisition training to predict that the CS+ was followed by an aversive electric shock. Skin conductance responses (SCRs) were significantly higher to the CS+ than to the CS- at the end of acquisition. Differences in SCRs vanished during extinction and reoccurred in the acquisition context during recall, indicating renewal. A deep neural network model was fitted to the SCR data and trained to predict the correct shock value for a given stimulus and context. Event-related fMRI analysis with the model-derived prediction values as parametric modulators showed significant effects on activation of the posterolateral cerebellum (lobules VI and Crus I) during recall. Since the prediction values during recall differ based on stimulus (CS+ and CS-) and context, these data support the idea that the cerebellum is involved in context-related recall of learned fear associations. Likewise, mean β values related to the CS+ in the acquisition context were highest bilaterally in lobules VI and Crus I during early recall. A similar pattern was seen in the vermis, but only at a trend level. Thus, part of the original memory likely remains within the cerebellum following extinction training. We found cerebellar activations related to the CS+ and CS- during fear acquisition training, which likely reflect associative and non-associative aspects of the task. Cerebellar activations, however, were not significantly different for the CS+ and CS-. Since the CS- was never followed by an electric shock, the cerebellum may contribute to associative learning related to the CS-, for example as a safety cue.
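A toy sketch of the modeling idea (not the authors' actual deep network): a minimal model receives a one-hot (stimulus, context) code and learns to predict whether a shock follows; its trial-wise predictions are the kind of quantity that could enter an event-related analysis as a parametric modulator. The architecture, learning rate, and trial schedule are assumptions.

```python
# Delta-rule shock-prediction sketch for the AB-A/B paradigm.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=4)           # weights for [CS+, CS-, ctxA, ctxB]

def encode(cs_plus, ctx_a):
    # One-hot stimulus and context features.
    return np.array([cs_plus, 1 - cs_plus, ctx_a, 1 - ctx_a], float)

def predict(x, W):
    return 1.0 / (1.0 + np.exp(-x @ W))  # predicted shock probability

# Day 1: acquisition in context A (CS+ -> shock), extinction in context B.
trials = [(1, 1, 1), (0, 1, 0)] * 20 + [(1, 0, 0), (0, 0, 0)] * 20
for cs, ctx, shock in trials:
    x = encode(cs, ctx)
    p = predict(x, W)
    W += 0.5 * (shock - p) * x           # delta-rule update

# Day 2 recall: predictions differ by stimulus AND context (renewal-like).
for cs, ctx in [(1, 1), (0, 1), (1, 0), (0, 0)]:
    print(f"CS{'+' if cs else '-'} in context {'A' if ctx else 'B'}:",
          round(float(predict(encode(cs, ctx), W)), 2))
```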


Subjects
Extinction, Psychological; Fear; Brain Mapping; Cerebellum/diagnostic imaging; Cerebellum/physiology; Extinction, Psychological/physiology; Fear/physiology; Galvanic Skin Response; Humans; Magnetic Resonance Imaging; Male
5.
Sci Rep ; 11(1): 2713, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33526840

ABSTRACT

The context-dependence of extinction learning has been well studied and requires the hippocampus. However, the underlying neural mechanisms are still poorly understood. Using memory-driven reinforcement learning and deep neural networks, we developed a model that learns to navigate autonomously in biologically realistic virtual reality environments based on raw camera inputs alone. Our model neither represents context explicitly nor signals context changes. We find that memory-intact agents learn distinct context representations and develop ABA renewal, whereas memory-impaired agents do not. These findings reproduce the behavior of control and hippocampus-lesioned animals, respectively. We therefore propose that the role of the hippocampus in the context-dependence of extinction learning might stem from its function in episodic-like memory and not in context representation per se. We conclude that context-dependence can emerge from raw visual inputs.
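One way to read the memory manipulation (a hedged sketch, not the paper's implementation): a memory-intact agent trains its visual network on batches drawn from a large episodic buffer spanning both contexts, while a memory-impaired agent retains only its most recent experiences. The buffer sizes and class names are illustrative assumptions.

```python
# Contrasting memory-intact and memory-impaired agents via buffer size.
from collections import deque
import random

class EpisodicAgent:
    def __init__(self, memory_capacity):
        # Memory-intact: a large buffer retains context-A experiences
        # throughout extinction in context B. Memory-impaired: a small
        # buffer, so old context-A experiences are overwritten.
        self.memory = deque(maxlen=memory_capacity)

    def store(self, obs, action, reward, next_obs):
        self.memory.append((obs, action, reward, next_obs))

    def sample_batch(self, batch_size=32):
        # Batches drawn here would train the visual network; with a
        # large buffer they mix contexts, allowing distinct context
        # representations to form from raw pixels alone.
        k = min(batch_size, len(self.memory))
        return random.sample(self.memory, k)

intact = EpisodicAgent(memory_capacity=100_000)
impaired = EpisodicAgent(memory_capacity=100)
```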
