ABSTRACT
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than the ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
Subject(s)
Neurons, Animals, Neurons/physiology, Machine Learning, Neural Networks, Computer, Learning, CA1 Region, Hippocampal/physiology, CA1 Region, Hippocampal/cytology, Rats
ABSTRACT
Memories of past events can be recalled long after the event, indicating stability. But new experiences are also integrated into existing memories, indicating plasticity. In the hippocampus, spatial representations are known to remain stable but have also been shown to drift over long periods of time. We hypothesized that experience, more than the passage of time, is the driving force behind representational drift. We compared the within-day stability of place cells' representations in dorsal CA1 of the hippocampus of mice traversing two similar, familiar tracks for different durations. We found that the more time the animals spent actively traversing the environment, the greater the representational drift, regardless of the total elapsed time between visits. Our results suggest that spatial representation is a dynamic process, shaped by ongoing experience within a specific context, and reflects memory updating rather than passive forgetting.