ABSTRACT
We describe an architecture for organizing, integrating, and sharing neurophysiology data within a single laboratory or across a group of collaborators. It comprises a database linking data files to metadata and electronic laboratory notes; a module that collects data from multiple laboratories into one location; a protocol for searching and sharing data; and a module for automatic analyses that populates a website. These modules can be used together or individually, by single laboratories or by worldwide collaborations.
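The central component described above is a database linking data files to metadata and notebook entries. A minimal sketch of that idea, using SQLite, might look like the following; all table names, column names, and example values here are hypothetical illustrations, not the published schema:

```python
import sqlite3

# Hypothetical schema: one row per data file, linked to free-form
# metadata key/value pairs and to electronic lab-notebook entries.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data_files (
    file_id INTEGER PRIMARY KEY,
    path    TEXT NOT NULL,
    lab     TEXT NOT NULL
);
CREATE TABLE metadata (
    file_id INTEGER REFERENCES data_files(file_id),
    key     TEXT NOT NULL,
    value   TEXT NOT NULL
);
CREATE TABLE notebook_entries (
    file_id INTEGER REFERENCES data_files(file_id),
    entry   TEXT NOT NULL
);
""")

# Register a recording, its metadata, and a notebook entry.
conn.execute("INSERT INTO data_files VALUES (1, 'rat01/session3.dat', 'LabA')")
conn.executemany("INSERT INTO metadata VALUES (1, ?, ?)",
                 [("species", "rat"), ("region", "olfactory bulb")])
conn.execute("INSERT INTO notebook_entries VALUES (1, 'Clean signal; 32 channels.')")

# A search-and-share style query: find all files recorded from a region.
rows = conn.execute("""
    SELECT f.path, f.lab FROM data_files f
    JOIN metadata m ON m.file_id = f.file_id
    WHERE m.key = 'region' AND m.value = 'olfactory bulb'
""").fetchall()
print(rows)  # [('rat01/session3.dat', 'LabA')]
```

Because metadata is stored as key/value rows rather than fixed columns, the same query pattern works across laboratories that record different sets of attributes.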
Subject(s)
Laboratories; Neurophysiology; Databases, Factual

ABSTRACT
In standard models of perceptual decision-making, noisy sensory evidence is considered to be the primary source of choice errors, and the accumulation of evidence needed to overcome this noise gives rise to speed-accuracy tradeoffs. Here, we investigated how the history of recent choices and their outcomes interacts with these processes, using a combination of theory and experiment. We found that the speed and accuracy of rats performing olfactory decision tasks could be best explained by a Bayesian model that combines reinforcement-based learning with accumulation of uncertain sensory evidence. This model predicted the specific pattern of trial-history effects that were found in the data. The results suggest that learning is a critical factor contributing to speed-accuracy tradeoffs in decision-making, and that task-history effects are not simply biases but rather the signatures of an optimal learning strategy.
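The two ingredients of the model described above, accumulation of noisy evidence to a bound and reward-driven updating from trial to trial, can be combined in a toy simulation. This is an illustrative sketch only: the bound, learning rate, noise level, and the simple delta-rule bias update are assumptions for demonstration, not the paper's fitted Bayesian model.

```python
import random

def run_trial(drift, bound, noise=1.0, dt=0.01):
    """Accumulate noisy evidence until a decision bound is crossed.
    Returns (choice, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + random.gauss(0, noise) * dt ** 0.5
        t += dt
    return (1 if x > 0 else 0), t

# Reward-driven bias learning (illustrative delta rule): the drift bias
# shifts toward recently rewarded choices, producing trial-history
# effects on both choice and reaction time.
random.seed(0)
bias, alpha = 0.0, 0.1
for trial in range(200):
    correct = random.choice([0, 1])               # true stimulus category
    drift = (1.0 if correct == 1 else -1.0) + bias
    choice, rt = run_trial(drift, bound=1.0)
    reward = 1.0 if choice == correct else 0.0
    # Nudge the bias toward the rewarded side, with decay back to zero.
    bias += alpha * reward * (1 if choice == 1 else -1) - alpha * bias
print(f"learned bias after 200 trials: {bias:+.3f}")
```

In a symmetric environment like this one the learned bias hovers near zero, but after a run of rewarded choices on one side it transiently favors that side, which is the kind of history effect the abstract describes.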
Subject(s)
Choice Behavior/physiology; Decision Making/physiology; Learning/physiology; Memory/physiology; Animals; Bayes Theorem; Behavior, Animal/physiology; Computational Biology; Models, Theoretical; Psychomotor Performance/physiology; Rats; Reaction Time; Reinforcement, Psychology; Uncertainty

ABSTRACT
Over the last two decades, dopamine and reinforcement learning have been increasingly linked. Using a novel, axiomatic approach, a recent study shows that dopamine meets the necessary and sufficient conditions required by the theory to encode a reward prediction error.
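The reward prediction error referred to above is conventionally formalized in temporal-difference (TD) learning as delta = r + gamma * V(next_state) - V(state). A minimal sketch of this standard formulation (the two-state task and parameter values are illustrative assumptions, not taken from the study):

```python
# TD learning sketch of a reward prediction error:
#     delta = r + gamma * V(next_state) - V(state)
gamma, alpha = 0.9, 0.1
V = {"cue": 0.0, "reward_port": 0.0}

for _ in range(100):
    # One step of experience: a cue is followed by a rewarded state.
    delta = 0.0 + gamma * V["reward_port"] - V["cue"]   # no reward at cue
    V["cue"] += alpha * delta
    delta = 1.0 + gamma * 0.0 - V["reward_port"]        # reward delivered
    V["reward_port"] += alpha * delta

# After learning, the prediction error at reward delivery shrinks toward
# zero, and value propagates back to the predictive cue -- the classic
# signature attributed to dopamine responses.
print(round(V["reward_port"], 3), round(V["cue"], 3))
```

The "axiomatic approach" mentioned in the abstract tests whether measured dopamine signals satisfy the formal conditions any such prediction-error signal must meet, rather than fitting a specific model like this one.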