ABSTRACT
We describe an architecture for organizing, integrating, and sharing neurophysiology data within a single laboratory or across a group of collaborators. It comprises a database linking data files to metadata and electronic laboratory notes; a module collecting data from multiple laboratories into one location; a protocol for searching and sharing data; and a module for automatic analyses that populates a website. These modules can be used together or individually, by single laboratories or worldwide collaborations.
Subjects
Laboratories, Neurophysiology, Databases, Factual
ABSTRACT
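The central idea of the architecture above, a database that links raw data files to metadata and lab-notebook entries so they can be searched and shared, can be illustrated with a minimal sketch. The schema, table names, and example metadata below are hypothetical assumptions for illustration, not the actual system the abstract describes.

```python
import sqlite3

# Hypothetical sketch: link data files to key/value metadata and notes.
# Schema and field names are illustrative assumptions only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data_file (
        id INTEGER PRIMARY KEY,
        path TEXT NOT NULL
    );
    CREATE TABLE metadata (
        file_id INTEGER REFERENCES data_file(id),
        key TEXT,
        value TEXT
    );
""")
conn.execute("INSERT INTO data_file (id, path) VALUES (1, 'session01.dat')")
conn.executemany(
    "INSERT INTO metadata (file_id, key, value) VALUES (1, ?, ?)",
    [("species", "rat"),
     ("task", "olfactory decision"),
     ("note", "electrode advanced 50 um before session")],
)

# A collaborator's search: find all files recorded from rats.
rows = conn.execute("""
    SELECT f.path FROM data_file f
    JOIN metadata m ON m.file_id = f.id
    WHERE m.key = 'species' AND m.value = 'rat'
""").fetchall()
print(rows)  # -> [('session01.dat',)]
```

Keeping metadata in a generic key/value table (rather than fixed columns) is one way such a system can absorb heterogeneous records from multiple laboratories without schema changes.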
In standard models of perceptual decision-making, noisy sensory evidence is considered to be the primary source of choice errors, and the accumulation of evidence needed to overcome this noise gives rise to speed-accuracy tradeoffs. Here, we investigated how the history of recent choices and their outcomes interacts with these processes, using a combination of theory and experiment. We found that the speed and accuracy of rats performing olfactory decision tasks were best explained by a Bayesian model that combines reinforcement-based learning with accumulation of uncertain sensory evidence. This model predicted the specific pattern of trial-history effects observed in the data. The results suggest that learning is a critical factor contributing to speed-accuracy tradeoffs in decision-making, and that task-history effects are not simply biases but rather the signatures of an optimal learning strategy.
Subjects
Choice Behavior/physiology, Decision Making/physiology, Learning/physiology, Memory/physiology, Animals, Bayes Theorem, Behavior, Animal/physiology, Computational Biology, Models, Theoretical, Psychomotor Performance/physiology, Rats, Reaction Time, Reinforcement, Psychology, Uncertainty
ABSTRACT
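The interaction the abstract describes, evidence accumulation to a bound combined with reinforcement-based learning from trial outcomes, can be sketched in miniature. The accumulator below is a generic drift-diffusion process whose starting point is nudged toward recently rewarded choices; every function name and parameter value is an illustrative assumption, not the authors' Bayesian model.

```python
import random

# Illustrative sketch only: a drift-diffusion accumulator whose starting
# point is updated by the previous trial's outcome, loosely mimicking the
# interaction of evidence accumulation and reinforcement learning.

def run_trial(drift, start, threshold=1.0, noise=0.5, dt=0.01, rng=None):
    """Accumulate noisy evidence until a bound is crossed; return (choice, rt)."""
    rng = rng or random.Random()
    x, t = start, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (1 if x > 0 else 0), t

def simulate(n_trials=200, drift=0.8, lr=0.1, seed=0):
    """Shift the starting point toward rewarded choices after each trial."""
    rng = random.Random(seed)
    start, correct = 0.0, 0
    for _ in range(n_trials):
        choice, _rt = run_trial(drift, start, rng=rng)
        reward = 1 if choice == 1 else 0        # stimulus always favors choice 1
        # reinforcement update: bias future accumulation toward rewarded side
        start += lr * (reward - 0.5) * (1 if choice == 1 else -1)
        start = max(-0.5, min(0.5, start))      # keep the bias inside the bounds
        correct += reward
    return correct / n_trials

print(simulate())
```

In a toy model like this, the outcome-driven shift in starting point produces exactly the kind of trial-history dependence in speed and accuracy that the abstract attributes to learning rather than to static bias.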
Over the last two decades, dopamine and reinforcement learning have been increasingly linked. Using a novel, axiomatic approach, a recent study shows that dopamine meets the necessary and sufficient conditions required by the theory to encode a reward prediction error.
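The quantity at issue, a reward prediction error, has a standard temporal-difference form that is worth stating concretely. The function and example values below are a generic textbook illustration, not the axiomatic tests of the study being described.

```python
# Minimal sketch of a temporal-difference reward prediction error,
# delta = r + gamma * V(s') - V(s), the quantity the abstract says
# dopamine is tested against. Values and gamma are illustrative.

def td_error(reward, value_next, value_current, gamma=0.9):
    """Return the reward prediction error for one transition."""
    return reward + gamma * value_next - value_current

# Fully predicted reward: zero error, hence (per the theory) no dopamine burst.
print(td_error(reward=1.0, value_next=0.0, value_current=1.0))  # -> 0.0

# Unexpected reward: positive prediction error.
print(td_error(reward=1.0, value_next=0.0, value_current=0.0))  # -> 1.0
```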