Results 1 - 5 of 5
1.
Neural Comput ; 36(3): 437-474, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38363661

ABSTRACT

Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models. We first apply our method to a class of models known as mixtures of linear regressions (MLR). While it is well known that active learning confers no advantage for linear-Gaussian regression models, we use Fisher information to show analytically that active learning can nevertheless achieve large gains for mixtures of such models, and we validate this improvement using both simulations and real-world data. We then consider a powerful class of temporally structured latent variable models given by a hidden Markov model (HMM) with generalized linear model (GLM) observations, which has recently been used to identify discrete states from animal decision-making data. We show that our method substantially reduces the amount of data needed to fit GLM-HMMs and outperforms a variety of approximate methods based on variational and amortized inference. Infomax learning for latent variable models thus offers a powerful approach for characterizing temporally structured latent states, with a wide variety of applications in neuroscience and beyond.
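To illustrate the flavor of Fisher-information-based input selection this abstract describes, here is a minimal sketch: from a pool of candidate inputs, pick the one whose expected log-determinant (D-optimal) information gain, averaged over mixture components, is largest. The function name `infomax_select`, the D-optimal utility, and the known-noise assumption are illustrative simplifications for a linear-Gaussian mixture, not the paper's exact objective.

```python
import numpy as np

def infomax_select(candidates, infos, resp, sigma2=1.0):
    """Pick the candidate input with the largest expected D-optimal gain.

    candidates : (N, D) pool of candidate input vectors
    infos      : list of K (D, D) current Fisher-information matrices,
                 one per mixture component
    resp       : (K,) current mixture-component probabilities
    sigma2     : observation noise variance (assumed known here)
    """
    best_idx, best_gain = -1, -np.inf
    for i, x in enumerate(candidates):
        gain = 0.0
        for k in range(len(infos)):
            # Fisher information a linear-Gaussian observation adds: x x^T / sigma^2
            I_new = infos[k] + np.outer(x, x) / sigma2
            # log-det information gain, weighted by the component's probability
            gain += resp[k] * (np.linalg.slogdet(I_new)[1]
                               - np.linalg.slogdet(infos[k])[1])
        if gain > best_gain:
            best_idx, best_gain = i, gain
    return best_idx
```

With an identity prior, the gain for input x reduces to log(1 + ||x||²/σ²), so the largest-norm candidate wins; the interesting regime in the paper is when the per-component information matrices differ.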

2.
Nat Neurosci ; 25(3): 345-357, 2022 03.
Article in English | MEDLINE | ID: mdl-35260863

ABSTRACT

A classic view of the striatum holds that activity in direct and indirect pathways oppositely modulates motor output. Whether this involves direct control of movement, or reflects a cognitive process underlying movement, remains unresolved. Here we find that strong, opponent control of behavior by the two pathways of the dorsomedial striatum depends on the cognitive requirements of a task. Furthermore, a latent state model (a hidden Markov model with generalized linear model observations) reveals that, even within a single task, the contribution of the two pathways to behavior is state dependent. Specifically, the two pathways have large contributions in one of two states associated with a strategy of evidence accumulation, compared to a state associated with a strategy of repeating previous choices. Thus, both the demands imposed by a task and the internal state of mice when performing a task determine whether dorsomedial striatum pathways provide strong and opponent control of behavior.


Subject(s)
Corpus Striatum, Neostriatum, Animals, Animal Behavior, Choice Behavior, Corpus Striatum/metabolism, Mice, Movement
3.
Nat Neurosci ; 25(2): 201-212, 2022 02.
Article in English | MEDLINE | ID: mdl-35132235

ABSTRACT

Classical models of perceptual decision-making assume that subjects use a single, consistent strategy to form decisions, or that decision-making strategies evolve slowly over time. Here we present new analyses suggesting that this common view is incorrect. We analyzed data from mouse and human decision-making experiments and found that choice behavior relies on an interplay among multiple interleaved strategies. These strategies, characterized by states in a hidden Markov model, persist for tens to hundreds of trials before switching, and often switch multiple times within a session. The identified decision-making strategies were highly consistent across mice and comprised a single 'engaged' state, in which decisions relied heavily on the sensory stimulus, and several biased states in which errors frequently occurred. These results provide a powerful alternate explanation for 'lapses' often observed in rodent behavioral experiments, and suggest that standard measures of performance mask the presence of major changes in strategy across trials.
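The hidden Markov model used in this and the preceding entries pairs per-state logistic (GLM) choice models with Markovian transitions between strategies. As a minimal sketch of the state-inference step such a model involves (a forward/filtering pass, assuming parameters are known, whereas the papers fit them by EM), one might compute per-trial state probabilities like this; all names and shapes are illustrative:

```python
import numpy as np

def glm_hmm_state_probs(X, y, W, A, pi):
    """Filtered state probabilities for a Bernoulli GLM-HMM.

    X  : (T, D) per-trial regressors (e.g. stimulus strength, bias term)
    y  : (T,)  binary choices
    W  : (K, D) per-state GLM weights
    A  : (K, K) state transition matrix
    pi : (K,)  initial state distribution
    """
    T, K = len(y), len(pi)
    alpha = np.zeros((T, K))
    for t in range(T):
        # per-state Bernoulli-GLM likelihood of the observed choice
        p1 = 1.0 / (1.0 + np.exp(-X[t] @ W.T))      # P(y=1 | state)
        lik = p1 if y[t] == 1 else 1.0 - p1
        prior = pi if t == 0 else alpha[t - 1] @ A  # predict step
        alpha[t] = prior * lik
        alpha[t] /= alpha[t].sum()                  # normalize (filtering)
    return alpha
```

With a sticky transition matrix (large diagonal), the filtered probabilities stay in one state for long runs of trials, which is exactly the tens-to-hundreds-of-trials persistence the abstract reports.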


Subject(s)
Choice Behavior, Decision Making, Animals, Humans, Mice
4.
Nat Methods ; 19(1): 119-128, 2022 01.
Article in English | MEDLINE | ID: mdl-34949809

ABSTRACT

Due to advances in automated image acquisition and analysis, whole-brain connectomes with 100,000 or more neurons are on the horizon. Proofreading of whole-brain automated reconstructions will require many person-years of effort, due to the huge volumes of data involved. Here we present FlyWire, an online community for proofreading neural circuits in a Drosophila melanogaster brain and explain how its computational and social structures are organized to scale up to whole-brain connectomics. Browser-based three-dimensional interactive segmentation by collaborative editing of a spatially chunked supervoxel graph makes it possible to distribute proofreading to individuals located virtually anywhere in the world. Information in the edit history is programmatically accessible for a variety of uses such as estimating proofreading accuracy or building incentive systems. An open community accelerates proofreading by recruiting more participants and accelerates scientific discovery by requiring information sharing. We demonstrate how FlyWire enables circuit analysis by reconstructing and analyzing the connectome of mechanosensory neurons.
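The core data structure described here, a supervoxel graph whose edges are edited collaboratively with a programmatically accessible history, can be sketched in miniature. Everything below (class and method names, the tuple-based log) is a toy illustration, not the FlyWire/PyChunkedGraph API: merges add edges, splits remove them, every edit is logged with its author, and a segment is just the connected component containing a supervoxel.

```python
from collections import defaultdict

class SupervoxelGraph:
    """Toy supervoxel graph with a programmatic edit log (illustrative only)."""

    def __init__(self):
        self.edges = defaultdict(set)
        self.history = []            # (user, op, a, b) tuples, in edit order

    def merge(self, user, a, b):
        """Proofreader `user` joins supervoxels a and b into one segment."""
        self.edges[a].add(b)
        self.edges[b].add(a)
        self.history.append((user, "merge", a, b))

    def split(self, user, a, b):
        """Proofreader `user` cuts the connection between a and b."""
        self.edges[a].discard(b)
        self.edges[b].discard(a)
        self.history.append((user, "split", a, b))

    def segment(self, a):
        """Current segment = connected component containing supervoxel a."""
        seen, stack = {a}, [a]
        while stack:
            for n in self.edges[stack.pop()]:
                if n not in seen:
                    seen.add(n)
                    stack.append(n)
        return seen
```

Because the history records who made each edit, per-user statistics (edit counts, later reversals of a user's merges) can be computed directly from the log, which is the kind of accuracy estimation and incentive building the abstract mentions.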


Subject(s)
Brain/physiology, Connectome/methods, Drosophila melanogaster/physiology, Imaging, Three-Dimensional/methods, Software, Animals, Brain/cytology, Brain/diagnostic imaging, Computer Graphics, Data Visualization, Drosophila melanogaster/cytology, Neurons/cytology, Neurons/physiology
5.
Adv Neural Inf Process Syst ; 33: 3442-3453, 2020.
Article in English | MEDLINE | ID: mdl-36177341

ABSTRACT

How do animals learn? This remains an elusive question in neuroscience. Whereas reinforcement learning often focuses on the design of algorithms that enable artificial agents to efficiently learn new tasks, here we develop a modeling framework to directly infer the empirical learning rules that animals use to acquire new behaviors. Our method efficiently infers the trial-to-trial changes in an animal's policy, and decomposes those changes into a learning component and a noise component. Specifically, this allows us to: (i) compare different learning rules and objective functions that an animal may be using to update its policy; (ii) estimate distinct learning rates for different parameters of an animal's policy; (iii) identify variations in learning across cohorts of animals; and (iv) uncover trial-to-trial changes that are not captured by normative learning rules. After validating our framework on simulated choice data, we applied our model to data from rats and mice learning perceptual decision-making tasks. We found that certain learning rules were far more capable than others of explaining trial-to-trial changes in an animal's policy. Whereas the average contribution of the conventional REINFORCE learning rule to the policy update for mice learning the International Brain Laboratory's task was just 30%, we found that adding baseline parameters allowed the learning rule to explain 92% of the animals' policy updates under our model. Intriguingly, the best-fitting learning rates and baseline values indicate that an animal's policy update, at each trial, does not occur in the direction that maximizes expected reward. Understanding how an animal transitions from chance-level to high-accuracy performance when learning a new task not only provides neuroscientists with insight into their animals, but also provides concrete examples of biological learning algorithms to the machine learning community.
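The REINFORCE-with-baseline rule named in this abstract has a compact form for a logistic choice policy, and a minimal sketch makes the role of the baseline concrete. The function and parameter names below are illustrative, not the paper's implementation: the policy-gradient of the log-likelihood of the chosen action is scaled by (reward − baseline), which leaves the expected update direction unchanged while reducing its variance.

```python
import numpy as np

def reinforce_baseline_update(w, x, action, reward, baseline, lr):
    """One REINFORCE-with-baseline update for a logistic choice policy.

    Policy: P(choose right) = sigmoid(w . x).
    grad of log pi(action | x) w.r.t. w is (action - P(right)) * x.

    w        : (D,) current policy weights
    x        : (D,) trial regressors (stimulus, bias, ...)
    action   : 0 or 1, the choice actually made
    reward   : scalar reward received on this trial
    baseline : reward baseline subtracted before the update
    lr       : learning rate
    """
    p_right = 1.0 / (1.0 + np.exp(-w @ x))
    grad_logp = (action - p_right) * x
    return w + lr * (reward - baseline) * grad_logp
```

Fitting per-parameter learning rates and the baseline to observed trial-by-trial policy changes, rather than fixing them a priori, is what lets the paper compare how well different rules account for an animal's actual updates.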
