Results 1 - 6 of 6
1.
PLoS Comput Biol ; 19(9): e1011484, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37768890

ABSTRACT

The brain learns representations of sensory information from experience, but the algorithms by which it does so remain unknown. One popular theory formalizes representations as inferred factors in a generative model of sensory stimuli, meaning that learning must improve this generative model and inference procedure. This framework underlies many classic computational theories of sensory learning, such as Boltzmann machines, the Wake/Sleep algorithm, and a more recent proposal that the brain learns with an adversarial algorithm that compares waking and dreaming activity. However, for such theories to provide insight into the cellular mechanisms of sensory learning, they must first be linked to the cell types in the brain that mediate them. In this study, we examine whether a subtype of cortical interneurons might mediate sensory learning by serving as discriminators, a crucial component in an adversarial algorithm for representation learning. We describe how such interneurons would be characterized by a plasticity rule that switches from Hebbian plasticity during waking states to anti-Hebbian plasticity in dreaming states. Evaluating the computational advantages and disadvantages of this algorithm, we find that it excels at learning representations in networks with recurrent connections but scales poorly with network size. This limitation can be partially addressed if the network also oscillates between evoked activity and generative samples on faster timescales. Consequently, we propose that an adversarial algorithm with interneurons as discriminators is a plausible and testable strategy for sensory learning in biological systems.
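The sign-switching plasticity rule described in the abstract can be sketched in a few lines. This is a hypothetical toy model, not the paper's network: names, dimensions, and the structured "evoked" pattern are illustrative assumptions. The same Hebbian update is applied with a positive sign on waking (evoked) activity and a negative sign on dreaming (generated) activity, so the interneuron acts as a discriminator between the two regimes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_steps, lr = 50, 200, 0.01

pattern = rng.normal(size=n_pre)            # stand-in for a sensory feature
pattern /= np.linalg.norm(pattern)
w = rng.normal(scale=0.1, size=n_pre)       # synapses onto the interneuron

def plasticity_step(w, pre, sign, lr=lr):
    """Hebbian (sign=+1, wake) or anti-Hebbian (sign=-1, dream) update."""
    post = w @ pre                          # interneuron activation
    return w + sign * lr * post * pre

for _ in range(n_steps):
    wake = 2.0 * pattern + rng.normal(size=n_pre)   # evoked activity
    dream = rng.normal(size=n_pre)                  # generative sample
    w = plasticity_step(w, wake, +1)                # Hebbian while awake
    w = plasticity_step(w, dream, -1)               # anti-Hebbian while dreaming

alignment = abs(w @ pattern) / np.linalg.norm(w)
```

Because only waking activity contains the structured pattern, the Hebbian/anti-Hebbian alternation amplifies the weights along that pattern while roughly cancelling growth along generic noise directions.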


Subjects
Interneurons, Learning, Learning/physiology, Brain, Algorithms, Sleep
2.
J Chem Phys ; 137(1): 014514, 2012 Jul 07.
Article in English | MEDLINE | ID: mdl-22779672

ABSTRACT

Acoustic properties of the fluorinated copolymer Kel F-800 were determined with Brillouin spectroscopy up to pressures of 85 GPa at 300 K. This work addresses outstanding issues in high-pressure polymer behavior: to date, neither the acoustic properties nor the equation of state of any polymer had been determined above 20 GPa. We observed both longitudinal and transverse modes in all pressure domains, allowing us to calculate the C11 and C12 moduli, the bulk, shear, and Young's moduli, and the density of Kel F-800 as a function of pressure. The behavior of the polymer with respect to all of these parameters changes drastically with pressure, and the data are best understood when split into two pressure regimes. At low pressures (below ∼5 GPa), analysis of the room-temperature isotherm with a semi-empirical equation of state yielded a zero-pressure bulk modulus K₀ of 12.8 ± 0.8 GPa and a pressure derivative K₀′ of 9.6 ± 0.7. The same analysis for the higher-pressure data yielded K₀ = 34.9 ± 1.7 GPa and K₀′ = 5.1 ± 0.1. We discuss this significant difference in behavior with reference to the concept of effective free-volume collapse.
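The abstract does not name the semi-empirical form used for the isotherm fit, but a standard choice for extracting a zero-pressure bulk modulus K₀ and its pressure derivative K₀′ from pressure-volume data is the third-order Birch-Murnaghan equation of state, shown here purely as an illustration of how those two fitted parameters enter:

```latex
P(V) = \frac{3K_0}{2}\left[\left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3}\right]
\left\{1 + \frac{3}{4}\left(K_0' - 4\right)\left[\left(\frac{V_0}{V}\right)^{2/3} - 1\right]\right\}
```

Here V₀ is the zero-pressure volume; when K₀′ = 4 the bracketed correction vanishes and the second-order form is recovered.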

3.
Nat Commun ; 13(1): 7972, 2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36581618

ABSTRACT

Human sensory systems are more sensitive to common features in the environment than uncommon features. For example, small deviations from the more frequently encountered horizontal orientations can be more easily detected than small deviations from the less frequent diagonal ones. Here we find that artificial neural networks trained to recognize objects also have patterns of sensitivity that match the statistics of features in images. To interpret these findings, we show mathematically that learning with gradient descent in neural networks preferentially creates representations that are more sensitive to common features, a hallmark of efficient coding. This effect occurs in systems with otherwise unconstrained coding resources and arises under both supervised and unsupervised learning objectives. This result demonstrates that efficient codes can naturally emerge from gradient-like learning.
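The core effect can be illustrated with a toy model (not the paper's networks): a single linear unit trained with a gradient-like Oja update on stimuli whose first input direction is far more common (higher variance). The learned weights, and hence the unit's sensitivity to small perturbations, end up larger along the common direction.

```python
import numpy as np

rng = np.random.default_rng(1)

n, lr, steps = 500, 0.02, 300
stds = np.array([3.0, 0.3])                 # common vs. rare feature direction
w = rng.normal(scale=0.1, size=2)

for _ in range(steps):
    X = rng.normal(size=(n, 2)) * stds      # a batch of stimuli
    h = X @ w                               # unit response
    # Oja's rule: a Hebbian term plus a decay that keeps w normalized
    w += lr * (h[:, None] * (X - np.outer(h, w))).mean(axis=0)

sensitivity = np.abs(w)                     # |dh/dx_i| per input direction
```

After training, `sensitivity[0] >> sensitivity[1]`: the representation devotes its limited gain to the statistically common direction, the efficient-coding signature the abstract describes.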


Subjects
Learning, Neural Networks (Computer), Humans
4.
eNeuro ; 7(4), 2020.
Article in English | MEDLINE | ID: mdl-32737181

ABSTRACT

Despite rapid advances in machine learning tools, the majority of neural decoding approaches still use traditional methods. Modern machine learning tools, which are versatile and easy to use, have the potential to significantly improve decoding performance. This tutorial describes how to effectively apply these algorithms for typical decoding problems. We provide descriptions, best practices, and code for applying common machine learning methods, including neural networks and gradient boosting. We also provide detailed comparisons of the performance of various methods at the task of decoding spiking activity in motor cortex, somatosensory cortex, and hippocampus. Modern methods, particularly neural networks and ensembles, significantly outperform traditional approaches, such as Wiener and Kalman filters. Improving the performance of neural decoding algorithms allows neuroscientists to better understand the information contained in a neural population and can help to advance engineering applications such as brain-machine interfaces. Our code package is available at github.com/kordinglab/neural_decoding.
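The decoding setup the tutorial describes can be sketched in a stand-alone form. This is not the API of the authors' package at github.com/kordinglab/neural_decoding; it is a minimal synthetic example of the classic baseline the modern methods are compared against: a Wiener-filter-style linear decoder mapping binned spike counts to a behavioural variable.

```python
import numpy as np

rng = np.random.default_rng(2)

T, n_neurons = 1000, 30
velocity = np.sin(np.linspace(0, 20, T))            # synthetic behaviour
tuning = rng.normal(size=n_neurons)                 # per-neuron gain
rates = np.clip(5 + np.outer(velocity, tuning), 0, None)
spikes = rng.poisson(rates)                         # binned spike counts

# Wiener filter = least-squares regression from spikes (plus bias) to behaviour
X = np.column_stack([spikes, np.ones(T)])
coef, *_ = np.linalg.lstsq(X, velocity, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((velocity - pred) ** 2) / np.sum((velocity - velocity.mean()) ** 2)
```

Swapping this linear map for a neural network or gradient-boosted trees, as the tutorial does, keeps the same X-to-velocity interface while letting the decoder capture nonlinear tuning.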


Subjects
Brain-Computer Interfaces, Motor Cortex, Algorithms, Machine Learning, Neural Networks (Computer)
5.
Prog Neurobiol ; 175: 126-137, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30738835

ABSTRACT

Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML's contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: (1) creating solutions to engineering problems, (2) identifying predictive variables, (3) setting benchmarks for simple models of the brain, and (4) itself serving as a model of the brain. The breadth and ease of its applicability suggest that machine learning should be in the toolbox of most systems neuroscientists.


Subjects
Brain, Neurosciences/methods, Supervised Machine Learning, Animals, Humans
6.
Front Comput Neurosci ; 12: 56, 2018.
Article in English | MEDLINE | ID: mdl-30072887

ABSTRACT

Neuroscience has long focused on finding encoding models that effectively ask "what predicts neural spiking?" and generalized linear models (GLMs) are a typical approach. It is often unknown how much of explainable neural activity is captured, or missed, when fitting a model. Here we compared the predictive performance of simple models to three leading machine learning methods: feedforward neural networks, gradient boosted trees (using XGBoost), and stacked ensembles that combine the predictions of several methods. We predicted spike counts in macaque motor (M1) and somatosensory (S1) cortices from standard representations of reaching kinematics, and in rat hippocampal cells from open field location and orientation. Of these methods, XGBoost and the ensemble consistently produced more accurate spike rate predictions and were less sensitive to the preprocessing of features. These methods can thus be applied quickly to detect if feature sets relate to neural activity in a manner not captured by simpler methods. Encoding models built with a machine learning approach accurately predict spike rates and can offer meaningful benchmarks for simpler models.
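The simple-model side of this comparison can be sketched with a toy Poisson GLM fit by gradient ascent on its log-likelihood; the features, dimensions, and parameter values below are illustrative, not taken from the paper's datasets.

```python
import numpy as np

rng = np.random.default_rng(3)

T = 2000
kin = rng.normal(size=(T, 2))              # e.g. hand x/y velocity features
true_w = np.array([0.5, 0.8, -0.5])        # bias + kinematic weights
X = np.column_stack([np.ones(T), kin])
y = rng.poisson(np.exp(X @ true_w))        # simulated spike counts

w = np.zeros(3)
for _ in range(1000):
    mu = np.exp(X @ w)                     # predicted firing rate
    w += 0.05 * X.T @ (y - mu) / T         # Poisson log-likelihood gradient step
```

A gradient-boosted or ensemble encoding model, as in the paper, replaces the log-linear map `exp(X @ w)` with a flexible learned function of the same features, which is what lets it expose structure the GLM misses.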
