Results 1 - 20 of 49
1.
Neuron ; 112(9): 1487-1497.e6, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38447576

ABSTRACT

Little is understood about how engrams, sparse groups of neurons that store memories, are formed endogenously. Here, we combined calcium imaging, activity tagging, and optogenetics to examine the role of neuronal excitability and pre-existing functional connectivity in the allocation of mouse cornu ammonis area 1 (CA1) hippocampal neurons to an engram ensemble supporting a contextual threat memory. Engram neurons (high activity during recall or TRAP2-tagged during training) were more active than non-engram neurons 3 h (but not 24 h to 5 days) before training. Consistent with this, optogenetically inhibiting scFLARE2-tagged neurons active in homecage 3 h, but not 24 h, before conditioning disrupted memory retrieval, indicating that neurons with higher pre-training excitability were allocated to the engram. We also observed stable pre-configured functionally connected sub-ensembles of neurons whose activity cycled over days. Sub-ensembles that were more active before training were allocated to the engram, and their functional connectivity increased at training. Therefore, both neuronal excitability and pre-configured functional connectivity mediate allocation to an engram ensemble.


Subjects
Fear; Neurons; Optogenetics; Animals; Mice; Neurons/physiology; Neurons/metabolism; Fear/physiology; CA1 Region, Hippocampal/physiology; Hippocampus/physiology; Male; Mice, Inbred C57BL; Conditioning, Classical/physiology; Memory/physiology
2.
Curr Opin Biotechnol ; 86: 103070, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38354452

ABSTRACT

Protein nanoparticles offer a highly tunable platform for engineering multifunctional drug delivery vehicles that can improve drug efficacy and reduce off-target effects. While many protein nanoparticles have demonstrated the ability to tolerate genetic and posttranslational modifications for drug delivery applications, this review will focus on three protein nanoparticles of increasing size. Each protein nanoparticle possesses distinct properties such as highly tunable stability, capacity for splitting or fusing subunits for modular surface decoration, and well-characterized conformational changes with impressive capacity for large protein cargos. While many of the genetic and posttranslational modifications leverage these protein nanoparticles' properties, the shared techniques highlight engineering approaches that have been generalized across many protein nanoparticle platforms.


Subjects
Drug Delivery Systems; Nanoparticles; Drug Delivery Systems/methods
3.
J Neurosci ; 44(5)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-37989593

ABSTRACT

Scientists have long conjectured that the neocortex learns patterns in sensory data to generate top-down predictions of upcoming stimuli. In line with this conjecture, different responses to pattern-matching vs pattern-violating visual stimuli have been observed in both spiking and somatic calcium imaging data. However, it remains unknown whether these pattern-violation signals are different between the distal apical dendrites, which are heavily targeted by top-down signals, and the somata, where bottom-up information is primarily integrated. Furthermore, it is unknown how responses to pattern-violating stimuli evolve over time as an animal gains more experience with them. Here, we address these unanswered questions by analyzing responses of individual somata and dendritic branches of layer 2/3 and layer 5 pyramidal neurons tracked over multiple days in primary visual cortex of awake, behaving female and male mice. We use sequences of Gabor patches with patterns in their orientations to create pattern-matching and pattern-violating stimuli, and two-photon calcium imaging to record neuronal responses. Many neurons in both layers show large differences between their responses to pattern-matching and pattern-violating stimuli. Interestingly, these responses evolve in opposite directions in the somata and distal apical dendrites, with somata becoming less sensitive to pattern-violating stimuli and distal apical dendrites more sensitive. These differences between the somata and distal apical dendrites may be important for hierarchical computation of sensory predictions and learning, since these two compartments tend to receive bottom-up and top-down information, respectively.


Subjects
Calcium; Neocortex; Male; Female; Mice; Animals; Calcium/physiology; Neurons/physiology; Dendrites/physiology; Pyramidal Cells/physiology; Neocortex/physiology
4.
Behav Brain Sci ; 46: e392, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054329

ABSTRACT

An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidence. Failures of particular models drive progress in a vibrant ANN research program of human vision.


Subjects
Language; Neural Networks, Computer; Humans
5.
Sci Rep ; 13(1): 22335, 2023 12 15.
Article in English | MEDLINE | ID: mdl-38102369

ABSTRACT

Neuroscientists have observed both cells in the brain that fire at specific points in time, known as "time cells", and cells whose activity steadily increases or decreases over time, known as "ramping cells". It is speculated that time and ramping cells support temporal computations in the brain and carry mnemonic information. However, due to the limitations of animal experiments, it is difficult to determine how these cells truly contribute to behavior. Here, we show that time cells and ramping cells naturally emerge in the recurrent neural networks of deep reinforcement learning models performing simulated interval timing and working memory tasks, which have learned to estimate expected rewards in the future. We show that these cells do indeed carry information about time and items stored in working memory, but they contribute to behavior in large part by providing a dynamic representation on which policy can be computed. Moreover, the information that they do carry depends on both the task demands and the variables provided to the models. Our results suggest that time cells and ramping cells could contribute to temporal and mnemonic calculations, but the way in which they do so may be complex and unintuitive to human observers.


Subjects
Learning; Reinforcement, Psychology; Animals; Humans; Memory, Short-Term; Brain; Reward
6.
ArXiv ; 2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37961743

ABSTRACT

Our ability to use deep learning approaches to decipher neural activity would likely benefit from greater scale, in terms of both model size and datasets. However, the integration of many neural recordings into one unified model is challenging, as each recording contains the activity of different neurons from different individual animals. In this paper, we introduce a training framework and architecture designed to model the population dynamics of neural activity across diverse, large-scale neural recordings. Our method first tokenizes individual spikes within the dataset to build an efficient representation of neural events that captures the fine temporal structure of neural activity. We then employ cross-attention and a PerceiverIO backbone to further construct a latent tokenization of neural population activities. Utilizing this architecture and training framework, we construct a large-scale multi-session model trained on large datasets from seven nonhuman primates, spanning over 158 different sessions of recording from over 27,373 neural units and over 100 hours of recordings. In a number of different tasks, we demonstrate that our pretrained model can be rapidly adapted to new, unseen sessions with unspecified neuron correspondence, enabling few-shot performance with minimal labels. This work presents a powerful new approach for building deep learning tools to analyze neural data and stakes out a clear path to training at scale.
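The first step described above, representing each spike as a discrete event rather than binning activity into rate vectors, can be sketched in a few lines. The token format below (a unit identifier paired with a time-bin index) and the bin size are illustrative assumptions; the actual model builds richer embeddings and feeds them to a PerceiverIO-style backbone:

```python
import numpy as np

def tokenize_spikes(spike_times, unit_ids, bin_size=0.01):
    """Turn a list of spikes into one token per spike: (unit id, time bin).

    Modelling individual spiking events preserves fine temporal structure
    that coarse rate binning would discard.
    """
    time_bins = (np.asarray(spike_times) / bin_size).astype(int)
    return list(zip(unit_ids, time_bins))

# Three spikes from two units in one recording session.
tokens = tokenize_spikes([0.003, 0.012, 0.025], unit_ids=[7, 7, 12])
print(tokens)  # [(7, 0), (7, 1), (12, 2)]
```

Because each token carries its own unit identity, sessions with entirely different neuron sets can share one model vocabulary, which is what makes cross-session pretraining feasible.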

7.
Sci Data ; 10(1): 287, 2023 05 17.
Article in English | MEDLINE | ID: mdl-37198203

ABSTRACT

The apical dendrites of pyramidal neurons in sensory cortex receive primarily top-down signals from associative and motor regions, while cell bodies and nearby dendrites are heavily targeted by locally recurrent or bottom-up inputs from the sensory periphery. Based on these differences, a number of theories in computational neuroscience postulate a unique role for apical dendrites in learning. However, due to technical challenges in data collection, little data is available for comparing the responses of apical dendrites to cell bodies over multiple days. Here we present a dataset collected through the Allen Institute Mindscope's OpenScope program that addresses this need. This dataset comprises high-quality two-photon calcium imaging from the apical dendrites and the cell bodies of visual cortical pyramidal neurons, acquired over multiple days in awake, behaving mice that were presented with visual stimuli. Many of the cell bodies and dendrite segments were tracked over days, enabling analyses of how their responses change over time. This dataset allows neuroscientists to explore the differences between apical and somatic processing and plasticity.


Subjects
Pyramidal Cells; Visual Cortex; Animals; Mice; Cell Body; Dendrites/physiology; Neurons; Pyramidal Cells/physiology; Visual Cortex/physiology
8.
Nat Rev Neurosci ; 24(7): 431-450, 2023 07.
Article in English | MEDLINE | ID: mdl-37253949

ABSTRACT

Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call 'neuroconnectionism'. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.


Subjects
Brain; Neural Networks, Computer; Humans; Brain/physiology
9.
J Physiol ; 601(15): 3141-3149, 2023 08.
Article in English | MEDLINE | ID: mdl-37078235

ABSTRACT

The experimental study of learning and plasticity has always been driven by an implicit question: how can physiological changes be adaptive and improve performance? For example, in Hebbian plasticity only synapses from presynaptic neurons that were active are changed, avoiding useless changes. Similarly, in dopamine-gated learning synapse changes depend on reward or lack thereof and do not change when everything is predictable. Within machine learning we can make the question of which changes are adaptive concrete: performance improves when changes correlate with the gradient of an objective function quantifying performance. This result is general for any system that improves through small changes. As such, physiology has always implicitly been seeking mechanisms that allow the brain to approximate gradients. Coming from this perspective we review the existing literature on plasticity-related mechanisms, and we show how these mechanisms relate to gradient estimation. We argue that gradients are a unifying idea to explain the many facets of neuronal plasticity.
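The central claim, that adaptive plasticity rules correlate with the gradient of an objective, can be made concrete with a minimal worked case: for a linear neuron whose objective is squared output, the classic Hebbian update (presynaptic activity times postsynaptic activity) is exactly that objective's gradient. This toy example is ours, not the review's, but it illustrates the identity being described:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=8)            # presynaptic activity
w = rng.normal(size=8)            # synaptic weights

y = w @ x                         # linear neuron's output
hebbian_dw = y * x                # Hebb: post * pre

# Objective J(w) = 0.5 * y^2; its analytic gradient is y * x,
# which matches the Hebbian update. Check numerically:
eps = 1e-6
grad = np.array([
    (0.5 * ((w + eps * e) @ x) ** 2 - 0.5 * ((w - eps * e) @ x) ** 2) / (2 * eps)
    for e in np.eye(8)
])
print(np.allclose(hebbian_dw, grad, atol=1e-6))  # True
```

For richer objectives the correspondence is only approximate, which is exactly why the review frames plasticity mechanisms as gradient *estimators* rather than exact gradient computations.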


Subjects
Neuronal Plasticity; Neurons; Neuronal Plasticity/physiology; Neurons/physiology; Dopamine; Synapses/physiology; Brain
10.
Nat Commun ; 14(1): 1597, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36949048

ABSTRACT

Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities - inherited from over 500 million years of evolution - that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.


Subjects
Artificial Intelligence; Neurosciences; Animals; Humans
11.
Proc Natl Acad Sci U S A ; 119(45): e2206704119, 2022 11 08.
Article in English | MEDLINE | ID: mdl-36322739

ABSTRACT

New neurons are continuously generated in the subgranular zone of the dentate gyrus throughout adulthood. These new neurons gradually integrate into hippocampal circuits, forming new naive synapses. Viewed from this perspective, these new neurons may represent a significant source of "wiring" noise in hippocampal networks. In machine learning, such noise injection is commonly used as a regularization technique. Regularization techniques help prevent overfitting training data and allow models to generalize learning to new, unseen data. Using a computational modeling approach, here we ask whether a neurogenesis-like process similarly acts as a regularizer, facilitating generalization in a category learning task. In a convolutional neural network (CNN) trained on the CIFAR-10 object recognition dataset, we modeled neurogenesis as a replacement/turnover mechanism, where weights for a randomly chosen small subset of hidden layer neurons were reinitialized to new values as the model learned to categorize 10 different classes of objects. We found that neurogenesis enhanced generalization on unseen test data compared to networks with no neurogenesis. Moreover, neurogenic networks either outperformed or performed similarly to networks with conventional noise injection (i.e., dropout, weight decay, and neural noise). These results suggest that neurogenesis can enhance generalization in hippocampal learning through noise injection, expanding on the roles that neurogenesis may have in cognition.
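The replacement/turnover mechanism described above can be sketched as a weight-reinitialization step applied between training updates. The layer sizes, replacement fraction, and initialization scheme below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def neurogenesis_turnover(W_in, W_out, frac=0.05, rng=rng):
    """Reinitialize the weights of a random subset of hidden units,
    mimicking turnover of neurons in the dentate gyrus.

    New 'naive' neurons get fresh random incoming weights and zeroed
    outgoing weights, so they start with no influence on the output.
    """
    n_hidden = W_in.shape[1]
    k = max(1, int(frac * n_hidden))
    idx = rng.choice(n_hidden, size=k, replace=False)
    W_in[:, idx] = rng.normal(0.0, np.sqrt(1.0 / W_in.shape[0]),
                              (W_in.shape[0], k))
    W_out[idx, :] = 0.0
    return idx

# Toy hidden layer: 100 inputs -> 50 hidden units -> 10 classes.
W_in = rng.normal(0, 0.1, (100, 50))
W_out = rng.normal(0, 0.1, (50, 10))
replaced = neurogenesis_turnover(W_in, W_out, frac=0.1)
print(len(replaced))  # 5 hidden units replaced this turnover step
```

Called periodically during training, this acts as a structural noise injection, playing a role analogous to dropout or weight decay in the comparison the abstract describes.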


Subjects
Memory; Neurogenesis; Memory/physiology; Neurogenesis/physiology; Hippocampus/physiology; Neurons/physiology; Synapses; Dentate Gyrus/physiology
12.
Cell ; 185(15): 2640-2643, 2022 07 21.
Article in English | MEDLINE | ID: mdl-35868269

ABSTRACT

Over the last decade, artificial intelligence (AI) has undergone a revolution that is poised to transform the economy, society, and science. The pace of progress is staggering, and problems that seemed intractable just a few years ago have now been solved. The intersection between neuroscience and AI is particularly exciting.


Subjects
Artificial Intelligence; Neurosciences; Biology
13.
Front Comput Neurosci ; 16: 757244, 2022.
Article in English | MEDLINE | ID: mdl-35399916

ABSTRACT

Forgetting is a normal process in healthy brains, and evidence suggests that the mammalian brain forgets more than is required based on limitations of mnemonic capacity. Episodic memories, in particular, are liable to be forgotten over time. Researchers have hypothesized that it may be beneficial for decision making to forget episodic memories over time. Reinforcement learning offers a normative framework in which to test such hypotheses. Here, we show that a reinforcement learning agent that uses an episodic memory cache to find rewards in maze environments can forget a large percentage of older memories without any performance impairment, provided it utilizes mnemonic representations that contain structural information about space. Moreover, we show that some forgetting can actually provide a benefit in performance compared to agents with unbounded memories. Our analyses of the agents show that forgetting reduces the influence of outdated information and of infrequently visited states on the policies produced by the episodic control system. These results support the hypothesis that some degree of forgetting can be beneficial for decision making, which may help to explain why the brain forgets more than its capacity limitations require.
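An episodic memory cache with a forgetting rule can be sketched as a bounded key-value store. The least-recently-used eviction below is one simple rule chosen for illustration; the class name and API are ours, not the paper's:

```python
from collections import OrderedDict

class EpisodicCache:
    """Episodic-control cache mapping states to estimated returns.

    Once capacity is exceeded, the stalest entry is forgotten, so
    outdated and rarely revisited states stop influencing the policy.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.values = OrderedDict()   # state -> best return seen

    def write(self, state, ret):
        # Keep the max return observed for this state; refresh recency.
        best = max(ret, self.values.pop(state, float("-inf")))
        self.values[state] = best
        if len(self.values) > self.capacity:
            self.values.popitem(last=False)   # forget the stalest state

    def read(self, state, default=0.0):
        if state in self.values:
            self.values.move_to_end(state)    # a lookup counts as a visit
            return self.values[state]
        return default

cache = EpisodicCache(capacity=2)
cache.write("s1", 1.0)
cache.write("s2", 3.0)
cache.write("s3", 2.0)        # capacity exceeded: "s1" is forgotten
print(list(cache.values))     # ['s2', 's3']
```

Reading a forgotten state falls back to a default estimate, which is where the abstract's point about structured state representations matters: with structural information, nearby remembered states can stand in for forgotten ones.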

14.
Neuroscience ; 489: 200-215, 2022 05 01.
Article in English | MEDLINE | ID: mdl-34358629

ABSTRACT

Neurons are very complicated computational devices, incorporating numerous non-linear processes, particularly in their dendrites. Biophysical models capture these processes directly by explicitly modelling physiological variables, such as ion channels, current flow, membrane capacitance, etc. However, another option for capturing the complexities of real neural computation is to use cascade models, which treat individual neurons as a cascade of linear and non-linear operations, akin to a multi-layer artificial neural network. Recent research has shown that cascade models can capture single-cell computation well, but there are still a number of sub-cellular, regenerative dendritic phenomena that they cannot capture, such as the interaction between sodium, calcium, and NMDA spikes in different compartments. Here, we propose that it is possible to capture these additional phenomena using parallel, recurrent cascade models, wherein an individual neuron is modelled as a cascade of parallel linear and non-linear operations that can be connected recurrently, akin to a multi-layer, recurrent, artificial neural network. Given their tractable mathematical structure, we show that neuron models expressed in terms of parallel recurrent cascades can themselves be integrated into multi-layered artificial neural networks and trained to perform complex tasks. We go on to discuss potential implications and uses of these models for artificial intelligence. Overall, we argue that parallel, recurrent cascade models provide an important, unifying tool for capturing single-cell computation and exploring the algorithmic implications of physiological phenomena.
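A parallel, recurrent cascade can be sketched as a tiny recurrent network standing in for a single neuron. The subunit count, tanh nonlinearity, and weight shapes below are illustrative choices, not the paper's specification:

```python
import numpy as np

def prc_neuron(x_seq, W_in, W_rec, w_out):
    """Model one 'neuron' as parallel nonlinear subunits (entries of h)
    driven simultaneously by the input and coupled recurrently, read
    out by a single somatic output per time step."""
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for x in x_seq:
        h = np.tanh(W_in @ x + W_rec @ h)   # parallel subunit activations
        outputs.append(float(w_out @ h))    # somatic readout
    return outputs

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.5, (4, 2))    # 2 input streams -> 4 subunits
W_rec = rng.normal(0, 0.5, (4, 4))   # recurrent coupling between subunits
w_out = rng.normal(0, 0.5, 4)
out = prc_neuron([np.ones(2)] * 5, W_in, W_rec, w_out)
print(len(out))  # one somatic output per time step: 5
```

Because the whole unit is differentiable, such neuron models can be stacked and trained end to end inside larger networks, which is the tractability argument the abstract makes.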


Subjects
Artificial Intelligence; Dendrites; Biophysics; Dendrites/physiology; Models, Neurological; Neural Networks, Computer; Neurons/physiology
16.
eNeuro ; 8(5)2021.
Article in English | MEDLINE | ID: mdl-34503967

ABSTRACT

Spontaneous recognition memory tasks are widely used to assess cognitive function in rodents and have become commonplace in the characterization of rodent models of neurodegenerative, neuropsychiatric and neurodevelopmental disorders. Leveraging an animal's innate preference for novelty, these tasks use object exploration to capture the what, where and when components of recognition memory. Choosing and optimizing objects is a key feature when designing recognition memory tasks. Although the range of objects used in these tasks varies extensively across studies, object features can bias exploration, influence task difficulty and alter brain circuit recruitment. Here, we discuss the advantages of using 3D-printed objects in rodent spontaneous recognition memory tasks. We provide strategies for optimizing their design and usage, and offer a repository of tested, open-source designs for use with commonly used rodent species. The easy accessibility, low-cost, renewability and flexibility of 3D-printed open-source designs make this approach an important step toward improving rigor and reproducibility in rodent spontaneous recognition memory tasks.


Subjects
Recognition, Psychology; Rodentia; Animals; Printing, Three-Dimensional; Reproducibility of Results
17.
Commun Biol ; 4(1): 935, 2021 08 05.
Article in English | MEDLINE | ID: mdl-34354206

ABSTRACT

Neurons can carry information with both the synchrony and rate of their spikes. However, it is unknown whether distinct subtypes of neurons are more sensitive to information carried by synchrony versus rate, or vice versa. Here, we address this question using patterned optical stimulation in slices of somatosensory cortex from mouse lines labelling fast-spiking (FS) and regular-spiking (RS) interneurons. We used optical stimulation in layer 2/3 to encode a 1-bit signal using either the synchrony or rate of activity. We then examined the mutual information between this signal and the interneuron responses. We found that for a synchrony encoding, FS interneurons carried more information in the first five milliseconds, while both interneuron subtypes carried more information than excitatory neurons in later responses. For a rate encoding, we found that RS interneurons carried more information after several milliseconds. These data demonstrate that distinct interneuron subtypes in the neocortex have distinct sensitivities to synchrony versus rate codes.
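The quantity at the heart of this analysis, the mutual information between a 1-bit stimulus signal and a discrete neural response, can be estimated directly from joint counts. This is a generic plug-in estimator, not the paper's exact analysis pipeline:

```python
import numpy as np

def mutual_information(signal, response):
    """Mutual information (in bits) between two discrete sequences,
    estimated from empirical joint and marginal frequencies."""
    s, r = np.asarray(signal), np.asarray(response)
    mi = 0.0
    for sv in np.unique(s):
        for rv in np.unique(r):
            p_joint = np.mean((s == sv) & (r == rv))
            if p_joint > 0:
                mi += p_joint * np.log2(
                    p_joint / (np.mean(s == sv) * np.mean(r == rv)))
    return mi

# A response that perfectly tracks a 1-bit signal carries 1 bit;
# a constant response carries 0 bits.
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0
print(mutual_information([0, 1, 0, 1], [0, 0, 0, 0]))  # 0.0
```

Applied to interneuron responses binned in early versus late windows, such an estimator is what lets one ask whether FS or RS cells carry more of the encoded bit at each latency.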


Assuntos
Interneurônios/fisiologia , Neocórtex/fisiologia , Córtex Somatossensorial/fisiologia , Animais , Camundongos , Camundongos Transgênicos , Optogenética , Técnicas de Patch-Clamp
18.
Patterns (N Y) ; 2(5): 100268, 2021 May 14.
Article in English | MEDLINE | ID: mdl-34027504

ABSTRACT

What is the purpose of dreaming? Many scientists have postulated a role for dreaming in learning, often with the aim of improving generative models. In this issue of Patterns, Erik Hoel proposes a novel hypothesis, namely, that dreaming provides a means to reduce overfitting. This hypothesis is interesting both for neuroscience and for the development of new machine-learning systems.

19.
Nat Neurosci ; 24(7): 1010-1019, 2021 07.
Article in English | MEDLINE | ID: mdl-33986551

ABSTRACT

Synaptic plasticity is believed to be a key physiological mechanism for learning. It is well established that it depends on pre- and postsynaptic activity. However, models that rely solely on pre- and postsynaptic activity for synaptic changes have, so far, not been able to account for learning complex tasks that demand credit assignment in hierarchical networks. Here we show that if synaptic plasticity is regulated by high-frequency bursts of spikes, then pyramidal neurons higher in a hierarchical circuit can coordinate the plasticity of lower-level connections. Using simulations and mathematical analyses, we demonstrate that, when paired with short-term synaptic dynamics, regenerative activity in the apical dendrites and synaptic plasticity in feedback pathways, a burst-dependent learning rule can solve challenging tasks that require deep network architectures. Our results demonstrate that well-known properties of dendrites, synapses and synaptic plasticity are sufficient to enable sophisticated learning in hierarchical circuits.
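The flavor of a burst-dependent rule can be conveyed in one update equation: potentiate when a postsynaptic event is a burst, depress when it is a single spike, scaled against a running burst probability. The variable names and the specific form below are an illustrative simplification, not the paper's exact rule or notation:

```python
def burst_dependent_update(w, pre, burst, event, p_bar, eta=0.1):
    """Sketch of a burst-dependent plasticity rule.

    pre:   presynaptic activity (eligibility)
    burst: 1 if the postsynaptic event was a high-frequency burst
    event: 1 if any postsynaptic event occurred
    p_bar: running estimate of the burst probability

    Bursts above baseline potentiate; single spikes depress.
    """
    dw = eta * pre * (burst - p_bar * event)
    return w + dw

w = 0.5
# A presynaptic spike paired with a postsynaptic burst -> potentiation.
w = burst_dependent_update(w, pre=1.0, burst=1.0, event=1.0, p_bar=0.2)
print(round(w, 2))  # 0.58
# The same pairing with a single spike instead of a burst -> depression.
w2 = burst_dependent_update(0.5, pre=1.0, burst=0.0, event=1.0, p_bar=0.2)
print(round(w2, 2))  # 0.48
```

The key property for credit assignment is that higher-level neurons can steer `burst` via apical dendrites without changing `event`, giving feedback pathways a separate channel for instructing plasticity below.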


Subjects
Deep Learning; Learning/physiology; Models, Neurological; Neuronal Plasticity/physiology; Pyramidal Cells/physiology; Animals; Humans
20.
Nat Commun ; 11(1): 4238, 2020 08 25.
Article in English | MEDLINE | ID: mdl-32843633

ABSTRACT

Recently, deep learning has unlocked unprecedented success in various domains, especially using images, text, and speech. However, deep learning is only beneficial if the data have nonlinear relationships and if they are exploitable at available sample sizes. We systematically profiled the performance of deep, kernel, and linear models as a function of sample size on UKBiobank brain images against established machine learning references. On MNIST and Zalando Fashion, prediction accuracy consistently improves when escalating from linear models to shallow-nonlinear models, and further improves with deep-nonlinear models. In contrast, using structural or functional brain scans, simple linear models perform on par with more complex, highly parameterized models in age/sex prediction across increasing sample sizes. In sum, linear models keep improving as the sample size approaches ~10,000 subjects. Yet, nonlinearities for predicting common phenotypes from typical brain scans remain largely inaccessible to the examined kernel and deep learning methods.
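The sample-size profiling described above can be illustrated on synthetic data: fit a linear model and a simple nonlinear model (1-nearest-neighbor) at increasing sample sizes and compare test error. This toy setup, with a known nonlinear target, is our illustration of the methodology, not the paper's UK Biobank analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Ordinary least squares with an intercept column.
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return lambda Z: np.c_[Z, np.ones(len(Z))] @ w

def fit_1nn(X, y):
    # Predict with the label of the nearest training point (1-D inputs).
    return lambda Z: y[np.argmin(np.abs(Z - X.T), axis=1)]

def mse(f, X, y):
    return float(np.mean((f(X) - y) ** 2))

# Nonlinear ground truth: the linear model's error plateaus as n grows,
# while the nonlinear model keeps improving.
target = lambda x: np.sin(3 * x)
X_test = rng.uniform(-1, 1, (500, 1))
y_test = target(X_test[:, 0])
for n in (50, 5000):
    X = rng.uniform(-1, 1, (n, 1))
    y = target(X[:, 0])
    print(n, round(mse(fit_linear(X, y), X_test, y_test), 3),
             round(mse(fit_1nn(X, y), X_test, y_test), 3))
```

When the target truly is (close to) linear in the features, as the paper reports for age/sex prediction from brain scans, the two curves instead stay on par, which is the study's central observation.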


Subjects
Brain/diagnostic imaging; Neuroimaging/methods; Biological Specimen Banks; Deep Learning; Humans; Linear Models; Machine Learning; Phenotype; Sample Size; United Kingdom