Results 1 - 20 of 58
1.
Proc Natl Acad Sci U S A ; 121(18): e2312992121, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38648479

ABSTRACT

Cortical neurons exhibit highly variable responses over trials and time. Theoretical work posits that this variability may arise from chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, and that generic recurrent networks acquire these abilities through trial and error with a biologically plausible learning rule. Furthermore, the networks generalize their experience of stimulus-evoked samples to inference when part or all of the sensory information is withheld, which suggests a computational role of spontaneous activity as a representation of the priors, as well as a tractable biological computation of marginal distributions. These findings suggest that chaotic neural dynamics may underlie the brain's function as a Bayesian generative model.
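The chaotic recurrent dynamics the abstract builds on can be illustrated with a standard random rate network (a minimal sketch in the spirit of classic chaotic-network models, not the trained networks of the paper; the size, gain, and time step below are illustrative choices): with coupling gain g > 1, the network settles into ongoing, irregular fluctuations rather than a fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, T = 200, 1.5, 0.1, 2000     # gain g > 1 puts the network in the chaotic regime

# Random recurrent coupling with variance g^2 / N (standard chaotic-network scaling)
W = g * rng.standard_normal((N, N)) / np.sqrt(N)

x = rng.standard_normal(N)            # membrane-potential-like state
trace = np.empty((T, N))
for t in range(T):
    x = x + dt * (-x + W @ np.tanh(x))  # leaky rate dynamics with tanh nonlinearity
    trace[t] = x

# The bounded nonlinearity keeps the state finite while the fluctuations persist
```

The per-trial variability described in the abstract corresponds to the fact that two nearby initial states of such a network diverge exponentially while remaining statistically similar.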


Subjects
Models, Neurological; Neurons; Neurons/physiology; Bayes Theorem; Nerve Net/physiology; Nonlinear Dynamics; Humans; Learning/physiology; Animals; Brain/physiology
2.
Nat Rev Neurosci ; 22(7): 407-422, 2021 07.
Article in English | MEDLINE | ID: mdl-34050339

ABSTRACT

In the brain, most synapses are formed on minute protrusions known as dendritic spines. Unlike their artificial intelligence counterparts, spines are not merely tuneable memory elements: they also embody algorithms that implement the brain's ability to learn from experience and cope with new challenges. Importantly, they exhibit structural dynamics that depend on activity, excitatory input and inhibitory input (synaptic plasticity or 'extrinsic' dynamics) and dynamics independent of activity ('intrinsic' dynamics), both of which are subject to neuromodulatory influences and reinforcers such as dopamine. Here we succinctly review extrinsic and intrinsic dynamics, compare these with parallels in machine learning where they exist, describe the importance of intrinsic dynamics for memory management and adaptation, and speculate on how disruption of extrinsic and intrinsic dynamics may give rise to mental disorders. Throughout, we also highlight algorithmic features of spine dynamics that may be relevant to future artificial intelligence developments.


Subjects
Brain/physiology; Dendritic Spines/physiology; Mental Disorders/physiopathology; Models, Neurological; Neural Networks, Computer; Algorithms; Animals; Artificial Intelligence; Brain/cytology; Dendritic Spines/ultrastructure; Dopamine/physiology; Humans; Machine Learning; Memory, Short-Term/physiology; Mental Processes/physiology; Neuronal Plasticity; Neurotransmitter Agents/physiology; Optogenetics; Receptors, Dopamine/physiology; Reward; Species Specificity; Synapses/physiology
3.
Neural Comput ; 35(1): 38-57, 2022 12 14.
Article in English | MEDLINE | ID: mdl-36417587

ABSTRACT

A deep neural network is a good task solver, but it is difficult to make sense of its operation, and people have different ideas about how to interpret it. We look at this problem from a new perspective, in which an interpretation of task solving is synthesized by quantifying how much, and what, previously unused information is exploited in addition to the information used to solve previous tasks. After learning several tasks, the network acquires several information partitions related to each task. We propose that the network then learns the minimal information partition that supplements the previously learned partitions to represent the input more accurately. This extra partition is associated with unconceptualized information that has not been used in previous tasks. We identify what unconceptualized information is used and quantify its amount. To interpret how the network solves a new task, we quantify as meta-information how much information from each partition is extracted. We implement this framework with the variational information bottleneck technique and test it with the MNIST and CLEVR data sets. The framework is shown to be able to compose information partitions and synthesize experience-dependent interpretation in the form of meta-information. The system progressively improves the resolution of interpretation upon new experience by converting part of the unconceptualized information partition into a task-related partition. It can also provide a visual interpretation by imagining which part of the previously unconceptualized information is needed to solve a new task.


Subjects
Learning; Neural Networks, Computer; Humans
4.
PLoS Comput Biol ; 17(2): e1008700, 2021 02.
Article in English | MEDLINE | ID: mdl-33561118

ABSTRACT

Traveling waves are commonly observed across the brain. While previous studies have suggested a role for traveling waves in learning, the mechanism remains unclear. We adopted a computational approach to investigate the effect of traveling waves on synaptic plasticity. Our results indicate that traveling waves facilitate the learning of polysynaptic network paths when combined with a reward-dependent local synaptic plasticity rule. We also demonstrate that traveling waves expedite finding the shortest paths and learning nonlinear input/output mappings, such as the exclusive-or (XOR) function.


Subjects
Brain/physiology; Models, Neurological; Neuronal Plasticity; Neurons/physiology; Animals; Computational Biology; Computer Simulation; Dopamine/metabolism; Humans; Learning; Memory; Nonlinear Dynamics; Signal Transduction; Synapses/physiology
5.
Neural Comput ; 33(6): 1433-1468, 2021 05 13.
Article in English | MEDLINE | ID: mdl-34496387

ABSTRACT

For many years, a combination of principal component analysis (PCA) and independent component analysis (ICA) has been used for blind source separation (BSS). However, it remains unclear why these linear methods work well with real-world data that involve nonlinear source mixtures. This work theoretically validates that a cascade of linear PCA and ICA can solve a nonlinear BSS problem accurately when the sensory inputs are generated from hidden sources via nonlinear mappings with sufficient dimensionality. Our proposed theorem, termed the asymptotic linearization theorem, theoretically guarantees that applying linear PCA to the inputs can reliably extract a subspace spanned by the linear projections from every hidden source as the major components, and thus projecting the inputs onto their major eigenspace can effectively recover a linear transformation of the hidden sources. Subsequent application of linear ICA can then separate all the true independent hidden sources accurately. Zero-element-wise-error nonlinear BSS is asymptotically attained when the source dimensionality is large and the input dimensionality is sufficiently larger than the source dimensionality. Our proposed theorem is validated analytically and numerically. Moreover, the same computation can be performed by using Hebbian-like plasticity rules, implying the biological plausibility of this nonlinear BSS strategy. Our results highlight the utility of linear PCA and ICA for accurately and reliably recovering nonlinearly mixed sources and suggest the importance of employing sensors with sufficient dimensionality to identify true hidden sources of real-world data.
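The PCA-then-ICA cascade can be sketched on a toy mixture (a low-dimensional illustration, not the high-dimensional nonlinear setting the theorem addresses; the grid-searched kurtosis-maximizing rotation below stands in for a full ICA algorithm, and all sizes and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
S = rng.uniform(-1, 1, size=(n, 2))          # two independent (sub-Gaussian) sources
A = rng.standard_normal((2, 5))              # mixing into 5 sensor channels
X = S @ A + 0.01 * rng.standard_normal((n, 5))

# --- PCA stage: project onto the major eigenspace and whiten ---
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / n
evals, evecs = np.linalg.eigh(C)             # eigenvalues in ascending order
top = evecs[:, -2:] / np.sqrt(evals[-2:])    # top-2 eigenspace, whitening scaling
Z = Xc @ top                                 # whitened 2-D representation

# --- ICA stage: rotate whitened data to maximize non-Gaussianity (kurtosis) ---
def kurt(y):
    return np.mean(y**4) / np.mean(y**2) ** 2 - 3.0

def unmix(theta):
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return Z @ R

angles = np.arange(0, np.pi / 2, np.pi / 360)
best = max(angles, key=lambda th: sum(abs(kurt(y)) for y in unmix(th).T))
Y = unmix(best)                              # recovered sources, up to sign/permutation
```

For uniform (sub-Gaussian) sources, the rotation that maximizes the total kurtosis magnitude of the whitened components is the separating one, so each column of Y lines up with one of the true sources.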

6.
Phys Rev Lett ; 125(2): 028101, 2020 Jul 10.
Article in English | MEDLINE | ID: mdl-32701351

ABSTRACT

We propose an analytically tractable neural connectivity model with power-law distributed synaptic strengths. When threshold neurons with a biologically plausible number of incoming connections are considered, our model features a continuous transition to chaos and can reproduce biologically relevant low activity levels and scale-free avalanches, i.e., bursts of activity with power-law distributions of sizes and lifetimes. In contrast, the Gaussian counterpart exhibits a discontinuous transition to chaos and thus cannot be poised near the edge of chaos. We validate our predictions in simulations of networks of binary as well as leaky integrate-and-fire neurons. Our results suggest that a heavy-tailed synaptic distribution may form a weakly informative sparse-connectivity prior that can be useful in biological and artificial adaptive systems.
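A minimal sketch of the kind of network described above: binary threshold units with a fixed number K of incoming connections and heavy-tailed synaptic magnitudes. The Pareto tail exponent, threshold, and scaling below are illustrative assumptions, not the paper's parametrization.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 400, 100        # neurons; K incoming connections each (plausible in-degree)
alpha = 1.5            # assumed power-law tail exponent for synaptic magnitudes

# Sparse connectivity: each neuron receives K synapses with Pareto-distributed strengths
W = np.zeros((N, N))
for i in range(N):
    pre = rng.choice(N, size=K, replace=False)
    mag = 1.0 + rng.pareto(alpha, size=K)              # heavy-tailed magnitudes (x_m = 1)
    W[i, pre] = mag * rng.choice([-1.0, 1.0], size=K) / np.sqrt(K)

theta = 1.0                                            # firing threshold (illustrative)
s = (rng.random(N) < 0.5).astype(float)                # random initial binary state
for _ in range(200):
    s = (W @ s > theta).astype(float)                  # synchronous threshold update

activity = s.mean()                                    # fraction of active neurons
```

Swapping the Pareto magnitudes for Gaussian ones reproduces the contrast the abstract draws: only the heavy-tailed version has a few synapses that dominate their neuron's input.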


Subjects
Models, Neurological; Nerve Net/physiology; Synapses/physiology; Animals; Brain/anatomy & histology; Brain/physiology; Computer Simulation; Nerve Net/anatomy & histology; Neural Pathways/anatomy & histology; Neural Pathways/physiology; Neurons/cytology; Neurons/physiology; Nonlinear Dynamics
8.
Proc Natl Acad Sci U S A ; 114(36): 9517-9522, 2017 09 05.
Article in English | MEDLINE | ID: mdl-28827362

ABSTRACT

Spontaneous, synchronous bursting of neural populations is a widely observed phenomenon in nervous networks and is considered important for both functions and dysfunctions of the brain. However, how global synchrony across a large number of neurons emerges from an initially nonbursting network state is not fully understood. In this study, we develop a state-space reconstruction method combined with high-resolution recordings of cultured neurons. This method extracts deterministic signatures of upcoming global bursts in the "local" dynamics of individual neurons during nonbursting periods. We find that local information within a single-cell time series can match or even outperform the global mean-field activity in predicting future global bursts. Moreover, the intercell variability in burst predictability is found to reflect the network structure realized in the nonbursting periods. These findings suggest that deterministic local dynamics can predict seemingly stochastic global events in self-organized networks, implying potential applications of the present methodology to detecting locally concentrated early warnings of spontaneous seizure occurrence in the brain.
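State-space reconstruction from a single-cell time series rests on delay-coordinate embedding; a generic sketch follows (the embedding dimension and lag are arbitrary illustration values, and the toy signal stands in for a recorded trace):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay-coordinate vectors [x(t), x(t - tau), ..., x(t - (dim-1)*tau)],
    with the most recent sample in the first column."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n]
                            for k in range(dim)])

# Example: embed a scalar observable of a simple oscillator in 3 dimensions
t = np.linspace(0, 60, 3000)
x = np.sin(t) + 0.5 * np.sin(2.1 * t)   # toy "single-cell" time series
E = delay_embed(x, dim=3, tau=10)       # shape (2980, 3)
```

By the embedding theorems the method invokes, nearest neighbors of the current delay vector can be used to forecast upcoming values, which is the sense in which "local" single-cell dynamics carry predictive information.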


Subjects
Models, Neurological; Nerve Net/physiology; Neurons/physiology; Animals; Cells, Cultured; Cerebral Cortex/cytology; Electric Stimulation; Rats, Wistar; Signal-To-Noise Ratio
9.
PLoS Comput Biol ; 14(1): e1005926, 2018 01.
Article in English | MEDLINE | ID: mdl-29342146

ABSTRACT

During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, to perceptual processes. This closed-loop feedback is mediated by brainwide neural circuits, but how the presence of feedback signals impacts the dynamics and function of neurons is not well understood. Here we present a simple theory suggesting that closed-loop feedback between the brain, body and environment can modulate neural gain and, consequently, change endogenous neural fluctuations and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking, we show that negative feedback mediated by the whisking vibrissae can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this suppression provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. Moreover, this mechanism suggests a novel signal detection mechanism that selectively accentuates active, rather than passive, whisker touch signals. This mechanism is consistent with a predictive coding strategy that is sensitive to the consequences of motor actions rather than to the difference between the predicted and actual sensory input. We further support the theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour in a virtual swim simulator. We show, as predicted by this theory, that the degree to which each cell contributes to linking sensory and motor signals explains well how much its neural fluctuations are suppressed by closed-loop optomotor behaviour. More generally, we argue that our results demonstrate the dependence of neural fluctuations, across the brain, on closed-loop brain/body/environment interactions, strongly supporting the idea that brain function cannot be fully understood through open-loop approaches alone.


Subjects
Musculoskeletal Physiological Phenomena; Nervous System Physiological Phenomena; Neurons/physiology; Rodentia/physiology; Animals; Animals, Genetically Modified; Computer Simulation; Feedback; Membrane Potentials; Models, Neurological; Motor Skills; Normal Distribution; Reproducibility of Results; Signal Transduction; Signal-To-Noise Ratio; Touch; Vibrissae/physiology; Virtual Reality; Zebrafish
11.
J Neurophysiol ; 117(1): 4-17, 2017 01 01.
Article in English | MEDLINE | ID: mdl-27707809

ABSTRACT

Whisker trimming causes substantial reorganization of neuronal response properties in barrel cortex. However, little is known about experience-dependent rerouting of sensory processing following sensory deprivation. To address this, we performed in vivo intracellular recordings from layers 2/3 (L2/3), layer 4 (L4), layer 5 regular-spiking (L5RS), and L5 intrinsically bursting (L5IB) neurons and measured their multiwhisker receptive field at the level of spiking activity, membrane potential, and synaptic conductance before and after sensory deprivation. We used Chernoff information to quantify the "sensory information" contained in the firing patterns of cells in response to spared and deprived whisker stimulation. In the control condition, information for flanking-row and same-row whiskers decreased in the order L4, L2/3, L5IB, L5RS. However, after whisker-row deprivation, spared flanking-row whisker information was reordered to L4, L5RS, L5IB, L2/3. Sensory information from the trimmed whiskers was reduced and delayed in L2/3 and L5IB neurons, whereas sensory information from spared whiskers was increased and advanced in L4 and L5RS neurons. Sensory information from spared whiskers was increased in L5IB neurons without a latency change. L5RS cells exhibited the largest changes in sensory information content through an atypical plasticity combining a significant decrease in spontaneous activity and an increase in a short-latency excitatory conductance. NEW & NOTEWORTHY: Sensory cortical plasticity is usually quantified by changes in evoked firing rate. In this study we quantified plasticity by changes in sensory detection performance using Chernoff information and receiver operating characteristic analysis. We found that whisker deprivation causes a change in information flow within the cortical layers and that layer 5 regular-spiking cells, despite showing only a small potentiation of short-latency input, show the greatest increase in information content for the spared input partly by decreasing their spontaneous activity.


Subjects
Afferent Pathways/physiology; Neurons/physiology; Somatosensory Cortex/physiology; Vibrissae/innervation; Action Potentials/physiology; Animals; Biophysics; Electric Stimulation; Lysine/analogs & derivatives; Lysine/metabolism; Male; Patch-Clamp Techniques; Physical Stimulation; ROC Curve; Rats; Rats, Long-Evans; Reaction Time/physiology; Sensory Deprivation; Somatosensory Cortex/cytology
12.
Phys Rev Lett ; 119(25): 250601, 2017 Dec 22.
Article in English | MEDLINE | ID: mdl-29303344

ABSTRACT

In natural foraging, many organisms seem to perform two different types of motile search: directed search (taxis) and random search. The former is observed when the environment provides cues to guide motion towards a target. The latter involves no apparent memory or information processing and can be mathematically modeled by random walks. We show that both types of search can be generated by a common mechanism in which Lévy flights or Lévy walks emerge from a second-order gradient-based search with noisy observations. No explicit switching mechanism is required; instead, continuous transitions between the directed and random motions emerge depending on the Hessian matrix of the cost function. For a wide range of scenarios, the Lévy tail index is α=1, consistent with previous observations in foraging organisms. These results suggest that adopting a second-order optimization method can be a useful strategy to combine efficient features of directed and random search.
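The α = 1 tail can be illustrated by the mechanism the abstract describes, in a stylized 1-D form (not the paper's full search model): near a flat region of the cost function, a Newton step divides a noisy gradient estimate by a noisy curvature estimate, and a ratio of two independent Gaussians is Cauchy distributed, i.e., has tail index 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Near a flat region the true gradient and curvature are ~0, so the observed
# values are dominated by observation noise; the Newton step -g/h is then a
# ratio of independent Gaussians, which is standard Cauchy (tail index α = 1).
g = rng.standard_normal(n)   # noisy gradient observations
h = rng.standard_normal(n)   # noisy Hessian (curvature) observations
steps = -g / h               # second-order (Newton) update steps

# Heavy tail: the median step magnitude is moderate (≈ 1 for standard Cauchy),
# while the largest steps are orders of magnitude bigger.
med = np.median(np.abs(steps))
```

Away from the flat region, the true curvature dominates the noise, the ratio concentrates, and the same update produces small directed steps, which is how the continuous transition between taxis and Lévy-like search arises without a switch.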

13.
PLoS Comput Biol ; 11(11): e1004537, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26584045

ABSTRACT

Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interaction underlying cortex-wide dynamics that may be missed by conventional correlation-based analysis, demonstrating a critical role of time-series analysis in characterizing brain state. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness.


Subjects
Brain/physiology; Consciousness/physiology; Wakefulness/physiology; Algorithms; Animals; Computational Biology; Electroencephalography; Macaca
14.
Nat Commun ; 15(1): 647, 2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38245502

ABSTRACT

The hippocampal subfield CA3 is thought to function as an auto-associative network that stores experiences as memories. Information from these experiences arrives directly from the entorhinal cortex as well as indirectly through the dentate gyrus, which performs sparsification and decorrelation. The computational purpose for these dual input pathways has not been firmly established. We model CA3 as a Hopfield-like network that stores both dense, correlated encodings and sparse, decorrelated encodings. As more memories are stored, the former merge along shared features while the latter remain distinct. We verify our model's prediction in rat CA3 place cells, which exhibit more distinct tuning during theta phases with sparser activity. Finally, we find that neural networks trained in multitask learning benefit from a loss term that promotes both correlated and decorrelated representations. Thus, the complementary encodings we have found in CA3 can provide broad computational advantages for solving complex tasks.


Subjects
Hippocampus; Place Cells; Rats; Animals; Learning; Entorhinal Cortex; Neural Networks, Computer; Dentate Gyrus
15.
Sci Rep ; 14(1): 657, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38182692

ABSTRACT

Many modeling works aim to explain people's behaviors that violate classical economic theories. However, these models often do not take into full account the multi-stage nature of real-life problems and people's tendency to solve complicated problems sequentially. In this work, we propose a descriptive decision-making model for multi-stage problems with perceived post-decision information. In the model, decisions are chosen based on a quantity we call the 'anticipated surprise'. The reference point is determined by the expected value of the possible outcomes, which we assume changes dynamically during the mental simulation of a sequence of events. We illustrate how our formalism can help us understand prominent economic paradoxes and gambling behaviors that involve multi-stage or sequential planning. We also discuss how neuroscience findings, such as prediction error signals and introspective neuronal replay, and psychological theories such as affective forecasting relate to the features of our model. This provides hints for future experiments to investigate the role of these quantities in decision-making.

16.
Science ; 385(6716): 1459-1465, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39325885

ABSTRACT

Sleep is regulated by homeostatic processes, yet the biological basis of sleep pressure that accumulates during wakefulness, triggers sleep, and dissipates during sleep remains elusive. We explored a causal relationship between cellular synaptic strength and electroencephalography delta power indicating macro-level sleep pressure by developing a theoretical framework and a molecular tool to manipulate synaptic strength. The mathematical model predicted that increased synaptic strength promotes the neuronal "down state" and raises the delta power. Our molecular tool (synapse-targeted chemically induced translocation of Kalirin-7, SYNCit-K), which induces dendritic spine enlargement and synaptic potentiation through chemically induced translocation of protein Kalirin-7, demonstrated that synaptic potentiation of excitatory neurons in the prefrontal cortex (PFC) increases nonrapid eye movement sleep amounts and delta power. Thus, synaptic strength of PFC excitatory neurons dictates sleep pressure in mammals.


Subjects
Guanine Nucleotide Exchange Factors; Homeostasis; Prefrontal Cortex; Sleep; Synapses; Animals; Male; Mice; Delta Rhythm; Dendritic Spines/physiology; Guanine Nucleotide Exchange Factors/metabolism; Guanine Nucleotide Exchange Factors/genetics; Neurons/physiology; Prefrontal Cortex/physiology; Sleep/physiology; Synapses/physiology; Wakefulness/physiology; Protein Engineering
17.
Curr Opin Neurobiol ; 83: 102799, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37844426

ABSTRACT

Sleep is considered to play an essential role in memory reorganization. Despite its importance, classical theoretical models have overlooked several characteristics of sleep. Here, we review recent theoretical approaches investigating their roles in learning and discuss the possibility that non-rapid eye movement (NREM) sleep selectively consolidates memory and that rapid eye movement (REM) sleep reorganizes the representations of memories. We first review the possibility that slow waves during NREM sleep contribute to memory selection by using sequential firing patterns and the existence of up and down states. Second, we discuss the role of dreaming during REM sleep in developing neuronal representations. We finally discuss how to develop these points further, emphasizing the connections to experimental neuroscience and machine learning.


Subjects
Sleep, REM; Sleep; Sleep/physiology; Sleep, REM/physiology
18.
PNAS Nexus ; 2(1): pgac286, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36712943

ABSTRACT

Slow waves during non-rapid eye movement (NREM) sleep reflect the alternating up and down states of cortical neurons; global and local slow waves promote memory consolidation and forgetting, respectively. Furthermore, distinct spike-timing-dependent plasticity (STDP) operates in these up and down states. The contribution of different plasticity rules to neural information coding and memory reorganization remains unknown. Here, we show that the optimal synaptic plasticity for information maximization in a cortical neuron model provides a unified explanation for these phenomena. The model indicates that the optimal synaptic plasticity is biased toward depression as the baseline firing rate increases. This property explains the distinct STDP observed in the up and down states. Furthermore, it explains how global and local slow waves predominantly potentiate and depress synapses, respectively, if the background firing rate of excitatory neurons declines with the spatial scale of the waves, as the model predicts. The model provides a unifying account of the role of NREM sleep, bridging neural information coding, synaptic plasticity, and memory reorganization.
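The qualitative bias described above can be sketched with an illustrative STDP window whose depression amplitude grows with the baseline firing rate. The parametrization below, including the rate-dependence factor k, is hypothetical and is not the optimal rule derived in the paper; it only reproduces the stated trend.

```python
import numpy as np

def stdp_window(dt, baseline_rate, a_plus=1.0, tau=20.0, k=0.1):
    """Illustrative STDP window: potentiation for pre-before-post (dt > 0 ms),
    depression otherwise, with the depression amplitude growing with the
    baseline firing rate (the qualitative bias described in the abstract)."""
    a_minus = a_plus * (1.0 + k * baseline_rate)   # hypothetical rate dependence
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

dts = np.linspace(-100, 100, 201)                  # pre-post spike lags in ms
low = stdp_window(dts, baseline_rate=1.0)          # down-state-like, low rate
high = stdp_window(dts, baseline_rate=10.0)        # up-state-like, high rate
```

With this parametrization, the net weight change integrated over lags is more negative at the higher baseline rate, matching the depression bias that the model attributes to up states and to local slow waves.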

19.
Phys Rev E ; 108(5-1): 054410, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38115467

ABSTRACT

We present a Hopfield-like autoassociative network for memories representing examples of concepts. Each memory is encoded by two activity patterns with complementary properties. The first is dense and correlated across examples within concepts, and the second is sparse and exhibits no correlation among examples. The network stores each memory as a linear combination of its encodings. During retrieval, the network recovers sparse or dense patterns with a high or low activity threshold, respectively. As more memories are stored, the dense representation at low threshold shifts from examples to concepts, which are learned from accumulating common example features. Meanwhile, the sparse representation at high threshold maintains distinctions between examples due to the high capacity of sparse, decorrelated patterns. Thus, a single network can retrieve memories at both example and concept scales and perform heteroassociation between them. We obtain our results by deriving macroscopic mean-field equations that yield capacity formulas for sparse examples, dense examples, and dense concepts. We also perform simulations that verify our theoretical results and explicitly demonstrate the capabilities of the network.
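The basic autoassociative mechanism underlying such a network can be sketched with a classical Hopfield model storing dense ±1 patterns via the Hebbian outer-product rule (the paper's network additionally combines dense and sparse encodings per memory and uses an adjustable activity threshold; this sketch shows only storage and pattern-completion retrieval):

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 200, 5                                      # neurons, stored memories

patterns = rng.choice([-1.0, 1.0], size=(P, N))    # dense ±1 encodings
W = patterns.T @ patterns / N                      # Hebbian outer-product storage
np.fill_diagonal(W, 0.0)                           # no self-connections

def retrieve(cue, steps=20):
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)        # synchronous threshold update
    return s

# Corrupt a stored memory by flipping 10% of its units, then let the network clean it up
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1.0
out = retrieve(cue)
```

With P well below the classical capacity (~0.14 N for dense random patterns), the corrupted cue falls inside the stored memory's basin of attraction and retrieval completes the pattern.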

20.
Neural Comput ; 24(10): 2678-99, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22594828

ABSTRACT

Many cognitive processes rely on the ability of the brain to hold sequences of events in short-term memory. Recent studies have revealed that such memory can be read out from the transient dynamics of a network of neurons. However, the memory performance of such a network in buffering past information has been rigorously estimated only in networks of linear neurons. When signal gain is kept low, so that neurons operate primarily in the linear part of their response nonlinearity, the memory lifetime is bounded by the square root of the network size. In this work, I demonstrate that it is possible to achieve a memory lifetime almost proportional to the network size, "an extensive memory lifetime," when the nonlinearity of neurons is appropriately used. The analysis of neural activity revealed that nonlinear dynamics prevented the accumulation of noise by partially removing noise in each time step. With this error-correcting mechanism, I demonstrate that a memory lifetime of order N/logN can be achieved.
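The error-correcting role of the nonlinearity can be illustrated with a toy delay-line network (an illustrative sketch under simplifying assumptions, not the paper's analysis): a sign nonlinearity snaps the circulating state back to ±1 at each step, removing the noise injected at that step, whereas in the linear network the noise accumulates over time.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T, sigma = 64, 200, 0.2                  # neurons, time steps, per-step noise

signal = rng.choice([-1.0, 1.0], size=N)    # binary pattern to hold in memory
W = np.roll(np.eye(N), 1, axis=0)           # ring (delay-line) connectivity

# Linear network: injected noise accumulates as the pattern circulates
x_lin = signal.copy()
for _ in range(T):
    x_lin = W @ x_lin + sigma * rng.standard_normal(N)

# Nonlinear network: the sign nonlinearity restores the state to ±1 each step,
# discarding that step's noise (the error-correcting mechanism)
x_nl = signal.copy()
for _ in range(T):
    x_nl = np.sign(W @ x_nl + sigma * rng.standard_normal(N))

recalled = np.roll(signal, T)               # where the pattern should sit after T steps
```

After T steps the linear state carries noise of variance T·sigma², swamping the unit-amplitude signal, while the nonlinear state typically still matches the stored pattern exactly, illustrating why per-step error correction extends the memory lifetime.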


Subjects
Memory/physiology; Models, Neurological; Neural Networks, Computer; Nonlinear Dynamics; Computer Simulation; Humans