Results 1 - 17 of 17
1.
New J Phys ; 26(2): 023006, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38327877

ABSTRACT

In interacting dynamical systems, specific local interaction rules for system components give rise to diverse and complex global dynamics. Long dynamical cycles are a key feature of many natural interacting systems, especially in biology. Examples of dynamical cycles range from circadian rhythms regulating sleep to cell cycles regulating reproductive behavior. Despite the crucial role of cycles in nature, the properties of network structure that give rise to cycles still need to be better understood. Here, we use a Boolean interaction network model to study the relationships between network structure and cyclic dynamics. We identify particular structural motifs that support cycles, and other motifs that suppress them. More generally, we show that the presence of dynamical reflection symmetry in the interaction network enhances cyclic behavior. In simulating an artificial evolutionary process, we find that motifs that break reflection symmetry are discarded. We further show that dynamical reflection symmetries are over-represented in Boolean models of natural biological systems. Altogether, our results demonstrate a link between symmetry and functionality for interacting dynamical systems, and they provide evidence for symmetry's causal role in evolving dynamical functionality.
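The Boolean-network setting above can be illustrated with a minimal sketch. The 3-node inhibitory loop below is a hypothetical example, not a motif from the paper; it shows how a synchronous Boolean network is iterated until a state repeats, which yields the attractor's cycle length.

```python
from itertools import count

def find_attractor(update, state):
    """Iterate a synchronous Boolean network until a state repeats;
    return (transient_length, cycle_length)."""
    seen = {}
    for t in count():
        if state in seen:
            return seen[state], t - seen[state]
        seen[state] = t
        state = update(state)

# Hypothetical 3-node loop x0 -> x1 -> x2 -| x0, where the last link is
# inhibitory; each node copies (or negates) the state of its input.
def update(s):
    x0, x1, x2 = s
    return (1 - x2, x0, x1)

print(find_attractor(update, (0, 0, 0)))  # (0, 6): a length-6 cycle
```

A single negation in the loop is what sustains the oscillation; making all three links excitatory would collapse the dynamics onto fixed points.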

2.
Chaos ; 32(1): 011101, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35105129

ABSTRACT

Neural systems are well known for their ability to learn and store information as memories. Even more impressive is their ability to abstract these memories to create complex internal representations, enabling advanced functions such as the spatial manipulation of mental representations. While recurrent neural networks (RNNs) are capable of representing complex information, the exact mechanisms of how dynamical neural systems perform abstraction are still not well understood, thereby hindering the development of more advanced functions. Here, we train a 1000-neuron RNN, a reservoir computer (RC), to abstract a continuous dynamical attractor memory from isolated examples of dynamical attractor memories. Furthermore, we explain the abstraction mechanism with a new theory. By training the RC on isolated and shifted examples of either stable limit cycles or chaotic Lorenz attractors, the RC learns a continuum of attractors as quantified by an extra Lyapunov exponent equal to zero. We propose a theoretical mechanism of this abstraction by combining ideas from differentiable generalized synchronization and feedback dynamics. Our results quantify abstraction in simple neural systems, enabling us to design artificial RNNs for abstraction and leading us toward a neural basis of abstraction.
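A minimal sketch of the reservoir-computing setup described above. The reservoir size, sparsity, spectral radius, and the sine-wave teacher signal are all illustrative assumptions; the paper trains a 1000-neuron RC on shifted attractor examples, which this toy does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                   # reservoir size (toy scale)
# Sparse random recurrent weights, rescaled to spectral radius 0.9
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.05)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
W_in = rng.uniform(-0.5, 0.5, N)

def run_reservoir(u):
    """Drive the reservoir with the scalar input sequence u."""
    r, states = np.zeros(N), []
    for u_t in u:
        r = np.tanh(W @ r + W_in * u_t)
        states.append(r.copy())
    return np.array(states)

# Teacher: a simple limit cycle (a sine wave). Train a linear readout by
# ridge regression to map the reservoir state r(t) to the next input.
t = np.linspace(0, 40 * np.pi, 4000)
u = np.sin(t)
R = run_reservoir(u[:-1])
R, y = R[200:], u[201:]                   # discard the initial transient
w_out = np.linalg.solve(R.T @ R + 1e-6 * np.eye(N), R.T @ y)
err = np.max(np.abs(R @ w_out - y))
print(err < 0.1)                          # one-step prediction is accurate
```

Only the linear readout `w_out` is trained; the recurrent weights stay fixed, which is the defining trait of reservoir computing.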


Subject(s)
Learning; Nerve Net; Computers; Feedback; Neural Networks, Computer
3.
Chaos ; 30(2): 021101, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32113226

ABSTRACT

Whether listening to overlapping conversations in a crowded room or recording the simultaneous electrical activity of millions of neurons, the natural world abounds with sparse measurements of complex overlapping signals that arise from dynamical processes. While tools that separate mixed signals into linear sources have proven necessary and useful, the underlying equational forms of most natural signals are unknown and nonlinear. Hence, there is a need for a framework that is general enough to extract sources without knowledge of their generating equations and flexible enough to accommodate nonlinear, even chaotic, sources. Here, we provide such a framework, where the sources are chaotic trajectories from independently evolving dynamical systems. We consider the mixture signal as the sum of two chaotic trajectories and propose a supervised learning scheme that extracts the chaotic trajectories from their mixture. Specifically, we recruit a complex dynamical system as an intermediate processor that is constantly driven by the mixture. We then obtain the separated chaotic trajectories based on this intermediate system by training the proper output functions. To demonstrate the generalizability of this framework in silico, we employ a tank of water as the intermediate system and show its success in separating two-part mixtures of various chaotic trajectories. Finally, we relate the underlying mechanism of this method to the state-observer problem. This relation provides a quantitative theory that explains the performance of our method, and why separation is difficult when two source signals are trajectories from the same chaotic system.

4.
Chaos ; 27(7): 073115, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28764402

ABSTRACT

Synchronization of non-identical oscillators coupled through complex networks is an important example of collective behavior, and it is interesting to ask how the structural organization of network interactions influences this process. Several studies have explored and uncovered optimal topologies for synchronization by making purposeful alterations to a network. On the other hand, the connectivity patterns of many natural systems are often not static, but are rather modulated over time according to their dynamics. However, this co-evolution and the extent to which the dynamics of the individual units can shape the organization of the network itself are less well understood. Here, we study initially randomly connected but locally adaptive networks of Kuramoto oscillators. In particular, the system employs a co-evolutionary rewiring strategy that depends only on the instantaneous, pairwise phase differences of neighboring oscillators, and that conserves the total number of edges, allowing the effects of local reorganization to be isolated. We find that a simple rule, which preserves connections between more out-of-phase oscillators while rewiring connections between more in-phase oscillators, can cause initially disordered networks to organize into more structured topologies that support enhanced synchronization dynamics. We examine how this process unfolds over time, finding a dependence on the intrinsic frequencies of the oscillators, the global coupling, and the network density, in terms of how the adaptive mechanism reorganizes the network and influences the dynamics. Importantly, for large enough coupling and after sufficient adaptation, the resulting networks exhibit interesting characteristics, including degree-frequency and frequency-neighbor frequency correlations. These properties have previously been associated with optimal synchronization or explosive transitions in which the networks were constructed using global information. In contrast, by considering a time-dependent interplay between structure and dynamics, this work offers a mechanism through which emergent phenomena and organization can arise in complex systems utilizing local rules.
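A rough sketch of the adaptive Kuramoto scheme described above. The abstract specifies that in-phase edges are rewired and the edge count is conserved; where a freed edge reattaches is our own choice (here, the most out-of-phase unconnected pair), and all parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 40, 3.0                          # oscillators and global coupling
omega = rng.normal(0, 1, n)             # intrinsic frequencies
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1); A = A + A.T          # random undirected network
theta = rng.uniform(0, 2 * np.pi, n)

def step(theta, A, dt=0.01):
    """One Euler step of the Kuramoto dynamics on network A."""
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(1)
    return theta + dt * (omega + (k / n) * coupling)

def rewire(theta, A):
    """Cut the most in-phase connected pair; reconnect the most
    out-of-phase unconnected pair. The edge count is conserved."""
    phase_dist = np.abs(np.sin((theta[:, None] - theta[None, :]) / 2))
    iu = np.triu_indices_from(A, 1)
    on = [(i, j) for i, j in zip(*iu) if A[i, j]]
    off = [(i, j) for i, j in zip(*iu) if not A[i, j]]
    cut = min(on, key=lambda e: phase_dist[e])
    add = max(off, key=lambda e: phase_dist[e])
    A[cut] = A[cut[::-1]] = 0
    A[add] = A[add[::-1]] = 1
    return A

def order_parameter(theta):
    return abs(np.exp(1j * theta).mean())

edges_before = A.sum()
for t in range(20000):
    theta = step(theta, A)
    if t % 100 == 0:
        A = rewire(theta, A)
print(order_parameter(theta), A.sum() == edges_before)
```

The order parameter ranges from 0 (incoherent) to 1 (fully phase-locked) and is the standard way to track how much the adaptation has helped synchronization.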

5.
ArXiv ; 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38076517

ABSTRACT

Dynamics play a critical role in computation. The principled evolution of states over time enables both biological and artificial networks to represent and integrate information to make decisions. In the past few decades, significant multidisciplinary progress has been made in bridging the gap between how we understand biological versus artificial computation, including how insights gained from one can translate to the other. Research has revealed that neurobiology is a key determinant of brain network architecture, which gives rise to spatiotemporally constrained patterns of activity that underlie computation. Here, we discuss how neural systems use dynamics for computation, and claim that the biological constraints that shape brain networks may be leveraged to improve the implementation of artificial neural networks. To formalize this discussion, we consider a natural artificial analog of the brain that has been used extensively to model neural computation: the recurrent neural network (RNN). In both the brain and the RNN, we emphasize the common computational substrate atop which dynamics occur, the connectivity between neurons, and we explore the unique computational advantages offered by biophysical constraints such as resource efficiency, spatial embedding, and neurodevelopment.

6.
bioRxiv ; 2023 Aug 24.
Article in English | MEDLINE | ID: mdl-37662395

ABSTRACT

Network control theory (NCT) is a simple and powerful tool for studying how network topology informs and constrains dynamics. Compared to other structure-function coupling approaches, the strength of NCT lies in its capacity to predict the patterns of external control signals that may alter dynamics in a desired way. We have extensively developed and validated the application of NCT to the human structural connectome. Through these efforts, we have studied (i) how different aspects of connectome topology affect neural dynamics, (ii) whether NCT outputs cohere with empirical data on brain function and stimulation, and (iii) how NCT outputs vary across development and correlate with behavior and mental health symptoms. In this protocol, we introduce a framework for applying NCT to structural connectomes following two main pathways. Our primary pathway focuses on computing the control energy associated with transitioning between specific neural activity states. Our second pathway focuses on computing average controllability, which indexes nodes' general capacity to control dynamics. We also provide recommendations for comparing NCT outputs against null network models. Finally, we support this protocol with a Python-based software package called network control theory for python (nctpy).
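The protocol's second pathway computes average controllability. The authors' implementation lives in nctpy, whose API is not reproduced here; instead, the from-scratch sketch below takes the average controllability of node i as the trace of the controllability Gramian with node i as the sole control input. The normalization and truncation horizon are common conventions, not necessarily the protocol's exact choices.

```python
import numpy as np

def average_controllability(A, horizon=1000):
    """Average controllability of every node: the trace of the
    controllability Gramian with that node as the sole control input.
    A is rescaled by 1/(1 + sigma_max) so the dynamics are stable, a
    common convention in this literature."""
    A = A / (1 + np.linalg.svd(A, compute_uv=False)[0])
    ac = np.zeros(A.shape[0])
    P = np.eye(A.shape[0])
    for _ in range(horizon):        # truncated series: sum_t ||A^t e_i||^2
        ac += (P ** 2).sum(axis=0)  # column i adds the Gramian trace term
        P = A @ P
    return ac

# Hypothetical 3-node example: node 0 is a hub connected to nodes 1 and 2.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
ac = average_controllability(A)
print(ac[0] > ac[1])  # True: the hub is the strongest average controller
```

High average controllability means that impulses injected at that node spread widely through the network before decaying, which is why hubs tend to score highest.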

7.
PLoS One ; 17(9): e0257580, 2022.
Article in English | MEDLINE | ID: mdl-36121808

ABSTRACT

A fundamental challenge in neuroscience is to uncover the principles governing how the brain interacts with the external environment. However, assumptions about external stimuli fundamentally constrain current computational models. We show in silico that unknown external stimulation can produce errors in the estimated linear time-invariant dynamical system. To address this limitation, we propose an approach to retrieve the external (unknown) input parameters, and we demonstrate that system parameters estimated during periods of external input quiescence more accurately uncover the spatiotemporal profiles of external inputs during stimulation periods. Finally, we unveil the expected (and unexpected) sensory and task-related extra-cortical input profiles using functional magnetic resonance imaging data acquired from 96 subjects (Human Connectome Project) during resting-state and task scans. This dynamical systems model of the brain offers information on the structure and dimensionality of the BOLD signal's external drivers and shines a light on the likely external sources contributing to the BOLD signal's non-stationarity. Our findings show the role of exogenous inputs in BOLD dynamics and highlight the importance of accounting for external inputs to unravel the brain's time-varying functional dynamics.
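The core claim, that an unmodeled input biases least-squares estimates of a linear time-invariant system, can be demonstrated in a few lines. Everything below (dimensions, the sinusoidal drive, the noise level) is an illustrative assumption, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = 0.9 * np.linalg.qr(rng.normal(size=(n, n)))[0]   # stable dynamics
b = rng.normal(size=n)                               # unknown input direction

def simulate(T, driven):
    """Simulate x(t+1) = A x(t) + b u(t) + noise, with a sinusoidal
    drive u(t) when driven, and u(t) = 0 (quiescence) otherwise."""
    X = [rng.normal(size=n)]
    for t in range(T):
        u = np.sin(0.3 * t) if driven else 0.0
        X.append(A @ X[-1] + b * u + 0.01 * rng.normal(size=n))
    return np.array(X)

def fit_A(X):
    """Least-squares estimate of A from x(t+1) ~ A x(t), ignoring inputs."""
    return np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

err_driven = np.linalg.norm(fit_A(simulate(2000, True)) - A)
err_quiet = np.linalg.norm(fit_A(simulate(2000, False)) - A)
print(err_quiet < err_driven)  # the unmodeled drive biases the estimate
```

Fitting during quiescence recovers A from noise-excited dynamics alone, which is the intuition behind estimating system parameters outside stimulation periods.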


Subject(s)
Connectome; Brain/diagnostic imaging; Brain/physiology; Humans; Magnetic Resonance Imaging/methods
8.
Phys Rev E ; 106(1-1): 014401, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35974521

ABSTRACT

Signal propagation along the structural connectome of the brain induces changes in the patterns of activity. These activity patterns define global brain states and contain information in accordance with their expected probability of occurrence. Being the physical substrate upon which information propagates, the structural connectome, in conjunction with the dynamics, determines the set of possible brain states and constrains the transition between accessible states. Yet, precisely how these structural constraints on state transitions relate to their information content remains unexplored. To address this gap in knowledge, we defined the information content as a function of the activation distribution, where statistically rare values of activation correspond to high information content. With this numerical definition in hand, we studied the spatiotemporal distribution of information content in functional magnetic resonance imaging (fMRI) data from the Human Connectome Project during different tasks, and report four key findings. First, information content strongly depends on cognitive context; its absolute level and spatial distribution depend on the cognitive task. Second, while information content shows similarities to other measures of brain activity, it is distinct from both Neurosynth maps and task contrast maps generated by a general linear model applied to the fMRI data. Third, the brain's structural wiring constrains the cost to control its state, where the cost to transition into high information content states is larger than that to transition into low information content states. Finally, all state transitions, especially those to high information content states, are less costly than expected from random network null models, thereby indicating the brain's marked efficiency. Taken together, our findings establish an explanatory link between the information contained in a brain state and the energetic cost of attaining that state, thereby laying important groundwork for our understanding of large-scale cognitive computations.
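The definition above, where statistically rare activation values carry high information content, is the classic surprisal -log p(x). One simple instantiation, using a Gaussian fit as the density estimate (the paper's exact estimator may differ):

```python
import numpy as np

def information_content(x):
    """Surprisal -log p(x) of each sample under a Gaussian fit to the
    series: statistically rare activation values score high."""
    mu, sigma = x.mean(), x.std()
    log_p = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return -log_p          # in nats

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 1000)       # stand-in for one region's activity series
ic = information_content(x)
# The most extreme sample is also the most informative one.
print(np.argmax(ic) == np.argmax(np.abs(x - x.mean())))  # True
```

Under this Gaussian model, surprisal grows quadratically with distance from the mean, so "high information content states" are simply the rare, extreme activation patterns.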

9.
Sci Adv ; 8(45): eabn2293, 2022 Nov 11.
Article in English | MEDLINE | ID: mdl-36351015

ABSTRACT

Network control theory is increasingly used to profile the brain's energy landscape via simulations of neural dynamics. This approach estimates the control energy required to simulate the activation of brain circuits based on the structural connectome measured using diffusion magnetic resonance imaging, thereby quantifying those circuits' energetic efficiency. The biological basis of control energy, however, remains unknown, hampering its further application. To fill this gap, we investigate temporal lobe epilepsy as a lesion model and show that patients require higher control energy to activate the limbic network than healthy volunteers, especially ipsilateral to the seizure focus. The energetic imbalance between ipsilateral and contralateral temporolimbic regions is tracked by asymmetric patterns of glucose metabolism measured using positron emission tomography, which, in turn, may be selectively explained by asymmetric gray matter loss as evidenced in the hippocampus. Our investigation provides the first theoretical framework unifying gray matter integrity, metabolism, and energetic generation of neural dynamics.

10.
Sci Adv ; 8(50): eadd2185, 2022 12 14.
Article in English | MEDLINE | ID: mdl-36516263

ABSTRACT

Cortical variations in cytoarchitecture form a sensory-fugal axis that shapes regional profiles of extrinsic connectivity and is thought to guide signal propagation and integration across the cortical hierarchy. While neuroimaging work has shown that this axis constrains local properties of the human connectome, it remains unclear whether it also shapes the asymmetric signaling that arises from higher-order topology. Here, we used network control theory to examine the amount of energy required to propagate dynamics across the sensory-fugal axis. Our results revealed an asymmetry in this energy, indicating that bottom-up transitions were easier to complete compared to top-down. Supporting analyses demonstrated that asymmetries were underpinned by a connectome topology that is wired to support efficient bottom-up signaling. Lastly, we found that asymmetries correlated with differences in communicability and intrinsic neuronal time scales and lessened throughout youth. Our results show that cortical variation in cytoarchitecture may guide the formation of macroscopic connectome topology.


Subject(s)
Connectome; Adolescent; Humans; Brain/diagnostic imaging; Brain/physiology; Neuroimaging; Neurons; Magnetic Resonance Imaging/methods
11.
J Neural Eng ; 17(5): 056045, 2020 11 04.
Article in English | MEDLINE | ID: mdl-33036007

ABSTRACT

OBJECTIVE: Many neural systems display spontaneous, spatiotemporal patterns of neural activity that are crucial for information processing. While these cascading patterns presumably arise from the underlying network of synaptic connections between neurons, the precise contribution of the network's local and global connectivity to these patterns and information processing remains largely unknown. APPROACH: Here, we demonstrate how network structure supports information processing through network dynamics in empirical and simulated spiking neurons using mathematical tools from linear systems theory, network control theory, and information theory. MAIN RESULTS: In particular, we show that activity, and the information that it contains, travels through cycles in real and simulated networks. SIGNIFICANCE: Broadly, our results demonstrate how cascading neural networks could contribute to cognitive faculties that require lasting activation of neuronal patterns, such as working memory or attention.


Asunto(s)
Redes Neurales de la Computación , Neuronas , Potenciales de Acción , Modelos Neurológicos , Red Nerviosa
12.
Netw Neurosci ; 4(4): 1091-1121, 2020.
Article in English | MEDLINE | ID: mdl-33195950

ABSTRACT

The human brain displays rich communication dynamics that are thought to be particularly well-reflected in its marked community structure. Yet, the precise relationship between community structure in structural brain networks and the communication dynamics that can emerge therefrom is not well understood. In addition to offering insight into the structure-function relationship of networked systems, such an understanding is a critical step toward the ability to manipulate the brain's large-scale dynamical activity in a targeted manner. We investigate the role of community structure in the controllability of structural brain networks. At the region level, we find that certain network measures of community structure are sometimes statistically correlated with measures of linear controllability. However, we then demonstrate that this relationship depends on the distribution of network edge weights. We highlight the complexity of the relationship between community structure and controllability by performing numerical simulations using canonical graph models with varying mesoscale architectures and edge weight distributions. Finally, we demonstrate that weighted subgraph centrality, a spectral measure that captures higher-order graph architecture, is a stronger and more consistent predictor of controllability. Our study contributes to an understanding of how the brain's diverse mesoscale structure supports transient communication dynamics.
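Weighted subgraph centrality, the predictor highlighted above, is the diagonal of the matrix exponential of the adjacency matrix: closed walks of length k are discounted by 1/k!, which is how it folds higher-order structure into one per-node score. A small sketch via the eigendecomposition (valid for symmetric matrices); the star graph is a hypothetical example, not one of the paper's networks.

```python
import numpy as np

def subgraph_centrality(A):
    """diag(expm(A)) for a symmetric (weighted) adjacency matrix,
    computed from its eigendecomposition: entry i sums the weighted
    closed walks starting and ending at node i, discounted by 1/k!."""
    lam, V = np.linalg.eigh(A)
    return (V ** 2) @ np.exp(lam)

# Hypothetical star graph on 4 nodes: node 0 is the hub.
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1.0
sc = subgraph_centrality(A)
print(sc[0] > sc[1:].max())  # True: the hub joins the most closed walks
```

Because it weights walks of all lengths, this score can separate nodes that plain degree treats as identical.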

13.
Netw Neurosci ; 4(4): 1122-1159, 2020.
Article in English | MEDLINE | ID: mdl-33195951

ABSTRACT

Recent advances in computational models of signal propagation and routing in the human brain have underscored the critical role of white-matter structure. A complementary approach has utilized the framework of network control theory to better understand how white matter constrains the manner in which a region or set of regions can direct or control the activity of other regions. Despite the potential for both of these approaches to enhance our understanding of the role of network structure in brain function, little work has sought to understand the relations between them. Here, we seek to explicitly bridge computational models of communication and principles of network control in a conceptual review of the current literature. By drawing comparisons between communication and control models in terms of the level of abstraction, the dynamical complexity, the dependence on network attributes, and the interplay of multiple spatiotemporal scales, we highlight the convergence of and distinctions between the two frameworks. Based on the understanding of the intertwined nature of communication and control in human brain networks, this work provides an integrative perspective for the field and outlines exciting directions for future work.

14.
J Neural Eng ; 17(2): 026031, 2020 04 09.
Article in English | MEDLINE | ID: mdl-31968320

ABSTRACT

OBJECTIVE: Predicting how the brain can be driven to specific states by means of internal or external control requires a fundamental understanding of the relationship between neural connectivity and activity. Network control theory is a powerful tool from the physical and engineering sciences that can provide insights regarding that relationship; it formalizes the study of how the dynamics of a complex system can arise from its underlying structure of interconnected units. APPROACH: Given the recent use of network control theory in neuroscience, it is now timely to offer a practical guide to methodological considerations in the controllability of structural brain networks. Here we provide a systematic overview of the framework, examine the impact of modeling choices on frequently studied control metrics, and suggest potentially useful theoretical extensions. We ground our discussions, numerical demonstrations, and theoretical advances in a dataset of high-resolution diffusion imaging with 730 diffusion directions acquired over approximately 1 h of scanning from ten healthy young adults. MAIN RESULTS: Following a didactic introduction of the theory, we probe how a selection of modeling choices affects four common statistics: average controllability, modal controllability, minimum control energy, and optimal control energy. Next, we extend the current state-of-the-art in two ways: first, by developing an alternative measure of structural connectivity that accounts for radial propagation of activity through abutting tissue, and second, by defining a complementary metric quantifying the complexity of the energy landscape of a system. We close with specific modeling recommendations and a discussion of methodological constraints. SIGNIFICANCE: Our hope is that this accessible account will inspire the neuroimaging community to more fully exploit the potential of network control theory in tackling pressing questions in cognitive, developmental, and clinical neuroscience.
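Of the four statistics examined above, minimum control energy has a standard closed form in discrete time: E = dᵀW⁻¹d, with W the finite-horizon controllability Gramian and d the gap between the target state and the free evolution of the initial state. The sketch below uses that textbook formula; it is not the paper's code and omits the connectome-specific normalizations discussed there.

```python
import numpy as np

def min_control_energy(A, B, x0, xf, T):
    """Minimum energy to steer x(t+1) = A x(t) + B u(t) from x0 to xf in
    T steps: E = d^T W^{-1} d, with W the T-step controllability Gramian
    and d = xf - (A to the power T) x0."""
    n = A.shape[0]
    W = np.zeros((n, n))
    P = np.eye(n)
    for _ in range(T):
        W += P @ B @ B.T @ P.T   # accumulate A^k B B^T (A^T)^k
        P = A @ P                # after the loop, P = A to the power T
    d = xf - P @ x0
    return d @ np.linalg.solve(W, d)

# Sanity check: with A = 0 and full control (B = I), reaching xf costs
# exactly ||xf||^2 regardless of the horizon.
E = min_control_energy(np.zeros((2, 2)), np.eye(2),
                       np.zeros(2), np.array([3.0, 4.0]), 3)
print(E)  # 25.0
```

With a fixed control set B, a longer horizon never costs more energy, since any shorter input sequence can be padded with zeros.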


Asunto(s)
Encéfalo , Encéfalo/diagnóstico por imagen , Humanos , Adulto Joven
15.
Commun Biol ; 3(1): 261, 2020 05 22.
Article in English | MEDLINE | ID: mdl-32444827

ABSTRACT

A diverse set of white matter connections supports seamless transitions between cognitive states. However, it remains unclear how these connections guide the temporal progression of large-scale brain activity patterns in different cognitive states. Here, we analyze the brain's trajectories across a set of single time point activity patterns from functional magnetic resonance imaging data acquired during the resting state and an n-back working memory task. We find that specific temporal sequences of brain activity are modulated by cognitive load, associated with age, and related to task performance. Using diffusion-weighted imaging acquired from the same subjects, we apply tools from network control theory to show that linear spread of activity along white matter connections constrains the probabilities of these sequences at rest, while stimulus-driven visual inputs explain the sequences observed during the n-back task. Overall, these results elucidate the structural underpinnings of cognitively and developmentally relevant spatiotemporal brain dynamics.


Subject(s)
Brain/physiology; Cognition/physiology; Magnetic Resonance Imaging/methods; Neural Pathways; Rest/physiology; White Matter/chemistry; Adolescent; Adult; Brain Mapping; Child; Female; Humans; Male; Neuropsychological Tests; White Matter/physiology; Young Adult
16.
Elife ; 9, 2020 03 27.
Article in English | MEDLINE | ID: mdl-32216874

ABSTRACT

Executive function develops during adolescence, yet it remains unknown how structural brain networks mature to facilitate activation of the fronto-parietal system, which is critical for executive function. In a sample of 946 human youths (ages 8-23y) who completed diffusion imaging, we capitalized upon recent advances in linear dynamical network control theory to calculate the energetic cost necessary to activate the fronto-parietal system through the control of multiple brain regions given existing structural network topology. We found that the energy required to activate the fronto-parietal system declined with development, and the pattern of regional energetic cost predicts unseen individuals' brain maturity. Finally, energetic requirements of the cingulate cortex were negatively correlated with executive performance, and partially mediated the development of executive performance with age. Our results reveal a mechanism by which structural networks develop during adolescence to reduce the theoretical energetic costs of transitions to activation states necessary for executive function.


Adolescents are known for taking risks, from driving too fast to experimenting with drugs and alcohol. Such behaviors tend to decrease as individuals move into adulthood. Most people in their mid-twenties have greater self-control than they did as teenagers. They are also often better at planning, sustaining attention, and inhibiting impulsive behaviors. These skills, which are known as executive functions, develop over the course of adolescence. Executive functions rely upon a series of brain regions distributed across the frontal lobe and the lobe that sits just behind it, the parietal lobe. Fiber tracts connect these regions to form a fronto-parietal network. These fiber tracts are also referred to as white matter due to the whitish fatty material that surrounds and insulates them. Cui et al. now show that changes in white matter networks have implications for teen behavior. Almost 950 healthy young people aged between 8 and 23 years underwent a type of brain scan called diffusion-weighted imaging that visualizes white matter. The scans revealed that white matter networks in the frontal and parietal lobes mature over adolescence. This makes it easier for individuals to activate their fronto-parietal networks by decreasing the amount of energy required. Cui et al. show that a computer model can predict the maturity of a person's brain based on the energy needed to activate their fronto-parietal networks. These changes help explain why executive functions improve during adolescence. This in turn explains why behaviors such as risk-taking tend to decrease with age. That said, adults with various psychiatric disorders, such as ADHD and psychosis, often show impaired executive functions. In the future, it may be possible to reduce these impairments by applying magnetic fields to the scalp to reduce the activity of specific brain regions. The techniques used in the current study could help reveal which brain regions to target with this approach.


Asunto(s)
Mapeo Encefálico , Encéfalo/fisiología , Función Ejecutiva/fisiología , Vías Nerviosas/fisiología , Adolescente , Mapeo Encefálico/métodos , Niño , Imagen de Difusión por Resonancia Magnética/métodos , Femenino , Humanos , Imagen por Resonancia Magnética/métodos , Masculino , Adulto Joven
17.
Nat Phys ; 14: 91-98, 2018.
Article in English | MEDLINE | ID: mdl-29422941

ABSTRACT

Networked systems display complex patterns of interactions between components. In physical networks, these interactions often occur along structural connections that link components in a hard-wired connection topology, supporting a variety of system-wide dynamical behaviors such as synchronization. While descriptions of these behaviors are important, they are only a first step towards understanding and harnessing the relationship between network topology and system behavior. Here, we use linear network control theory to derive accurate closed-form expressions that relate the connectivity of a subset of structural connections (those linking driver nodes to non-driver nodes) to the minimum energy required to control networked systems. To illustrate the utility of the mathematics, we apply this approach to high-resolution connectomes recently reconstructed from Drosophila, mouse, and human brains. We use these principles to suggest an advantage of the human brain in supporting diverse network dynamics with small energetic costs while remaining robust to perturbations, and to perform clinically accessible targeted manipulation of the brain's control performance by removing single edges in the network. Generally, our results ground the expectation of a control system's behavior in its network architecture, and directly inspire new directions in network analysis and design via distributed control.
