Results 1 - 20 of 41
1.
Nat Commun ; 14(1): 6846, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37891167

ABSTRACT

The human brain displays a rich repertoire of states that emerge from the microscopic interactions of cortical and subcortical neurons. Difficulties inherent within large-scale simultaneous neuronal recording limit our ability to link biophysical processes at the microscale to emergent macroscopic brain states. Here we introduce a microscale biophysical network model of layer-5 pyramidal neurons that display graded coarse-sampled dynamics matching those observed in macroscale electrophysiological recordings from macaques and humans. We invert our model to identify the neuronal spike and burst dynamics that differentiate unconscious, dreaming, and awake arousal states and provide insights into their functional signatures. We further show that neuromodulatory arousal can mediate different modes of neuronal dynamics around a low-dimensional energy landscape, which in turn changes the response of the model to external stimuli. Our results highlight the promise of multiscale modelling to bridge theories of consciousness across spatiotemporal scales.


Subject(s)
Brain, Neurons, Animals, Humans, Brain/physiology, Neurons/physiology, Consciousness/physiology, Pyramidal Cells, Arousal, Macaca
2.
Proc Natl Acad Sci U S A ; 120(37): e2303332120, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37669393

ABSTRACT

Synchronization phenomena on networks have attracted much attention in studies of neural, social, economic, and biological systems, yet we still lack a systematic understanding of how relative synchronizability relates to underlying network structure. Indeed, this question is of central importance to the key theme of how dynamics on networks relate to their structure more generally. We present an analytic technique to directly measure the relative synchronizability of noise-driven time-series processes on networks, in terms of the directed network structure. We consider both discrete-time autoregressive processes and continuous-time Ornstein-Uhlenbeck dynamics on networks, which can represent linearizations of nonlinear systems. Our technique builds on computation of the network covariance matrix in the space orthogonal to the synchronized state, enabling it to be more general than previous work in not requiring either symmetric (undirected) or diagonalizable connectivity matrices and allowing arbitrary self-link weights. More importantly, our approach quantifies the relative synchronization specifically in terms of the contribution of process motif (walk) structures. We demonstrate that in general the relative abundance of process motifs with convergent directed walks (including feedback and feedforward loops) hinders synchronizability. We also reveal subtle differences between the motifs involved for discrete or continuous-time dynamics. Our insights analytically explain several known general results regarding synchronizability of networks, including that small-world and regular networks are less synchronizable than random networks.
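
The covariance computation underlying this kind of analysis can be sketched numerically. Below is a minimal illustration (numpy only) for a hypothetical directed ring network; the ring topology, coupling weight, and identity noise covariance are arbitrary choices for demonstration, not the paper's setup:

```python
import numpy as np

# Directed ring of n nodes; each node is driven by its predecessor with weight g.
n, g = 6, 0.5
C = g * np.roll(np.eye(n), 1, axis=1)

# Ornstein-Uhlenbeck dynamics dx = -(I - C) x dt + dW have a stationary
# covariance S solving the Lyapunov equation A S + S A.T = -Q with A = -(I - C).
A = -(np.eye(n) - C)
Q = np.eye(n)
# Solve by vectorisation: (A (x) I + I (x) A) vec(S) = -vec(Q)  (row-major vec).
K = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
S = np.linalg.solve(K, -Q.flatten()).reshape(n, n)

# Project onto the space orthogonal to the fully synchronised (uniform) state;
# smaller residual variance there means the network is more synchronisable.
P = np.eye(n) - np.ones((n, n)) / n
deviation_variance = np.trace(P @ S @ P) / (n - 1)
```

The projection step is the key idea: synchronizability is judged by how much variance remains once the common synchronized mode is removed.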

3.
Nat Comput Sci ; 3(10): 883-893, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38177751

ABSTRACT

Scientists have developed hundreds of techniques to measure the interactions between pairs of processes in complex systems, but these computational methods-from contemporaneous correlation coefficients to causal inference methods-define and formulate interactions differently, using distinct quantitative theories that remain largely disconnected. Here we introduce a large assembled library of 237 statistics of pairwise interactions, and assess their behavior on 1,053 multivariate time series from a wide range of real-world and model-generated systems. Our analysis highlights commonalities between disparate mathematical formulations of interactions, providing a unified picture of a rich interdisciplinary literature. Using three real-world case studies, we then show that simultaneously leveraging diverse methods can uncover those most suitable for addressing a given problem, facilitating interpretable understanding of the quantitative formulation of pairwise dependencies that drive successful performance. Our results and accompanying software enable comprehensive analysis of time-series interactions by drawing on decades of diverse methodological contributions.
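
As a flavour of what a battery of pairwise interaction statistics looks like in practice, here is a small hand-rolled sketch applying three common measures to every pair of processes; the paper's library is of course far larger, and the statistics and toy data below are illustrative choices:

```python
import numpy as np

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    # rank-transform each series, then take Pearson correlation of the ranks
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return pearson(rx, ry)

def max_lagged_corr(x, y, max_lag=5):
    # strongest absolute correlation between x and a lagged copy of y
    return max(abs(pearson(x[:-l], y[l:])) for l in range(1, max_lag + 1))

statistics = {"pearson": pearson, "spearman": spearman, "lagcorr": max_lagged_corr}

rng = np.random.default_rng(0)
z1 = rng.standard_normal(500)
data = np.vstack([z1,
                  0.7 * z1 + 0.3 * rng.standard_normal(500),  # coupled to process 0
                  rng.standard_normal(500)])                   # independent

# one (3 x 3) matrix of pairwise scores per statistic
matrices = {name: np.array([[f(a, b) for b in data] for a in data])
            for name, f in statistics.items()}
```

Comparing the resulting matrices across many such statistics, as the paper does at scale, is what reveals which families of measures behave alike.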

4.
Elife ; 11, 2022 03 14.
Article in English | MEDLINE | ID: mdl-35286256

ABSTRACT

The brains of many organisms are capable of complicated distributed computation underpinned by a highly advanced information processing capacity. Although substantial progress has been made towards characterising the information flow component of this capacity in mature brains, there is a distinct lack of work characterising its emergence during neural development. This lack of progress has been largely driven by the lack of effective estimators of information processing operations for spiking data. Here, we leverage recent advances in this estimation task in order to quantify the changes in transfer entropy during development. We do so by studying the changes in the intrinsic dynamics of the spontaneous activity of developing dissociated neural cell cultures. We find that the quantity of information flowing across these networks undergoes a dramatic increase across development. Moreover, the spatial structure of these flows exhibits a tendency to lock-in at the point when they arise. We also characterise the flow of information during the crucial periods of population bursts. We find that, during these bursts, nodes tend to undertake specialised computational roles as either transmitters, mediators, or receivers of information, with these roles tending to align with their average spike ordering. Further, we find that these roles are regularly locked-in when the information flows are established. Finally, we compare these results to information flows in a model network developing according to a spike-timing-dependent plasticity learning rule. Similar temporal patterns in the development of information flows were observed in these networks, hinting at the broader generality of these phenomena.
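
The central quantity here, transfer entropy between spike trains, can be illustrated with a crude time-binned plug-in estimator (the kind of discrete estimator that the more recent advances mentioned above improve upon). The coupling model below is an invented toy, not the cultured networks analysed in the paper:

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in (binned) transfer entropy in bits, history length 1."""
    n = len(target) - 1
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))
    pairs = Counter(zip(target[:-1], source[:-1]))
    trans = Counter(zip(target[1:], target[:-1]))
    hist = Counter(target[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs[(y0, x0)]          # p(y1 | y0, x0)
        p_self = trans[(y1, y0)] / hist[y0]   # p(y1 | y0)
        te += (c / n) * np.log2(p_full / p_self)
    return te

# Invented toy coupling: the target copies the source's previous bit 90% of the time.
rng = np.random.default_rng(0)
n = 10_000
x = rng.integers(0, 2, n)
flip = rng.random(n) < 0.1
y = np.empty(n, dtype=int)
y[0] = 0
y[1:] = np.where(flip[1:], rng.integers(0, 2, n - 1), x[:-1])

te_xy = transfer_entropy(x.tolist(), y.tolist())
te_yx = transfer_entropy(y.tolist(), x.tolist())
```

With this coupling, information flows strongly in the X-to-Y direction and only a small estimation bias appears in the reverse direction.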


Subject(s)
Neurological Models, Neuronal Plasticity, Action Potentials, Computer Neural Networks, Neurons
5.
Brain Inform ; 8(1): 26, 2021 Dec 02.
Article in English | MEDLINE | ID: mdl-34859330

ABSTRACT

Here, we combine network neuroscience and machine learning to reveal connections between the brain's network structure and the emerging network structure of an artificial neural network. Specifically, we train a shallow, feedforward neural network to classify hand-written digits and then use a combination of systems neuroscience and information-theoretic tools to perform 'virtual brain analytics' on the resultant edge weights and activity patterns of each node. We identify three distinct phases of network reconfiguration across learning, each of which is characterized by unique topological and information-theoretic signatures. Each phase involves aligning the connections of the neural network with patterns of information contained in the input dataset or preceding layers (as relevant). We also observe a process of low-dimensional category separation in the network as a function of learning. Our results offer a systems-level perspective of how artificial neural networks function, in terms of multi-stage reorganization of edge weights and activity patterns to effectively exploit the information content of input data during edge-weight training, while simultaneously enriching our understanding of the methods used by systems neuroscience.

6.
Sci Rep ; 11(1): 13047, 2021 06 22.
Article in English | MEDLINE | ID: mdl-34158521

ABSTRACT

Neuromorphic systems composed of self-assembled nanowires exhibit a range of neural-like dynamics arising from the interplay of their synapse-like electrical junctions and their complex network topology. Additionally, various information processing tasks have been demonstrated with neuromorphic nanowire networks. Here, we investigate how these unique systems process information through information-theoretic metrics. In particular, transfer entropy (TE) and active information storage (AIS) are employed to investigate dynamical information flow and short-term memory in nanowire networks. In addition to finding that the topologically central parts of networks contribute the most to the information flow, our results also reveal that TE and AIS are maximized when the network transitions from a quiescent to an active state. The performance of neuromorphic networks in memory and learning tasks is demonstrated to depend on their internal dynamical states as well as their topological structure. Optimal performance is found when these networks are pre-initialised to the transition state where TE and AIS are maximal. Furthermore, an optimal range of information processing resources (i.e., connectivity density) is identified for performance. Overall, our results demonstrate that information dynamics is a valuable tool to study and benchmark neuromorphic systems.

7.
Netw Neurosci ; 5(2): 373-404, 2021.
Article in English | MEDLINE | ID: mdl-34189370

ABSTRACT

Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting properties of these networks requires inferred network models to reflect key underlying structural features. However, even a few spurious links can severely distort network measures, posing a challenge for functional connectomes. We study the extent to which micro- and macroscopic properties of underlying networks can be inferred by algorithms based on mutual information and bivariate/multivariate transfer entropy. The validation is performed on two macaque connectomes and on synthetic networks with various topologies (regular lattice, small-world, random, scale-free, modular). Simulations are based on a neural mass model and on autoregressive dynamics (employing Gaussian estimators for direct comparison to functional connectivity and Granger causality). We find that multivariate transfer entropy captures key properties of all network structures for longer time series. Bivariate methods can achieve higher recall (sensitivity) for shorter time series but are unable to control false positives (lower specificity) as available data increases. This leads to overestimated clustering, small-world, and rich-club coefficients, underestimated shortest path lengths and hub centrality, and fattened degree distribution tails. Caution should therefore be used when interpreting network properties of functional connectomes obtained via correlation or pairwise statistical dependence measures, rather than more holistic (yet data-hungry) multivariate models.

8.
PLoS Comput Biol ; 17(4): e1008054, 2021 04.
Article in English | MEDLINE | ID: mdl-33872296

ABSTRACT

Transfer entropy (TE) is a widely used measure of directed information flows in a number of domains including neuroscience. Many real-world time series for which we are interested in information flows come in the form of (near) instantaneous events occurring over time. Examples include the spiking of biological neurons, trades on stock markets and posts to social media, amongst myriad other systems involving events in continuous time throughout the natural and social sciences. However, there exist severe limitations to the current approach to TE estimation on such event-based data via discretising the time series into time bins: it is not consistent, has high bias, converges slowly and cannot simultaneously capture relationships that occur with very fine time precision as well as those that occur over long time intervals. Building on recent work which derived a theoretical framework for TE in continuous time, we present an estimation framework for TE on event-based data and develop a k-nearest-neighbours estimator within this framework. This estimator is provably consistent, has favourable bias properties and converges orders of magnitude more quickly than the current state-of-the-art in discrete-time estimation on synthetic examples. We demonstrate failures of the traditionally-used source-time-shift method for null surrogate generation. In order to overcome these failures, we develop a local permutation scheme for generating surrogate time series conforming to the appropriate null hypothesis in order to test for the statistical significance of the TE and, as such, test for the conditional independence between the history of one point process and the updates of another. Our approach is shown to be capable of correctly rejecting or accepting the null hypothesis of conditional independence even in the presence of strong pairwise time-directed correlations. This capacity to accurately test for conditional independence is further demonstrated on models of a spiking neural circuit inspired by the pyloric circuit of the crustacean stomatogastric ganglion, succeeding where previous related estimators have failed.
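
A simplified sketch of the local-permutation idea: shuffle source events only within short windows, preserving the slow rate profile while destroying fine-grained source-target timing. This is an illustrative caricature under invented data; the paper's actual scheme permutes history embeddings rather than raw event times:

```python
import numpy as np

def local_permutation(event_times, window, rng):
    """Redraw each event time uniformly within its own window, preserving the
    per-window event counts (the slow rate profile) while destroying fine
    source-target timing relationships."""
    surrogate = np.asarray(event_times, dtype=float).copy()
    bins = np.floor(surrogate / window).astype(int)
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        # replace the times falling in window b with uniform redraws inside it
        surrogate[idx] = (b + rng.random(len(idx))) * window
    return np.sort(surrogate)

rng = np.random.default_rng(0)
events = np.sort(rng.uniform(0.0, 10.0, 50))   # invented source event train
surrogate = local_permutation(events, window=1.0, rng=rng)
```

Repeating the TE estimate over many such surrogates yields the null distribution against which the observed TE is compared.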


Subject(s)
Action Potentials, Entropy, Evoked Potentials, Neurons/physiology, Neurological Models, Poisson Distribution
9.
Entropy (Basel) ; 22(2), 2020 Feb 14.
Article in English | MEDLINE | ID: mdl-33285991

ABSTRACT

The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random variables. These measures complement the existing measures of multivariate mutual information and are constructed by considering the algebraic structure of information sharing. It is shown that the distinct ways in which a set of marginal observers can share their information with a non-observing third party correspond to the elements of a free distributive lattice. The redundancy lattice from partial information decomposition is then derived independently by combining the algebraic structures of joint and shared information content.
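
The negativity that makes Venn diagrams misleading is easy to exhibit: for two fair coins and their XOR, the multivariate mutual information (co-information) is exactly -1 bit, which no area-based diagram can depict. A short self-contained check:

```python
from itertools import product
from math import log2

# Joint distribution of (X, Y, Z) with X, Y fair coins and Z = X XOR Y.
p = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

def H(indices):
    """Entropy in bits of the marginal over the given variable indices."""
    marg = {}
    for outcome, prob in p.items():
        key = tuple(outcome[i] for i in indices)
        marg[key] = marg.get(key, 0.0) + prob
    return -sum(q * log2(q) for q in marg.values() if q > 0)

# Co-information (multivariate mutual information) via inclusion-exclusion.
co_info = (H([0]) + H([1]) + H([2])
           - H([0, 1]) - H([0, 2]) - H([1, 2])
           + H([0, 1, 2]))
# For XOR this evaluates to -1 bit.
```

Each single variable carries 1 bit, each pair 2 bits, and the triple 2 bits, so the inclusion-exclusion sum is 3 - 6 + 2 = -1.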

10.
Proc Math Phys Eng Sci ; 476(2236): 20190779, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32398937

ABSTRACT

Transfer entropy (TE) is an established method for quantifying directed statistical dependencies in neuroimaging and complex systems datasets. The pairwise (or bivariate) TE from a source to a target node in a network does not depend solely on the local source-target link weight, but on the wider network structure that the link is embedded in. This relationship is studied using a discrete-time linearly coupled Gaussian model, which allows us to derive the TE for each link from the network topology. It is shown analytically that the dependence on the directed link weight is only a first approximation, valid for weak coupling. More generally, the TE increases with the in-degree of the source and decreases with the in-degree of the target, indicating an asymmetry of information transfer between hubs and low-degree nodes. In addition, the TE is directly proportional to weighted motif counts involving common parents or multiple walks from the source to the target, which are more abundant in networks with a high clustering coefficient than in random networks. Our findings also apply to Granger causality, which is equivalent to TE for Gaussian variables. Moreover, similar empirical results on random Boolean networks suggest that the dependence of the TE on the in-degree extends to nonlinear dynamics.
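
The weak-coupling behaviour described above can be checked directly for the simplest case of a single directed link. In the sketch below (an invented two-node example, not the paper's network model), the exact Gaussian TE for this toy works out to 0.5 ln(1 + c^2), whose first-order term in the squared link weight is c^2/2:

```python
import numpy as np

# Linearly coupled Gaussian pair: Y(t+1) = c * X(t) + noise, X white noise.
rng = np.random.default_rng(0)
c, n = 0.4, 200_000
x = rng.standard_normal(n)
y = np.zeros(n)
y[1:] = c * x[:-1] + rng.standard_normal(n - 1)

def residual_variance(target, predictors):
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

# TE(X -> Y) for Gaussians: half the log-ratio of prediction-error variances
# (equivalently, Granger causality).
full = np.column_stack([y[1:-1], x[1:-1]])
reduced = y[1:-1, None]
te = 0.5 * np.log(residual_variance(y[2:], reduced)
                  / residual_variance(y[2:], full))

analytic = 0.5 * np.log(1 + c ** 2)   # exact for this toy model
weak_coupling = c ** 2 / 2            # first-order approximation in the weight
```

In a full network the same regression picks up contributions from common parents and multi-step walks, which is where the motif-count corrections enter.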

11.
PLoS Comput Biol ; 15(10): e1006957, 2019 10.
Article in English | MEDLINE | ID: mdl-31613882

ABSTRACT

A key component of the flexibility and complexity of the brain is its ability to dynamically adapt its functional network structure between integrated and segregated brain states depending on the demands of different cognitive tasks. Integrated states are prevalent when performing tasks of high complexity, such as maintaining items in working memory, consistent with models of a global workspace architecture. Recent work has suggested that the balance between integration and segregation is under the control of ascending neuromodulatory systems, such as the noradrenergic system, via changes in neural gain (in terms of the amplification and non-linearity in stimulus-response transfer function of brain regions). In a previous large-scale nonlinear oscillator model of neuronal network dynamics, we showed that manipulating neural gain parameters led to a 'critical' transition in phase synchrony that was associated with a shift from segregated to integrated topology, thus confirming our original prediction. In this study, we advance these results by demonstrating that the gain-mediated phase transition is characterized by a shift in the underlying dynamics of neural information processing. Specifically, the dynamics of the subcritical (segregated) regime are dominated by information storage, whereas the supercritical (integrated) regime is associated with increased information transfer (measured via transfer entropy). Operating near to the critical regime with respect to modulating neural gain parameters would thus appear to provide computational advantages, offering flexibility in the information processing that can be performed with only subtle changes in gain control. Our results thus link studies of whole-brain network topology and the ascending arousal system with information processing dynamics, and suggest that the constraints imposed by the ascending arousal system constrain low-dimensional modes of information processing within the brain.


Subject(s)
Brain Mapping/methods, Brain/physiology, Mental Processes/physiology, Cognition/physiology, Computer Simulation, Humans, Magnetic Resonance Imaging/methods, Short-Term Memory/physiology, Neurological Models, Nerve Net/physiology, Neural Pathways/physiology, Neurons/physiology, Nonlinear Dynamics
12.
Netw Neurosci ; 3(3): 827-847, 2019.
Article in English | MEDLINE | ID: mdl-31410382

ABSTRACT

Network inference algorithms are valuable tools for the study of large-scale neuroimaging datasets. Multivariate transfer entropy is well suited for this task, being a model-free measure that captures nonlinear and lagged dependencies between time series to infer a minimal directed network model. Greedy algorithms have been proposed to efficiently deal with high-dimensional datasets while avoiding redundant inferences and capturing synergistic effects. However, multiple statistical comparisons may inflate the false positive rate and are computationally demanding, which limited the size of previous validation studies. The algorithm we present-as implemented in the IDTxl open-source software-addresses these challenges by employing hierarchical statistical tests to control the family-wise error rate and to allow for efficient parallelization. The method was validated on synthetic datasets involving random networks of increasing size (up to 100 nodes), for both linear and nonlinear dynamics. The performance increased with the length of the time series, reaching consistently high precision, recall, and specificity (>98% on average) for 10,000 time samples. Varying the statistical significance threshold showed a more favorable precision-recall trade-off for longer time series. Both the network size and the sample size are one order of magnitude larger than previously demonstrated, showing feasibility for typical EEG and magnetoencephalography experiments.
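
The greedy parent-selection step at the core of such algorithms can be sketched with a Gaussian-estimator stand-in. Here a crude fixed threshold replaces the hierarchical statistical tests the actual method employs, and the data and variable names are invented:

```python
import numpy as np

def gaussian_cmi(x, y, cond):
    """I(x; y | cond) in nats for jointly Gaussian data, via entropies
    computed from log-determinants of sample covariance matrices."""
    def H(*cols):
        m = np.atleast_2d(np.cov(np.vstack(cols)))
        return 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * m))
    if not cond:
        return H(x) + H(y) - H(x, y)
    return H(x, *cond) + H(y, *cond) - H(x, y, *cond) - H(*cond)

# Invented toy system: target is driven by sources 0 and 1; source 2 is a decoy.
rng = np.random.default_rng(1)
n = 5000
x = rng.standard_normal((3, n))
y = np.zeros(n)
y[1:] = 0.6 * x[0, :-1] + 0.6 * x[1, :-1] + rng.standard_normal(n - 1)

target = y[1:]
candidates = {i: x[i, :-1] for i in range(3)}
selected, sel_ids = [], []
while candidates:
    # add the candidate with the largest CMI given the already-selected set
    gains = {i: gaussian_cmi(target, c, selected) for i, c in candidates.items()}
    best = max(gains, key=gains.get)
    if gains[best] < 0.01:  # crude threshold standing in for a significance test
        break
    selected.append(candidates.pop(best))
    sel_ids.append(best)
```

Conditioning each new candidate on the growing selected set is what avoids redundant inferences and lets synergistic sources be picked up.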

13.
R Soc Open Sci ; 6(2): 181482, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30891275

ABSTRACT

Collectively moving animals often display a high degree of synchronization and cohesive group-level formations, such as elongated schools of fish. These global patterns emerge as the result of localized rules of interactions. However, the exact relationship between speed, polarization, neighbour positioning and group structure has produced conflicting results and is largely limited to modelling approaches. This hinders our ability to understand how information spreads between individuals, which may determine the collective functioning of groups. We tested how speed interacts with polarization and positional composition to produce the elongation observed in moving groups of fish as well as how this impacts information flow between individuals. At the local level, we found that increases in speed led to increases in alignment and shifts from lateral to linear neighbour positioning. At the global level, these increases in linear neighbour positioning resulted in elongation of the group. Furthermore, mean pairwise transfer entropy increased with speed and alignment, implying an adaptive value to forming faster, more polarized and linear groups. Ultimately, this research provides vital insight into the mechanisms underlying the elongation of moving animal groups and highlights the functional significance of cohesive and coordinated movement.

14.
Sci Adv ; 4(10): eaau4029, 2018 10.
Article in English | MEDLINE | ID: mdl-30345363

ABSTRACT

Complex infrastructural networks provide critical services to cities but can be vulnerable to external stresses, including climatic variability. This vulnerability has also challenged past urban settlements, but its role in cases of historic urban demise has not been precisely documented. We transform archeological data from the medieval Cambodian city of Angkor into a numerical model that allows us to quantify topological damage to critical urban infrastructure resulting from climatic variability. Our model reveals unstable behavior in which extensive and cascading damage to infrastructure occurs in response to flooding within Angkor's urban water management system. The likelihood and extent of the cascading failure abruptly grow with the magnitude of flooding relative to normal flows in the system. Our results support the hypothesis that systemic infrastructural vulnerability, coupled with abrupt climatic variation, contributed to the demise of the city. The factors behind Angkor's demise are analogous to challenges faced by modern urban communities struggling with complex critical infrastructure.

15.
Phys Rev E ; 98(1-1): 012314, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30110808

ABSTRACT

The characterization of information processing is an important task in complex systems science. Information dynamics is a quantitative methodology for modeling the intrinsic information processing conducted by a process represented as a time series, but to date has only been formulated in discrete time. Building on previous work which demonstrated how to formulate transfer entropy in continuous time, we give a total account of information processing in this setting, incorporating information storage. We find that a convergent rate of predictive capacity, comprising the transfer entropy and active information storage, does not exist, arising through divergent rates of active information storage. We identify that active information storage can be decomposed into two separate quantities that characterize predictive capacity stored in a process: active memory utilization and instantaneous predictive capacity. The latter involves prediction related to path regularity and so solely inherits the divergent properties of the active information storage, while the former permits definitions of pathwise and rate quantities. We formulate measures of memory utilization for jump and neural spiking processes and illustrate measures of information processing in synthetic neural spiking models and coupled Ornstein-Uhlenbeck models. The application to synthetic neural spiking models demonstrates that active memory utilization for point processes consists of discontinuous jump contributions (at spikes) interrupting a continuously varying contribution (relating to waiting times between spikes), complementing the behavior previously demonstrated for transfer entropy in these processes.

16.
Hum Brain Mapp ; 39(8): 3227-3240, 2018 08.
Article in English | MEDLINE | ID: mdl-29617056

ABSTRACT

The neurophysiological underpinnings of the nonsocial symptoms of autism spectrum disorder (ASD), which include sensory and perceptual atypicalities, remain poorly understood. Well-known accounts of less dominant top-down influences and more dominant bottom-up processes compete to explain these characteristics. These accounts have been recently embedded in the popular framework of predictive coding theory. To differentiate between competing accounts, we studied altered information dynamics in ASD by quantifying predictable information in neural signals. Predictable information in neural signals measures the amount of stored information that is used for the next time step of a neural process. Thus, predictable information limits the (prior) information which might be available for other brain areas, for example, to build predictions for upcoming sensory information. We studied predictable information in neural signals based on resting-state magnetoencephalography (MEG) recordings of 19 ASD patients and 19 neurotypical controls aged between 14 and 27 years. Using whole-brain beamformer source analysis, we found reduced predictable information in ASD patients across the whole brain, but in particular in posterior regions of the default mode network. In these regions, epoch-by-epoch predictable information was positively correlated with source power in the alpha and beta frequency range as well as autocorrelation decay time. Predictable information in precuneus and cerebellum was negatively associated with nonsocial symptom severity, indicating a relevance of the analysis of predictable information for clinical research in ASD. Our findings are compatible with the assumption that use or precision of prior knowledge is reduced in ASD patients.
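
Predictable information has a closed form for a stationary Gaussian AR(1) process, which makes the reported link with autocorrelation concrete. A toy simulation (invented data, unrelated to the MEG recordings) recovers it:

```python
import numpy as np

# Invented AR(1) signal: x(t+1) = phi * x(t) + white noise.
rng = np.random.default_rng(0)
phi, n = 0.8, 100_000
noise = rng.standard_normal(n)
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = phi * x[t] + noise[t + 1]

# For a stationary Gaussian AR(1), predictable information (active information
# storage with history length 1) is -0.5 * log2(1 - r^2),
# where r is the lag-1 autocorrelation.
r = np.corrcoef(x[:-1], x[1:])[0, 1]
ais_bits = -0.5 * np.log2(1 - r ** 2)
analytic_bits = -0.5 * np.log2(1 - phi ** 2)  # r -> phi as n grows
```

Stronger autocorrelation (slower decay) means more of the process's next step is predictable from its past, mirroring the correlations reported in the study.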


Subject(s)
Autism Spectrum Disorder/physiopathology, Brain/physiopathology, Adolescent, Adult, Brain Mapping, Humans, Magnetoencephalography, Male, Rest, Computer-Assisted Signal Processing, Young Adult
17.
Phys Rev E ; 97(1-1): 012120, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29448440

ABSTRACT

We study self-organization of collective motion as a thermodynamic phenomenon in the context of the first law of thermodynamics. It is expected that the coherent ordered motion typically self-organises in the presence of changes in the (generalized) internal energy and of (generalized) work done on, or extracted from, the system. We aim to explicitly quantify changes in these two quantities in a system of simulated self-propelled particles and contrast them with changes in the system's configuration entropy. In doing so, we adapt a thermodynamic formulation of the curvatures of the internal energy and the work, with respect to two parameters that control the particles' alignment. This allows us to systematically investigate the behavior of the system by varying the two control parameters to drive the system across a kinetic phase transition. Our results identify critical regimes and show that during the phase transition, where the configuration entropy of the system decreases, the rates of change of the work and of the internal energy also decrease, while their curvatures diverge. Importantly, the reduction of entropy achieved through expenditure of work is shown to peak at criticality. We relate this both to a thermodynamic efficiency and the significance of the increased order with respect to a computational path. Additionally, this study provides an information-geometric interpretation of the curvature of the internal energy as the difference between two curvatures: the curvature of the free entropy, captured by the Fisher information, and the curvature of the configuration entropy.

18.
Entropy (Basel) ; 20(4), 2018 Apr 18.
Article in English | MEDLINE | ID: mdl-33265388

ABSTRACT

What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.

19.
Entropy (Basel) ; 20(4), 2018 Apr 23.
Article in English | MEDLINE | ID: mdl-33265398

ABSTRACT

The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on "Information Decomposition of Target Effects from Multi-Source Interactions" at Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted and applied to empirical investigations. We then introduce the articles included in the special issue one by one, providing a similar categorisation of these articles into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches, and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.

20.
Entropy (Basel) ; 20(11), 2018 Oct 28.
Article in English | MEDLINE | ID: mdl-33266550

ABSTRACT

Information is often described as a reduction of uncertainty associated with a restriction of possible choices. Despite appearing in Hartley's foundational work on information theory, there is a surprising lack of a formal treatment of this interpretation in terms of exclusions. This paper addresses the gap by providing an explicit characterisation of information in terms of probability mass exclusions. It then demonstrates that different exclusions can yield the same amount of information and discusses the insight this provides about how information is shared amongst random variables-lack of progress in this area is a key barrier preventing us from understanding how information is distributed in complex systems. The paper closes by deriving a decomposition of the mutual information which can distinguish between differing exclusions; this provides surprising insight into the nature of directed information.
