Results 1 - 20 of 42
1.
Proc Natl Acad Sci U S A ; 120(37): e2303332120, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37669393

ABSTRACT

Synchronization phenomena on networks have attracted much attention in studies of neural, social, economic, and biological systems, yet we still lack a systematic understanding of how relative synchronizability relates to underlying network structure. Indeed, this question is central to the broader theme of how dynamics on networks relate to network structure. We present an analytic technique to directly measure the relative synchronizability of noise-driven time-series processes on networks, in terms of the directed network structure. We consider both discrete-time autoregressive processes and continuous-time Ornstein-Uhlenbeck dynamics on networks, which can represent linearizations of nonlinear systems. Our technique builds on computation of the network covariance matrix in the space orthogonal to the synchronized state, making it more general than previous work: it requires neither symmetric (undirected) nor diagonalizable connectivity matrices, and it allows arbitrary self-link weights. More importantly, our approach quantifies relative synchronizability specifically in terms of the contributions of process motif (walk) structures. We demonstrate that, in general, a relative abundance of process motifs with convergent directed walks (including feedback and feedforward loops) hinders synchronizability. We also reveal subtle differences between the motifs involved in discrete-time versus continuous-time dynamics. Our insights analytically explain several known general results regarding the synchronizability of networks, including that small-world and regular networks are less synchronizable than random networks.
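
The covariance computation at the heart of this technique can be sketched numerically. The following is a minimal illustration (our own construction, not the paper's code): for Ornstein-Uhlenbeck dynamics dx = (W - I)x dt + dW on a directed network, the stationary covariance solves a continuous Lyapunov equation, and its trace in the space orthogonal to the synchronized state serves as a deviation-from-synchrony score. The 0.9 row normalisation and the function names are arbitrary choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
N, k = 50, 4  # nodes, in-degree

def coupling_matrix(adj):
    """Row-normalise an adjacency matrix so each row sums to 0.9 (< 1 for stability)."""
    return adj / adj.sum(axis=1, keepdims=True) * 0.9

def sync_deviation(W):
    """Trace of the stationary OU covariance projected orthogonally to the
    synchronized state: smaller values indicate higher synchronizability."""
    A = W - np.eye(len(W))                                 # drift of dx = (W - I) x dt + dW
    Sigma = solve_continuous_lyapunov(A, -np.eye(len(W)))  # solves A S + S A^T = -I
    u = np.ones(len(W)) / np.sqrt(len(W))
    P = np.eye(len(W)) - np.outer(u, u)                    # projector orthogonal to sync state
    return np.trace(P @ Sigma @ P)

# Directed ring lattice: each node listens to its k previous neighbours
ring = np.zeros((N, N))
for i in range(N):
    for j in range(1, k + 1):
        ring[i, (i - j) % N] = 1.0

# Random directed graph with the same in-degree
rand = np.zeros((N, N))
for i in range(N):
    rand[i, rng.choice([j for j in range(N) if j != i], k, replace=False)] = 1.0

print("ring lattice:", sync_deviation(coupling_matrix(ring)))
print("random graph:", sync_deviation(coupling_matrix(rand)))
# The random graph typically shows the smaller orthogonal variance, i.e. it is
# relatively more synchronizable, consistent with the result quoted above.
```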

2.
PLoS Comput Biol ; 17(4): e1008054, 2021 04.
Article in English | MEDLINE | ID: mdl-33872296

ABSTRACT

Transfer entropy (TE) is a widely used measure of directed information flow in a number of domains, including neuroscience. Many real-world time series for which we are interested in information flows come in the form of (near) instantaneous events occurring over time: the spiking of biological neurons, trades on stock markets, and posts to social media, amongst myriad other event-based systems throughout the natural and social sciences. However, the current approach to TE estimation on such event-based data, discretising the time series into time bins, has severe limitations: it is not consistent, has high bias, converges slowly, and cannot simultaneously capture relationships that occur with very fine time precision as well as those that occur over long time intervals. Building on recent work which derived a theoretical framework for TE in continuous time, we present an estimation framework for TE on event-based data and develop a k-nearest-neighbours estimator within this framework. This estimator is provably consistent, has favourable bias properties, and converges orders of magnitude more quickly than the current state of the art in discrete-time estimation on synthetic examples. We demonstrate failures of the traditionally used source-time-shift method for null surrogate generation. To overcome these failures, we develop a local permutation scheme for generating surrogate time series that conform to the appropriate null hypothesis, allowing us to test for the statistical significance of the TE and, as such, for conditional independence between the history of one point process and the updates of another. Our approach is shown to be capable of correctly rejecting or accepting the null hypothesis of conditional independence even in the presence of strong pairwise time-directed correlations. This capacity to accurately test for conditional independence is further demonstrated on models of a spiking neural circuit inspired by the pyloric circuit of the crustacean stomatogastric ganglion, succeeding where previous related estimators have failed.
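
For context, the discrete-time binning approach whose limitations are described above is simple to write down. The sketch below (our illustration, not the authors' continuous-time estimator) bins two coupled spike trains and computes a plug-in transfer entropy with history length one; the rates, coupling strength, and implicit bin width are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000  # number of time bins

# Source spikes ~ Bernoulli; the target fires more often one bin after a source spike
x = (rng.random(T) < 0.05).astype(int)
y = np.zeros(T, dtype=int)
y[1:] = rng.random(T - 1) < np.where(x[:-1] == 1, 0.30, 0.02)

def binned_te(src, tgt):
    """Plug-in transfer entropy TE(src -> tgt) in bits, history length 1."""
    joint = np.zeros((2, 2, 2))  # indices: tgt_next, tgt_past, src_past
    np.add.at(joint, (tgt[1:], tgt[:-1], src[:-1]), 1)
    joint /= joint.sum()
    p_bc = joint.sum(axis=0, keepdims=True)      # p(tgt_past, src_past)
    p_ab = joint.sum(axis=2, keepdims=True)      # p(tgt_next, tgt_past)
    p_b = joint.sum(axis=(0, 2), keepdims=True)  # p(tgt_past)
    nz = joint > 0
    ratio = (joint / p_bc) / (p_ab / p_b)        # p(next|past,src) / p(next|past)
    return (joint[nz] * np.log2(ratio[nz])).sum()

print(f"TE(X -> Y) = {binned_te(x, y):.4f} bits")
print(f"TE(Y -> X) = {binned_te(y, x):.4f} bits (should be near zero)")
```

Re-binning the same events at a different resolution changes this estimate substantially, one symptom of the inconsistency that the continuous-time framework avoids.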


Subject(s)
Action Potentials; Entropy; Evoked Potentials; Neurons/physiology; Models, Neurological; Poisson Distribution
3.
PLoS Comput Biol ; 15(10): e1006957, 2019 10.
Article in English | MEDLINE | ID: mdl-31613882

ABSTRACT

A key component of the flexibility and complexity of the brain is its ability to dynamically adapt its functional network structure between integrated and segregated brain states, depending on the demands of different cognitive tasks. Integrated states are prevalent when performing tasks of high complexity, such as maintaining items in working memory, consistent with models of a global workspace architecture. Recent work has suggested that the balance between integration and segregation is under the control of ascending neuromodulatory systems, such as the noradrenergic system, via changes in neural gain (in terms of the amplification and nonlinearity of the stimulus-response transfer function of brain regions). In a previous large-scale nonlinear oscillator model of neuronal network dynamics, we showed that manipulating neural gain parameters led to a 'critical' transition in phase synchrony that was associated with a shift from segregated to integrated topology, thus confirming our original prediction. In this study, we advance these results by demonstrating that the gain-mediated phase transition is characterized by a shift in the underlying dynamics of neural information processing. Specifically, the dynamics of the subcritical (segregated) regime are dominated by information storage, whereas the supercritical (integrated) regime is associated with increased information transfer (measured via transfer entropy). Operating near the critical regime with respect to modulating neural gain parameters would thus appear to provide computational advantages, offering flexibility in the information processing that can be performed with only subtle changes in gain control. Our results link studies of whole-brain network topology and the ascending arousal system with information processing dynamics, and suggest that the ascending arousal system constrains low-dimensional modes of information processing within the brain.
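
The role of gain can be caricatured in a toy model, though this is far simpler than the large-scale biophysical model used in the study. In the sketch below (entirely our construction), Kuramoto-style phase oscillators interact through a tanh response whose slope stands in for neural gain, and the phase-synchrony order parameter R is tracked as gain increases; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, steps, dt = 100, 4000, 0.01
omega = rng.normal(0.0, 0.5, N)   # heterogeneous natural frequencies
K = 0.4                           # fixed coupling strength

def order_parameter(gain):
    """Mean Kuramoto order parameter R for a tanh, gain-modulated coupling."""
    theta = rng.uniform(0, 2 * np.pi, N)
    R_vals = []
    for t in range(steps):
        diff = theta[None, :] - theta[:, None]           # pairwise phase differences
        coupling = K / N * np.tanh(gain * np.sin(diff)).sum(axis=1)
        theta += dt * (omega + coupling)
        if t > steps // 2:                               # discard the transient
            R_vals.append(abs(np.exp(1j * theta).mean()))
    return np.mean(R_vals)

for gain in [0.5, 1.0, 2.0, 4.0]:
    print(f"gain = {gain:4.1f}  ->  R = {order_parameter(gain):.3f}")
# R rises sharply with gain, mimicking the segregated-to-integrated transition.
```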


Subject(s)
Brain Mapping/methods; Brain/physiology; Mental Processes/physiology; Cognition/physiology; Computer Simulation; Humans; Magnetic Resonance Imaging/methods; Memory, Short-Term/physiology; Models, Neurological; Nerve Net/physiology; Neural Pathways/physiology; Neurons/physiology; Nonlinear Dynamics
4.
Entropy (Basel) ; 22(2), 2020 Feb 14.
Article in English | MEDLINE | ID: mdl-33285991

ABSTRACT

The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random variables. These measures complement the existing measures of multivariate mutual information and are constructed by considering the algebraic structure of information sharing. It is shown that the distinct ways in which a set of marginal observers can share their information with a non-observing third party correspond to the elements of a free distributive lattice. The redundancy lattice from partial information decomposition is then independently derived by combining the algebraic structures of joint and shared information content.
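
The negativity that breaks the naive Venn picture can be shown in a few lines. This sketch (a standard textbook example rather than anything from the paper) computes the interaction information I(X;Y;Z) for two fair coins and their XOR; the central Venn region would have to carry -1 bit.

```python
import numpy as np

# Joint distribution of (X, Y, Z) with X, Y ~ fair coins and Z = X XOR Y
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        p[x, y, x ^ y] = 0.25

def H(p_marg):
    """Shannon entropy in bits of a (possibly multi-dimensional) distribution."""
    q = p_marg[p_marg > 0]
    return -(q * np.log2(q)).sum()

# Interaction information:
# I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y)-H(X,Z)-H(Y,Z) + H(X,Y,Z)
I3 = (H(p.sum((1, 2))) + H(p.sum((0, 2))) + H(p.sum((0, 1)))
      - H(p.sum(2)) - H(p.sum(1)) - H(p.sum(0)) + H(p))
print(f"I(X;Y;Z) = {I3:.1f} bits")  # -1.0: the triple-overlap region is negative
```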

5.
J Neurosci ; 37(34): 8273-8283, 2017 08 23.
Article in English | MEDLINE | ID: mdl-28751458

ABSTRACT

Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre)activated prior knowledge serving these predictions are still unknown. Based on the idea that such preactivated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time courses from MEG recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Preactivation of prior knowledge for faces showed as α-band-related and β-band-related AIS increases in content-specific areas; these AIS increases were behaviorally relevant in the brain's fusiform face area. Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Our results support accounts indicating that activated prior knowledge and the corresponding predictions are signaled in low-frequency activity (<30 Hz). SIGNIFICANCE STATEMENT: Our perception is determined not only by the information our eyes/retina and other sensory organs receive from the outside world, but also by information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is rare, mostly because this kind of evidence requires strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented in low-frequency brain activity.
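
As a schematic of the AIS measure itself (not of the beamformer pipeline or the estimator used in the study), the sketch below computes plug-in active information storage I(past; present) on discretised toy signals; the history length, binary alphabet, and toy processes are our own choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def plug_in_ais(sym, k=2, alphabet=2):
    """Plug-in active information storage I(past_k ; present) in bits
    for a 1D symbolic time series."""
    # Encode each length-k past as a single integer
    past = np.zeros(len(sym) - k, dtype=int)
    for j in range(k):
        past = past * alphabet + sym[j:len(sym) - k + j]
    pres = sym[k:]
    joint = np.zeros((alphabet ** k, alphabet))
    np.add.at(joint, (past, pres), 1)
    joint /= joint.sum()
    p_past = joint.sum(1, keepdims=True)
    p_pres = joint.sum(0, keepdims=True)
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / (p_past @ p_pres)[nz])).sum()

# A strongly autocorrelated binary signal stores more information...
ar = np.zeros(100_000, dtype=int)
for t in range(1, len(ar)):
    ar[t] = ar[t - 1] if rng.random() < 0.9 else 1 - ar[t - 1]
# ...than an i.i.d. one.
iid = (rng.random(100_000) < 0.5).astype(int)

print(f"AIS(autocorrelated) = {plug_in_ais(ar):.3f} bits")
print(f"AIS(i.i.d.)         = {plug_in_ais(iid):.3f} bits")
```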


Subject(s)
Brain Waves/physiology; Brain/physiology; Facial Recognition/physiology; Pattern Recognition, Visual/physiology; Photic Stimulation/methods; Adult; Female; Forecasting; Humans; Magnetoencephalography/methods; Male; Young Adult
6.
Hum Brain Mapp ; 39(8): 3227-3240, 2018 08.
Article in English | MEDLINE | ID: mdl-29617056

ABSTRACT

The neurophysiological underpinnings of the nonsocial symptoms of autism spectrum disorder (ASD), which include sensory and perceptual atypicalities, remain poorly understood. Well-known accounts of less dominant top-down influences and more dominant bottom-up processes compete to explain these characteristics. These accounts have recently been embedded in the popular framework of predictive coding theory. To differentiate between the competing accounts, we studied altered information dynamics in ASD by quantifying the predictable information in neural signals. Predictable information in neural signals measures the amount of stored information that is used for the next time step of a neural process. Thus, predictable information limits the (prior) information which might be available to other brain areas, for example, to build predictions for upcoming sensory information. We studied predictable information in neural signals based on resting-state magnetoencephalography (MEG) recordings of 19 ASD patients and 19 neurotypical controls aged between 14 and 27 years. Using whole-brain beamformer source analysis, we found reduced predictable information in ASD patients across the whole brain, but in particular in posterior regions of the default mode network. In these regions, epoch-by-epoch predictable information was positively correlated with source power in the alpha and beta frequency ranges as well as with autocorrelation decay time. Predictable information in the precuneus and cerebellum was negatively associated with nonsocial symptom severity, indicating the relevance of analysing predictable information for clinical research in ASD. Our findings are compatible with the assumption that the use or precision of prior knowledge is reduced in ASD patients.


Subject(s)
Autism Spectrum Disorder/physiopathology; Brain/physiopathology; Adolescent; Adult; Brain Mapping; Humans; Magnetoencephalography; Male; Rest; Signal Processing, Computer-Assisted; Young Adult
7.
Entropy (Basel) ; 20(4), 2018 Apr 18.
Article in English | MEDLINE | ID: mdl-33265388

ABSTRACT

What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much-criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information, which we refer to as the specificity and the ambiguity. This yields a separate redundancy lattice for each component. Then, based upon an operational interpretation of redundancy, we define measures of redundant specificity and redundant ambiguity, enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
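
The split into unsigned components is a one-line identity that can be checked directly. In this sketch (our illustration of that identity, not the paper's lattice construction), each signed pointwise mutual information value i(x;y) = log p(y|x)/p(y) is written as the specificity -log p(y) minus the ambiguity -log p(y|x), both non-negative, so misinformative (negative) values arise exactly when ambiguity exceeds specificity; the joint distribution is arbitrary.

```python
import numpy as np

# Joint distribution p(x, y): rows are x, columns are y
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
p_y = p.sum(0)

for x in (0, 1):
    p_y_given_x = p[x] / p[x].sum()
    for y in (0, 1):
        specificity = -np.log2(p_y[y])         # informativeness of 'y occurred'
        ambiguity = -np.log2(p_y_given_x[y])   # residual uncertainty given x
        pmi = specificity - ambiguity          # signed pointwise mutual information
        print(f"x={x} y={y}:  i = {pmi:+.3f} bits "
              f"(specificity {specificity:.3f} - ambiguity {ambiguity:.3f})")
```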

8.
Entropy (Basel) ; 20(11), 2018 Oct 28.
Article in English | MEDLINE | ID: mdl-33266550

ABSTRACT

Information is often described as a reduction of uncertainty associated with a restriction of possible choices. Despite appearing in Hartley's foundational work on information theory, there is a surprising lack of a formal treatment of this interpretation in terms of exclusions. This paper addresses the gap by providing an explicit characterisation of information in terms of probability mass exclusions. It then demonstrates that different exclusions can yield the same amount of information and discusses the insight this provides about how information is shared amongst random variables; the lack of progress in this area is a key barrier preventing us from understanding how information is distributed in complex systems. The paper closes by deriving a decomposition of the mutual information which can distinguish between differing exclusions; this provides surprising insight into the nature of directed information.
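
The paper's starting observation, that different exclusions can carry the same amount of information, is easy to reproduce numerically. In this toy example (ours, in the spirit of the paper), two source events each exclude half of a uniform target distribution, but different halves, and both yield exactly one bit about the same target event.

```python
import numpy as np

p_y = np.array([0.25, 0.25, 0.25, 0.25])   # prior over target Y

# Two source events that exclude different probability mass of Y
exclusions = {"x_A": [2, 3],   # observing x_A rules out y in {2, 3}
              "x_B": [1, 2]}   # observing x_B rules out y in {1, 2}

for name, excl in exclusions.items():
    post = p_y.copy()
    post[excl] = 0.0
    post /= post.sum()                 # renormalised p(y | x)
    i = np.log2(post[0] / p_y[0])      # pointwise info about the event y = 0
    print(f"{name}: excludes y in {excl}, i(x ; y=0) = {i:.1f} bits")
# Both yield exactly 1 bit about y = 0 while excluding different possibilities,
# which is precisely the distinction a mutual information decomposition must track.
```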

9.
Entropy (Basel) ; 20(4), 2018 Apr 23.
Article in English | MEDLINE | ID: mdl-33265398

ABSTRACT

The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique, and synergistic (or complementary) components of the mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on "Information Decomposition of Target Effects from Multi-Source Interactions" in Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed and how they have been interpreted and applied in empirical investigations. We then introduce the articles included in the special issue one by one, categorising them into: (i) proposals of new measures; (ii) theoretical investigations into the properties and interpretations of such approaches; and (iii) applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.

10.
Brain Cogn ; 112: 25-38, 2017 03.
Article in English | MEDLINE | ID: mdl-26475739

ABSTRACT

In many neural systems, anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of the six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain-specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a recent extension of Shannon information theory, called partial information decomposition (PID). PID allows us to quantify the information that several inputs provide individually (unique information), redundantly (shared information), or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows us to compare these goal functions in a common framework, and it also provides a versatile approach for designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which combines external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.
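
To make the unique/shared/synergistic split concrete, here is a minimal implementation (our sketch) of the original Williams and Beer redundancy measure I_min applied to two canonical two-input gates; it recovers the textbook values of one bit of pure synergy for XOR and half a bit of synergy for AND.

```python
import numpy as np
from itertools import product

def pid_imin(p):
    """Williams-Beer PID with I_min for p[x1, x2, y] over binary variables.
    Returns (redundant, unique1, unique2, synergy) in bits."""
    p = p / p.sum()
    p_y = p.sum((0, 1))
    p1y = p.sum(1)   # p(x1, y)
    p2y = p.sum(0)   # p(x2, y)

    def mi(pxy):
        px = pxy.sum(1, keepdims=True); py = pxy.sum(0, keepdims=True)
        nz = pxy > 0
        return (pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum()

    def i_spec(pxy, y):
        """Specific information that the source provides about Y = y."""
        px_given_y = pxy[:, y] / pxy[:, y].sum()
        py_given_x = pxy[:, y] / pxy.sum(1)
        return sum(pxg * np.log2(pyg / p_y[y])
                   for pxg, pyg in zip(px_given_y, py_given_x) if pxg > 0)

    red = sum(p_y[y] * min(i_spec(p1y, y), i_spec(p2y, y))
              for y in range(len(p_y)) if p_y[y] > 0)
    u1 = mi(p1y) - red
    u2 = mi(p2y) - red
    syn = mi(p.reshape(4, -1)) - u1 - u2 - red
    return red, u1, u2, syn

for name, rule in [("XOR", lambda a, b: a ^ b), ("AND", lambda a, b: a & b)]:
    p = np.zeros((2, 2, 2))
    for a, b in product((0, 1), repeat=2):
        p[a, b, rule(a, b)] = 0.25
    red, u1, u2, syn = pid_imin(p)
    print(f"{name}: shared={red:.3f} unique={u1:.3f},{u2:.3f} synergy={syn:.3f}")
```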


Subject(s)
Brain/physiology; Goals; Information Theory; Nerve Net/physiology; Humans; Models, Neurological
11.
Cell Rep ; 43(6): 114359, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38870015

ABSTRACT

There is substantial evidence that neuromodulatory systems critically influence brain state dynamics; however, most work has been purely descriptive. Here, using data combining local inactivation of the basal forebrain with simultaneous measurement of resting-state fMRI activity in the macaque, we quantify the causal role of long-range cholinergic input in the stabilization of brain states in the cerebral cortex. Local inactivation of the nucleus basalis of Meynert (nbM) leads to a decrease in the energy barriers required for an fMRI state transition in cortical ongoing activity. Moreover, the inactivation of particular nbM sub-regions predominantly affects information transfer in cortical regions known to receive direct anatomical projections. We reproduce these results in a simple neurodynamical model of cholinergic impact on neuronal firing rates and slow hyperpolarizing adaptation currents. We conclude that the cholinergic system plays a critical role in stabilizing macroscale brain state dynamics.


Subject(s)
Magnetic Resonance Imaging; Animals; Basal Nucleus of Meynert/physiology; Basal Nucleus of Meynert/metabolism; Acetylcholine/metabolism; Macaca mulatta; Male; Cholinergic Neurons/physiology; Cholinergic Neurons/metabolism; Cerebral Cortex/physiology; Cerebral Cortex/metabolism; Neurons/metabolism; Neurons/physiology; Models, Neurological
12.
Phys Rev Lett ; 111(17): 177203, 2013 Oct 25.
Article in English | MEDLINE | ID: mdl-24206517

ABSTRACT

There is growing evidence that for a range of dynamical systems featuring complex interactions between large ensembles of interacting elements, mutual information peaks at order-disorder phase transitions. We conjecture that, by contrast, information flow in such systems will generally peak strictly on the disordered side of a phase transition. This conjecture is verified for a ferromagnetic 2D lattice Ising model with Glauber dynamics and a transfer-entropy-based measure of systemwide information flow. Implications of the conjecture are considered; in particular, for a complex dynamical system in the process of transitioning from disordered to ordered dynamics (a mechanism implicated, for example, in financial market crashes and the onset of some types of epileptic seizures), information dynamics may be able to predict an imminent transition.
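
The model system is straightforward to reproduce. The sketch below (ours, and stopping short of the transfer-entropy measurement itself) simulates the ferromagnetic 2D Ising model under Glauber dynamics below, at, and above the critical temperature, the regime in which the conjectured information-flow peak would be probed.

```python
import numpy as np

rng = np.random.default_rng(4)
L, sweeps = 32, 300
T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))   # critical temperature, ~2.269

def glauber_sweep(s, T):
    """One Glauber-dynamics sweep: L*L random single-spin updates."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Energy change if spin (i, j) flips, with periodic boundaries
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2.0 * s[i, j] * nb
        if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):  # Glauber flip rate
            s[i, j] *= -1

for T in [1.5, T_c, 3.0]:
    s = np.ones((L, L), dtype=int)   # start ordered; does order survive at T?
    for _ in range(sweeps):
        glauber_sweep(s, T)
    print(f"T = {T:.3f}:  |magnetisation| = {abs(s.mean()):.3f}")
# Order survives below T_c and melts above it; the paper's conjecture concerns
# information flow measured from such spin histories, peaking for T > T_c.
```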

13.
Nat Comput Sci ; 3(10): 883-893, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38177751

ABSTRACT

Scientists have developed hundreds of techniques to measure the interactions between pairs of processes in complex systems, but these computational methods, from contemporaneous correlation coefficients to causal inference methods, define and formulate interactions differently, using distinct quantitative theories that remain largely disconnected. Here we introduce an assembled library of 237 statistics of pairwise interactions and assess their behavior on 1,053 multivariate time series from a wide range of real-world and model-generated systems. Our analysis highlights commonalities between disparate mathematical formulations of interactions, providing a unified picture of a rich interdisciplinary literature. Using three real-world case studies, we then show that simultaneously leveraging diverse methods can uncover those most suitable for addressing a given problem, facilitating an interpretable understanding of the quantitative formulations of pairwise dependence that drive successful performance. Our results and accompanying software enable comprehensive analysis of time-series interactions by drawing on decades of diverse methodological contributions.
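
As a small-scale illustration of the problem this library addresses (the sketch is ours; the accompanying software is, to our knowledge, the pyspi Python library, whose API we do not reproduce here), the code below scores one lag-coupled pair of time series with a handful of classical pairwise statistics and shows how differently they quantify the same interaction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
T = 5000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):                 # y is driven by x with a one-step lag
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + rng.normal()

def lagged_pearson(a, b, lag):
    """Pearson correlation between a[t - lag] and b[t]."""
    return np.corrcoef(a[:-lag], b[lag:])[0, 1] if lag else np.corrcoef(a, b)[0, 1]

pairwise = {
    "Pearson, lag 0": lagged_pearson(x, y, 0),
    "Pearson, lag 1": lagged_pearson(x, y, 1),
    "Spearman, lag 0": stats.spearmanr(x, y)[0],
    "Kendall tau, lag 0": stats.kendalltau(x, y)[0],
}
for name, value in pairwise.items():
    print(f"{name:18s} {value:+.3f}")
# Each statistic assigns the same coupled pair a different strength, and the
# symmetric contemporaneous ones cannot reveal the lag-1 direction of coupling.
```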

14.
Nat Commun ; 14(1): 6846, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37891167

ABSTRACT

The human brain displays a rich repertoire of states that emerge from the microscopic interactions of cortical and subcortical neurons. Difficulties inherent in large-scale simultaneous neuronal recording limit our ability to link biophysical processes at the microscale to emergent macroscopic brain states. Here we introduce a microscale biophysical network model of layer-5 pyramidal neurons that displays graded coarse-sampled dynamics matching those observed in macroscale electrophysiological recordings from macaques and humans. We invert our model to identify the neuronal spike and burst dynamics that differentiate unconscious, dreaming, and awake arousal states and provide insights into their functional signatures. We further show that neuromodulatory arousal can mediate different modes of neuronal dynamics around a low-dimensional energy landscape, which in turn changes the response of the model to external stimuli. Our results highlight the promise of multiscale modelling for bridging theories of consciousness across spatiotemporal scales.


Subject(s)
Brain; Neurons; Animals; Humans; Brain/physiology; Neurons/physiology; Consciousness/physiology; Pyramidal Cells; Arousal; Macaca
15.
Elife ; 11, 2022 03 14.
Article in English | MEDLINE | ID: mdl-35286256

ABSTRACT

The brains of many organisms are capable of complicated distributed computation, underpinned by a highly advanced information processing capacity. Although substantial progress has been made towards characterising the information flow component of this capacity in mature brains, there is a distinct lack of work characterising its emergence during neural development. This gap has largely been due to the absence of effective estimators of information processing operations for spiking data. Here, we leverage recent advances in this estimation task to quantify the changes in transfer entropy during development. We do so by studying the changes in the intrinsic dynamics of the spontaneous activity of developing dissociated neural cell cultures. We find that the quantity of information flowing across these networks undergoes a dramatic increase across development. Moreover, the spatial structure of these flows exhibits a tendency to lock in at the point when they arise. We also characterise the flow of information during the crucial periods of population bursts. We find that, during these bursts, nodes tend to undertake specialised computational roles as either transmitters, mediators, or receivers of information, with these roles tending to align with their average spike ordering. Further, we find that these roles are regularly locked in once the information flows are established. Finally, we compare these results to information flows in a model network developing according to a spike-timing-dependent plasticity learning rule. Similar temporal patterns in the development of information flows were observed in these networks, hinting at the broader generality of these phenomena.


Subject(s)
Models, Neurological; Neuronal Plasticity; Action Potentials; Neural Networks, Computer; Neurons
16.
J Comput Neurosci ; 30(1): 85-107, 2011 Feb.
Article in English | MEDLINE | ID: mdl-20799057

ABSTRACT

The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure between brain regions, and how that structure changes according to behavioral conditions. This method is distinguished in using asymmetric, multivariate, information-theoretical analysis, which captures not only directional and nonlinear relationships but also collective interactions. Importantly, the method is able to estimate multivariate information measures with relatively little data. We demonstrate the method by analyzing functional magnetic resonance imaging time series to establish the directed information structure between brain regions involved in a visuo-motor tracking task. Notably, this results in a tiered structure, with known movement-planning regions driving visual and motor control regions. We also examine the changes in this structure as the difficulty of the tracking task is increased. We find that task difficulty modulates the coupling strength between regions of a cortical network involved in movement planning, as well as between the motor cortex and the cerebellum, which is involved in the fine-tuning of motor control. These methods are likely to find utility in identifying interregional structure (and experimentally induced changes in this structure) in other cognitive tasks and data modalities.


Subject(s)
Brain Mapping; Brain/blood supply; Brain/physiology; Information Theory; Models, Neurological; Computer Simulation; Entropy; Humans; Image Processing, Computer-Assisted/methods; Movement/physiology; Nerve Net/blood supply; Nerve Net/physiology; Oxygen/blood; Transfer, Psychology
17.
Netw Neurosci ; 5(2): 373-404, 2021.
Article in English | MEDLINE | ID: mdl-34189370

ABSTRACT

Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting properties of these networks requires inferred network models to reflect key underlying structural features. However, even a few spurious links can severely distort network measures, posing a challenge for functional connectomes. We study the extent to which micro- and macroscopic properties of underlying networks can be inferred by algorithms based on mutual information and bivariate/multivariate transfer entropy. The validation is performed on two macaque connectomes and on synthetic networks with various topologies (regular lattice, small-world, random, scale-free, modular). Simulations are based on a neural mass model and on autoregressive dynamics (employing Gaussian estimators for direct comparison to functional connectivity and Granger causality). We find that multivariate transfer entropy captures key properties of all network structures for longer time series. Bivariate methods can achieve higher recall (sensitivity) for shorter time series but are unable to control false positives (lower specificity) as available data increases. This leads to overestimated clustering, small-world, and rich-club coefficients, underestimated shortest path lengths and hub centrality, and fattened degree distribution tails. Caution should therefore be used when interpreting network properties of functional connectomes obtained via correlation or pairwise statistical dependence measures, rather than more holistic (yet data-hungry) multivariate models.
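
For intuition about the Gaussian estimators mentioned above, recall that under a linear-Gaussian model the transfer entropy reduces to half the Granger-causality log-ratio of residual variances. The sketch below (ours, with a single lag and a simulated unidirectionally coupled pair, not the toolkit used in the paper) implements that equivalence.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 10_000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.4 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def gaussian_te(src, tgt):
    """Transfer entropy in nats under a Gaussian (linear) model with one lag,
    equal to half the Granger-causality log-ratio of residual variances."""
    Y, Yp, Xp = tgt[1:], tgt[:-1], src[:-1]
    def resid_var(preds):
        A = np.column_stack(preds + (np.ones(len(Y)),))
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        return np.var(Y - A @ beta)
    var_reduced = resid_var((Yp,))      # predict Y from its own past only
    var_full = resid_var((Yp, Xp))      # ...and additionally from the source's past
    return 0.5 * np.log(var_reduced / var_full)

print(f"TE(X -> Y) = {gaussian_te(x, y):.4f} nats")
print(f"TE(Y -> X) = {gaussian_te(y, x):.4f} nats (no true coupling)")
```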

18.
Brain Inform ; 8(1): 26, 2021 Dec 02.
Article in English | MEDLINE | ID: mdl-34859330

ABSTRACT

Here, we combine network neuroscience and machine learning to reveal connections between the brain's network structure and the emerging network structure of an artificial neural network. Specifically, we train a shallow, feedforward neural network to classify hand-written digits and then use a combination of systems neuroscience and information-theoretic tools to perform 'virtual brain analytics' on the resultant edge weights and activity patterns of each node. We identify three distinct phases of network reconfiguration across learning, each of which is characterized by unique topological and information-theoretic signatures. Each phase involves aligning the connections of the neural network with patterns of information contained in the input dataset or preceding layers (as relevant). We also observe a process of low-dimensional category separation in the network as a function of learning. Our results offer a systems-level perspective on how artificial neural networks function, in terms of a multi-stage reorganization of edge weights and activity patterns that effectively exploits the information content of the input data during edge-weight training, while simultaneously enriching our understanding of the methods used by systems neuroscience.
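
A minimal version of this setup is easy to reproduce. The sketch below makes our own substitutions: scikit-learn's 8x8 digits rather than the dataset used in the study, a single hidden layer of 32 nodes, and a simple weight-strength summary in place of the paper's topological and information-theoretic analyses.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Train a small feedforward network on 8x8 hand-written digits, then inspect
# the learned edge weights, in the spirit of 'virtual brain analytics'.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X / 16.0, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print(f"test accuracy: {net.score(X_te, y_te):.3f}")

# Edge weights of the input -> hidden layer, one column per hidden node
W = net.coefs_[0]                     # shape (64, 32)
strength = np.abs(W).sum(axis=0)      # total incoming weight per hidden node
print("hidden-node weight strengths (sorted):")
print(np.sort(strength)[::-1].round(2))
```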

19.
Sci Rep ; 11(1): 13047, 2021 06 22.
Article in English | MEDLINE | ID: mdl-34158521

ABSTRACT

Neuromorphic systems composed of self-assembled nanowires exhibit a range of neural-like dynamics arising from the interplay of their synapse-like electrical junctions and their complex network topology. Additionally, various information processing tasks have been demonstrated with neuromorphic nanowire networks. Here, we investigate how these unique systems process information through information-theoretic metrics. In particular, transfer entropy (TE) and active information storage (AIS) are employed to investigate dynamical information flow and short-term memory in nanowire networks. In addition to finding that the topologically central parts of networks contribute the most to the information flow, our results also reveal that TE and AIS are maximized when the networks transition from a quiescent to an active state. The performance of neuromorphic networks in memory and learning tasks is demonstrated to depend on their internal dynamical states as well as on their topological structure. Optimal performance is found when these networks are pre-initialised to the transition state where TE and AIS are maximal. Furthermore, an optimal range of information processing resources (i.e. connectivity density) is identified for performance. Overall, our results demonstrate that information dynamics is a valuable tool for studying and benchmarking neuromorphic systems.

20.
Chaos ; 20(3): 037109, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20887075

ABSTRACT

Distributed computation can be described in terms of the fundamental operations of information storage, transfer, and modification. To describe the dynamics of information in computation, we need to quantify these operations on a local scale in space and time. In this paper we extend previous work on the local quantification of information storage and transfer to explore how information modification can be quantified at each spatiotemporal point in a system. We introduce the separable information, a measure which locally identifies information modification events where separate inspection of the sources to a computation is misleading about its outcome. We apply this measure to cellular automata, where it is shown to be the first direct quantitative measure to provide evidence for the long-held conjecture that collisions between the emergent particles therein are the dominant information modification events.
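
The separable information builds on local (pointwise) profiles of storage and transfer. As background, the sketch below (ours, computing only the simpler local active information storage rather than the separable information itself) runs elementary cellular automaton rule 110 and evaluates local storage at every spacetime point; strongly negative local values flag the kind of 'misleading' dynamics that measures of information modification target.

```python
import numpy as np

rng = np.random.default_rng(7)
W, T, k, rule = 200, 400, 3, 110   # width, timesteps, history length, ECA rule

# Run an elementary cellular automaton with periodic boundaries
table = np.array([(rule >> i) & 1 for i in range(8)])
ca = np.zeros((T, W), dtype=int)
ca[0] = rng.integers(2, size=W)
for t in range(T - 1):
    idx = 4 * np.roll(ca[t], 1) + 2 * ca[t] + np.roll(ca[t], -1)
    ca[t + 1] = table[idx]

# Local active information storage a(i, t) = log2 p(next | past_k) / p(next),
# with probabilities estimated by counting over all cells and all times
past = np.zeros((T - k, W), dtype=int)
for j in range(k):
    past = past * 2 + ca[j:T - k + j]
nxt = ca[k:]
joint = np.zeros((2 ** k, 2))
np.add.at(joint, (past.ravel(), nxt.ravel()), 1)
joint /= joint.sum()
with np.errstate(divide="ignore", invalid="ignore"):
    p_next_given_past = joint / joint.sum(1, keepdims=True)
p_next = joint.sum(0)

local_ais = np.log2(p_next_given_past[past, nxt] / p_next[nxt])
print(f"average AIS = {local_ais.mean():.3f} bits")
print(f"most negative local value = {local_ais.min():.3f} bits")
```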
