Results 1 - 20 of 46
1.
Proc Natl Acad Sci U S A ; 120(37): e2303332120, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37669393

ABSTRACT

Synchronization phenomena on networks have attracted much attention in studies of neural, social, economic, and biological systems, yet we still lack a systematic understanding of how relative synchronizability relates to underlying network structure. Indeed, this question is of central importance to the key theme of how dynamics on networks relate to their structure more generally. We present an analytic technique to directly measure the relative synchronizability of noise-driven time-series processes on networks, in terms of the directed network structure. We consider both discrete-time autoregressive processes and continuous-time Ornstein-Uhlenbeck dynamics on networks, which can represent linearizations of nonlinear systems. Our technique builds on computation of the network covariance matrix in the space orthogonal to the synchronized state, enabling it to be more general than previous work in not requiring either symmetric (undirected) or diagonalizable connectivity matrices and allowing arbitrary self-link weights. More importantly, our approach quantifies the relative synchronization specifically in terms of the contribution of process motif (walk) structures. We demonstrate that in general the relative abundance of process motifs with convergent directed walks (including feedback and feedforward loops) hinders synchronizability. We also reveal subtle differences between the motifs involved for discrete or continuous-time dynamics. Our insights analytically explain several known general results regarding synchronizability of networks, including that small-world and regular networks are less synchronizable than random networks.
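
As a minimal numerical illustration of the quantity analysed here (not the paper's analytic motif expansion), the following sketch simulates a discrete-time autoregressive process x(t+1) = C x(t) + noise on a small directed network and measures the steady-state fluctuations in the space orthogonal to the synchronized state; smaller residual variance indicates higher relative synchronizability. The coupling matrix and parameters are assumed for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 4-node coupling matrix (an assumed example, not one taken
    # from the paper), scaled so that the dynamics are stable.
    C = 0.25 * np.array([
        [1.0, 1.0, 0.0, 1.0],
        [1.0, 1.0, 1.0, 0.0],
        [0.0, 1.0, 1.0, 1.0],
        [1.0, 0.0, 1.0, 1.0],
    ])

    n, T = C.shape[0], 50_000
    x = np.zeros(n)
    samples = np.empty((T, n))

    # Discrete-time autoregressive process on the network, driven by i.i.d.
    # Gaussian noise: x(t+1) = C x(t) + noise.
    for t in range(T):
        x = C @ x + rng.standard_normal(n)
        samples[t] = x

    # Fluctuations in the space orthogonal to the synchronized state: the
    # variance of deviations from the instantaneous network-wide mean.
    deviations = samples - samples.mean(axis=1, keepdims=True)
    print(f"steady-state variance orthogonal to the synchronized state: {deviations.var():.3f}")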

2.
Proc Biol Sci ; 289(1969): 20212361, 2022 02 23.
Article in English | MEDLINE | ID: mdl-35193400

ABSTRACT

Antarctic krill swarms are one of the largest known animal aggregations, and yet, despite being the keystone species of the Southern Ocean, little is known about how swarms are formed and maintained. Understanding the local interactions between individuals that provide the basis for these swarms is fundamental to knowing how swarms arise in nature, and what potential factors might lead to their breakdown. Here, we analysed the trajectories of captive, wild-caught krill in 3D to determine individual-level interaction rules and quantify patterns of information flow. Our results demonstrate that krill align with near neighbours and that they regulate both their direction and speed relative to the positions of groupmates. These results suggest that social factors are vital to the formation and maintenance of swarms. Furthermore, krill exhibit a novel form of collective organization, with measures of information flow and individual movement adjustments expressed most strongly in the vertical dimension, a finding not seen in other swarming species. This research represents an important step in understanding the fundamentally important swarming behaviour of krill.
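
As a toy illustration of the kind of interaction rules described (not the authors' fitted model), the sketch below simulates agents that align their headings with near neighbours and regulate their speed according to whether the local group centroid lies ahead of or behind them; all parameters are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    N, steps, radius, dt = 30, 200, 2.0, 0.1
    pos = rng.uniform(0, 5, size=(N, 2))           # positions
    theta = rng.uniform(0, 2 * np.pi, size=N)      # headings
    speed = np.full(N, 1.0)                        # individual speeds

    for _ in range(steps):
        heading = np.column_stack([np.cos(theta), np.sin(theta)])
        for i in range(N):
            # Alignment: adopt the mean heading of neighbours within `radius`
            # (the set always includes the focal individual itself).
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbrs = d < radius
            mean_vec = heading[nbrs].mean(axis=0)
            theta[i] = np.arctan2(mean_vec[1], mean_vec[0]) + 0.1 * rng.standard_normal()
            # Speed regulation: speed up if the local centroid lies ahead, slow down if behind.
            centroid = pos[nbrs].mean(axis=0)
            ahead = np.dot(centroid - pos[i], heading[i])
            speed[i] = np.clip(speed[i] + 0.05 * ahead, 0.5, 2.0)
        pos += dt * speed[:, None] * np.column_stack([np.cos(theta), np.sin(theta)])

    # Polar order parameter: 1 = perfectly aligned group, 0 = disordered.
    order = np.linalg.norm(np.column_stack([np.cos(theta), np.sin(theta)]).mean(axis=0))
    print(f"final alignment order parameter: {order:.2f}")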


Subjects
Euphausiacea , Animals , Antarctic Regions , Euphausiacea/physiology
3.
PLoS Comput Biol ; 17(4): e1008054, 2021 04.
Article in English | MEDLINE | ID: mdl-33872296

ABSTRACT

Transfer entropy (TE) is a widely used measure of directed information flows in a number of domains including neuroscience. Many real-world time series for which we are interested in information flows come in the form of (near) instantaneous events occurring over time. Examples include the spiking of biological neurons, trades on stock markets and posts to social media, amongst myriad other systems involving events in continuous time throughout the natural and social sciences. However, there exist severe limitations to the current approach to TE estimation on such event-based data via discretising the time series into time bins: it is not consistent, has high bias, converges slowly and cannot simultaneously capture relationships that occur with very fine time precision as well as those that occur over long time intervals. Building on recent work which derived a theoretical framework for TE in continuous time, we present an estimation framework for TE on event-based data and develop a k-nearest-neighbours estimator within this framework. This estimator is provably consistent, has favourable bias properties and converges orders of magnitude more quickly than the current state-of-the-art in discrete-time estimation on synthetic examples. We demonstrate failures of the traditionally-used source-time-shift method for null surrogate generation. In order to overcome these failures, we develop a local permutation scheme for generating surrogate time series conforming to the appropriate null hypothesis in order to test for the statistical significance of the TE and, as such, test for the conditional independence between the history of one point process and the updates of another. Our approach is shown to be capable of correctly rejecting or accepting the null hypothesis of conditional independence even in the presence of strong pairwise time-directed correlations. This capacity to accurately test for conditional independence is further demonstrated on models of a spiking neural circuit inspired by the pyloric circuit of the crustacean stomatogastric ganglion, succeeding where previous related estimators have failed.
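
For context, a minimal sketch of the conventional discrete-time (binned) plug-in transfer entropy whose limitations motivate this work, applied to source and target event trains already discretised onto a fixed time grid, with the source driving the target; the rates and coupling are illustrative assumptions, and the paper's continuous-time estimator is not reproduced here.

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(2)

    # Synthetic event data on a time grid: a Poisson-like source, and a target whose
    # event probability is raised in the bin after a source event (assumed coupling).
    n_bins = 100_000
    src = (rng.random(n_bins) < 0.05).astype(int)
    tgt = np.zeros(n_bins, dtype=int)
    for t in range(1, n_bins):
        tgt[t] = rng.random() < (0.20 if src[t - 1] else 0.02)

    def binned_transfer_entropy(source, target):
        """Plug-in TE(source -> target) with history length 1, in bits."""
        joint = Counter(zip(target[1:], target[:-1], source[:-1]))  # (x_next, x_past, y_past)
        n = sum(joint.values())
        def marginal(keep):
            m = Counter()
            for k, v in joint.items():
                m[tuple(k[i] for i in keep)] += v
            return m
        p_past_src = marginal((1, 2))    # p(x_past, y_past)
        p_next_past = marginal((0, 1))   # p(x_next, x_past)
        p_past = marginal((1,))          # p(x_past)
        te = 0.0
        for (x1, x0, y0), c in joint.items():
            te += (c / n) * np.log2((c / n) * (p_past[(x0,)] / n)
                                    / ((p_past_src[(x0, y0)] / n) * (p_next_past[(x1, x0)] / n)))
        return te

    print(f"binned TE(source -> target): {binned_transfer_entropy(src, tgt):.4f} bits")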


Subjects
Action Potentials , Entropy , Evoked Potentials , Neurons/physiology , Models, Neurological , Poisson Distribution
4.
PLoS Comput Biol ; 15(10): e1006957, 2019 10.
Article in English | MEDLINE | ID: mdl-31613882

ABSTRACT

A key component of the flexibility and complexity of the brain is its ability to dynamically adapt its functional network structure between integrated and segregated brain states depending on the demands of different cognitive tasks. Integrated states are prevalent when performing tasks of high complexity, such as maintaining items in working memory, consistent with models of a global workspace architecture. Recent work has suggested that the balance between integration and segregation is under the control of ascending neuromodulatory systems, such as the noradrenergic system, via changes in neural gain (in terms of the amplification and non-linearity in the stimulus-response transfer functions of brain regions). In a previous large-scale nonlinear oscillator model of neuronal network dynamics, we showed that manipulating neural gain parameters led to a 'critical' transition in phase synchrony that was associated with a shift from segregated to integrated topology, thus confirming our original prediction. In this study, we advance these results by demonstrating that the gain-mediated phase transition is characterized by a shift in the underlying dynamics of neural information processing. Specifically, the dynamics of the subcritical (segregated) regime are dominated by information storage, whereas the supercritical (integrated) regime is associated with increased information transfer (measured via transfer entropy). Operating near the critical regime with respect to modulating neural gain parameters would thus appear to provide computational advantages, offering flexibility in the information processing that can be performed with only subtle changes in gain control. Our results thus link studies of whole-brain network topology and the ascending arousal system with information processing dynamics, and suggest that the ascending arousal system constrains low-dimensional modes of information processing within the brain.


Subjects
Brain Mapping/methods , Brain/physiology , Mental Processes/physiology , Cognition/physiology , Computer Simulation , Humans , Magnetic Resonance Imaging/methods , Memory, Short-Term/physiology , Models, Neurological , Nerve Net/physiology , Neural Pathways/physiology , Neurons/physiology , Nonlinear Dynamics
5.
Entropy (Basel) ; 22(2), 2020 Feb 14.
Article in English | MEDLINE | ID: mdl-33285991

ABSTRACT

The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random variables. These measures complement the existing measures of multivariate mutual information and are constructed by considering the algebraic structure of information sharing. It is shown that the distinct ways in which a set of marginal observers can share their information with a non-observing third party correspond to the elements of a free distributive lattice. The redundancy lattice from partial information decomposition is then independently derived by combining the algebraic structures of joint and shared information content.
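
A short worked example of the problem motivating the paper: for binary variables with Z = X XOR Y, the multivariate (interaction) mutual information I(X;Y;Z) is negative, so no area-based Venn diagram can depict it faithfully. The sketch computes this directly from the joint distribution.

    import numpy as np
    from itertools import product

    # Joint distribution of (X, Y, Z) with X, Y fair coins and Z = X XOR Y.
    p = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

    def H(indices):
        """Joint Shannon entropy (bits) of the variables at the given indices."""
        marg = {}
        for outcome, prob in p.items():
            key = tuple(outcome[i] for i in indices)
            marg[key] = marg.get(key, 0.0) + prob
        return -sum(q * np.log2(q) for q in marg.values() if q > 0)

    # Multivariate mutual information via inclusion-exclusion over entropies.
    I_xyz = (H([0]) + H([1]) + H([2])
             - H([0, 1]) - H([0, 2]) - H([1, 2])
             + H([0, 1, 2]))
    print(f"I(X;Y;Z) = {I_xyz:+.1f} bit")   # prints -1.0: a negative 'overlap'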

6.
J Neurosci ; 37(34): 8273-8283, 2017 08 23.
Article in English | MEDLINE | ID: mdl-28751458

ABSTRACT

Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre)activated prior knowledge serving these predictions are still unknown. Based on the idea that such preactivated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time courses from MEG recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Preactivation of prior knowledge for faces manifested as α-band-related and β-band-related AIS increases in content-specific areas; these AIS increases were behaviorally relevant in the brain's fusiform face area. Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Our results support accounts indicating that activated prior knowledge and the corresponding predictions are signaled in low-frequency activity (<30 Hz). SIGNIFICANCE STATEMENT: Our perception is not only determined by the information our eyes/retina and other sensory organs receive from the outside world, but also depends strongly on information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is rare, mostly because this kind of evidence requires strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented in low-frequency brain activity.
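
For readers unfamiliar with the measure: active information storage (AIS) is the mutual information between a process's past state and its next value, AIS = I(X_past(k); X_n). The sketch below is a minimal Gaussian-estimator computation on a synthetic autoregressive signal, not the authors' MEG pipeline; the AR coefficients and history length k are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic signal with genuine memory: an AR(2) process.
    T = 50_000
    x = np.zeros(T)
    for t in range(2, T):
        x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

    def active_information_storage(x, k):
        """Gaussian-estimator AIS (bits): I(past k values ; next value)."""
        # Embedding matrix whose rows are (x_n, ..., x_{n+k-1}, x_{n+k}).
        rows = np.column_stack([x[i:len(x) - k + i] for i in range(k + 1)])
        past, nxt = rows[:, :k], rows[:, k]
        joint = np.column_stack([past, nxt])
        # For jointly Gaussian variables:
        # I = 0.5 * log2( |Sigma_past| * var(next) / |Sigma_joint| ).
        _, logdet_p = np.linalg.slogdet(np.cov(past, rowvar=False))
        _, logdet_j = np.linalg.slogdet(np.cov(joint, rowvar=False))
        return 0.5 * (logdet_p + np.log(np.var(nxt, ddof=1)) - logdet_j) / np.log(2)

    print(f"AIS (k=2): {active_information_storage(x, 2):.3f} bits")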


Subjects
Brain Waves/physiology , Brain/physiology , Facial Recognition/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Adult , Female , Forecasting , Humans , Magnetoencephalography/methods , Male , Young Adult
7.
Hum Brain Mapp ; 39(8): 3227-3240, 2018 08.
Article in English | MEDLINE | ID: mdl-29617056

ABSTRACT

The neurophysiological underpinnings of the nonsocial symptoms of autism spectrum disorder (ASD), which include sensory and perceptual atypicalities, remain poorly understood. Well-known accounts of less dominant top-down influences and more dominant bottom-up processes compete to explain these characteristics. These accounts have been recently embedded in the popular framework of predictive coding theory. To differentiate between competing accounts, we studied altered information dynamics in ASD by quantifying predictable information in neural signals. Predictable information in neural signals measures the amount of stored information that is used for the next time step of a neural process. Thus, predictable information limits the (prior) information which might be available for other brain areas, for example, to build predictions for upcoming sensory information. We studied predictable information in neural signals based on resting-state magnetoencephalography (MEG) recordings of 19 ASD patients and 19 neurotypical controls aged between 14 and 27 years. Using whole-brain beamformer source analysis, we found reduced predictable information in ASD patients across the whole brain, but in particular in posterior regions of the default mode network. In these regions, epoch-by-epoch predictable information was positively correlated with source power in the alpha and beta frequency range as well as autocorrelation decay time. Predictable information in precuneus and cerebellum was negatively associated with nonsocial symptom severity, indicating the relevance of analysing predictable information for clinical research in ASD. Our findings are compatible with the assumption that the use or precision of prior knowledge is reduced in ASD patients.


Subjects
Autism Spectrum Disorder/physiopathology , Brain/physiopathology , Adolescent , Adult , Brain Mapping , Humans , Magnetoencephalography , Male , Rest , Signal Processing, Computer-Assisted , Young Adult
8.
Entropy (Basel) ; 20(4), 2018 Apr 18.
Article in English | MEDLINE | ID: mdl-33265388

ABSTRACT

What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information, which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then, based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
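
For context, a minimal sketch of the original Williams-Beer redundancy measure I_min that defines the redundancy lattice this work builds on (the paper itself replaces it with pointwise measures over specificity and ambiguity, which are not implemented here). Applied to the two-bit-copy distribution mentioned above, I_min assigns a full bit of 'redundancy', one of the criticised behaviours the abstract refers to; the plug-in implementation below is illustrative only.

    import numpy as np
    from itertools import product

    # Two-bit-copy example: S1, S2 are independent fair bits and T = (S1, S2).
    p = {(s1, s2, (s1, s2)): 0.25 for s1, s2 in product((0, 1), repeat=2)}

    def marginal(keep):
        out = {}
        for (s1, s2, t), prob in p.items():
            key = tuple((s1, s2, t)[i] for i in keep)
            out[key] = out.get(key, 0.0) + prob
        return out

    def specific_information(t, source_idx):
        """I(T=t ; S_i): information this source provides about one target outcome."""
        p_t = marginal((2,))[(t,)]
        p_s = marginal((source_idx,))
        p_st = marginal((source_idx, 2))
        total = 0.0
        for (s, tt), p_joint in p_st.items():
            if tt != t:
                continue
            p_s_given_t = p_joint / p_t
            p_t_given_s = p_joint / p_s[(s,)]
            total += p_s_given_t * np.log2(p_t_given_s / p_t)
        return total

    # Williams-Beer redundancy: expected minimum specific information over sources.
    targets = marginal((2,))
    i_min = sum(p_t * min(specific_information(t, 0), specific_information(t, 1))
                for (t,), p_t in targets.items())
    print(f"I_min redundancy for two-bit copy: {i_min:.1f} bit")  # 1.0 bit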

9.
Entropy (Basel) ; 20(11), 2018 Oct 28.
Article in English | MEDLINE | ID: mdl-33266550

ABSTRACT

Information is often described as a reduction of uncertainty associated with a restriction of possible choices. Although this interpretation appears in Hartley's foundational work on information theory, it has surprisingly lacked a formal treatment in terms of exclusions. This paper addresses the gap by providing an explicit characterisation of information in terms of probability mass exclusions. It then demonstrates that different exclusions can yield the same amount of information and discusses the insight this provides about how information is shared amongst random variables; the lack of progress in this area is a key barrier preventing us from understanding how information is distributed in complex systems. The paper closes by deriving a decomposition of the mutual information which can distinguish between differing exclusions; this provides surprising insight into the nature of directed information.
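
A tiny worked example of the point that different exclusions can yield the same amount of information: for a target uniform over four outcomes, two observations that exclude different halves of the probability mass each provide exactly one bit of pointwise information about the same target event. The numbers are assumed purely for illustration.

    import numpy as np

    # Target X uniform over {0, 1, 2, 3}: p(x) = 1/4 for each outcome.
    p_x = {x: 0.25 for x in range(4)}

    # Two observations about X, described purely by which outcomes they exclude.
    exclusions = {
        "y1 excludes {2, 3}": {2, 3},
        "y2 excludes {1, 3}": {1, 3},
    }

    target_event = 0
    for label, excluded in exclusions.items():
        # Posterior over X after removing the excluded mass and renormalising.
        remaining = {x: p for x, p in p_x.items() if x not in excluded}
        z = sum(remaining.values())
        posterior = {x: p / z for x, p in remaining.items()}
        # Pointwise mutual information i(x=0 ; y) = log2 p(x|y) / p(x).
        pmi = np.log2(posterior[target_event] / p_x[target_event])
        print(f"{label}: i(x=0; y) = {pmi:.1f} bit")
    # Both observations yield 1.0 bit about x = 0, despite excluding different mass.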

10.
Entropy (Basel) ; 20(4), 2018 Apr 23.
Article in English | MEDLINE | ID: mdl-33265398

ABSTRACT

The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on "Information Decomposition of Target Effects from Multi-Source Interactions" in Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted and applied to empirical investigations. We then introduce the articles included in the special issue one by one, providing a similar categorisation of these articles into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches; and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.

11.
Brain Cogn ; 112: 25-38, 2017 03.
Article in English | MEDLINE | ID: mdl-26475739

ABSTRACT

In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example for such a motif is the canonical microcircuit of six-layered neocortex, which is repeated across cortical areas, and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon Information theory, called partial information decomposition (PID). PID allows us to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows us to compare these goal functions in a common framework, and also provides a versatile approach to design new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which builds on combining external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.


Subjects
Brain/physiology , Goals , Information Theory , Nerve Net/physiology , Humans , Models, Neurological
12.
Cell Rep ; 43(6): 114359, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38870015

ABSTRACT

There is substantial evidence that neuromodulatory systems critically influence brain state dynamics; however, most work has been purely descriptive. Here, using data that combine local inactivation of the basal forebrain with simultaneous measurement of resting-state fMRI activity in the macaque, we quantify the causal role of long-range cholinergic input in the stabilization of brain states in the cerebral cortex. Local inactivation of the nucleus basalis of Meynert (nbM) lowers the energy barriers required for fMRI state transitions in ongoing cortical activity. Moreover, the inactivation of particular nbM sub-regions predominantly affects information transfer in cortical regions known to receive direct anatomical projections. We demonstrate these results in a simple neurodynamical model of cholinergic impact on neuronal firing rates and slow hyperpolarizing adaptation currents. We conclude that the cholinergic system plays a critical role in stabilizing macroscale brain state dynamics.


Subjects
Magnetic Resonance Imaging , Animals , Basal Nucleus of Meynert/physiology , Basal Nucleus of Meynert/metabolism , Acetylcholine/metabolism , Macaca mulatta , Male , Cholinergic Neurons/physiology , Cholinergic Neurons/metabolism , Cerebral Cortex/physiology , Cerebral Cortex/metabolism , Neurons/metabolism , Neurons/physiology , Models, Neurological
13.
Phys Rev Lett ; 111(17): 177203, 2013 Oct 25.
Article in English | MEDLINE | ID: mdl-24206517

ABSTRACT

There is growing evidence that for a range of dynamical systems featuring complex interactions between large ensembles of interacting elements, mutual information peaks at order-disorder phase transitions. We conjecture that, by contrast, information flow in such systems will generally peak strictly on the disordered side of a phase transition. This conjecture is verified for a ferromagnetic 2D lattice Ising model with Glauber dynamics and a transfer entropy-based measure of systemwide information flow. Implications of the conjecture are considered; in particular, for a complex dynamical system in the process of transitioning from disordered to ordered dynamics (a mechanism implicated, for example, in financial market crashes and the onset of some types of epileptic seizures), information dynamics may be able to predict an imminent transition.
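
A minimal sketch of the setting described (far smaller than, and not reproducing, the published experiment): Glauber dynamics on a tiny 2D Ising lattice, with the plug-in mutual information between two neighbouring spins computed at temperatures below, near, and above the critical point, where, as the opening sentence notes, mutual information is expected to peak. The paper's systemwide transfer-entropy measure is not implemented here, and the lattice size, sweep counts, and temperatures are illustrative assumptions.

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(4)

    def glauber_series(L, temp, sweeps):
        """Glauber dynamics on an L x L lattice; record two neighbouring spins per sweep."""
        s = rng.choice([-1, 1], size=(L, L))
        series = np.empty((sweeps, 2), dtype=int)
        for t in range(sweeps):
            for _ in range(L * L):                      # one sweep = L*L single-spin updates
                i, j = rng.integers(L), rng.integers(L)
                nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                      + s[i, (j + 1) % L] + s[i, (j - 1) % L])
                dE = 2.0 * s[i, j] * nb                 # energy change if s[i, j] is flipped
                if rng.random() < 1.0 / (1.0 + np.exp(dE / temp)):
                    s[i, j] *= -1
            series[t] = s[0, 0], s[0, 1]
        return series

    def mutual_information(a, b):
        """Plug-in mutual information (bits) between two binary time series."""
        joint = Counter(zip(a, b))
        n = sum(joint.values())
        pa, pb = Counter(a), Counter(b)
        return sum((v / n) * np.log2((v / n) / ((pa[x] / n) * (pb[y] / n)))
                   for (x, y), v in joint.items())

    # The infinite-lattice critical temperature is ~2.27 (in units of J/k_B).
    for temp in (1.5, 2.3, 3.5):
        ser = glauber_series(L=8, temp=temp, sweeps=5000)
        mi = mutual_information(ser[:, 0], ser[:, 1])
        print(f"T = {temp}: MI between neighbouring spins = {mi:.3f} bits")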

14.
Nat Commun ; 14(1): 6697, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37914696

ABSTRACT

Nanowire Networks (NWNs) belong to an emerging class of neuromorphic systems that exploit the unique physical properties of nanostructured materials. In addition to their neural network-like physical structure, NWNs also exhibit resistive memory switching in response to electrical inputs due to synapse-like changes in conductance at nanowire-nanowire cross-point junctions. Previous studies have demonstrated how the neuromorphic dynamics generated by NWNs can be harnessed for temporal learning tasks. This study extends these findings further by demonstrating online learning from spatiotemporal dynamical features using image classification and sequence memory recall tasks implemented on an NWN device. Applied to the MNIST handwritten digit classification task, online dynamical learning with the NWN device achieves an overall accuracy of 93.4%. Additionally, we find a correlation between the classification accuracy of individual digit classes and mutual information. The sequence memory task reveals how memory patterns embedded in the dynamical features enable online learning and recall of a spatiotemporal sequence pattern. Overall, these results provide proof-of-concept of online learning from spatiotemporal dynamics using NWNs and further elucidate how memory can enhance learning.

15.
Nat Comput Sci ; 3(10): 883-893, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38177751

ABSTRACT

Scientists have developed hundreds of techniques to measure the interactions between pairs of processes in complex systems, but these computational methods, from contemporaneous correlation coefficients to causal inference methods, define and formulate interactions differently, using distinct quantitative theories that remain largely disconnected. Here we introduce an assembled library of 237 statistics of pairwise interactions, and assess their behavior on 1,053 multivariate time series from a wide range of real-world and model-generated systems. Our analysis highlights commonalities between disparate mathematical formulations of interactions, providing a unified picture of a rich interdisciplinary literature. Using three real-world case studies, we then show that simultaneously leveraging diverse methods can uncover those most suitable for addressing a given problem, facilitating interpretable understanding of the quantitative formulation of pairwise dependencies that drive successful performance. Our results and accompanying software enable comprehensive analysis of time-series interactions by drawing on decades of diverse methodological contributions.
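
The accompanying software library is not shown here; as a generic illustration of the underlying idea, comparing disparate formulations of a pairwise interaction on the same data, the sketch below computes three quite different statistics (a linear contemporaneous correlation, a rank-based correlation, and a directed lag-1 correlation) for one pair of coupled synthetic time series. The data and parameters are assumptions for the sketch.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Two coupled synthetic processes: y follows x with a one-step lag plus noise.
    T = 2000
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.8 * x[t - 1] + rng.standard_normal()
    y = np.roll(x, 1) + 0.5 * rng.standard_normal(T)
    x, y = x[1:], y[1:]      # drop the wrapped-around first sample

    # Three quite different formulations of a "pairwise interaction":
    pearson = stats.pearsonr(x, y)[0]           # linear, contemporaneous
    spearman = stats.spearmanr(x, y)[0]         # monotonic, rank-based
    lagged = np.corrcoef(x[:-1], y[1:])[0, 1]   # directed, lag-1 correlation

    print(f"Pearson r        = {pearson:.3f}")
    print(f"Spearman rho     = {spearman:.3f}")
    print(f"lag-1 corr x->y  = {lagged:.3f}")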

16.
Nat Commun ; 14(1): 6846, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37891167

ABSTRACT

The human brain displays a rich repertoire of states that emerge from the microscopic interactions of cortical and subcortical neurons. Difficulties inherent within large-scale simultaneous neuronal recording limit our ability to link biophysical processes at the microscale to emergent macroscopic brain states. Here we introduce a microscale biophysical network model of layer-5 pyramidal neurons that display graded coarse-sampled dynamics matching those observed in macroscale electrophysiological recordings from macaques and humans. We invert our model to identify the neuronal spike and burst dynamics that differentiate unconscious, dreaming, and awake arousal states and provide insights into their functional signatures. We further show that neuromodulatory arousal can mediate different modes of neuronal dynamics around a low-dimensional energy landscape, which in turn changes the response of the model to external stimuli. Our results highlight the promise of multiscale modelling to bridge theories of consciousness across spatiotemporal scales.


Subjects
Brain , Neurons , Animals , Humans , Brain/physiology , Neurons/physiology , Consciousness/physiology , Pyramidal Cells , Arousal , Macaca
17.
Cell Rep ; 42(8): 112844, 2023 08 29.
Article in English | MEDLINE | ID: mdl-37498741

ABSTRACT

The neurobiological mechanisms of arousal and anesthesia remain poorly understood. Recent evidence highlights the key role of interactions between the cerebral cortex and the diffusely projecting matrix thalamic nuclei. Here, we interrogate these processes in a whole-brain corticothalamic neural mass model endowed with targeted and diffusely projecting thalamocortical nuclei inferred from empirical data. This model captures key features seen in propofol anesthesia, including diminished network integration, lowered state diversity, impaired susceptibility to perturbation, and decreased corticocortical coherence. Collectively, these signatures reflect a suppression of information transfer across the cerebral cortex. We recover these signatures of conscious arousal by selectively stimulating the matrix thalamus, recapitulating empirical results in macaque, as well as wake-like information processing states that reflect the thalamic modulation of large-scale cortical attractor dynamics. Our results highlight the role of matrix thalamocortical projections in shaping many features of complex cortical dynamics to facilitate the unique communication states supporting conscious awareness.


Subjects
Cerebral Cortex , Propofol , Thalamus , Consciousness , Thalamic Nuclei , Propofol/pharmacology , Neural Pathways
18.
Elife ; 11, 2022 03 14.
Article in English | MEDLINE | ID: mdl-35286256

ABSTRACT

The brains of many organisms are capable of complicated distributed computation underpinned by a highly advanced information processing capacity. Although substantial progress has been made towards characterising the information flow component of this capacity in mature brains, there is a distinct lack of work characterising its emergence during neural development. This lack of progress has been driven largely by the absence of effective estimators of information processing operations for spiking data. Here, we leverage recent advances in this estimation task in order to quantify the changes in transfer entropy during development. We do so by studying the changes in the intrinsic dynamics of the spontaneous activity of developing dissociated neural cell cultures. We find that the quantity of information flowing across these networks undergoes a dramatic increase across development. Moreover, the spatial structure of these flows exhibits a tendency to lock in at the point when they arise. We also characterise the flow of information during the crucial periods of population bursts. We find that, during these bursts, nodes tend to undertake specialised computational roles as either transmitters, mediators, or receivers of information, with these roles tending to align with their average spike ordering. Further, we find that these roles are regularly locked in when the information flows are established. Finally, we compare these results to information flows in a model network developing according to a spike-timing-dependent plasticity learning rule. Similar temporal patterns in the development of information flows were observed in these networks, hinting at the broader generality of these phenomena.


Subjects
Models, Neurological , Neuronal Plasticity , Action Potentials , Neural Networks, Computer , Neurons
19.
J Comput Neurosci ; 30(1): 85-107, 2011 Feb.
Article in English | MEDLINE | ID: mdl-20799057

ABSTRACT

The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure between brain regions, and how that structure changes according to behavioral conditions. This method is distinguished in using asymmetric, multivariate, information-theoretical analysis, which captures not only directional and non-linear relationships, but also collective interactions. Importantly, the method is able to estimate multivariate information measures with only relatively little data. We demonstrate the method by analyzing functional magnetic resonance imaging time series to establish the directed information structure between brain regions involved in a visuo-motor tracking task. Notably, this results in a tiered structure, with known movement planning regions driving visual and motor control regions. Also, we examine the changes in this structure as the difficulty of the tracking task is increased. We find that task difficulty modulates the coupling strength between regions of a cortical network involved in movement planning and between motor cortex and the cerebellum, which is involved in the fine-tuning of motor control. It is likely these methods will find utility in identifying interregional structure (and experimentally induced changes in this structure) in other cognitive tasks and data modalities.


Subjects
Brain Mapping , Brain/blood supply , Brain/physiology , Information Theory , Models, Neurological , Computer Simulation , Entropy , Humans , Image Processing, Computer-Assisted/methods , Movement/physiology , Nerve Net/blood supply , Nerve Net/physiology , Oxygen/blood , Transfer (Psychology)
20.
Netw Neurosci ; 5(2): 373-404, 2021.
Article in English | MEDLINE | ID: mdl-34189370

ABSTRACT

Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting properties of these networks requires inferred network models to reflect key underlying structural features. However, even a few spurious links can severely distort network measures, posing a challenge for functional connectomes. We study the extent to which micro- and macroscopic properties of underlying networks can be inferred by algorithms based on mutual information and bivariate/multivariate transfer entropy. The validation is performed on two macaque connectomes and on synthetic networks with various topologies (regular lattice, small-world, random, scale-free, modular). Simulations are based on a neural mass model and on autoregressive dynamics (employing Gaussian estimators for direct comparison to functional connectivity and Granger causality). We find that multivariate transfer entropy captures key properties of all network structures for longer time series. Bivariate methods can achieve higher recall (sensitivity) for shorter time series but are unable to control false positives (lower specificity) as available data increases. This leads to overestimated clustering, small-world, and rich-club coefficients, underestimated shortest path lengths and hub centrality, and fattened degree distribution tails. Caution should therefore be used when interpreting network properties of functional connectomes obtained via correlation or pairwise statistical dependence measures, rather than more holistic (yet data-hungry) multivariate models.
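
A small sketch of the false-positive effect described above, using a Gaussian-estimator transfer entropy (equivalent to Granger causality for linear-Gaussian dynamics, as the abstract notes) on a three-node chain X -> Y -> Z with no direct X -> Z link: the bivariate TE from X to Z is spuriously positive, while conditioning on Y, as a multivariate estimator would, removes it. Coupling strengths and history lengths are assumed for illustration.

    import numpy as np

    rng = np.random.default_rng(6)

    # Linear chain X -> Y -> Z (no direct X -> Z link), Gaussian noise.
    T = 50_000
    x, y, z = np.zeros(T), np.zeros(T), np.zeros(T)
    for t in range(1, T):
        x[t] = 0.7 * x[t - 1] + rng.standard_normal()
        y[t] = 0.8 * x[t - 1] + 0.5 * rng.standard_normal()
        z[t] = 0.8 * y[t - 1] + 0.5 * rng.standard_normal()

    def residual_variance(target, predictors):
        """Variance of the residuals of an ordinary least-squares fit."""
        A = np.column_stack(predictors + [np.ones_like(target)])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ beta)

    def gaussian_te(src_past, tgt_next, tgt_past, cond_past=()):
        """TE(src -> tgt | cond), history length 1, Gaussian estimator, in bits."""
        reduced = residual_variance(tgt_next, [tgt_past, *cond_past])
        full = residual_variance(tgt_next, [tgt_past, src_past, *cond_past])
        return 0.5 * np.log2(reduced / full)

    z_next, z_past, y_past, x_past = z[2:], z[1:-1], y[1:-1], x[1:-1]

    te_bivariate = gaussian_te(x_past, z_next, z_past)
    te_conditional = gaussian_te(x_past, z_next, z_past, cond_past=(y_past,))
    print(f"bivariate   TE(X -> Z)     = {te_bivariate:.4f} bits  (spuriously > 0)")
    print(f"conditional TE(X -> Z | Y) = {te_conditional:.4f} bits (~ 0)")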
