ABSTRACT
How the human brain processes information during different cognitive tasks is one of the greatest questions in contemporary neuroscience. Understanding the statistical properties of brain signals during specific activities is one promising way to address this question. Here we analyze freely available data from implanted electrocorticography (ECoG) in five human subjects during two different cognitive tasks using information-theoretic quantifiers. We employ a symbolic information approach to determine the probability distribution function associated with the time series from different cortical areas. We then use these probabilities to calculate the associated Shannon entropy and a statistical complexity measure based on the disequilibrium between the actual time series and one with a uniform probability distribution function. We show that a Euclidean distance in the complexity-entropy plane and an asymmetry index for complexity are useful for comparing the two conditions. Our method can distinguish visual search epochs from blank screen intervals in different electrodes and patients. By using a multiscale approach and embedding time delays to downsample the data, we identify the timescales at which the relevant information is being processed. We also determine the cortical regions and time intervals along the 2-s-long trials that show the most pronounced differences between the two cognitive tasks. Finally, we show that the method can distinguish cognitive processes from brain activity on a trial-by-trial basis.
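A minimal sketch of the Bandt-Pompe symbolic pipeline described above: estimate the ordinal-pattern distribution of a time series, then compute the normalized Shannon entropy H and the Jensen-Shannon statistical complexity C from the disequilibrium to the uniform distribution. The embedding dimension and delay are illustrative choices, not the authors' code or parameters:

```python
# Ordinal-pattern (Bandt-Pompe) probabilities, normalized Shannon entropy H,
# and Jensen-Shannon statistical complexity C for a 1-D time series.
import itertools
import math
import numpy as np

def ordinal_probs(x, d=4, tau=1):
    """Relative frequency of each ordinal pattern of length d, delay tau."""
    patterns = {p: 0 for p in itertools.permutations(range(d))}
    n = len(x) - (d - 1) * tau
    for i in range(n):
        window = x[i:i + d * tau:tau]
        patterns[tuple(np.argsort(window))] += 1
    return np.array([c / n for c in patterns.values()])

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def complexity_entropy(x, d=4, tau=1):
    p = ordinal_probs(x, d, tau)
    n_states = math.factorial(d)
    h = shannon(p) / np.log(n_states)                 # normalized entropy H
    u = np.full(n_states, 1.0 / n_states)             # uniform reference
    js = shannon(0.5 * (p + u)) - 0.5 * shannon(p) - 0.5 * shannon(u)
    # maximum possible JS divergence, so the disequilibrium lies in [0, 1]
    js_max = -0.5 * (((n_states + 1) / n_states) * np.log(n_states + 1)
                     - 2 * np.log(2 * n_states) + np.log(n_states))
    return h, (js / js_max) * h                       # (H, complexity C = Q * H)

rng = np.random.default_rng(0)
print(complexity_entropy(rng.normal(size=5000)))      # white noise: high H, low C
```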
Subject(s)
Cognition; Electrocorticography; Humans; Brain/physiology; Models, Neurological; Information Theory; Entropy
ABSTRACT
The relation between electroencephalography (EEG) rhythms, brain functions, and behavioral correlates is well established. Some physiological mechanisms underlying rhythm generation are understood, enabling the replication of brain rhythms in silico. This offers a pathway to explore connections between neural oscillations and specific neuronal circuits, potentially yielding fundamental insights into the functional properties of brain waves. Information theory frameworks, such as Integrated Information Decomposition (Φ-ID), relate dynamical regimes to informational properties, providing deeper insights into the functions of neuronal dynamics. Here, we investigate wave emergence in an excitatory/inhibitory (E/I) balanced network of integrate-and-fire neurons with short-term synaptic plasticity. This model produces a diverse range of EEG-like rhythms, from low δ waves to high-frequency oscillations. Through Φ-ID, we analyze the network's information dynamics and their relation to the different emergent rhythms, elucidating the system's suitability for functions such as robust information transfer, storage, and parallel operation. Furthermore, our study helps to identify regimes that may resemble pathological states due to poor informational properties and high randomness. We found, for example, that in silico β and δ waves are associated with maximum information transfer in inhibitory and excitatory neuron populations, respectively, and that the coexistence of excitatory θ, α, and β waves is associated with information storage. Additionally, we observed that high-frequency oscillations can exhibit either rich or poor informational properties, potentially shedding light on ongoing discussions regarding physiological versus pathological high-frequency oscillations. In summary, our study demonstrates that dynamical regimes with similar oscillations may exhibit vastly different information dynamics. Characterizing the information dynamics within these regimes serves as a potent tool for gaining insights into the functions of complex neuronal networks. Finally, our findings suggest that the use of information dynamics in the analysis of both model and experimental data could help discriminate between oscillations associated with cognitive functions and those linked to neuronal disorders.
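Φ-ID itself requires specialized estimators; as a simple, self-contained stand-in for the transfer-related quantities discussed above, one can compute time-lagged mutual information between two population-rate signals. The signals below are synthetic surrogates, not output of the paper's E/I spiking network:

```python
# Time-lagged mutual information between two "population rate" signals,
# estimated with a binned plug-in estimator.
import numpy as np

def mutual_info(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def lagged_mi(source, target, lag):
    return mutual_info(source[:-lag], target[lag:])

rng = np.random.default_rng(1)
e_rate = rng.normal(size=10_000)                                   # "excitatory" rate
i_rate = 0.8 * np.roll(e_rate, 5) + 0.2 * rng.normal(size=10_000)  # lags E by 5 steps
print([round(lagged_mi(e_rate, i_rate, lag), 3) for lag in (1, 5, 10)])  # peaks at 5
```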
Subject(s)
Computer Simulation; Electroencephalography; Models, Neurological; Humans; Brain/physiology; Computational Biology; Brain Waves/physiology; Neurons/physiology; Neuronal Plasticity/physiology; Nerve Net/physiology; Information Theory
ABSTRACT
Identifying associations between phenotype and genotype is the fundamental basis of genetic analyses. Inspired by frequentist probability and the work of R. A. Fisher, genome-wide association studies (GWAS) extract information from genotype-phenotype datasets using averages and variances. Averages and variances are legitimized by creating distribution density functions obtained through the grouping of data into categories. However, as data from within a given category cannot be differentiated, the investigative power of such methodologies is limited. Genomic informational field theory (GIFT) is a method specifically designed to circumvent this issue. GIFT proceeds in the opposite direction to GWAS: whereas GWAS determines the extent to which genes are involved in phenotype formation (a bottom-up approach), GIFT determines the degree to which the phenotype can select microstates (genes) for its subsistence (a top-down approach). Doing so requires new genetic concepts, known as genetic paths, upon which significance levels for genotype-phenotype associations can be determined. Using different datasets obtained in Ovis aries related to bone growth (dataset 1) and to a series of linked metabolic and epigenetic pathways (dataset 2), we demonstrate that removing the informational barrier imposed by categories enhances the investigative and discriminative powers of GIFT, namely that GIFT extracts more information than GWAS. We conclude by suggesting that GIFT is an adequate tool to study how phenotypic plasticity and genetic assimilation are linked. NEW & NOTEWORTHY The genetic basis of complex traits remains challenging to investigate using classic genome-wide association studies (GWAS). Given the success of gene-editing technologies, this point needs to be addressed urgently, since such technologies can only be useful if precise genotype-phenotype mapping information is available in the first place. Genomic informational field theory (GIFT) is a new mapping method designed to increase the investigative power of biological/medical datasets, suggesting in turn the need to rethink the conceptual bases of quantitative genetics.
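A simplified illustration of the phenotype-ordered "genetic path" idea, as we read it (synthetic data; the recoding and the excursion statistic are our assumptions, not the published GIFT procedure): rank individuals by phenotype, recode the genotype at one locus, and compare the cumulative path against a permutation null:

```python
# Phenotype-ordered cumulative "genetic path" versus a permutation null path.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
genotype = rng.choice([-1, 0, 1], size=n, p=[0.25, 0.5, 0.25])  # aa / Aa / AA
phenotype = 0.4 * genotype + rng.normal(size=n)                  # locus has a real effect

order = np.argsort(phenotype)
path = np.cumsum(genotype[order] - genotype.mean())              # phenotype-ordered path
null = np.cumsum(rng.permutation(genotype) - genotype.mean())    # one null path

print("max |path| (phenotype-ordered):", round(float(np.abs(path).max()), 1))
print("max |path| (permuted):         ", round(float(np.abs(null).max()), 1))
```

A locus associated with the phenotype bends the ordered path well beyond the excursions of the permuted one.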
Subject(s)
Genome-Wide Association Study; Genotype; Phenotype; Genome-Wide Association Study/methods; Humans; Genomics/methods; Genetic Association Studies/methods; Information Theory
ABSTRACT
Real-world adversarial patches have been shown to compromise state-of-the-art models in various computer vision applications. Most existing defenses rely on analyzing input- or feature-level gradients to detect the patch. However, these methods have been compromised by recent GAN-based attacks that generate naturalistic patches. In this paper, we propose a new perspective for defending against adversarial patches based on the entropy carried by the input, rather than on its saliency. We present Jedi, a new defense against adversarial patches that tackles the patch localization problem from an information theory perspective, leveraging the high entropy of adversarial patches to identify potential patch zones and using an autoencoder to complete patch regions from high-entropy kernels. Jedi achieves high-precision adversarial patch localization and removal, detecting on average 90% of adversarial patches across different benchmarks and recovering up to 94% of successful patch attacks. Since Jedi relies on an input entropy analysis, it is model-agnostic and can be applied to off-the-shelf models without changes to their training or inference. Moreover, we propose a comprehensive qualitative analysis that investigates the cases where Jedi fails, in comparison with related methods. Interestingly, we find that a significant core of failure cases across the different defenses shares one common property: high entropy. We believe this work offers a new perspective for understanding the adversarial effect in physical-world settings. We also leverage these findings to enhance Jedi's handling of entropy outliers by introducing Adaptive Jedi, which boosts performance by up to 9% on challenging images.
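The first stage of an entropy-based patch defense of this kind can be sketched with a sliding-window Shannon entropy map that flags outlier tiles. Toy image; the window size and threshold are illustrative, and the paper's kernel extraction and autoencoder inpainting are not reproduced:

```python
# Local-entropy map over a grayscale image; tiles with outlier entropy are
# flagged as candidate adversarial-patch zones.
import numpy as np

def local_entropy(img, win=8):
    """Entropy (bits) of the gray-level histogram in each win x win tile."""
    h, w = img.shape
    ent = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            tile = img[i*win:(i+1)*win, j*win:(j+1)*win]
            counts = np.bincount(tile.ravel(), minlength=256)
            p = counts[counts > 0] / tile.size
            ent[i, j] = -np.sum(p * np.log2(p))
    return ent

rng = np.random.default_rng(3)
image = np.full((64, 64), 128, dtype=np.uint8)             # low-entropy background
image[16:32, 16:32] = rng.integers(0, 256, (16, 16))       # high-entropy "patch"
emap = local_entropy(image)
suspect = emap > emap.mean() + 2 * emap.std()              # illustrative threshold
print(np.argwhere(suspect))                                # tiles covering the patch
```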
Subject(s)
Entropy; Information Theory; Neural Networks, Computer; Algorithms; Humans
ABSTRACT
Originally developed as a theory of consciousness, integrated information theory provides a mathematical framework to quantify the causal irreducibility of systems and of subsets of units in a system. Specifically, mechanism integrated information quantifies how much of the causal powers of a subset of units in a state, also referred to as a mechanism, cannot be accounted for by its parts. If the causal powers of the mechanism can be fully explained by its parts, it is reducible and its integrated information is zero. Here, we study the upper bound of this measure and how it is achieved. We study mechanisms in isolation, groups of mechanisms, and groups of causal relations among mechanisms. We put forward new theoretical results showing that mechanisms that share parts with each other cannot all achieve their maximum. We also introduce techniques to design systems that maximize the integrated information of a subset of their mechanisms or relations. Our results can potentially be used to exploit symmetries and constraints to reduce the computational cost significantly and to compare different connectivity profiles in terms of their maximal achievable integrated information.
Subject(s)
Computational Biology; Information Theory; Computational Biology/methods; Humans; Consciousness/physiology; Models, Neurological; Algorithms; Computer Simulation
ABSTRACT
A resurgence of panpsychism and dualism is a matter of ongoing debate in modern neuroscience. Although mutually hostile metaphysically, panpsychism and dualism both persist in the science of consciousness: the former is proposed as a straightforward answer to the problem of integrating consciousness into the fabric of physical reality, whereas the latter offers a simple solution to the problem of free will by endowing consciousness with causal power as a prerequisite for moral responsibility. I take Integrated Information Theory (IIT) as a paradigmatic exemplar of a theory of consciousness (ToC) that makes its commitments to panpsychism and dualism within a unified framework. These features are not, however, unique to IIT. Many ToCs are implicitly prone to some degree of panpsychism whenever they strive to propose a universal definition of consciousness associated with one or another known phenomenon, and those ToCs that can be characterized as strongly emergent are at risk of being dualist. A remedy against both covert dualism and the uncomfortable corollaries of panpsychism can be found in the evolutionary theory of life, called here "bioprotopsychism" and generalized in terms of autopoiesis and the free energy principle. Bioprotopsychism provides a biologically inspired basis for a minimalist approach to consciousness via the triad "chemotaxis-efference copy mechanism-counterfactual active inference" by associating the stream of weakly emergent conscious states with an amount of information (best guesses) of the brain, engaged in unconscious predictive processing.
Subject(s)
Consciousness; Consciousness/physiology; Humans; Psychological Theory; Information Theory
ABSTRACT
Recently, Graph Neural Networks (GNNs) have gained widespread application in automatic brain network classification tasks, owing to their ability to directly capture crucial information in non-Euclidean structures. However, two primary challenges persist in this domain. First, in clinical neuro-medicine, signals from cerebral regions are inevitably contaminated with noise stemming from physiological or external factors. The construction of brain networks relies heavily on preset thresholds and feature information within brain regions, making it susceptible to incorporating such noise into the brain topology. Additionally, the static adjacency structure of the artificially constructed brain network cannot track real-time changes in brain topology. Second, mainstream GNN-based approaches tend to focus solely on capturing the information interactions of nearest-neighbor nodes, overlooking higher-order topological features. In response to these challenges, we propose an adaptive unsupervised Spatial-Temporal Dynamic Hypergraph Information Bottleneck (ST-DHIB) framework for dynamically optimizing brain networks. Specifically, adopting an information theory perspective, the Graph Information Bottleneck (GIB) is employed to purify the graph structure and dynamically update the processed input brain signals. From a graph theory standpoint, we utilize the designed Hypergraph Neural Network (HGNN) and Bi-LSTM to capture higher-order spatial-temporal context associations among brain channels. Comprehensive patient-specific and cross-patient experiments were conducted on two available datasets. The results demonstrate the advancement and generalization ability of the proposed framework.
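The graph-specific machinery of ST-DHIB is beyond a short sketch, but the information-bottleneck principle behind the GIB step can be illustrated generically with a variational IB loss in PyTorch. All tensors, dimensions, and the beta value are illustrative, not from the paper:

```python
# Generic variational information bottleneck (VIB): a task term standing in
# for I(Z;Y) plus a KL compression term upper-bounding I(Z;X) by pulling
# q(z|x) toward a standard-normal prior.
import torch
import torch.nn.functional as F

def vib_loss(mu, logvar, logits, labels, beta=1e-3):
    task = F.cross_entropy(logits, labels)
    # KL( N(mu, sigma^2) || N(0, I) ) in closed form, averaged over the batch
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    return task + beta * kl

# toy usage with a random "encoder" output
mu, logvar = torch.randn(32, 16), torch.randn(32, 16)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterization trick
logits = torch.nn.Linear(16, 4)(z)
labels = torch.randint(0, 4, (32,))
print(vib_loss(mu, logvar, logits, labels))
```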
Subject(s)
Brain; Neural Networks, Computer; Humans; Brain/physiology; Nerve Net/physiology; Information Theory
ABSTRACT
Whether and how well people can behave randomly is of interest in many areas of psychological research. The ability to generate randomness is often investigated using random number generation (RNG) tasks, in which participants are asked to generate a sequence of numbers that is as random as possible. However, there is no consensus on how best to quantify the randomness of responses in human-generated sequences. Traditionally, psychologists have used measures of randomness that directly assess specific features of human behavior in RNG tasks, such as the tendency to avoid repetition or to systematically generate numbers that have not been generated in the recent choice history, a behavior known as cycling. Other disciplines have proposed measures of randomness that are based on a more rigorous mathematical foundation and are less restricted to specific features of randomness, such as algorithmic complexity. More recently, variants of these measures have been proposed to assess systematic patterns in short sequences. We report the first large-scale integrative study to compare measures of specific aspects of randomness with entropy-derived measures based on information theory and measures based on algorithmic complexity. We compare the ability of the different measures to discriminate between human-generated sequences and truly random sequences based on atmospheric noise, and provide a systematic analysis of how the usefulness of randomness measures is affected by sequence length. We conclude with recommendations that can guide the selection of appropriate measures of randomness in psychological research.
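Two of the measure families compared in the study can be sketched directly: a first-order entropy measure of the digit frequencies, and a compression-based proxy for algorithmic complexity (zlib here; true algorithmic complexity is uncomputable):

```python
# Entropy-derived versus compression-based randomness measures for digit sequences.
import math
import random
import zlib
from collections import Counter

def shannon_entropy(seq):
    """First-order entropy in bits per symbol."""
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

def compression_ratio(seq):
    """Compressed size / raw size: lower means more structure."""
    raw = "".join(map(str, seq)).encode()
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(4)
truly_random = [random.randrange(10) for _ in range(1000)]
cycling = [i % 10 for i in range(1000)]        # extreme "cycling" pattern
for name, s in [("random ", truly_random), ("cycling", cycling)]:
    print(name, round(shannon_entropy(s), 3), round(compression_ratio(s), 3))
```

Both sequences have essentially maximal first-order entropy, yet only the compression-based proxy exposes the cycling structure; divergences of exactly this kind between measure families are what the study evaluates systematically.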
Subject(s)
Algorithms; Humans; Male; Female; Information Theory; Adult; Entropy
ABSTRACT
Psychological network approaches propose to see symptoms or questionnaire items as interconnected nodes, with links between them reflecting pairwise statistical dependencies evaluated on cross-sectional, time-series, or panel data. These networks constitute an established methodology to visualise and conceptualise the interactions and relative importance of nodes/indicators, providing an important complement to other approaches such as factor analysis. However, limiting the representation to pairwise relationships can neglect potentially critical information shared by groups of three or more variables (higher-order statistical interdependencies). To overcome this important limitation, here we propose an information-theoretic framework to assess these interdependencies and, consequently, to use hypergraphs as representations in psychometrics. As edges in hypergraphs can encompass several nodes together, this extension provides a richer account of the interactions that may exist among sets of psychological variables. Our results show how psychometric hypergraphs can highlight meaningful redundant and synergistic interactions in both simulated and re-analysed state-of-the-art psychometric datasets. Overall, our framework extends current network approaches while leading to new ways of assessing the data that differ at their core from other methods, enriching the psychometrics toolbox and opening promising avenues for future investigation.
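The higher-order interdependencies motivating hypergraph representations can be illustrated with the three-variable co-information I(X;Y) - I(X;Y|Z), a simple signed relative of the redundancy/synergy quantities used in this literature: positive values indicate redundancy, negative values synergy. Synthetic binary "items" and plug-in estimates below:

```python
# Co-information on discrete columns: redundancy (> 0) versus synergy (< 0).
import numpy as np
from collections import Counter

def H(*cols):
    """Plug-in joint Shannon entropy (bits) of one or more discrete columns."""
    joint = Counter(zip(*cols))
    n = sum(joint.values())
    return -sum(c / n * np.log2(c / n) for c in joint.values())

def co_information(x, y, z):
    return H(x) + H(y) + H(z) - H(x, y) - H(x, z) - H(y, z) + H(x, y, z)

rng = np.random.default_rng(5)
x = rng.integers(0, 2, 20_000)
y = rng.integers(0, 2, 20_000)
print("synergy    (Z = X xor Y):", round(co_information(x, y, x ^ y), 3))    # ~ -1
noisy = np.where(rng.random(20_000) < 0.05, 1 - x, x)
print("redundancy (Y ~ X, Z = X):", round(co_information(x, noisy, x), 3))   # > 0
```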
Subject(s)
Psychometrics; Psychometrics/methods; Psychometrics/instrumentation; Humans; Information Theory; Surveys and Questionnaires
ABSTRACT
Sexual selection plays a crucial role in modern evolutionary theory, offering valuable insight into evolutionary patterns and species diversity. Recently, a comprehensive definition of sexual selection has been proposed: any selection that arises from fitness differences associated with nonrandom success in the competition for access to gametes for fertilization. Previous research on discrete traits demonstrated that non-random mating can be effectively quantified using the Jeffreys (or symmetrized Kullback-Leibler) divergence, which captures the information acquired through mating influenced by mutual mating propensities rather than by chance. This theoretical framework allows the strength of sexual selection and assortative mating to be detected and assessed. In this study, we pursue two primary objectives. First, we demonstrate the seamless alignment of the previous theoretical development, rooted in information theory and mutual mating propensity, with the aforementioned definition of sexual selection. Second, we extend the theory to quantitative traits. Our findings reveal that sexual selection and assortative mating can be quantified effectively for quantitative traits by measuring the information gain relative to the random mating pattern. We establish the connection between the information indices of sexual selection and the classical measures of sexual selection. Additionally, if mating traits are normally distributed, the measure capturing the underlying information of assortative mating is a function of the square of the correlation coefficient, taking values in [0, +∞). Notably, the same divergence measure captures the information acquired through mating for both discrete and quantitative traits, which provides a common context and can help simplify the study of sexual selection patterns.
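For discrete traits, the divergence described above compares observed mating-pair frequencies with the random-mating expectation; a sketch with an illustrative two-morph mating table follows, together with the bivariate-normal relation mentioned in the abstract (for normally distributed traits, mutual information is a function of the squared correlation):

```python
# Jeffreys divergence between observed pair frequencies and random mating.
import numpy as np

def jeffreys_mating(pair_counts):
    q = pair_counts / pair_counts.sum()              # observed pair frequencies
    p = np.outer(q.sum(axis=1), q.sum(axis=0))       # random-mating expectation
    nz = q > 0
    return (np.sum(q[nz] * np.log(q[nz] / p[nz]))
            + np.sum(p[nz] * np.log(p[nz] / q[nz])))

# females (rows) x males (columns), two morphs, like-with-like overrepresented
counts = np.array([[60.0, 15.0],
                   [15.0, 60.0]])
print("Jeffreys divergence:", round(jeffreys_mating(counts), 4))

rho = 0.6   # illustrative correlation between normally distributed mating traits
print("normal-trait information:", round(-0.5 * np.log(1 - rho**2), 4))
```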
Subject(s)
Information Theory; Sexual Selection; Animals; Male; Biological Evolution; Female; Selection, Genetic; Mating Preference, Animal
ABSTRACT
Network structures of the brain have wiring patterns specialized for specific functions. These patterns are partially determined genetically or evolutionarily based on the type of task or stimulus. These wiring patterns are important in information processing; however, their organizational principles are not fully understood. This study frames the maximization of information transmission alongside the reduction of maintenance costs as a multi-objective optimization challenge, utilizing information theory and evolutionary computing algorithms with an emphasis on the visual system. The goal is to understand the underlying principles of circuit formation by exploring the patterns of wiring and information processing. The study demonstrates that efficient information transmission necessitates sparse circuits with internal modular structures featuring distinct wiring patterns. Significant trade-offs underscore the necessity of balance in wiring pattern development. The dynamics of effective circuits exhibit moderate flexibility in response to stimuli, in line with observations from prior visual system studies. Maximizing information transfer may allow for the self-organization of information processing functions similar to actual biological circuits, without being limited by modality. This study offers insights into neuroscience and the potential to improve reservoir computing performance.
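A generic illustration of the multi-objective framing, with random search standing in for the evolutionary algorithm and algebraic connectivity as a crude stand-in for the information-transmission objective (both stand-ins are our assumptions, not the paper's measures):

```python
# Score candidate wiring patterns on two competing objectives and keep the
# non-dominated (Pareto) set.
import numpy as np

rng = np.random.default_rng(6)
N = 12                                        # toy circuit size

def objectives(adj):
    cost = adj.sum() / 2                      # wiring cost: number of edges
    lap = np.diag(adj.sum(axis=1)) - adj      # graph Laplacian
    lam = np.linalg.eigvalsh(lap)
    return cost, lam[1]                       # minimize cost, maximize connectivity

candidates = []
for _ in range(500):
    a = (rng.random((N, N)) < rng.uniform(0.05, 0.5)).astype(float)
    a = np.triu(a, 1)
    a = a + a.T                               # symmetric, no self-loops
    candidates.append(objectives(a))

pareto = [c for c in candidates
          if not any(o[0] <= c[0] and o[1] >= c[1] and o != c for o in candidates)]
print(len(pareto), "non-dominated wiring patterns out of", len(candidates))
```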
Subject(s)
Algorithms; Humans; Brain/physiology; Models, Neurological; Nerve Net/physiology; Information Theory
ABSTRACT
Regressions, or backward saccades, are common during reading, accounting for between 5% and 20% of all saccades. And yet, relatively little is known about what causes them. We provide an information-theoretic operationalization for two previous qualitative hypotheses about regressions, which we dub reactivation and reanalysis. We argue that these hypotheses make different predictions about the pointwise mutual information or pmi between a regression's source and target. Intuitively, the pmi between two words measures how much more (or less) likely one word is to be present given the other. On one hand, the reactivation hypothesis predicts that regressions occur between words that are associated, implying high positive values of pmi. On the other hand, the reanalysis hypothesis predicts that regressions should occur between words that are not associated with each other, implying negative, low values of pmi. As a second theoretical contribution, we expand on previous theories by considering not only pmi but also expected values of pmi, E[pmi], where the expectation is taken over all possible realizations of the regression's target. The rationale for this is that language processing involves making inferences under uncertainty, and readers may be uncertain about what they have read, especially if a previous word was skipped. To test both theories, we use contemporary language models to estimate pmi-based statistics over word pairs in three corpora of eye tracking data in English, as well as in six languages across three language families (Indo-European, Uralic, and Turkic). Our results are consistent across languages and models tested: Positive values of pmi and E[pmi] consistently help to predict the patterns of regressions during reading, whereas negative values of pmi and E[pmi] do not. Our information-theoretic interpretation increases the predictive scope of both theories and our studies present the first systematic crosslinguistic analysis of regressions in the literature. Our results support the reactivation hypothesis and, more broadly, they expand the number of language processing behaviors that can be linked to information-theoretic principles.
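The central quantity is pmi(w1; w2) = log p(w1, w2) / (p(w1) p(w2)). The study estimates these probabilities with contemporary language models; a count-based bigram estimate over an invented toy corpus conveys the idea:

```python
# Count-based pointwise mutual information from adjacent-pair frequencies.
import math
from collections import Counter

corpus = ("the dog chased the cat . the dog barked . "
          "the cat slept . a storm arrived .").split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n_uni, n_bi = len(corpus), len(corpus) - 1

def pmi(w1, w2):
    p_joint = bigrams[(w1, w2)] / n_bi
    if p_joint == 0:
        return float("-inf")                  # never co-occur in this corpus
    return math.log(p_joint / (unigrams[w1] / n_uni * unigrams[w2] / n_uni))

print("pmi(the, dog)   =", round(pmi("the", "dog"), 3))   # associated: positive
print("pmi(dog, slept) =", pmi("dog", "slept"))           # unattested pair: -inf
```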
Subject(s)
Reading; Saccades; Humans; Saccades/physiology; Information Theory; Adult; Psycholinguistics; Young Adult
ABSTRACT
Consciousness science is marred by disparate constructs and methodologies, making it challenging to systematically compare theories. This foundational crisis casts doubt on the scientific character of the field itself. To address it, we propose a framework for systematically comparing consciousness theories by introducing a novel inter-theory classification interface, the Measure Centrality Index (MCI). The MCI is graded rather than all-or-none: it assesses the degree of importance a specific empirical measure has for a given consciousness theory. We apply the MCI to probe how the empirical measures of the Global Neuronal Workspace Theory (GNW), Integrated Information Theory (IIT), and Temporospatial Theory of Consciousness (TTC) would fare within the context of the other two. We demonstrate that direct comparison of IIT, GNW, and TTC is meaningful and valid for some measures, such as Lempel-Ziv Complexity (LZC), Autocorrelation Window (ACW), and possibly Mutual Information (MI). In contrast, it is problematic for others, such as the anatomical and physiological neural correlates of consciousness (NCC), because of their different MCI-based weightings within the structure of the theories. In sum, we introduce and provide a proof of principle of a novel systematic method for direct inter-theory empirical comparisons, thereby addressing the isolated evolution of theories and the problem of confirmatory bias in the state-of-the-art neuroscience of consciousness.
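Among the measures named above, Lempel-Ziv complexity is the most self-contained to compute; a standard LZ76 parsing applied to median-binarized signals is sketched below (the binarization rule and the log2(n)/n normalization are common but not universal conventions):

```python
# Lempel-Ziv (1976) complexity of a binary string, with a normalized variant.
import numpy as np

def lz76_complexity(s):
    """Number of phrases in the LZ76 exhaustive parsing of string s."""
    n = len(s)
    i, k, l, c, k_max = 0, 1, 1, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                     # no earlier match: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

rng = np.random.default_rng(7)
signals = {"structured": rng.normal(size=2000).cumsum(),
           "noise": rng.normal(size=2000)}
for name, x in signals.items():
    bits = "".join("1" if v > np.median(x) else "0" for v in x)
    c = lz76_complexity(bits)
    print(name, "normalized LZC:", round(c * np.log2(len(bits)) / len(bits), 3))
```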
Subject(s)
Consciousness; Consciousness/physiology; Humans; Information Theory; Brain/physiology; Brain/physiopathology; Psychological Theory
ABSTRACT
Variation in the strength of synapses can be quantified by measuring their anatomical properties. Quantifying the precision of synaptic plasticity is fundamental to understanding information storage and retrieval in neural circuits. Synapses from the same axon onto the same dendrite have a common history of coactivation, making them ideal candidates for determining the precision of synaptic plasticity based on the similarity of their physical dimensions. Here, the precision and amount of information stored in synapse dimensions were quantified with Shannon information theory, expanding a prior analysis that used signal detection theory (Bartol et al., 2015). The two methods were compared using dendritic spine head volumes in the middle of the stratum radiatum of hippocampal area CA1 as well-defined measures of synaptic strength. Information theory delineated the number of distinguishable synaptic strengths based on nonoverlapping bins of dendritic spine head volumes. Shannon entropy was applied to measure the synaptic information storage capacity (SISC), which resulted in a lower bound of 4.1 bits and an upper bound of 4.59 bits of information based on 24 distinguishable sizes. We further compared the distribution of distinguishable sizes with a uniform distribution using the Kullback-Leibler divergence and discovered a nearly uniform distribution of spine head volumes across the sizes, suggesting optimal use of the distinguishable values. Thus, SISC provides a new analytical measure that can be generalized to probe synaptic strengths and the capacity for plasticity in different brain regions of different species and among animals raised in different conditions or during learning. How brain diseases and disorders affect the precision of synaptic plasticity can also be probed.
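The SISC idea can be sketched as follows: group sorted spine head volumes into nonoverlapping bins whose relative width is set by an assumed measurement precision, then take the Shannon entropy over bin occupancies and the KL divergence to a uniform distribution. The fixed coefficient of variation and the synthetic lognormal volumes are our illustrative assumptions; the paper derives distinguishability from measured precision:

```python
# Entropy over "distinguishable" volume bins plus KL divergence to uniform.
import numpy as np

def sisc(volumes, cv=0.12):
    v = np.sort(np.asarray(volumes))
    start, counts, c = v[0], [], 0
    for x in v:
        if x > start * (1 + 2 * cv):      # next distinguishable size class
            counts.append(c)
            c, start = 0, x
        c += 1
    counts.append(c)
    p = np.array(counts) / len(v)
    entropy = -np.sum(p * np.log2(p))     # bits stored per synapse dimension
    kl_to_uniform = np.log2(len(p)) - entropy
    return len(p), entropy, kl_to_uniform

rng = np.random.default_rng(8)
volumes = rng.lognormal(mean=-2.0, sigma=0.9, size=300)   # synthetic, in um^3
n_bins, h, kl = sisc(volumes)
print(f"{n_bins} distinguishable sizes, {h:.2f} bits, KL to uniform {kl:.3f}")
```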
Subject(s)
Information Theory; Neuronal Plasticity; Synapses; Animals; Synapses/physiology; Neuronal Plasticity/physiology; Dendritic Spines/physiology; CA1 Region, Hippocampal/physiology; Models, Neurological; Information Storage and Retrieval; Male; Hippocampus/physiology; Rats
ABSTRACT
Multi-view graph pooling utilizes information from multiple perspectives to generate a coarsened graph, exhibiting superior performance in graph-level tasks. However, existing methods mainly focus on the types of multi-view information to improve graph pooling operations, lacking explicit control over the pooling process and theoretical analysis of the relationships between views. In this paper, we rethink the current paradigm of multi-view graph pooling from an information theory perspective, subsequently introducing GDMGP, an innovative method for multi-view graph pooling derived from the principles of graph disentanglement. This approach effectively simplifies the original graph into a more structured, disentangled coarsened graph, enhancing the clarity and utility of the graph representation. Our approach begins with the design of a novel view mapper that dynamically integrates the node and topology information of the original graph. This integration enhances its information sufficiency. Next, we introduce a view fusion mechanism based on conditional entropy to accurately regulate the task-relevant information in the views, aiming to minimize information loss in the pooling process. Finally, to further enhance the expressiveness of the coarsened graph, we disentangle the fused view into task-relevant and task-irrelevant subgraphs through mutual information minimization, retaining the task-relevant subgraph for downstream tasks. We theoretically demonstrate that the performance of the coarsened graph generated by our GDMGP is superior to that of any single input view. The effectiveness of GDMGP is further validated by experimental results on seven public datasets.
Subject(s)
Information Theory; Entropy
ABSTRACT
Ervin Bauer (1890-1938) was the first to build a general molecular-based biological theory. He defined the basic principles of theoretical biology from a thermodynamic perspective, focusing on the capacity of biological systems to produce and support the state of sustainable non-equilibrium. His central work "Theoretical Biology" (1935) was written long before modern advances in molecular biology, genetics, and information theory. Ervin Bauer and his wife Stefánia were executed in Stalin's Great Terror. This paper presents a brief introduction to Ervin Bauer's life and includes his short biography.
Subject(s)
Biology; Humans; Biology/history; Information Theory; Molecular Biology; Thermodynamics; History, 19th Century; History, 20th Century
ABSTRACT
Consciousness is one of the most complex aspects of human experience. Studying the mechanisms involved in the transitions among different levels of consciousness remains one of the greatest challenges in neuroscience. In this study we use a measure of integrated information (ΦAR) to evaluate dynamic changes during consciousness transitions. We applied the measure to intracranial stereo-electroencephalography (SEEG) recordings collected from six patients with refractory epilepsy, taking into account inter-ictal, pre-ictal, and ictal periods. We analyzed the dynamical evolution of ΦAR in groups of electrode contacts outside the epileptogenic region and compared it with the Consciousness Seizure Scale (CSS). We show that changes in ΦAR are significantly correlated with changes in the reported states of consciousness.
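A sketch in the spirit of autoregressive integrated information measures such as Barrett and Seth's ΦAR (the paper's exact estimator may differ): compare the residual covariance of a vector-autoregressive model of the whole system against VAR models fitted to each half of a bipartition; positive values indicate information lost by cutting the system apart:

```python
# VAR-based integrated-information sketch on a toy four-channel system.
import numpy as np

def var_residual_cov(data, p=1):
    """Residual covariance of an order-p VAR fit by least squares; data: (T, ch)."""
    T = data.shape[0]
    X = np.hstack([data[p - k - 1:T - k - 1] for k in range(p)])  # lagged regressors
    Y = data[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    R = Y - X @ B
    return R.T @ R / (T - p)

def phi_ar(data, part1, part2, p=1):
    s_full = var_residual_cov(data, p)
    s1 = var_residual_cov(data[:, part1], p)
    s2 = var_residual_cov(data[:, part2], p)
    return 0.5 * np.log(np.linalg.det(s1) * np.linalg.det(s2)
                        / np.linalg.det(s_full))

rng = np.random.default_rng(9)
T = 20_000
x = np.zeros((T, 4))                      # toy system: channels 0-1 drive 2-3
for t in range(1, T):
    x[t, :2] = 0.5 * x[t - 1, :2] + rng.normal(size=2)
    x[t, 2:] = 0.5 * x[t - 1, 2:] + 0.4 * x[t - 1, :2] + rng.normal(size=2)
print("Phi_AR:", round(phi_ar(x, [0, 1], [2, 3]), 3))    # > 0: parts interact
```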
Subject(s)
Epilepsy; Lens, Crystalline; Unionidae; Humans; Animals; Consciousness; Information Theory; Seizures
ABSTRACT
Voltage-clamp experiments are commonly utilised to characterise cellular ion channel kinetics. In these experiments, cells are stimulated using a known time-varying voltage, referred to as the voltage protocol, and the resulting cellular response, typically in the form of current, is measured. Parameters of models that describe ion channel kinetics are then estimated by solving an inverse problem which aims to minimise the discrepancy between the predicted response of the model and the actual measured cell response. In this paper, a novel framework to evaluate the information content of voltage-clamp protocols in relation to ion channel model parameters is presented. Additional quantitative information metrics that allow for comparisons among various voltage protocols are proposed. These metrics offer a foundation for future optimal design frameworks to devise novel, information-rich protocols. The efficacy of the proposed framework is evidenced through the analysis of seven voltage protocols from the literature. The proposed approach is validated by comparing known numerical results for inverse problems using these protocols with the information-theoretic metrics. The essential steps of the framework are: (i) generate random samples of the parameters from chosen prior distributions; (ii) run the model to generate model output (current) for all samples; (iii) construct reduced-dimensional representations of the time-varying current output using proper orthogonal decomposition (POD); (iv) estimate information-theoretic metrics such as mutual information, entropy equivalent variance, and conditional mutual information using non-parametric methods; (v) interpret the metrics; for example, a higher mutual information between a parameter and the current output suggests the protocol yields greater information about that parameter, resulting in improved identifiability; and (vi) integrate the information-theoretic metrics into a single quantitative criterion encapsulating the protocol's efficacy in estimating model parameters.
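Steps (i)-(iv) can be prototyped compactly with a toy two-parameter current model; the model, priors, number of POD modes, and scikit-learn's nonparametric estimator are illustrative stand-ins for the paper's choices:

```python
# Prior sampling, forward simulation, POD reduction, and mutual information
# between each parameter and the reduced current representation.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(10)
t = np.linspace(0, 1, 200)

def current(g, tau):
    """Toy ion-channel current under a step protocol: g * exp(-t / tau)."""
    return g * np.exp(-t / tau)

# (i) sample parameters from chosen priors
n = 2000
g = rng.uniform(0.5, 2.0, n)                 # conductance-like parameter
tau = rng.uniform(0.05, 0.5, n)              # time-constant parameter
# (ii) run the model for all samples
traces = np.array([current(gi, ti) for gi, ti in zip(g, tau)])
# (iii) reduced-dimensional representation via POD (SVD of centered snapshots)
centered = traces - traces.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
coeffs = centered @ Vt[:3].T                 # first three POD coefficients
# (iv) nonparametric MI between each parameter and the POD coefficients
for name, theta in (("g  ", g), ("tau", tau)):
    mi = mutual_info_regression(coeffs, theta, random_state=0)
    print(name, "per-mode MI:", np.round(mi, 2), " sum:", round(float(mi.sum()), 2))
```

Summing the per-mode estimates ignores interactions among modes, so it is only a rough aggregate of the protocol's informativeness about each parameter.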