Results 1 - 20 of 3,912
1.
Neural Comput ; 36(5): 781-802, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38658027

ABSTRACT

Variation in the strength of synapses can be quantified by measuring the anatomical properties of synapses. Quantifying the precision of synaptic plasticity is fundamental to understanding information storage and retrieval in neural circuits. Synapses from the same axon onto the same dendrite have a common history of coactivation, making them ideal candidates for determining the precision of synaptic plasticity based on the similarity of their physical dimensions. Here, the precision and amount of information stored in synapse dimensions were quantified with Shannon information theory, expanding a prior analysis that used signal detection theory (Bartol et al., 2015). The two methods were compared using dendritic spine head volumes in the middle of the stratum radiatum of hippocampal area CA1 as well-defined measures of synaptic strength. Information theory delineated the number of distinguishable synaptic strengths based on nonoverlapping bins of dendritic spine head volumes. Shannon entropy was applied to measure synaptic information storage capacity (SISC) and yielded a lower bound of 4.1 bits and an upper bound of 4.59 bits of information based on 24 distinguishable sizes. We further compared the distribution of distinguishable sizes with a uniform distribution using Kullback-Leibler divergence and discovered a nearly uniform distribution of spine head volumes across the sizes, suggesting optimal use of the distinguishable values. Thus, SISC provides a new analytical measure that can be generalized to probe synaptic strengths and capacity for plasticity in different brain regions of different species and among animals raised in different conditions or during learning. How brain diseases and disorders affect the precision of synaptic plasticity can also be probed.
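
As a rough illustration of the SISC-style calculation described above, the Python sketch below bins spine head volumes, computes the Shannon entropy over the bins, and measures the Kullback-Leibler divergence to a uniform distribution. The data, the equal-width binning, and the reuse of 24 bins are illustrative stand-ins; the paper derives its distinguishable size categories from measurement precision, not fixed-width histogram bins.

```python
import numpy as np

def sisc_entropy_and_kl(volumes, n_bins):
    """Shannon entropy (bits) of binned spine head volumes, plus the KL
    divergence from the empirical bin distribution to a uniform one."""
    counts, _ = np.histogram(volumes, bins=n_bins)   # equal-width bins (simplification)
    p = counts / counts.sum()
    p_nz = p[p > 0]
    entropy = -np.sum(p_nz * np.log2(p_nz))
    kl = np.sum(p_nz * np.log2(p_nz / (1.0 / n_bins)))  # D_KL(p || uniform)
    return entropy, kl

# Illustrative data: log-normally distributed spine head volumes (um^3)
rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=-2.0, sigma=0.8, size=1000)
H, kl = sisc_entropy_and_kl(volumes, n_bins=24)  # 24 distinguishable sizes, per the abstract
print(f"entropy: {H:.2f} bits (max {np.log2(24):.2f}); KL to uniform: {kl:.3f}")
```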


Subject(s)
Information Theory , Neuronal Plasticity , Synapses , Animals , Synapses/physiology , Neuronal Plasticity/physiology , Dendritic Spines/physiology , CA1 Region, Hippocampal/physiology , Models, Neurological , Information Storage and Retrieval , Male , Hippocampus/physiology , Rats
2.
Neurosci Biobehav Rev ; 161: 105670, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38615851

ABSTRACT

Consciousness science is marred by disparate constructs and methodologies, making it challenging to systematically compare theories. This foundational crisis casts doubts on the scientific character of the field itself. Addressing it, we propose a framework for systematically comparing consciousness theories by introducing a novel inter-theory classification interface, the Measure Centrality Index (MCI). Recognizing its gradient distribution, the MCI assesses the degree of importance a specific empirical measure has for a given consciousness theory. We apply the MCI to probe how the empirical measures of the Global Neuronal Workspace Theory (GNW), Integrated Information Theory (IIT), and Temporospatial Theory of Consciousness (TTC) would fare within the context of the other two. We demonstrate that direct comparison of IIT, GNW, and TTC is meaningful and valid for some measures like Lempel-Ziv Complexity (LZC), Autocorrelation Window (ACW), and possibly Mutual Information (MI). In contrast, it is problematic for others like the anatomical and physiological neural correlates of consciousness (NCC) due to their MCI-based differential weightings within the structure of the theories. In sum, we introduce and provide proof-of-principle of a novel systematic method for direct inter-theory empirical comparisons, thereby addressing isolated evolution of theories and confirmatory bias issues in the state-of-the-art neuroscience of consciousness.


Subject(s)
Consciousness , Consciousness/physiology , Humans , Information Theory , Brain/physiology , Brain/physiopathology , Psychological Theory
3.
Int J Numer Method Biomed Eng ; 40(5): e3815, 2024 May.
Article in English | MEDLINE | ID: mdl-38544355

ABSTRACT

Voltage-clamp experiments are commonly utilised to characterise cellular ion channel kinetics. In these experiments, cells are stimulated using a known time-varying voltage, referred to as the voltage protocol, and the resulting cellular response, typically in the form of current, is measured. Parameters of models that describe ion channel kinetics are then estimated by solving an inverse problem which aims to minimise the discrepancy between the predicted response of the model and the actual measured cell response. In this paper, a novel framework to evaluate the information content of voltage-clamp protocols in relation to ion channel model parameters is presented. Additional quantitative information metrics that allow for comparisons among various voltage protocols are proposed. These metrics offer a foundation for future optimal design frameworks to devise novel, information-rich protocols. The efficacy of the proposed framework is evidenced through the analysis of seven voltage protocols from the literature. By comparing known numerical results for inverse problems using these protocols with the information-theoretic metrics, the proposed approach is validated. The essential steps of the framework are:
(i) generate random samples of the parameters from chosen prior distributions;
(ii) run the model to generate model output (current) for all samples;
(iii) construct reduced-dimensional representations of the time-varying current output using proper orthogonal decomposition (POD);
(iv) estimate information-theoretic metrics such as mutual information, entropy equivalent variance, and conditional mutual information using non-parametric methods;
(v) interpret the metrics; for example, a higher mutual information between a parameter and the current output suggests the protocol yields greater information about that parameter, resulting in improved identifiability; and
(vi) integrate the information-theoretic metrics into a single quantitative criterion, encapsulating the protocol's efficacy in estimating model parameters.
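
A minimal end-to-end sketch of steps (i)-(iv) under stated assumptions: the exponential "current" and its two parameters are placeholders rather than an actual ion channel model, and mutual information is estimated nonparametrically with scikit-learn.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# (i) Sample parameters from a prior (toy two-parameter surrogate, not a real channel model)
rng = np.random.default_rng(1)
n = 2000
theta = rng.uniform([0.1, 0.5], [1.0, 2.0], size=(n, 2))  # hypothetical amplitude and rate

# (ii) Run the "model": toy current traces over a voltage step (placeholder dynamics)
t = np.linspace(0, 1, 200)
currents = theta[:, [0]] * np.exp(-np.outer(theta[:, 1], t))  # shape (n, 200)

# (iii) POD via SVD of the centered snapshot matrix; keep modes capturing 99% energy
X = currents - currents.mean(axis=0)
_, s, Vt = np.linalg.svd(X, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1
coeffs = X @ Vt[:k].T  # reduced representation, shape (n, k)

# (iv) Nonparametric MI between each parameter and the reduced output.
# Summing per-mode MI is a crude surrogate for the joint MI (ignores redundancy).
for j in range(theta.shape[1]):
    mi = mutual_info_regression(coeffs, theta[:, j]).sum()
    print(f"approx. MI(theta_{j}; current) = {mi:.3f} nats")
```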


Subject(s)
Ion Channels , Kinetics , Ion Channels/metabolism , Patch-Clamp Techniques , Information Theory , Algorithms , Models, Biological , Humans
4.
Neural Netw ; 174: 106221, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38447426

ABSTRACT

Multi-view graph pooling utilizes information from multiple perspectives to generate a coarsened graph, exhibiting superior performance in graph-level tasks. However, existing methods mainly focus on the types of multi-view information to improve graph pooling operations, lacking explicit control over the pooling process and theoretical analysis of the relationships between views. In this paper, we rethink the current paradigm of multi-view graph pooling from an information theory perspective, subsequently introducing GDMGP, an innovative method for multi-view graph pooling derived from the principles of graph disentanglement. This approach effectively simplifies the original graph into a more structured, disentangled coarsened graph, enhancing the clarity and utility of the graph representation. Our approach begins with the design of a novel view mapper that dynamically integrates the node and topology information of the original graph. This integration enhances its information sufficiency. Next, we introduce a view fusion mechanism based on conditional entropy to accurately regulate the task-relevant information in the views, aiming to minimize information loss in the pooling process. Finally, to further enhance the expressiveness of the coarsened graph, we disentangle the fused view into task-relevant and task-irrelevant subgraphs through mutual information minimization, retaining the task-relevant subgraph for downstream tasks. We theoretically demonstrate that the performance of the coarsened graph generated by our GDMGP is superior to that of any single input view. The effectiveness of GDMGP is further validated by experimental results on seven public datasets.


Subject(s)
Information Theory , Entropy
5.
Sci Rep ; 14(1): 5355, 2024 03 04.
Article in English | MEDLINE | ID: mdl-38438478

ABSTRACT

Consciousness is one of the most complex aspects of human experience. Studying the mechanisms involved in the transitions among different levels of consciousness remains one of the greatest challenges in neuroscience. In this study, we use a measure of integrated information (ΦAR) to evaluate dynamic changes during consciousness transitions. We applied the measure to intracranial stereo-electroencephalography (SEEG) recordings collected from 6 patients with refractory epilepsy, taking into account inter-ictal, pre-ictal, and ictal periods. We analyzed the dynamical evolution of ΦAR in groups of electrode contacts outside the epileptogenic region and compared it with the Consciousness Seizure Scale (CSS). We show that changes in ΦAR are significantly correlated with changes in the reported states of consciousness.
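
For orientation, here is a simplified sketch of an AR-based integrated-information measure in the spirit of ΦAR: whole-minus-parts lagged mutual information for a Gaussian linear system. This is an assumption-laden reduction, not the paper's measure; it evaluates a single bipartition rather than the minimum-information partition and omits the normalizations used in the literature.

```python
import numpy as np

def lagged_mi(X, Y):
    """Gaussian mutual information I(X; Y) in nats via log-determinants;
    Sigma(Y|X) is taken as the residual covariance of a linear regression."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
    resid = Yc - Xc @ B
    Sy = np.atleast_2d(np.cov(Yc, rowvar=False))
    Sr = np.atleast_2d(np.cov(resid, rowvar=False))
    return 0.5 * np.log(np.linalg.det(Sy) / np.linalg.det(Sr))

def phi_ar_sketch(data, part, lag=1):
    """Whole-minus-parts lagged MI for one bipartition (`part` = column indices
    of one side). The full Phi_AR uses the minimum-information partition."""
    past, now = data[:-lag], data[lag:]
    rest = [i for i in range(data.shape[1]) if i not in part]
    whole = lagged_mi(past, now)
    parts = lagged_mi(past[:, part], now[:, part]) + lagged_mi(past[:, rest], now[:, rest])
    return whole - parts

# Toy coupled AR(1) process: cross-coupling between channels yields phi > 0
rng = np.random.default_rng(2)
T = 5000
A = np.array([[0.6, 0.3], [0.3, 0.6]])
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = A @ X[t-1] + 0.5 * rng.normal(size=2)
print(f"phi for bipartition {{0}} | {{1}}: {phi_ar_sketch(X, part=[0]):.4f}")
```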


Subject(s)
Epilepsy , Lens, Crystalline , Unionidae , Humans , Animals , Consciousness , Information Theory , Seizures
6.
Cell ; 187(5): 1101-1102, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38428390
7.
8.
Biosystems ; 238: 105191, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38508229

ABSTRACT

Ervin Bauer (1890-1938) was the first to build a general molecular-based biological theory. He defined the basic principles of theoretical biology from a thermodynamic perspective, focusing on the capacity of biological systems to produce and support the state of sustainable non-equilibrium. His central work "Theoretical Biology" (1935) was written long before modern advances in molecular biology, genetics, and information theory. Ervin Bauer and his wife Stefánia were executed in Stalin's Great Terror. This paper presents a brief introduction to Ervin Bauer's life and includes his short biography.


Subject(s)
Biology , Humans , Biology/history , Information Theory , Molecular Biology , Thermodynamics , History, 19th Century , History, 20th Century
9.
BMC Bioinformatics ; 25(1): 57, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38317067

ABSTRACT

BACKGROUND: Controlling the False Discovery Rate (FDR) in Multiple Comparison Procedures (MCPs) has widespread applications in many scientific fields. Previous studies show that the correlation structure between test statistics increases the variance and bias of the FDR. The objective of this study is to modify the effect of correlation in MCPs based on information theory. We proposed three modified procedures (M1, M2, and M3) under strong, moderate, and mild assumptions, based on the conditional Fisher information of the consecutive sorted test statistics, for controlling the false discovery rate under arbitrary correlation structures. The performance of the proposed procedures was compared with the Benjamini-Hochberg (BH) and Benjamini-Yekutieli (BY) procedures in a simulation study and on real high-dimensional colorectal cancer gene expression data. In the simulation study, we generated 1000 differential multivariate Gaussian features with different levels of correlation structure and screened the significant features with the FDR-controlling procedures, with strong control of the family-wise error rate. RESULTS: When there was no correlation between the 1000 simulated features, the performance of the BH procedure was similar to that of the three proposed procedures. In low-to-medium correlation structures, the BY procedure was too conservative and the BH procedure too liberal, with the mean number of features screened by BH remaining constant across the different levels of correlation between features. The mean number of features screened by the proposed procedures fell between those of the BY and BH procedures and decreased as the correlations increased. When the features were highly correlated, the number of features screened by the proposed procedures approached that of the Bonferroni (BF) procedure, as expected. In the real-data analysis, the BY, BH, M1, M2, and M3 procedures were applied to screen colorectal cancer gene expressions. To build a predictive model from the screened features, the Efficient Bayesian Logistic Regression (EBLR) model was used. The EBLR models fitted on the features screened by the M1 and M2 procedures had the lowest entropies and were more efficient than those based on the BY and BH procedures. CONCLUSION: The proposed information-theoretic procedures are much more flexible than the BH and BY procedures with respect to the amount of correlation between test statistics. They avoided screening non-informative features, so the number of screened features decreased as the level of correlation increased.
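
For context, here is a hedged sketch of the baseline step-up procedures the study compares against: BH, and its BY variant for arbitrary dependence. The proposed M1-M3 modifications are not reproduced here; the data are simulated for illustration.

```python
import numpy as np
from scipy.stats import norm

def fdr_reject(pvals, q=0.05, method="BH"):
    """Benjamini-Hochberg (BH) or Benjamini-Yekutieli (BY) step-up procedure.
    Returns a boolean mask of rejected (significant) hypotheses."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    ranks = np.arange(1, m + 1)
    c_m = np.sum(1.0 / ranks) if method == "BY" else 1.0  # BY penalty for arbitrary dependence
    below = p[order] <= ranks * q / (m * c_m)
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0  # largest rank passing the line
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Illustrative: 1000 features, 100 carrying true signal
rng = np.random.default_rng(3)
z = np.concatenate([rng.normal(3, 1, 100), rng.normal(0, 1, 900)])
p = norm.sf(z)  # one-sided p-values
print("BH rejections:", fdr_reject(p, method="BH").sum(),
      "| BY rejections:", fdr_reject(p, method="BY").sum())
```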


Subject(s)
Colorectal Neoplasms , Information Theory , Humans , Bayes Theorem , Genomics , Computer Simulation
10.
Forensic Sci Int Genet ; 70: 103025, 2024 May.
Article in English | MEDLINE | ID: mdl-38382248

ABSTRACT

Missing person cases typically require a genetic kinship test to determine the relationship between an unidentified individual and the relatives of the missing person. When not enough genetic evidence has been collected, the lack of statistical power of these tests might lead to unreliable results. This is particularly true when just a few distant relatives are available for genotyping. In this contribution, we considered a Bayesian network approach for kinship testing and proposed several information-theoretic metrics in order to quantitatively evaluate the information content of pedigrees. We show how these statistics are related to the widely used likelihood ratio values and can be employed to efficiently prioritize family members in order to optimize the statistical power in missing person problems. Our methodology seamlessly integrates with Bayesian modeling approaches, like the GENis platform that we have recently developed for high-throughput missing person identification tasks. Furthermore, our approach can also be easily incorporated into Elston-Stewart forensic frameworks. To facilitate the application of our methodology, we have developed the forensIT package, freely available in the CRAN repository, which implements all the methodologies described in our manuscript.


Subject(s)
DNA Fingerprinting , Information Theory , Humans , DNA Fingerprinting/methods , Likelihood Functions , Bayes Theorem , Pedigree
11.
PLoS Comput Biol ; 20(2): e1010706, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38377108

ABSTRACT

Pattern separation is a valuable computational function performed by neuronal circuits, such as the dentate gyrus, where dissimilarity between inputs is increased, reducing noise and increasing the storage capacity of downstream networks. Pattern separation is studied from both in vivo experimental and computational perspectives, and a number of different measures (such as orthogonalisation, decorrelation, or spike train distance) have been applied to quantify the process of pattern separation. However, these are known to give conclusions that can differ qualitatively depending on the choice of measure and the parameters used to calculate it. Here we demonstrate that arbitrarily increasing sparsity, a noticeable feature of dentate granule cell firing and one that is believed to be key to pattern separation, typically improves classical measures of pattern separation, even, inappropriately, up to the point where almost all information about the inputs is lost. Standard measures therefore both cannot differentiate between pattern separation and pattern destruction, and give results that may depend on arbitrary parameter choices. We propose that techniques from information theory, in particular mutual information, transfer entropy, and redundancy, should be applied to penalise the potential for lost information (often due to increased sparsity) that is neglected by existing measures. We compare five commonly used measures of pattern separation with three novel techniques based on information theory, showing that the latter can be applied in a principled way and provide a robust and reliable measure for comparing the pattern separation performance of different neurons and networks. We demonstrate our new measures on detailed compartmental models of individual dentate granule cells and a dentate microcircuit, and show how structural changes associated with epilepsy affect pattern separation performance. We also demonstrate how our measures of pattern separation can predict pattern completion accuracy. Overall, our measures solve a widely acknowledged problem in assessing the pattern separation of neural circuits such as the dentate gyrus, as well as the cerebellum and mushroom body. Finally, we provide a publicly available toolbox allowing for easy analysis of pattern separation in spike train ensembles.
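
A toy illustration of the paper's central caution, under stated assumptions: a random projection with top-k winner-take-all thresholding stands in for the separator (this is not the paper's dentate model). As sparsity increases, a classical separation measure (mean pairwise output correlation) keeps improving, while the mutual information between input class and output code eventually collapses.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(4)
K, reps, d_in, d_out = 8, 200, 32, 64
prototypes = rng.normal(size=(K, d_in))
labels = np.repeat(np.arange(K), reps)
inputs = prototypes[labels] + 0.5 * rng.normal(size=(K * reps, d_in))
W = rng.normal(size=(d_in, d_out))              # fixed random projection ("granule layer")

for k in (32, 8, 2, 1):                          # fewer active units = sparser output
    drive = inputs @ W
    out = np.zeros_like(drive)
    idx = np.argsort(drive, axis=1)[:, -k:]      # winner-take-all: only top-k units fire
    np.put_along_axis(out, idx, 1.0, axis=1)
    # Classical separation measure: mean pairwise output correlation (lower = "better")
    corr = np.corrcoef(out)[np.triu_indices(len(out), 1)].mean()
    # Information-theoretic check: plug-in MI between class label and output code
    # (plug-in MI overestimates when most codes are unique; fine for illustration)
    _, code = np.unique(out, axis=0, return_inverse=True)
    mi = mutual_info_score(labels, code) / np.log(2)  # convert nats to bits
    print(f"k={k:2d}  mean corr={corr:.3f}  MI={mi:.2f} bits (max {np.log2(K):.0f})")
```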


Subject(s)
Dentate Gyrus , Information Theory , Dentate Gyrus/physiology , Neurons/physiology , Brain , Models, Neurological
12.
Neural Netw ; 172: 106125, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38320348

ABSTRACT

Graph Contrastive Learning (GCL) is increasingly employed in graph representation learning with the primary aim of learning node/graph representations from a predefined pretext task that can generalize to various downstream tasks. Meanwhile, the transition from a specific pretext task to diverse and unpredictable downstream tasks poses a significant challenge for GCL's generalization ability. Most existing GCL approaches maximize mutual information between two views derived from the original graph, either randomly or heuristically. However, the generalization ability of GCL and its theoretical principles are still less studied. In this paper, we introduce a novel metric, GCL-GE, to quantify the generalization gap between predefined pretext and agnostic downstream tasks. Given the inherent intractability of GCL-GE, we leverage concepts from information theory to derive a mutual information upper bound that is independent of the downstream tasks, thus enabling the metric's optimization despite the variability in downstream tasks. Based on the theoretical insight, we propose InfoAdv, a GCL framework to directly enhance generalization by jointly optimizing GCL-GE and InfoMax. Extensive experiments validate the capability of InfoAdv to enhance performance across a wide variety of downstream tasks, demonstrating its effectiveness in improving the generalizability of GCL.


Subject(s)
Information Theory , Learning , Generalization, Psychological
13.
Sci Rep ; 14(1): 1181, 2024 01 12.
Article in English | MEDLINE | ID: mdl-38216607

ABSTRACT

Shannon entropy is a core concept in machine learning and information theory, particularly in decision tree modeling. To date, no studies have extensively and quantitatively applied Shannon entropy in a systematic way to quantify the entropy of clinical situations using diagnostic variables (true and false positives and negatives). Decision tree representations of medical decision-making tools can be generated using diagnostic variables found in the literature, and entropy removal can be calculated for these tools. This concept of clinical entropy removal has significant potential to drive healthcare innovation, such as quantifying the impact of clinical guidelines and the value of care, and applications to Emergency Medicine scenarios where diagnostic accuracy in a limited time window is paramount. This analysis was done for 623 diagnostic tools and provided unique insights into their utility. For studies that provided detailed data on medical decision-making algorithms, bootstrapped datasets were generated from source data to perform comprehensive machine learning analysis on these algorithms and their constituent steps, which yielded a novel and thorough evaluation of medical diagnostic algorithms.
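
A small sketch of the underlying entropy-removal idea: treating a diagnostic test as a channel, the information it removes about disease status is the mutual information I(disease; test), computable from prevalence, sensitivity, and specificity. The numbers below are illustrative and not drawn from the 623 tools analysed.

```python
import numpy as np

def h(p):
    """Binary Shannon entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def entropy_removed(prevalence, sensitivity, specificity):
    """Entropy removed by a test = I(disease; result) in bits,
    from the joint distribution over (disease, test result)."""
    p = prevalence
    tp = p * sensitivity              # joint probabilities of the four outcomes
    fn = p * (1 - sensitivity)
    fp = (1 - p) * (1 - specificity)
    p_pos = tp + fp                   # P(test positive)
    # H(D|T) = P(T+) H(D|T+) + P(T-) H(D|T-), with posteriors via Bayes
    h_cond = p_pos * h(tp / p_pos) + (1 - p_pos) * h(fn / (1 - p_pos))
    return h(p) - h_cond

# Illustrative numbers (not from the paper): prevalence 10%, Se 90%, Sp 95%
print(f"pre-test entropy: {h(0.10):.3f} bits; "
      f"removed by test: {entropy_removed(0.10, 0.90, 0.95):.3f} bits")
```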


Subject(s)
Algorithms , Clinical Decision-Making , Entropy , Machine Learning , Information Theory
14.
Nat Aging ; 3(12): 1486-1499, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38102202

ABSTRACT

Information storage and retrieval is essential for all life. In biology, information is primarily stored in two distinct ways: the genome, comprising nucleic acids, acts as a foundational blueprint and the epigenome, consisting of chemical modifications to DNA and histone proteins, regulates gene expression patterns and endows cells with specific identities and functions. Unlike the stable, digital nature of genetic information, epigenetic information is stored in a digital-analog format, susceptible to alterations induced by diverse environmental signals and cellular damage. The Information Theory of Aging (ITOA) states that the aging process is driven by the progressive loss of youthful epigenetic information, the retrieval of which via epigenetic reprogramming can improve the function of damaged and aged tissues by catalyzing age reversal.


Subject(s)
DNA Methylation , Epigenesis, Genetic , Information Theory , Histones/genetics
15.
PLoS One ; 18(11): e0290047, 2023.
Article in English | MEDLINE | ID: mdl-37943841

ABSTRACT

In graph theory, a topological index is a numerical value that correlates well with certain physical properties of a molecule. It serves as an indicator of how a chemical structure behaves. Shannon's entropy describes the loss of information in transmission networks and has found wide use in the field of information theory. Inspired by the concept of Shannon's entropy, we calculate some topological descriptors for fractal and Cayley-type dendrimer trees. We also find the entropy predicted by these indices.
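
As a concrete example of an entropy-based graph descriptor, the sketch below computes a degree-based Shannon entropy for balanced trees, used here as a simple stand-in for Cayley-type dendrimer skeletons; the paper's actual indices and dendrimer families differ in detail.

```python
import math
import networkx as nx

def degree_entropy(G):
    """Degree-based Shannon entropy: vertex i contributes p_i = deg(i) / sum(deg)."""
    degs = [d for _, d in G.degree()]
    total = sum(degs)
    return -sum((d / total) * math.log2(d / total) for d in degs if d > 0)

# Balanced trees with branching factor 3 as toy dendrimer-like skeletons
for height in (2, 3, 4):
    G = nx.balanced_tree(3, height)
    print(f"height {height}: n={G.number_of_nodes()}, "
          f"degree entropy = {degree_entropy(G):.3f} bits")
```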


Subject(s)
Fractals , Information Theory , Entropy
16.
PLoS Comput Biol ; 19(10): e1011465, 2023 10.
Article in English | MEDLINE | ID: mdl-37847724

ABSTRACT

This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions concerning both the presence and the quality of experience, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate formulation of the axioms as postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system's irreducible cause-effect power, the distinctions and relations specified by a substrate can account for the quality of experience.


Subject(s)
Brain , Information Theory , Models, Neurological , Consciousness
17.
PLoS Comput Biol ; 19(10): e1011346, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37862364

ABSTRACT

The Free Energy Principle (FEP) and Integrated Information Theory (IIT) are two ambitious theoretical approaches. The first aims to provide a formal framework for describing self-organizing and life-like systems in general, and the second attempts a mathematical theory of conscious experience based on the intrinsic properties of a system. They are each concerned with complementary aspects of the properties of systems, one with life and behavior, the other with meaning and experience, so combining them has potential for scientific value. In this paper, we take a first step towards such a synthesis by expanding on the results of an earlier published evolutionary simulation study, which showed a relationship between IIT measures and fitness across tasks of differing complexity. We relate a basic information-theoretic measure from the FEP, surprisal, to this result, finding that the surprisal of simulated agents' observations is inversely related to the general increase in fitness and integration over evolutionary time. Moreover, surprisal fluctuates together with IIT-based consciousness measures over within-trial time. This suggests that the consciousness measures used in IIT indirectly depend on the relation between the agent and the external world, and that it should therefore be possible to relate them to the theoretical concepts used in the FEP. Lastly, we suggest a future approach for investigating this relationship empirically.
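
For reference, surprisal and its standard relation to variational free energy in the FEP, in textbook notation (the specific normalizations used in the simulation study may differ):

```latex
% Surprisal (self-information) of an observation o under a generative model p:
\mathcal{S}(o) = -\ln p(o)
% In the FEP, variational free energy F bounds surprisal from above,
% for any recognition density q(s) over hidden states s:
\mathcal{S}(o) \le F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
```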


Subject(s)
Brain , Information Theory , Models, Neurological , Consciousness , Computer Simulation
18.
J R Soc Interface ; 20(207): 20230443, 2023 10.
Article in English | MEDLINE | ID: mdl-37817583

ABSTRACT

Understanding the mechanism sustaining cardiac fibrillation can facilitate the personalization of treatment. Granger causality analysis can be used to determine, from cardiac time-series data, the existence of a hierarchical fibrillation mechanism that is more amenable to ablation treatment. Conventional Granger causality based on linear predictability may fail if the linearity assumption is not met or when data are sparsely sampled and high-dimensional. More recently developed information theory-based causality measures could potentially provide a more accurate estimate of the nonlinear coupling. However, despite their successful application to linear and nonlinear physical systems, their use in the clinical field remains largely unexplored. Partial mutual information from mixed embedding (PMIME) was implemented to identify the direct coupling of cardiac electrophysiology signals. We show that PMIME requires less data and is more robust to extrinsic confounding factors. The algorithms were then extended for efficient characterization of fibrillation organization and hierarchy using clinical high-dimensional data. We show that PMIME network measures correlate well with the spatio-temporal organization of fibrillation, and demonstrate that a hierarchical type of fibrillation and its drivers could be identified in a subset of ventricular fibrillation patients, with regions of high hierarchy associated with high dominant frequency.
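
For contrast with PMIME, the conventional linear Granger baseline that the abstract argues can fail is easy to reproduce. The sketch below uses statsmodels on a toy pair of channels where one drives the other at lag 2; PMIME itself is a more involved nonlinear, information-theoretic method and is not shown.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Toy channels: y depends on x's past, so x should "Granger-cause" y
rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t-1] + 0.8 * x[t-2] + 0.3 * rng.normal()

# statsmodels convention: tests whether column 2 (x) Granger-causes column 1 (y)
res = grangercausalitytests(np.column_stack([y, x]), maxlag=3, verbose=False)
for lag, (tests, _) in res.items():
    F, p, *_ = tests["ssr_ftest"]
    print(f"lag {lag}: F={F:.1f}, p={p:.2e}")
```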


Subject(s)
Algorithms , Information Theory , Humans , Nonlinear Dynamics
19.
Bioinformatics ; 39(10)2023 10 03.
Article in English | MEDLINE | ID: mdl-37758248

ABSTRACT

MOTIVATION: Optical genome mapping (OGM) is a technique that extracts partial genomic information from optically imaged and linearized DNA fragments containing fluorescently labeled short sequence patterns. This information can be used for various genomic analyses and applications, such as the detection of structural variations and copy-number variations, epigenomic profiling, and microbial species identification. Currently, the choice of labeled patterns is based on the available biochemical methods and is not necessarily optimized for the application. RESULTS: In this work, we develop a model of OGM based on information theory, which enables the design of optimal labeling patterns for specific applications and target organism genomes. We validated the model through experimental OGM on human DNA and simulations on bacterial DNA. Our model predicts up to 10-fold improved accuracy by optimal choice of labeling patterns, which may guide future development of OGM biochemical labeling methods and significantly improve its accuracy and yield for applications such as epigenomic profiling and cultivation-free pathogen identification in clinical samples. AVAILABILITY AND IMPLEMENTATION: https://github.com/yevgenin/PatternCode.
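
A deliberately idealized sketch of the information-content idea: score a candidate recognition sequence by the per-site binary entropy of label occurrence, summed over an imaged fragment. This treats label sites as independent Bernoulli events and ignores optical resolution and labeling efficiency, both of which the paper's model accounts for; the motifs, random genome, and fragment length are illustrative assumptions.

```python
import math
import random

def pattern_bits_per_fragment(genome, pattern, fragment_kb=250):
    """Idealized information yield of a labeling pattern: per-site binary
    entropy of 'label / no label', summed over a fragment's positions."""
    n = len(genome) - len(pattern) + 1
    hits = sum(genome.startswith(pattern, i) for i in range(n))  # overlapping count
    p = hits / n                                   # per-site label probability
    h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p)) if 0 < p < 1 else 0.0
    return h * fragment_kb * 1000                  # bits per imaged fragment

random.seed(6)
genome = "".join(random.choices("ACGT", k=1_000_000))  # random stand-in genome
for pat in ("CTTAAG", "GCTCTTC", "CACGAG"):            # illustrative recognition motifs
    print(f"{pat}: ~{pattern_bits_per_fragment(genome, pat):.0f} bits per 250 kb fragment")
```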


Subject(s)
Information Theory , Software , Humans , Genome , Restriction Mapping , DNA
20.
PLoS One ; 18(9): e0289958, 2023.
Article in English | MEDLINE | ID: mdl-37729293

ABSTRACT

This study evaluated the innovation performance of 72 universities using panel data from 2011 to 2019 and the data envelopment analysis-Malmquist method. Benchmark regression was used to analyse the relationship between digital finance and the universities' innovation performance, with the aim of improving innovation performance and promoting national innovation. The empirical analysis shows that digital finance positively affects innovation performance, a finding confirmed through robustness tests such as limited information maximum likelihood, two-stage least squares, and interactive fixed effects. Moreover, based on information theory, digital finance exerts its influence by improving credit demand and financial efficiency. Additionally, innovation performance exhibited spatial spillover effects. Lastly, the paper concludes with implications for improving digital financial coverage and constructing innovation networks among universities.


Subject(s)
Benchmarking , Information Theory , Universities , Research Design