Results 1 - 6 of 6
1.
Phys Rev E; 108(1-1): 014101, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37583218

ABSTRACT

A fundamental problem in the analysis of complex systems is getting a reliable estimate of the entropy of their probability distributions over the state space. This is difficult because unsampled states can contribute substantially to the entropy, while they do not contribute to the maximum likelihood estimator of entropy, which replaces probabilities by the observed frequencies. Bayesian estimators overcome this obstacle by introducing a model of the low-probability tail of the probability distribution. Which statistical features of the observed data determine the model of the tail, and hence the output of such estimators, remains unclear. Here we show that well-known entropy estimators for probability distributions on discrete state spaces model the structure of the low-probability tail based largely on a few statistics of the data: the sample size, the maximum likelihood estimate, the number of coincidences among the samples, and the dispersion of the coincidences. We derive approximate analytical entropy estimators for undersampled distributions based on these statistics, and we use the results to propose an intuitive understanding of how the Bayesian entropy estimators work.
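To make the role of these statistics concrete, here is a minimal Python sketch (not the estimators derived in the paper) that computes them from a set of samples; in particular, the definition used below for the dispersion of coincidences is only an illustrative assumption.

```python
import numpy as np
from collections import Counter

def plug_in_entropy(samples):
    """Maximum likelihood ('plug-in') entropy in nats: probabilities
    are replaced by the observed frequencies."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def coincidence_statistics(samples):
    """Number of coincidences and a simple dispersion measure.

    A coincidence is a sample that falls on a state already seen, so the
    count is N - K (N samples, K distinct observed states). The dispersion
    definition is an assumption made here for illustration: the variance of
    the multiplicities of the states observed more than once."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    n, k = counts.sum(), len(counts)
    repeated = counts[counts > 1]
    dispersion = repeated.var() if len(repeated) > 0 else 0.0
    return int(n - k), float(dispersion)

# Undersampled regime: 50 draws from a 1000-state Zipf-like distribution.
rng = np.random.default_rng(0)
p = 1.0 / np.arange(1, 1001)
p /= p.sum()
samples = rng.choice(1000, size=50, p=p)
print(plug_in_entropy(samples), coincidence_statistics(samples))
```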

2.
Elife; 11, 2022 Mar 22.
Article in English | MEDLINE | ID: mdl-35315769

ABSTRACT

The problem of deciphering how low-level patterns (action potentials in the brain, amino acids in a protein, etc.) drive high-level biological features (sensorimotor behavior, enzymatic function) represents the central challenge of quantitative biology. The lack of general methods for doing so from the size of datasets that can be collected experimentally severely limits our understanding of the biological world. For example, in neuroscience, some sensory and motor codes have been shown to consist of precisely timed multi-spike patterns. However, the combinatorial complexity of such pattern codes has precluded the development of methods for their comprehensive analysis. Thus, just as it is hard to predict a protein's function based on its sequence, we still do not understand how to accurately predict an organism's behavior based on neural activity. Here, we introduce the unsupervised Bayesian Ising Approximation (uBIA) for solving this class of problems. We demonstrate its utility in an application to neural data, detecting precisely timed spike patterns that code for specific motor behaviors in a songbird vocal system. In data recorded during singing from neurons in a vocal control region, our method detects such codewords with an arbitrary number of spikes, does so from small datasets, and accounts for dependencies in the occurrences of codewords. Detecting such comprehensive motor control dictionaries can improve our understanding of skilled motor control and the neural bases of sensorimotor learning in animals. To further illustrate the utility of uBIA, we used it to identify the distinct sets of activity patterns that encode vocal motor exploration versus typical song production. Crucially, our method can be used not only for analysis of neural systems, but also for understanding the structure of correlations in other biological and nonbiological datasets.
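As a toy illustration of the kind of data this approach targets (it is not the uBIA inference itself), the sketch below binarizes hypothetical spike trains into timing words and tallies how often each multi-spike pattern co-occurs with each of two behavioral outcomes; all sizes and variable names are made up.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n_trials, n_bins = 200, 10                       # hypothetical discretization
spikes = rng.random((n_trials, n_bins)) < 0.2    # binarized spike trains
behavior = rng.integers(0, 2, n_trials)          # e.g. two motor outcomes

# Each trial's activity is a binary word; the number of possible words
# (2**n_bins) is what makes exhaustive pattern dictionaries combinatorially hard.
words = ["".join(map(str, row.astype(int))) for row in spikes]
counts_by_behavior = {
    b: Counter(w for w, bb in zip(words, behavior) if bb == b) for b in (0, 1)
}
print(counts_by_behavior[0].most_common(3))
print(counts_by_behavior[1].most_common(3))
```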


Subjects
Finches, Action Potentials/physiology, Animals, Bayes Theorem, Finches/physiology, Learning/physiology, Animal Vocalization/physiology
3.
Entropy (Basel); 24(1), 2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35052151

ABSTRACT

Inferring the value of a property of a large stochastic system is a difficult task when the number of samples is insufficient to reliably estimate the probability distribution. The Bayesian estimator of the property of interest requires knowledge of the prior distribution, and in many situations, it is not clear which prior should be used. Several estimators have been developed so far in which the proposed prior is individually tailored for each property of interest; such is the case, for example, for the entropy, the amount of mutual information, or the correlation between pairs of variables. In this paper, we propose a general framework to select priors that is valid for arbitrary properties. We first demonstrate that only certain aspects of the prior distribution actually affect the inference process. We then expand the sought prior as a linear combination of a one-dimensional family of indexed priors, each of which is obtained through a maximum entropy approach with constrained mean values of the property under study. In many cases of interest, only one or very few components of the expansion turn out to contribute to the Bayesian estimator, so it is often valid to keep only a single component. The relevant component is selected by the data, so no handcrafted priors are required. We test the performance of this approximation with a few paradigmatic examples and show that it performs well in comparison to the ad hoc methods previously proposed in the literature. Our method highlights the connection between Bayesian inference and equilibrium statistical mechanics, since the most relevant component of the expansion can be argued to be the one with the right temperature.
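The selection-by-data idea can be illustrated with a common stand-in for such a one-parameter family of priors: a symmetric Dirichlet prior with concentration beta (not the paper's maximum-entropy construction). Each component is weighted by its evidence, and the weight over the family is typically sharply peaked, so a single component dominates.

```python
import numpy as np
from scipy.special import gammaln

def log_evidence(counts, beta):
    """log P(counts | beta) for a symmetric Dirichlet(beta) prior on a
    K-state multinomial (Dirichlet-multinomial marginal likelihood,
    up to the beta-independent multinomial coefficient)."""
    counts = np.asarray(counts, dtype=float)
    K, N = len(counts), counts.sum()
    return (gammaln(K * beta) - gammaln(N + K * beta)
            + np.sum(gammaln(counts + beta) - gammaln(beta)))

counts = np.array([7, 3, 1, 1, 0, 0, 0, 0, 0, 0])   # toy undersampled data
betas = np.logspace(-3, 2, 200)
log_ev = np.array([log_evidence(counts, b) for b in betas])
weights = np.exp(log_ev - log_ev.max())
weights /= weights.sum()
print("dominant beta:", betas[np.argmax(weights)])
```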

4.
Elife; 10, 2021 Sep 02.
Article in English | MEDLINE | ID: mdl-34473052

ABSTRACT

Although different animal species often exhibit extensive variation in many behaviors, scientists typically examine only one or a small number of behaviors in any single study. Here, we propose a new framework to simultaneously study the evolution of many behaviors. We measured the behavioral repertoire of individuals from six species of fruit flies using unsupervised techniques and identified all stereotyped movements exhibited by each species. We then fit a Generalized Linear Mixed Model to estimate the intra- and inter-species behavioral covariances and, by using the known phylogenetic relationships among species, we estimated the (unobserved) behaviors exhibited by ancestral species. We found that much of the intra-specific behavioral variation has a covariance structure similar to previously described long-timescale variation in an individual's behavior, suggesting that much of the measured variation between individuals of a single species in our assay reflects differences in the status of neural networks rather than genetic or developmental differences between individuals. We then propose a method to identify groups of behaviors that appear to have evolved in a correlated manner, illustrating how sets of behaviors, rather than individual behaviors, likely evolved. Our approach provides a new framework for identifying co-evolving behaviors and may provide new opportunities to study the mechanistic basis of behavioral evolution.
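As a much-simplified illustration of the variance decomposition (univariate, on synthetic data, and without the phylogenetic component, so not the paper's multivariate GLMM), a mixed model with a species-level random intercept separates inter-species from intra-species variation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
species_names = [f"sp{i}" for i in range(6)]
species = np.repeat(species_names, 20)            # 20 individuals per species
species_effect = dict(zip(species_names, rng.normal(0.0, 1.0, 6)))
# Hypothetical behavioral measurement, e.g. the frequency of one stereotyped movement.
behavior = (np.array([species_effect[s] for s in species])
            + rng.normal(0.0, 0.5, len(species)))
df = pd.DataFrame({"species": species, "behavior_freq": behavior})

# Random intercept per species: its variance estimates the inter-species
# component, while the residual variance estimates the intra-species component.
fit = smf.mixedlm("behavior_freq ~ 1", df, groups=df["species"]).fit()
print(fit.summary())
```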


Subjects
Animal Behavior, Biological Evolution, Animals, Animal Behavior/classification, Animal Behavior/physiology, Drosophila/classification, Drosophila/physiology, Male, Statistical Models, Phylogeny, Video Recording
5.
Entropy (Basel); 21(6), 2019 Jun 25.
Article in English | MEDLINE | ID: mdl-33267337

ABSTRACT

Determining the strength of nonlinear statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely undersampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables (the one with minimal entropy) is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that exhibits very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistic determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences.
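A naive plug-in version of the decomposition this estimator exploits, I(X;Y) = H(X) - H(X|Y), is sketched below. It is biased in the undersampled regime and is not the Bayesian estimator proposed in the paper, but it makes explicit which marginal needs to be well sampled.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def plug_in_mutual_information(joint_counts):
    """I(X;Y) = H(X) - H(X|Y) from empirical joint counts joint_counts[x, y],
    with X the low-entropy, well-sampled variable."""
    joint = joint_counts / joint_counts.sum()
    p_x = joint.sum(axis=1)
    p_y = joint.sum(axis=0)
    h_x = entropy(p_x)
    h_x_given_y = sum(p_y[y] * entropy(joint[:, y] / p_y[y])
                      for y in range(joint.shape[1]) if p_y[y] > 0)
    return h_x - h_x_given_y

rng = np.random.default_rng(3)
# X has 2 states (well sampled); Y has many states (undersampled).
counts = rng.integers(0, 3, size=(2, 50)).astype(float)
print(plug_in_mutual_information(counts))
```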

6.
Article in English | MEDLINE | ID: mdl-26382460

ABSTRACT

We develop the information-theoretical concepts required to study the statistical dependencies among three variables. Some of these dependencies are pure triple interactions, in the sense that they cannot be explained in terms of a combination of pairwise correlations. We derive bounds for triple dependencies and characterize the shape of the joint probability distribution of three binary variables with high triple interaction. The analysis also allows us to quantify the amount of redundancy in the mutual information between pairs of variables, and to assess whether the information between two variables is or is not mediated by a third variable. These concepts are applied to the analysis of written texts. We find that the probability that a given word is found in a particular location within the text is modulated not only by the presence or absence of other nearby words, but also by the presence or absence of nearby pairs of words. We identify the words enclosing the key semantic concepts of the text, the triplets of words with high pairwise and triple interactions, and the words that mediate the pairwise interactions between other words.
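For reference, one standard measure of a three-way dependency is the interaction information I(X;Y;Z) = I(X;Y) - I(X;Y|Z); the paper's specific notion of a pure triple interaction may be defined differently. The sketch below computes it for an XOR-like distribution of three binary variables, which is pairwise independent yet fully determined as a triple.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def interaction_information(pxyz):
    """I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(XY)-H(XZ)-H(YZ) + H(XYZ)
    for a joint distribution pxyz[x, y, z]."""
    pxyz = pxyz / pxyz.sum()
    h = lambda axes: entropy(pxyz.sum(axis=axes).ravel())
    h_x, h_y, h_z = h((1, 2)), h((0, 2)), h((0, 1))
    h_xy, h_xz, h_yz = h((2,)), h((1,)), h((0,))
    return h_x + h_y + h_z - h_xy - h_xz - h_yz + entropy(pxyz.ravel())

# XOR distribution: every pair is independent, yet Z = X xor Y exactly.
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        p[x, y, x ^ y] = 0.25
print(interaction_information(p))   # -ln 2 (in nats)
```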
