Results 1 - 20 of 22
1.
Phys Rev E ; 105(3-1): 034403, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35428091

ABSTRACT

We address the problem of evaluating the transfer entropy (TE) produced by biochemical reactions from experimentally measured data. Although these reactions are generally nonlinear and nonstationary processes, which makes accurate modeling challenging, a Gaussian approximation allows the TE to be assessed simply by estimating covariance matrices from multiple simultaneously measured time series representing the activation levels of biomolecules such as proteins. Nevertheless, the nonstationary nature of biochemical signals makes it difficult to theoretically assess the sampling distributions of TE, which are necessary for evaluating the statistical confidence and significance of the data-driven estimates. We resolve this difficulty by assessing the sampling distributions computationally, using techniques from computational statistics. The computational methods are tested on data generated from a theoretically tractable time-varying signal model, which leads to a method for screening out all but the statistically significant estimates. The usefulness of the developed method is examined by applying it to real biological data experimentally measured from the ERBB-RAS-MAPK system, which superintends diverse cell fate decisions. A comparison between cells containing wild-type and mutant proteins exhibits a distinct difference in the time evolution of TE, whereas hardly any difference is found in the average profiles of the raw signals. Such comparisons may help unveil important pathways of biochemical reactions.
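Under a Gaussian approximation, the transfer entropy reduces to a log-ratio of conditional variances obtained from sample covariances. The following minimal sketch (an illustration, not the authors' code; the toy signal model and variable names are assumptions) estimates TE between two scalar time series:

```python
import math
import random

def gaussian_te(x, y):
    """Transfer entropy y -> x under a Gaussian approximation:
    TE = 0.5 * log( Var(x_{t+1} | x_t) / Var(x_{t+1} | x_t, y_t) ),
    with conditional variances obtained from sample covariances."""
    n = len(x) - 1
    xt, yt, x1 = x[:-1], y[:-1], x[1:]

    def cov(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n

    # Residual variance of regressing x_{t+1} on x_t alone.
    v_x = cov(x1, x1) - cov(x1, xt) ** 2 / cov(xt, xt)
    # Residual variance of regressing x_{t+1} on (x_t, y_t):
    # solve the 2x2 normal equations explicitly.
    cxx, cyy, cxy = cov(xt, xt), cov(yt, yt), cov(xt, yt)
    bx, by = cov(x1, xt), cov(x1, yt)
    det = cxx * cyy - cxy ** 2
    a1 = (bx * cyy - by * cxy) / det
    a2 = (by * cxx - bx * cxy) / det
    v_xy = cov(x1, x1) - a1 * bx - a2 * by
    return 0.5 * math.log(v_x / v_xy)

# Toy example: y drives x, so TE(y -> x) should exceed TE(x -> y).
random.seed(0)
y = [random.gauss(0, 1) for _ in range(5000)]
x = [0.0]
for t in range(4999):
    x.append(0.5 * x[t] + 0.8 * y[t] + 0.1 * random.gauss(0, 1))
te_fwd = gaussian_te(x, y)  # y -> x: clearly positive
te_rev = gaussian_te(y, x)  # x -> y: close to zero
```

For nonstationary signals such as those in the abstract, the covariances would be estimated across repeated trials at each time point rather than along a single stationary trace.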

2.
Neural Comput ; 32(11): 2187-2211, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32946715

ABSTRACT

Recent remarkable advances in experimental techniques have provided a background for inferring neuronal couplings from point process data that include a great number of neurons. Here, we propose a systematic procedure for pre- and postprocessing generic point process data in an objective manner, so that the data can be handled within the framework of a simple binary statistical model, the Ising or generalized McCulloch-Pitts model. The procedure has two steps: (1) determining the time bin size for transforming the point process data into discrete-time binary data and (2) screening relevant couplings from the estimated couplings. For the first step, we decide the optimal time bin size by introducing the null hypothesis that all neurons fire independently, then choosing a time bin size so that the null hypothesis is rejected under a strict criterion. The likelihood associated with the null hypothesis is analytically evaluated and used in the rejection process. For the second, postprocessing step, after a coupling estimate is obtained from the preprocessed data set (any estimator can be used with the proposed procedure), the estimate is compared with many other estimates derived from data sets obtained by randomizing the original data set in the time direction. We accept the original estimate as relevant only if its absolute value is sufficiently larger than those of the randomized data sets. These manipulations suppress false positive couplings induced by statistical noise. We apply this inference procedure to spiking data from synthetic and in vitro neuronal networks. The results show that the proposed procedure identifies the presence or absence of synaptic couplings fairly well, including their signs, for both the synthetic and experimental data. In particular, the results support the view that we can infer the physical connections of the underlying systems in favorable situations, even with a simple statistical model.
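The binning and shuffle-screening steps can be illustrated with a minimal sketch (hypothetical helper names; a simple covariance proxy stands in for the coupling estimator, which the procedure deliberately leaves interchangeable):

```python
import random

def bin_spikes(spike_times, t_max, dt):
    """Transform a point process into discrete-time binary data."""
    n_bins = int(t_max / dt)
    s = [0] * n_bins
    for t in spike_times:
        b = int(t / dt)
        if b < n_bins:
            s[b] = 1
    return s

def coupling_estimate(si, sj):
    """Covariance-based coupling proxy between two binary spike trains."""
    n = len(si)
    mi, mj = sum(si) / n, sum(sj) / n
    return sum((a - mi) * (b - mj) for a, b in zip(si, sj)) / n

def screen_coupling(si, sj, n_shuffles=200, seed=1):
    """Accept the estimate only if its absolute value exceeds every estimate
    from time-shuffled surrogates (which destroy temporal alignment)."""
    rng = random.Random(seed)
    observed = coupling_estimate(si, sj)
    null = []
    for _ in range(n_shuffles):
        shuffled = si[:]
        rng.shuffle(shuffled)
        null.append(abs(coupling_estimate(shuffled, sj)))
    return observed, abs(observed) > max(null)

# Toy data: neuron j copies neuron i with high probability.
rng = random.Random(0)
ti = [float(t) for t in range(1000) if rng.random() < 0.2]
si = bin_spikes(ti, 1000, 1.0)
sj = [b if rng.random() < 0.9 else 1 - b for b in si]
_, coupled = screen_coupling(si, sj)
```

With real data, the bin size dt would first be chosen by the likelihood-based rejection test described in the abstract rather than fixed by hand.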


Subjects
Neurological Models, Statistical Models, Neurons/physiology, Animals, Computer Simulation, Humans
3.
Phys Rev E ; 99(1-1): 010301, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30780211

ABSTRACT

Modularity maximization using greedy algorithms continues to be a popular approach to community detection in graphs, even after various better-performing algorithms have been proposed. Apart from its clear mechanism and ease of implementation, this approach is persistently popular also because, presumably, its risk of algorithmic failure is not well understood. This Rapid Communication provides insight into this issue by estimating the algorithmic performance limit of modularity maximization for stochastic block model inference. This is achieved by counting the number of metastable states under a local update rule. Our results offer quantitative insight into the level of sparsity at which a greedy algorithm typically fails.
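A minimal illustration of such a local update rule: single-node moves that greedily increase the modularity Q until no move helps, so the stopping point is a metastable state. This is a toy sketch for intuition, not the paper's analysis:

```python
def modularity(adj, labels, m):
    """Newman modularity Q of a partition of an undirected graph."""
    deg = {v: len(adj[v]) for v in adj}
    q = 0.0
    for v in adj:
        for w in adj[v]:
            if labels[v] == labels[w]:
                q += 1.0  # each undirected edge counted twice
        for w in adj:
            if labels[v] == labels[w]:
                q -= deg[v] * deg[w] / (2.0 * m)
    return q / (2.0 * m)

def greedy_local_update(adj, labels):
    """Move single nodes to the neighboring community that most increases Q
    until no single move improves Q (a local optimum / metastable state)."""
    m = sum(len(nb) for nb in adj.values()) / 2
    improved = True
    while improved:
        improved = False
        for v in adj:
            best_q, best_c = modularity(adj, labels, m), labels[v]
            for c in {labels[w] for w in adj[v]}:
                old = labels[v]
                labels[v] = c
                if modularity(adj, labels, m) > best_q + 1e-12:
                    best_q, best_c = modularity(adj, labels, m), c
                labels[v] = old
            if best_c != labels[v]:
                labels[v] = best_c
                improved = True
    return labels

# Two triangles joined by one edge: greedy moves recover the planted split.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = greedy_local_update(adj, {v: v for v in adj})
```

On sparse graphs near the detectability limit, many such metastable states exist, which is exactly what the counting argument in the abstract quantifies.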

4.
Phys Rev E ; 100(6-1): 062101, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31962393

ABSTRACT

The minimum vertex cover (Min-VC) problem is a well-known NP-hard problem. Earlier studies showed that, for the Erdős-Rényi random graph with mean degree c, searching for a Min-VC set becomes computationally difficult above the critical point c = e = 2.718.... Here, we address how this difficulty is influenced by the mesoscopic structure of graphs. To this end, we evaluate the critical condition of difficulty for the stochastic block model. We perform a detailed examination of the specific case of two equal-size communities characterized by intra- and intercommunity degrees, denoted by c_{in} and c_{out}, respectively. Our analysis based on the cavity method indicates that the solution search first becomes difficult when c_{in}+c_{out} exceeds e from below, but becomes easy again when c_{out} is sufficiently larger than c_{in} in the region c_{out}>e. Experiments based on various search algorithms support the theoretical prediction.
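For contrast with the cavity-method analysis, here is one of the simplest search heuristics for Min-VC (an illustrative baseline, not the paper's method): repeatedly add the highest-degree endpoint of an uncovered edge.

```python
def greedy_vertex_cover(edges):
    """Greedy heuristic: repeatedly add the vertex touching the most
    currently uncovered edges until every edge is covered."""
    cover = set()
    remaining = set(edges)
    while remaining:
        deg = {}
        for u, v in remaining:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        best = max(deg, key=deg.get)
        cover.add(best)
        remaining = {e for e in remaining if best not in e}
    return cover

# Star graph: the hub alone covers all edges, so the heuristic is optimal here.
star = [(0, i) for i in range(1, 6)]
cover = greedy_vertex_cover(star)
```

The hard regime identified in the abstract is precisely where such simple heuristics (and more sophisticated ones) fail to find truly minimum covers on sparse random graphs.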

5.
Phys Rev E ; 97(6-1): 062112, 2018 Jun.
Article in English | MEDLINE | ID: mdl-30011500

ABSTRACT

We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method rests on our discovery of a universal scaling property of random matrices, which enables inference about signal behavior from surrogate data of far smaller scale than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^{10} parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. The approach thus holds considerable potential for generalization to other types of complex models.

6.
Phys Rev E ; 97(2-1): 022315, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29548181

ABSTRACT

We conduct a comparative analysis of various estimates of the number of clusters in community detection. An exhaustive comparison requires testing all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on the stochastic block model and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, the map equation, the Bethe free energy, prediction errors, and isolated eigenvalues. The analysis makes apparent the tendencies of the assessment criteria and algorithms to overfit or underfit. In addition, we propose the alluvial diagram as a suitable tool for visualizing statistical inference results, useful for determining the number of clusters.

7.
PLoS One ; 12(12): e0188012, 2017.
Article in English | MEDLINE | ID: mdl-29216215

ABSTRACT

We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ1-norm and total variation terms, based on a perturbative expansion that exploits the large dimensionality of both the data and the model. The formula allows us to reduce the computational cost of evaluating the CVE significantly. Its practicality is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literal cross-validation with reasonably good precision.


Subjects
Theoretical Models, Algorithms, Linear Models
8.
Sci Rep ; 7(1): 3327, 2017 Jun 12.
Article in English | MEDLINE | ID: mdl-28607441

ABSTRACT

Network science investigates methodologies that summarise relational data to obtain better interpretability. Identifying modular structures is a fundamental task, and assessing the appropriate level of coarse-graining is a crucial step in it. Here, we propose principled, scalable, and widely applicable assessment criteria to determine the number of clusters in modular networks based on the leave-one-out cross-validation estimate of the edge prediction error.

9.
Phys Rev E ; 95(1-1): 012304, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28208358

ABSTRACT

We investigate the detectability thresholds of various modular structures in the stochastic block model. Our analysis reveals how the detectability threshold is related to the details of the modular pattern, including the hierarchy of the clusters. We show that certain planted structures are impossible to infer regardless of their fuzziness.

10.
Phys Rev E ; 94(3-1): 032308, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27739839

ABSTRACT

In this study we investigate the resilience of duplex networked layers α and β coupled with antagonistic interlinks, each layer of which inhibits its counterpart at the microscopic level, changing the following factors: whether the influence of the initial failures in α remains [quenched (case Q)] or not [free (case F)]; the effect of intralayer degree-degree correlations in each layer and interlayer degree-degree correlations; and the type of the initial failures, such as random failures or targeted attacks (TAs). We illustrate that the percolation processes repeat in both cases Q and F, although only in case F are nodes that initially failed reactivated. To analytically evaluate the resilience of each layer, we develop a methodology based on the cavity method for deriving the size of a giant component (GC). Strong hysteresis, which is ignored in the standard cavity analysis, is observed in the repetition of the percolation processes particularly in case F. To handle this, we heuristically modify interlayer messages for macroscopic analysis, the utility of which is verified by numerical experiments. The percolation transition in each layer is continuous in both cases Q and F. We also analyze the influences of degree-degree correlations on the robustness of layer α, in particular for the case of TAs. The analysis indicates that the critical fraction of initial failures that makes the GC size in layer α vanish depends only on its intralayer degree-degree correlations. Although our model is defined in a somewhat abstract manner, it may have relevance to ecological systems that are composed of endangered species (layer α) and invaders (layer β), the former of which are damaged by the latter whereas the latter are exterminated in the areas where the former are active.

11.
Phys Rev E ; 94(2-1): 022137, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27627276

ABSTRACT

In this paper, we explore the possibilities and limitations of recovering sparse signals in an online fashion. Employing a mean-field approximation to the Bayes recursion formula yields an online signal recovery algorithm whose computational cost per update is linearly proportional to the signal length. Analysis of the resulting algorithm indicates that the online algorithm asymptotically saturates the optimal performance limit achieved by the offline method in the presence of Gaussian measurement noise, while differences in the allowable computational costs may result in fundamental gaps in the achievable performance in the absence of noise.
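A minimal online sketch in the spirit described above: one O(n) thresholded update per incoming measurement. This is an ISTA-style stand-in, not the authors' mean-field Bayes recursion; the annealed threshold `lam0 / (t + 1)` is an assumption made for the toy example.

```python
import math
import random

def soft_threshold(v, lam):
    """Shrinkage operator promoting sparsity."""
    return math.copysign(max(abs(v) - lam, 0.0), v)

def online_recover(measurements, n, step=0.5, lam0=0.1):
    """Process measurements (a, y) one at a time, with y = a . x0.
    Each update costs O(n): a gradient step on the latest measurement
    followed by soft thresholding."""
    x = [0.0] * n
    for t, (a, y) in enumerate(measurements):
        r = y - sum(ai * xi for ai, xi in zip(a, x))
        norm = sum(ai * ai for ai in a)
        lam = lam0 / (t + 1)  # anneal the shrinkage as data accumulates
        x = [soft_threshold(xi + step * r * ai / norm, lam)
             for ai, xi in zip(a, x)]
    return x

# Toy problem: sparse x0, random Gaussian measurement vectors, no noise.
random.seed(2)
n = 20
x0 = [0.0] * n
x0[3], x0[11] = 1.0, -2.0
meas = []
for _ in range(400):
    a = [random.gauss(0, 1) for _ in range(n)]
    meas.append((a, sum(ai * xi for ai, xi in zip(a, x0))))
x_hat = online_recover(meas, n)
err = sum((xi - x0i) ** 2 for xi, x0i in zip(x_hat, x0)) / n
```

Each signal coordinate is touched exactly once per measurement, which is the linear-in-signal-length per-update cost the abstract refers to.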

12.
Phys Rev E ; 94(2-1): 022312, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27627322

ABSTRACT

We investigate the replicator dynamics with "sparse" symmetric interactions, which represent specialist-specialist interactions in ecological communities. By considering a large self-interaction u, we conduct a perturbative expansion which shows that the nature of the interactions has a direct impact on the species abundance distribution. The central results are that all species coexist in a realistic range of the model parameters and that a certain discreteness of the interactions induces multiple peaks in the species abundance distribution, offering a possible theoretical explanation for the multiple peaks observed in various field studies. To obtain more quantitative information, we also construct a nonperturbative theory that becomes exact on treelike networks if all species coexist, providing exact critical values of u below which extinct species emerge. Numerical simulations in a variety of situations clarify the robustness of the presented mechanism of all-species coexistence and of the multiple peaks in the species abundance distributions.
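The replicator dynamics with a large self-interaction u can be sketched as follows (a toy ring of specialist-specialist interactions chosen for illustration; the paper treats random sparse networks):

```python
def replicator_step(x, A, dt):
    """One Euler step of dx_i/dt = x_i * (f_i - phi), where f = A x and
    phi = x . f; subtracting phi keeps the abundances on the simplex."""
    n = len(x)
    f = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    phi = sum(xi * fi for xi, fi in zip(x, f))
    return [xi + dt * xi * (fi - phi) for xi, fi in zip(x, f)]

# Sparse symmetric interactions on a ring, with a large self-interaction -u.
u, n = 5.0, 6
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = -u
    A[i][(i + 1) % n] = A[(i + 1) % n][i] = 1.0

x = [(i + 1) / 21 for i in range(n)]  # perturbed initial abundances, sum = 1
for _ in range(2000):
    x = replicator_step(x, A, 0.01)
total = sum(x)
```

With u this large the interior equilibrium is stable, so all six species coexist; shrinking u below a critical value would drive some abundances to zero, as in the abstract.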

13.
Article in English | MEDLINE | ID: mdl-26172750

ABSTRACT

Investigating the performance of different methods is a fundamental problem in graph partitioning. In this paper, we estimate the so-called detectability threshold for the spectral method with both un-normalized and normalized Laplacians in sparse graphs. The detectability threshold is the critical point at which the result of the spectral method is completely uncorrelated to the planted partition. We also analyze whether the localization of eigenvectors affects the partitioning performance in the detectable region. We use the replica method, which is often used in the field of spin-glass theory, and focus on the case of bisection. We show that the gap between the estimated threshold for the spectral method and the threshold obtained from Bayesian inference is considerable in sparse graphs, even without eigenvector localization. This gap closes in a dense limit.

14.
Phys Rev E Stat Nonlin Soft Matter Phys ; 90(5-1): 052813, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25493840

ABSTRACT

Through supervised learning in a binary perceptron one is able to classify an extensive number of random patterns by a proper assignment of binary synaptic weights. However, to find such assignments in practice is quite a nontrivial task. The relation between the weight space structure and the algorithmic hardness has not yet been fully understood. To this end, we analytically derive the Franz-Parisi potential for the binary perceptron problem by starting from an equilibrium solution of weights and exploring the weight space structure around it. Our result reveals the geometrical organization of the weight space; the weight space is composed of isolated solutions, rather than clusters of exponentially many close-by solutions. The pointlike clusters far apart from each other in the weight space explain the previously observed glassy behavior of stochastic local search heuristics.
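The stochastic local search whose glassy behavior is discussed above can be sketched on a toy teacher-student instance, where a zero-error binary weight assignment is guaranteed to exist (illustrative only; not the paper's analysis code, and the search may stall in the isolated-solution landscape the abstract describes):

```python
import random

def margins(W, patterns, labels):
    """Stability of each pattern: label * (W . pattern); positive = correct."""
    return [s * sum(wi * xi for wi, xi in zip(W, x))
            for x, s in zip(patterns, labels)]

def local_search(patterns, labels, n, steps=2000, seed=0):
    """Stochastic local search over binary weights W in {-1,+1}^n: flip a
    random weight, keep the flip if it does not increase the error count."""
    rng = random.Random(seed)
    W = [rng.choice([-1, 1]) for _ in range(n)]
    errors = sum(m <= 0 for m in margins(W, patterns, labels))
    for _ in range(steps):
        i = rng.randrange(n)
        W[i] = -W[i]
        e = sum(m <= 0 for m in margins(W, patterns, labels))
        if e <= errors:
            errors = e
        else:
            W[i] = -W[i]  # reject the flip
    return W, errors

# Patterns labeled by a hidden binary teacher, so a perfect W exists.
rng = random.Random(1)
n, p = 15, 20
teacher = [rng.choice([-1, 1]) for _ in range(n)]
patterns = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(p)]
labels = [1 if sum(t * xi for t, xi in zip(teacher, x)) > 0 else -1
          for x in patterns]
W, errors = local_search(patterns, labels, n)
```

At larger n and pattern load, the isolated, point-like solution clusters described in the abstract make exactly this kind of heuristic slow down dramatically.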

15.
Article in English | MEDLINE | ID: mdl-24580282

ABSTRACT

We develop a methodology for analyzing the percolation phenomena of two mutually coupled (interdependent) networks based on the cavity method of statistical mechanics. In particular, we take into account the influence of degree-degree correlations inside and between the networks on the network robustness against targeted (random degree-dependent) attacks and random failures. We show that the developed methodology is reduced to the well-known generating function formalism in the absence of degree-degree correlations. The validity of the developed methodology is confirmed by a comparison with the results of numerical experiments. Our analytical results indicate that the robustness of the interdependent networks depends on both the intranetwork and internetwork degree-degree correlations in a nontrivial way for both cases of random failures and targeted attacks.

16.
Article in English | MEDLINE | ID: mdl-23848649

ABSTRACT

The adaptive Thouless-Anderson-Palmer equation is derived for inverse Ising problems in the presence of quenched random fields. We test the proposed scheme on Sherrington-Kirkpatrick, Hopfield, and random orthogonal models and find that the adaptive Thouless-Anderson-Palmer approach allows accurate inference of quenched random fields whose distribution can be either Gaussian or bimodal. In particular, another competitive method for inferring external fields, namely, the naive mean field method with diagonal weights, is compared and discussed.
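The naive mean-field baseline mentioned above can be sketched for field inference (a minimal illustration without the diagonal-weight refinement or the adaptive TAP corrections; symbol names are assumptions):

```python
import math

def infer_fields_nmf(J, m):
    """Naive mean-field inversion: given couplings J and magnetizations m,
    the self-consistency m_i = tanh(h_i + sum_j J_ij m_j) is solved for the
    external fields h_i = atanh(m_i) - sum_j J_ij m_j."""
    n = len(m)
    return [math.atanh(m[i]) - sum(J[i][j] * m[j] for j in range(n))
            for i in range(n)]

# Consistency check: generate m from known fields via the nMF equations,
# then recover the fields exactly by inversion.
J = [[0.0, 0.3, -0.2],
     [0.3, 0.0, 0.1],
     [-0.2, 0.1, 0.0]]
h_true = [0.5, -0.4, 0.2]

m = [0.0, 0.0, 0.0]
for _ in range(500):  # fixed-point iteration of the nMF equations
    m = [math.tanh(h_true[i] + sum(J[i][j] * m[j] for j in range(3)))
         for i in range(3)]
h_inferred = infer_fields_nmf(J, m)
```

With magnetizations estimated from real samples of a strongly coupled system, this baseline is biased, which is where the adaptive TAP corrections studied in the abstract come in.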

17.
Phys Rev E Stat Nonlin Soft Matter Phys ; 82(3 Pt 2): 036101, 2010 Sep.
Article in English | MEDLINE | ID: mdl-21230133

ABSTRACT

We develop a scheme for evaluating the size of the largest connected subnetwork (giant component) in random networks, and the percolation threshold when sites (nodes) and/or bonds (edges) are removed, based on the cavity method of statistical mechanics of disordered systems. We apply our scheme in particular to random networks with a bimodal degree distribution (two-peak networks), which have been proposed in earlier studies as robust against random failures of sites and/or targeted (random degree-dependent) attacks on sites. Our analysis indicates that correlations among degrees affect a network's robustness against targeted attacks on sites or bonds nontrivially, depending on the details of the network configuration.
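In the absence of degree-degree correlations, the cavity equations for the giant component reduce to the standard generating-function fixed point, which is easy to sketch for a two-peak network (uncorrelated configuration model assumed; the paper's scheme additionally handles correlations):

```python
def giant_component_size(pk, phi, iters=1000):
    """Giant-component size of a configuration-model network with degree
    distribution pk when each site is retained with probability phi.
    Solves the standard fixed point
        u = 1 - phi + phi * G1(u),    S = phi * (1 - G0(u)),
    where u is the probability an edge fails to reach the giant component."""
    mean_k = sum(k * p for k, p in pk.items())
    G0 = lambda x: sum(p * x ** k for k, p in pk.items())
    G1 = lambda x: sum(k * p * x ** (k - 1) for k, p in pk.items()) / mean_k
    u = 0.0
    for _ in range(iters):
        u = 1 - phi + phi * G1(u)
    return phi * (1 - G0(u))

# Two-peak (bimodal) degree distribution: half the sites of degree 2,
# half of degree 6.
pk = {2: 0.5, 6: 0.5}
s_intact = giant_component_size(pk, 1.0)   # no failures
s_damaged = giant_component_size(pk, 0.5)  # half the sites randomly removed
```

Targeted degree-dependent attacks replace the uniform retention probability phi with a degree-dependent one, which changes G0 and G1 accordingly.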

18.
Phys Rev E Stat Nonlin Soft Matter Phys ; 82(6 Pt 1): 060101, 2010 Dec.
Article in English | MEDLINE | ID: mdl-21230631

ABSTRACT

The emerging popular scheme of fourth-generation wireless communication, orthogonal frequency-division multiplexing (OFDM), is mapped onto a variant of a random-field Ising Hamiltonian, yielding an efficient, physically based intercarrier interference (ICI) cancellation decoding scheme. This scheme is based on Monte Carlo (MC) dynamics at zero temperature as well as at the Nishimori temperature, and demonstrates improved bit error rate (BER) and robust convergence time compared with the state-of-the-art ICI cancellation decoding scheme. Optimal BER performance is achieved with MC dynamics at the Nishimori temperature, but at a substantial computational cost. The suggested ICI cancellation scheme also supports the transmission of biased signals.
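Zero-temperature MC dynamics of the kind used for decoding can be sketched on a toy random-field Ising instance (illustrative only; the actual decoder operates on the Hamiltonian obtained from the OFDM mapping, and the Nishimori-temperature variant accepts uphill moves probabilistically):

```python
import random

def energy(s, J, h):
    """H = - sum_{i<j} J_ij s_i s_j - sum_i h_i s_i."""
    n = len(s)
    e = -sum(h[i] * s[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * s[i] * s[j]
    return e

def zero_temperature_mc(J, h, sweeps=50, seed=0):
    """T=0 limit of Metropolis dynamics: a random single-spin flip is
    accepted only if it does not raise the energy."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    e = energy(s, J, h)
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        s[i] = -s[i]
        e_new = energy(s, J, h)
        if e_new <= e:
            e = e_new
        else:
            s[i] = -s[i]  # reject the flip
    return s, e

# Toy random-field Ising instance with Gaussian couplings and fields.
rng = random.Random(3)
n = 10
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = rng.gauss(0, 0.3)
h = [rng.gauss(0, 1) for _ in range(n)]
s, e = zero_temperature_mc(J, h)
```

The run terminates in a configuration that is a local minimum with respect to single spin flips, which is what makes zero-temperature dynamics fast but suboptimal relative to decoding at the Nishimori temperature.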

19.
Phys Rev E Stat Nonlin Soft Matter Phys ; 80(6 Pt 1): 061124, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20365135

ABSTRACT

The Kronecker channel model of wireless communication is analyzed using statistical mechanics methods. In the model, spatial proximities among transmission/reception antennas are taken into account through correlation matrices, which generally yield nontrivial dependence among the symbols to be estimated. This prevents accurate assessment of the communication performance by naive use of a previously developed analytical scheme based on a matrix integration formula. To resolve this difficulty, we develop a formalism that can formally handle the correlations in Kronecker models based on the known scheme. Unfortunately, direct application of the developed scheme is, in general, practically difficult. However, the formalism is still useful, indicating that the effect of the correlations generally enters only at fourth and higher orders in the correlation strength. Therefore, the known analytical scheme offers a good approximation in performance evaluation when the correlation strength is sufficiently small. For a class of specific correlations, we show that the performance analysis can be mapped to a problem of one-dimensional spin systems in random fields, which can be investigated without approximation by the belief propagation algorithm.


Subjects
Algorithms, Statistical Data Interpretation, Statistical Models, Computer-Assisted Signal Processing, Telecommunications, Computer Simulation
20.
Phys Rev E Stat Nonlin Soft Matter Phys ; 67(3 Pt 2): 036703, 2003 Mar.
Article in English | MEDLINE | ID: mdl-12689198

ABSTRACT

We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
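The role of the parity-check matrix H in an LDPC code can be illustrated with a syndrome computation (a tiny toy matrix for illustration; real LDPC codes are large and sparse, and the paper decodes them with belief propagation):

```python
def parity_checks(H, x):
    """Syndrome of word x under parity-check matrix H (mod-2 arithmetic).
    x is a valid codeword iff every check is satisfied (syndrome all zero)."""
    return [sum(hij * xj for hij, xj in zip(row, x)) % 2 for row in H]

# A tiny sparse parity-check matrix: 3 checks over 6 bits.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
codeword = [1, 1, 0, 0, 1, 1]   # satisfies all three checks
corrupted = codeword[:]
corrupted[0] ^= 1               # single bit flip on the channel
syndrome_ok = parity_checks(H, codeword)
syndrome_bad = parity_checks(H, corrupted)
```

A nonzero syndrome flags which checks are violated; belief propagation uses exactly this sparse check structure to infer the most likely transmitted bits.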
