Results 1 - 20 of 30
1.
Proc Natl Acad Sci U S A ; 118(32)2021 08 10.
Article in English | MEDLINE | ID: mdl-34312253

ABSTRACT

Contact tracing is an essential tool for mitigating the impact of a pandemic such as the COVID-19 pandemic. Digital devices can play an important role in achieving efficient, scalable contact tracing in real time. While much attention has been paid to analyzing the privacy and ethical risks of the associated mobile applications, far less research has been devoted to optimizing their performance and assessing their impact on the mitigation of the epidemic. We develop Bayesian inference methods to estimate the risk that an individual is infected. This inference is based on the list of their recent contacts and those contacts' own risk levels, as well as personal information such as test results or the presence of symptoms. We propose using probabilistic risk estimation to optimize testing and quarantine strategies for the control of an epidemic. Our results show that in a certain range of epidemic spreading (typically when manual tracing of all contacts of infected people becomes practically impossible, but before the fraction of infected people reaches the scale at which a lockdown becomes unavoidable), this inference of individuals at risk could be an efficient way to mitigate the epidemic. Our approaches translate into fully distributed algorithms that only require communication between individuals who have recently been in contact. Such communication may be encrypted and anonymized, and it is thus compatible with privacy-preserving standards. We conclude that probabilistic risk estimation can enhance the performance of digital contact tracing and should be considered in mobile applications.
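The distributed risk estimate described above can be illustrated with a minimal sketch. This is not the paper's exact Bayesian algorithm: it simply combines a prior infection probability with the risk levels of recent contacts, assuming independent transmission with a hypothetical per-contact probability `lam`.

```python
# A minimal sketch (illustrative, not the paper's exact algorithm): combine a
# prior risk with contacts' risk levels under independent transmission.

def infection_risk(base_risk, contact_risks, lam=0.2):
    """Probability of infection given independent transmission channels."""
    p_not_infected = 1.0 - base_risk
    for r in contact_risks:
        # a contact transmits with probability lam times their own risk level
        p_not_infected *= 1.0 - lam * r
    return 1.0 - p_not_infected

# A person with a low prior but several high-risk recent contacts:
risk = infection_risk(0.01, [0.9, 0.8, 0.7])
```

Iterating such exchanges between contacts yields a fully distributed scheme in the spirit of the abstract: each individual only needs the risk levels of the people they recently met, never the full contact graph.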


Subject(s)
Contact Tracing/methods, Epidemics/prevention & control, Algorithms, Bayes Theorem, COVID-19/epidemiology, COVID-19/prevention & control, Contact Tracing/statistics & numerical data, Humans, Mobile Applications, Privacy, Risk Assessment, SARS-CoV-2
2.
Phys Rev Lett ; 126(8): 088101, 2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33709726

ABSTRACT

We introduce a simple physical picture to explain the process of molecular sorting, whereby specific proteins are concentrated and distilled into submicrometric lipid vesicles in eukaryotic cells. To this purpose, we formulate a model based on the coupling of spontaneous molecular aggregation with vesicle nucleation. Its implications are studied by means of a phenomenological theory describing the diffusion of molecules toward multiple sorting centers that grow due to molecule absorption and are extracted when they reach a sufficiently large size. The predictions of the theory are compared with numerical simulations of a lattice-gas realization of the model and with experimental observations. The efficiency of the distillation process is found to be optimal for intermediate aggregation rates, where the density of sorted molecules is minimal and the process obeys simple scaling laws. Quantitative measures of endocytic sorting performed in primary endothelial cells are compatible with the hypothesis that these optimal conditions are realized in living cells.


Subject(s)
Eukaryotic Cells/metabolism, Membrane Lipids/metabolism, Models, Biological, Proteins/metabolism, Diffusion, Transport Vesicles/metabolism
3.
Phys Rev Lett ; 123(2): 020604, 2019 Jul 12.
Article in English | MEDLINE | ID: mdl-31386499

ABSTRACT

Computing marginal distributions of discrete or semidiscrete Markov random fields (MRFs) is a fundamental, generally intractable problem with a vast number of applications in virtually all fields of science. We present a new family of computational schemes to approximately calculate the marginals of discrete MRFs. This method shares some desirable properties with belief propagation, in particular providing exact marginals on acyclic graphs, but it differs from the latter in that it includes loop corrections; i.e., it takes into account correlations coming from all cycles in the factor graph. It is also similar to the adaptive Thouless-Anderson-Palmer method, but differs from it in that consistency is imposed not on the first two moments of the distribution but on the value of its density on a subset of configurations. Results on finite-dimensional Ising-like models show a significant improvement over the Bethe-Peierls (tree) approximation in all cases, and over the plaquette cluster variational method approximation in many cases. In particular, for the critical inverse temperature β_c of the homogeneous hypercubic lattice, the expansion of (dβ_c)^{-1} around d = ∞ of the proposed scheme is exact up to order d^{-4}, whereas the latter two are exact only up to order d^{-2}.
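The claim that belief propagation gives exact marginals on acyclic graphs can be checked with a small sketch. This implements standard pairwise BP on a three-spin Ising chain (not the paper's new scheme) and compares it against brute-force enumeration; the parameters `J`, `beta`, and `h` are illustrative.

```python
import itertools
import math

# Three-spin Ising chain: an acyclic factor graph, so BP should be exact.
J, beta = 1.0, 0.5
h = [0.3, -0.2, 0.1]                  # illustrative local fields
edges = [(0, 1), (1, 2)]

def brute_marginal(i):
    """P(s_i = +1) by exhaustive enumeration of all 2^3 configurations."""
    Z = p = 0.0
    for s in itertools.product([-1, 1], repeat=3):
        w = math.exp(beta * (J * sum(s[a] * s[b] for a, b in edges)
                             + sum(hk * sk for hk, sk in zip(h, s))))
        Z += w
        if s[i] == 1:
            p += w
    return p / Z

def bp_marginal(i, sweeps=20):
    """P(s_i = +1) from standard pairwise belief propagation."""
    msgs = {(a, b): {-1: 0.5, 1: 0.5}
            for e in edges for a, b in (e, e[::-1])}
    for _ in range(sweeps):
        new = {}
        for a, b in msgs:
            out = {}
            for sb in (-1, 1):
                tot = 0.0
                for sa in (-1, 1):
                    # product of messages into a, excluding the one from b
                    m_in = math.prod(msgs[(c, d)][sa] for c, d in msgs
                                     if d == a and c != b)
                    tot += math.exp(beta * (J * sa * sb + h[a] * sa)) * m_in
                out[sb] = tot
            z = out[-1] + out[1]
            new[(a, b)] = {s: v / z for s, v in out.items()}
        msgs = new
    bel = {s: math.exp(beta * h[i] * s)
              * math.prod(msgs[(c, d)][s] for c, d in msgs if d == i)
           for s in (-1, 1)}
    return bel[1] / (bel[-1] + bel[1])
```

On this chain the two computations agree to machine precision, illustrating the exactness on trees; on loopy graphs plain BP becomes approximate, which is what motivates the loop corrections of the abstract.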

4.
Proc Natl Acad Sci U S A ; 113(44): 12368-12373, 2016 11 01.
Article in English | MEDLINE | ID: mdl-27791075

ABSTRACT

We study the network dismantling problem, which consists of determining a minimal set of vertices whose removal leaves the network broken into connected components of subextensive size. For a large class of random graphs, this problem is tightly connected to the decycling problem (the removal of vertices leaving the graph acyclic). Exploiting this connection and recent work on epidemic spreading, we present precise predictions for the minimal size of a dismantling set in a large random graph with a prescribed (light-tailed) degree distribution. Building on the statistical mechanics perspective, we propose a three-stage Min-Sum algorithm for efficiently dismantling networks, including heavy-tailed ones for which the dismantling and decycling problems are not equivalent. We also provide additional insights into the dismantling problem, concluding that it is an intrinsically collective problem and that optimal dismantling sets cannot be viewed as a collection of individually well-performing nodes.
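For intuition about dismantling, here is a simple adaptive highest-degree baseline, not the paper's three-stage Min-Sum algorithm: repeatedly remove the currently highest-degree node until the largest connected component falls below a target size. The example graph is hypothetical.

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def greedy_dismantle(adj, target):
    """Adaptive highest-degree heuristic (a baseline, not Min-Sum)."""
    removed = set()
    while largest_component(adj, removed) > target:
        u = max((n for n in adj if n not in removed),
                key=lambda n: sum(v not in removed for v in adj[n]))
        removed.add(u)
    return removed

# Two triangles bridged through a node: {0,1,2} - 3 - {4,5,6}
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4],
       4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
dismantling_set = greedy_dismantle(adj, 3)
```

Such node-by-node heuristics illustrate exactly the limitation the abstract points out: picking individually well-connected nodes is not the same as finding an optimal, collectively chosen dismantling set.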

5.
Proc Natl Acad Sci U S A ; 109(12): 4395-400, 2012 Mar 20.
Article in English | MEDLINE | ID: mdl-22383559

ABSTRACT

The very notion of a social network implies that linked individuals interact repeatedly with each other. This repetition allows them not only to learn successful strategies and adapt to them, but also to condition their own behavior on the behavior of others in a strategic, forward-looking manner. The theory of repeated games shows that these circumstances are conducive to the emergence of collaboration in simple two-player games. We investigate the extension of this concept to the case where players are engaged in a local contribution game and show that rationality and credibility of threats identify a class of Nash equilibria, which we call "collaborative equilibria," that have a precise interpretation in terms of subgraphs of the social network. For large network games, the number of such equilibria is exponential in the number of players. When incentives to defect are small, equilibria are supported by local structures, whereas when incentives exceed a threshold they acquire a nonlocal nature, which requires a "critical mass" of more than a given fraction of the players to collaborate. Therefore, when incentives are high, an individual deviation typically causes the collapse of collaboration across the whole system. At the same time, higher incentives to defect typically support equilibria with a higher density of collaborators. The resulting picture conforms with several results in sociology and in the experimental literature on game theory, such as the prevalence of collaboration in denser groups and in the structural hubs of sparse networks.


Subject(s)
Social Support, Algorithms, Communication, Cooperative Behavior, Game Theory, Humans, Models, Psychological, Models, Statistical, Models, Theoretical
6.
Phys Rev Lett ; 113(7): 078701, 2014 Aug 15.
Article in English | MEDLINE | ID: mdl-25170736

ABSTRACT

The problem of controllability of the dynamical state of a network is central in network theory and has wide applications ranging from network medicine to financial markets. The driver nodes of a network are the nodes that can bring the network to a desired dynamical state when an external signal is applied to them. Using the framework of structural controllability, we show here that the density of nodes with in-degree and out-degree equal to one or two determines the number of driver nodes in the network. Moreover, we show that random networks with minimum in-degree and out-degree greater than two are always fully controllable by an infinitesimal fraction of driver nodes, regardless of the other properties of the degree distribution. Finally, based on these results, we propose an algorithm to improve the controllability of networks.
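In the structural-controllability framework this abstract builds on, the minimum number of driver nodes equals N minus the size of a maximum matching in the network's bipartite out-copy/in-copy representation (with at least one driver). A minimal sketch using a simple augmenting-path matching, on hypothetical toy graphs:

```python
def max_matching(n, edges):
    """Maximum matching between out-copies and in-copies of the nodes."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)

    match = {}                      # in-copy -> out-copy currently matched

    def augment(u, visited):
        # try to match out-copy u, reassigning earlier matches if needed
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                if v not in match or augment(match[v], visited):
                    match[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(n))

def n_drivers(n, edges):
    """Minimum driver nodes via the minimum-inputs theorem of
    structural controllability: max(N - |maximum matching|, 1)."""
    return max(n - max_matching(n, edges), 1)
```

A directed path needs a single driver at its head, while a star hub cannot drive two leaves independently, so the star needs two; the paper's observation about low-degree nodes concerns precisely how such unmatched nodes arise.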


Subject(s)
Algorithms, Models, Theoretical, Computer Simulation
7.
Phys Rev Lett ; 112(14): 148101, 2014 Apr 11.
Article in English | MEDLINE | ID: mdl-24766019

ABSTRACT

The influence of migration on the stochastic dynamics of subdivided populations is still an open issue in various evolutionary models. Here, we develop a self-consistent mean-field-like method in order to determine the effects of migration on relevant nonequilibrium properties, such as the mean fixation time. If evolution strongly favors coexistence of species (e.g., balancing selection), the mean fixation time develops an unexpected minimum as a function of the migration rate. Our analysis hinges only on the presence of a separation of time scales between local and global dynamics, and therefore, it carries over to other nonequilibrium processes in physics, biology, ecology, and social sciences.


Subject(s)
Ecosystem, Genetics, Population/methods, Models, Genetic, Animal Migration, Biological Evolution, Competitive Behavior, Population Dynamics
8.
Phys Rev Lett ; 112(11): 118701, 2014 Mar 21.
Article in English | MEDLINE | ID: mdl-24702425

ABSTRACT

We study several Bayesian inference problems for irreversible stochastic epidemic models on networks from a statistical physics viewpoint. We derive equations that allow us to accurately compute the posterior distribution of the time evolution of the state of each node given some observations. Unlike most existing methods, ours allows very general observation models, including unobserved nodes, state observations made at different or unknown times, and observations of infection times, possibly mixed together. Our method, which is based on the belief propagation algorithm, is efficient, naturally distributed, and exact on trees. As a particular case, we consider the problem of finding the "patient zero" of a susceptible-infected-recovered or susceptible-infected epidemic given a snapshot of the state of the network at a later unknown time. Numerical simulations show that our method outperforms previous ones on both synthetic and real networks, often by a very large margin.
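The "patient zero" problem can be illustrated by brute force (not the belief-propagation method of the paper): for each candidate source, estimate by direct simulation the probability that a susceptible-infected cascade reproduces the observed snapshot, then normalize. The graph, `beta`, and `steps` below are illustrative assumptions.

```python
import random

def simulate_si(adj, source, beta, steps, rng):
    """Discrete-time SI cascade run for a fixed number of steps."""
    infected = {source}
    for _ in range(steps):
        new = {v for u in infected for v in adj[u]
               if v not in infected and rng.random() < beta}
        infected |= new
    return infected

def patient_zero_posterior(adj, snapshot, beta, steps, samples=2000, seed=0):
    """Score each candidate source by how often it reproduces the snapshot."""
    rng = random.Random(seed)
    hits = {src: sum(simulate_si(adj, src, beta, steps, rng) == snapshot
                     for _ in range(samples))
            for src in sorted(snapshot)}      # the source must be infected
    total = sum(hits.values())
    return {s: c / total for s, c in hits.items()}

# A path 0-1-2-3 observed with nodes 1 and 2 infected:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
posterior = patient_zero_posterior(adj, {1, 2}, beta=0.4, steps=2)
```

The cost of this enumeration-plus-simulation approach grows rapidly with network size, which is why message-passing methods like the one in the abstract are needed in practice.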


Subject(s)
Bayes Theorem, Contact Tracing/methods, Epidemiologic Methods, Models, Statistical, Stochastic Processes
9.
Phys Rev E ; 108(2-1): 024401, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37723769

ABSTRACT

Eukaryotic cells maintain their inner order by a hectic process of sorting and distillation of molecular factors taking place on their lipid membranes. A similar sorting process is implied in the assembly and budding of enveloped viruses. To understand the properties of this molecular sorting process, we have recently proposed a physical model [Zamparo et al., Phys. Rev. Lett. 126, 088101 (2021)], based on (1) the phase separation of a single, initially dispersed molecular species into spatially localized sorting domains on the lipid membrane and (2) domain-induced membrane bending leading to the nucleation of submicrometric lipid vesicles, naturally enriched in the molecules of the engulfed sorting domain. The analysis of the model showed the existence of an optimal region of parameter space where sorting is most efficient. Here the model is extended to account for the simultaneous distillation of a pool of distinct molecular species. We find that the mean time spent by sorted molecules on the membrane increases with the heterogeneity of the pool (i.e., the number of distinct molecular species sorted) according to a simple scaling law, and that a large number of distinct molecular species can in principle be sorted in parallel on cell membranes without significantly interfering with each other. Moreover, sorting is found to be most efficient when the distinct molecular species have comparable homotypic affinities. We also consider how valence (i.e., the average number of interacting neighbors of a molecule in a sorting domain) affects the sorting process, finding that higher-valence molecules can be sorted with greater efficiency than lower-valence molecules.


Subject(s)
Lipids, Cell Membrane, Cell Division, Cell Movement
10.
Sci Rep ; 13(1): 7350, 2023 May 05.
Article in English | MEDLINE | ID: mdl-37147382

ABSTRACT

Estimating observables from conditioned dynamics is typically computationally hard. While obtaining independent samples efficiently from unconditioned dynamics is usually feasible, most of them do not satisfy the imposed conditions and must be discarded. On the other hand, conditioning breaks the causal properties of the dynamics, which ultimately renders sampling of the conditioned dynamics nontrivial and inefficient. In this work, a Causal Variational Approach is proposed as an approximate method to generate independent samples from a conditioned distribution. The procedure relies on learning the parameters of a generalized dynamical model that optimally describes the conditioned distribution in a variational sense. The outcome is an effective, unconditioned dynamical model from which one can trivially obtain independent samples, effectively restoring the causality of the conditioned dynamics. The consequences are twofold: the method allows one to efficiently compute observables from the conditioned dynamics by averaging over independent samples, and it provides an effective unconditioned distribution that is easy to interpret. The approximation can be applied to virtually any dynamics. The application of the method to epidemic inference is discussed in detail, and the results of direct comparison with state-of-the-art inference methods, including the soft-margin approach and mean-field methods, are promising.

11.
Sci Rep ; 12(1): 19673, 2022 11 16.
Article in English | MEDLINE | ID: mdl-36385141

ABSTRACT

The reconstruction of missing information in epidemic spreading on contact networks can be essential to prevention and containment strategies. The identification and warning of infectious but asymptomatic individuals (i.e., contact tracing), the well-known patient-zero problem, and the inference of infectivity values in structured populations are examples of significant epidemic inference problems. Because the number of possible epidemic cascades grows exponentially with the number of individuals involved, and only an almost negligible subset of them is compatible with the observations (e.g., medical tests), epidemic inference on contact networks poses formidable computational challenges. We present a new generative neural network framework that learns to generate the most probable infection cascades compatible with observations. The proposed method achieves results that are better (in some cases, significantly better) than or comparable to those of existing methods in all problems considered, on both synthetic and real contact networks. Given its generality and its clear Bayesian and variational nature, the presented framework paves the way to solving fundamental epidemic inference problems with high precision in small and medium-sized real-world scenarios such as the spread of infections in workplaces and hospitals.


Subject(s)
Epidemics, Humans, Bayes Theorem, Epidemics/prevention & control, Contact Tracing, Neural Networks, Computer
12.
Phys Rev E ; 106(4-1): 044412, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36397477

ABSTRACT

Molecular sorting is a fundamental process that allows eukaryotic cells to distill and concentrate specific chemical factors in appropriate cell membrane subregions, thus endowing them with different chemical identities and functional properties. A phenomenological theory of this molecular distillation process has recently been proposed [M. Zamparo, D. Valdembri, G. Serini, I. V. Kolokolov, V. V. Lebedev, L. Dall'Asta, and A. Gamba, Phys. Rev. Lett. 126, 088101 (2021)], based on the idea that molecular sorting emerges from the combination of (a) phase-separation-driven formation of sorting domains and (b) domain-induced membrane bending, leading to the production of submicrometric lipid vesicles enriched in the sorted molecules. In this framework, a natural parameter controlling the efficiency of molecular distillation is the critical size of phase-separated domains. In experiments, sorting domains appear to fall into two classes: unproductive domains, characterized by short lifetimes and a low probability of extraction, and productive domains, which evolve into vesicles that ultimately detach from the membrane system. It is tempting to link these two classes to the different fates predicted by classical phase separation theory for subcritical and supercritical domains. Here, we discuss the implications of this picture within the previously introduced phenomenological theory of molecular sorting. Several predictions of the theory are verified by numerical simulations of a lattice-gas model. Sorting is observed to be most efficient when the number of sorting domains is close to a minimum. To help in the analysis of experimental data, an operational definition of the critical size of sorting domains is proposed. Comparison with experimental results shows that the statistical properties of productive and unproductive domains inferred from the data agree with those predicted by numerical simulations of the model, consistent with the hypothesis that molecular sorting is driven by a phase separation process.

13.
Comput Struct Biotechnol J ; 19: 3225-3233, 2021.
Article in English | MEDLINE | ID: mdl-34141141

ABSTRACT

Compartmentalization of cellular functions is at the core of the physiology of eukaryotic cells. Recent evidence indicates that a universal organizing process, phase separation, supports the partitioning of biomolecules into distinct phases from a single homogeneous mixture, a landmark event in both the biogenesis and the maintenance of membrane-bound and non-membrane-bound organelles. In the cell, 'passive' (non-energy-consuming) mechanisms are flanked by 'active' mechanisms of separation into phases of distinct density and stoichiometry, which allow for increased partitioning flexibility and programmability. A convergence of physical and biological approaches is leading to new insights into the inner functioning of this driver of intracellular order, holding promise for future advances in both biological research and biotechnological applications.

14.
Phys Rev Lett ; 104(21): 218701, 2010 May 28.
Article in English | MEDLINE | ID: mdl-20867147

ABSTRACT

An anomalous mean-field solution is known to capture the nontrivial phase diagram of the Ising model in annealed complex networks. Nevertheless, the critical fluctuations in random complex networks remain mean field. Here we show that a breakdown of this scenario can be obtained when complex networks are embedded in geometrical spaces. Through the analysis of the Ising model on annealed spatial networks, we reveal, in particular, the spectral properties of networks responsible for critical fluctuations, and we generalize the Ginzburg criterion to complex topologies.


Subject(s)
Models, Theoretical, Phase Transition
15.
Phys Rev E Stat Nonlin Soft Matter Phys ; 79(1 Pt 2): 015101, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19257095

ABSTRACT

We define a minimal model of traffic flows in complex networks in order to study the trade-off between topology-based and traffic-based routing strategies. The resulting collective behavior is obtained analytically for an ensemble of uncorrelated networks and summarized in a rich phase diagram presenting second-order as well as first-order phase transitions between a free-flow phase and a congested phase. We find that traffic control improves global performance, enlarging the free-flow region in parameter space, only in heterogeneous networks. Traffic control introduces nonlinear effects and, beyond a critical strength, may trigger the appearance of a congested phase in a discontinuous manner. The model also reproduces the crossover in the scaling of traffic fluctuations empirically observed on the Internet.

16.
Phys Rev E Stat Nonlin Soft Matter Phys ; 76(5 Pt 1): 051102, 2007 Nov.
Article in English | MEDLINE | ID: mdl-18233618

ABSTRACT

We introduce a model of negotiation dynamics whose aim is to mimic the mechanisms leading to opinion and convention formation in a population of individuals. The negotiation process, as opposed to "herding-like" or "bounded confidence" driven processes, is based on a microscopic dynamics in which memory and feedback play a central role. Our model displays a nonequilibrium phase transition from an absorbing state, in which all agents reach a consensus, to an active stationary state characterized either by polarization or by fragmentation into clusters of agents with different opinions. We show the existence of at least two different universality classes, one for the case with two possible opinions and one for the case with an unlimited number of opinions. The phase transition is studied analytically and numerically for various topologies of the agents' interaction network. In both cases the universality classes do not seem to depend on the specific interaction topology, the only relevant feature being the total number of different opinions ever present in the system.

17.
Phys Rev E Stat Nonlin Soft Matter Phys ; 75(5 Pt 2): 056111, 2007 May.
Article in English | MEDLINE | ID: mdl-17677137

ABSTRACT

Most data concerning the topology of complex networks are the result of mapping projects that bear intrinsic limitations and cannot give access to complete, unbiased datasets. A particularly interesting case is represented by the physical Internet. Router-level Internet mapping projects generally consist of sampling the network from a limited set of sources by using traceroute probes. This methodology, akin to merging spanning trees from the different sources to a set of destinations, necessarily leads to a partial, incomplete map of the Internet. The determination of the real Internet topology characteristics from such sampled maps is therefore, in part, a problem of statistical inference. In this paper we present a twofold contribution to address this problem. First, we argue that inference of some of the standard topological quantities is, in fact, a version of the so-called "species" problem in statistics, which is important in categorizing the problem and providing some indication of its inherent difficulties. Second, we tackle the issue of estimating arguably the most basic of network characteristics, the number of nodes, and propose two estimators for this quantity based on subsampling principles. Numerical simulations, as well as an experiment based on probing the Internet, suggest the feasibility of accounting for measurement bias in reporting Internet topology characteristics.

18.
PLoS One ; 12(4): e0176376, 2017.
Article in English | MEDLINE | ID: mdl-28445537

ABSTRACT

The massive employment of computational models in network epidemiology calls for the development of improved inference methods for epidemic forecasting. For simple compartment models, such as the Susceptible-Infected-Recovered model, Belief Propagation has been proved to be a reliable and efficient method to identify the origin of an observed epidemic. Here we show that the same method can be applied to predict the future evolution of an epidemic outbreak from partial observations at the early stage of the dynamics. The results obtained using Belief Propagation are compared with Monte Carlo direct sampling in the case of the SIR model on random (regular and power-law) graphs, for different observation methods, and on an example of a real-world contact network. Belief Propagation gives in general a better prediction than direct sampling, although the quality of the prediction depends on the quantity under study (e.g., marginals of individual states, epidemic size, extinction-time distribution) and on the actual number of observed nodes that are infected before the observation time.
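Monte Carlo direct sampling, the baseline the abstract compares against, can be sketched as rejection sampling: draw unconditioned SIR cascades, keep those matching the partial early-stage observations, and average the final epidemic size over the accepted ones. The graph, `beta`, `mu`, and the observation below are illustrative assumptions.

```python
import random

def sir_cascade(adj, source, beta, mu, rng):
    """Simulate one discrete-time SIR cascade; return states per time step."""
    state = {u: "S" for u in adj}
    state[source] = "I"
    history = [dict(state)]
    while "I" in state.values():
        nxt = dict(state)
        for u, s in state.items():
            if s == "I":
                for v in adj[u]:
                    if state[v] == "S" and rng.random() < beta:
                        nxt[v] = "I"
                if rng.random() < mu:
                    nxt[u] = "R"          # infected recover with rate mu
        state = nxt
        history.append(dict(state))
    return history

def forecast_size(adj, source, observed, beta, mu, samples=3000, seed=1):
    """Mean final epidemic size over cascades matching the observations.

    `observed` maps (node, time) -> state seen at the early stage.
    """
    rng = random.Random(seed)

    def state_at(h, node, t):
        # after the epidemic dies out, states are frozen
        return h[t][node] if t < len(h) else h[-1][node]

    sizes = [sum(s != "S" for s in h[-1].values())
             for h in (sir_cascade(adj, source, beta, mu, rng)
                       for _ in range(samples))
             if all(state_at(h, n, t) == s for (n, t), s in observed.items())]
    return sum(sizes) / len(sizes) if sizes else None

adj = {0: [1], 1: [0, 2], 2: [1]}
forecast = forecast_size(adj, 0, {(1, 1): "I"}, beta=0.5, mu=0.5)
```

Because most sampled cascades violate the observations and are discarded, the cost of this approach explodes as the number of observed nodes grows, which is the inefficiency that motivates Belief Propagation here.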


Subject(s)
Models, Theoretical, Area Under Curve, Bayes Theorem, Communicable Diseases/epidemiology, Epidemics, Humans, Monte Carlo Method, ROC Curve
19.
PLoS One ; 12(5): e0176859, 2017.
Article in English | MEDLINE | ID: mdl-28475583

ABSTRACT

BACKGROUND AND AIM: Sarcoidosis is a systemic granulomatous inflammatory disease whose causes are still unknown and for which epidemiological data are often discordant. The aim of our study is to investigate the prevalence and spatial distribution of cases and to identify environmental exposures associated with sarcoidosis in an Italian province. METHODS: After georeferencing the cases, the study area was subdivided by municipality and health district and by altitude in order to identify zonal differences in prevalence. The bioaccumulation levels of 12 metals in lichen tissues were analyzed in order to determine sources of air pollution. Finally, correlations between metals and between pickup stations were analyzed. RESULTS: 223 patients were identified (58.3% female, 41.7% male), with a mean age of 50.6±15.4 years (53.5±15.5 years for females, 46.5±14.4 for males). The mean prevalence was 49 per 100,000 individuals. However, we observed very heterogeneous prevalence across the study area. The correlations among metals revealed different deposition patterns in the lowland area with respect to the hilly and mountain areas. CONCLUSIONS: The study highlights a high prevalence of sarcoidosis cases, characterized by a very inhomogeneous, patchy distribution with phenomena of local aggregation. Moreover, the bioaccumulation analysis was an effective method for identifying the mineral particles that contribute most to air pollution in the different areas, but it was not sufficient to establish a clear correlation between the onset of sarcoidosis and environmental risk factors.


Subject(s)
Sarcoidosis/epidemiology, Adult, Aged, Environmental Monitoring, Female, Geographic Information Systems, Humans, Italy/epidemiology, Male, Middle Aged, Prevalence, Risk Factors
20.
Phys Rev E Stat Nonlin Soft Matter Phys ; 74(3 Pt 2): 036105, 2006 Sep.
Article in English | MEDLINE | ID: mdl-17025706

ABSTRACT

The naming game is a model of nonequilibrium dynamics for the self-organized emergence of a linguistic convention or a communication system in a population of agents with pairwise local interactions. We present an extensive study of its dynamics on complex networks, which can be considered the most natural topological embedding for agents involved in language games and opinion dynamics. Except for some community-structured networks on which metastable phases can be observed, agents playing the naming game always manage to reach a global consensus. This convergence is obtained after a time generically scaling with the population size N as t_conv ~ N^{1.4±0.1}, i.e., much faster than for agents embedded on regular lattices. Moreover, the memory capacity required by the system scales only linearly with its size. Particular attention is given to heterogeneous networks, in which the dynamical activity pattern of a node depends on its degree. High-degree nodes play a fundamental role, but require larger memory capacity. They govern the dynamics, acting as spreaders of (linguistic) conventions. The effects of other properties, such as the average degree and the clustering, are also discussed.
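The mean-field version of the naming game can be sketched in a few lines (a minimal illustration, not the paper's network study): a random speaker-hearer pair interacts, both inventories collapse to the spoken word on success, and the hearer learns it on failure, until global consensus is reached. Population size and the game cap are arbitrary choices.

```python
import random

def naming_game(n_agents, max_games, seed=0):
    """Mean-field naming game; returns the number of games to consensus."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for game in range(1, max_games + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:            # invent a brand-new word
            inventories[speaker].add(next_word)
            next_word += 1
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:         # success: both collapse
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:                                   # failure: hearer learns it
            inventories[hearer].add(word)
        if all(len(inv) == 1 for inv in inventories) and \
           len(set.union(*inventories)) == 1:
            return game                         # global consensus reached
    return None

t_conv = naming_game(10, 100_000)
```

Running the same dynamics on a complex network simply restricts the speaker-hearer pairs to the network's edges, which is the setting whose convergence-time scaling the abstract reports.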
