ABSTRACT
Synchronization phenomena on networks have attracted much attention in studies of neural, social, economic, and biological systems, yet we still lack a systematic understanding of how relative synchronizability relates to underlying network structure. Indeed, this question is of central importance to the key theme of how dynamics on networks relate to their structure more generally. We present an analytic technique to directly measure the relative synchronizability of noise-driven time-series processes on networks, in terms of the directed network structure. We consider both discrete-time autoregressive processes and continuous-time Ornstein-Uhlenbeck dynamics on networks, which can represent linearizations of nonlinear systems. Our technique builds on computation of the network covariance matrix in the space orthogonal to the synchronized state, enabling it to be more general than previous work in not requiring either symmetric (undirected) or diagonalizable connectivity matrices and allowing arbitrary self-link weights. More importantly, our approach quantifies the relative synchronization specifically in terms of the contribution of process motif (walk) structures. We demonstrate that in general the relative abundance of process motifs with convergent directed walks (including feedback and feedforward loops) hinders synchronizability. We also reveal subtle differences between the motifs involved for discrete or continuous-time dynamics. Our insights analytically explain several known general results regarding synchronizability of networks, including that small-world and regular networks are less synchronizable than random networks.
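For illustration, the covariance computation described above can be sketched for continuous-time Ornstein-Uhlenbeck dynamics: the stationary covariance of the noise-driven linear system is obtained from a Lyapunov equation and then projected onto the space orthogonal to the fully synchronized (uniform) state. The drift form, the noise strength, and the use of the projected total variance as a synchronizability proxy are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def projected_covariance(A, noise_var=1.0):
    """Stationary covariance of dx = -(I - A) x dt + dW (assumed drift form),
    projected orthogonally to the synchronized (uniform) direction."""
    n = A.shape[0]
    J = -(np.eye(n) - A)                                           # linear drift; assumed stable
    Sigma = solve_continuous_lyapunov(J, -noise_var * np.eye(n))   # solves J S + S J^T = -Q
    u = np.ones((n, 1)) / np.sqrt(n)                               # synchronized-state direction
    P = np.eye(n) - u @ u.T                                        # projector orthogonal to it
    return P @ Sigma @ P

# Proxy for (lack of) synchronizability: total variance orthogonal to the synchronized state.
A = 0.1 * np.random.default_rng(0).random((8, 8))
np.fill_diagonal(A, 0.0)
print(np.trace(projected_covariance(A)))
```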
ABSTRACT
Real-world networks are neither regular nor random, a fact elegantly explained by mechanisms such as the Watts-Strogatz or the Barabási-Albert models, among others. Both mechanisms naturally create shortcuts and hubs, which, while enhancing the network's connectivity, may also yield several undesired navigational effects: they tend to be overused during geodesic navigational processes, making the networks fragile, and they provide suboptimal routes for diffusive-like navigation. Why, then, are networks with complex topologies ubiquitous? Here, we unveil that these models also entropically generate network bypasses: alternative routes to shortest paths that are topologically longer but easier to navigate. We develop a mathematical theory that elucidates the emergence and consolidation of network bypasses and measures their navigability gain. We apply our theory to a wide range of real-world networks and find that they sustain complexity through different amounts of network bypasses. At the top of this complexity ranking we find the human brain, underscoring the relevance of these results for understanding the plasticity of complex systems.
Subject(s)
Brain, Humans, Diffusion
ABSTRACT
Complex networked systems often exhibit higher-order interactions, beyond dyadic interactions, which can dramatically alter their observed behavior. Consequently, understanding hypergraphs from a structural perspective has become increasingly important. Statistical, group-based inference approaches are well suited for unveiling the underlying community structure and predicting unobserved interactions. However, these approaches often rely on two key assumptions: that the same groups can explain hyperedges of any order and that interactions are assortative, meaning that edges are formed by nodes with the same group memberships. To test these assumptions, we propose a group-based generative model for hypergraphs that, unlike current approaches, does not impose an assortative mechanism to explain observed higher-order interactions. Our model allows us to explore the validity of these assumptions. Our results indicate that the first assumption appears to hold true for real networks. However, the second assumption is not necessarily accurate; we find that a combination of general statistical mechanisms can explain the observed hyperedges. Finally, with our approach, we are also able to determine the importance of low- and higher-order interactions for predicting unobserved interactions. Our research challenges the conventional assumptions of group-based inference methodologies and broadens our understanding of the underlying structure of hypergraphs.
ABSTRACT
Precisely how humans process relational patterns of information in knowledge, language, music, and society is not well understood. Prior work in the field of statistical learning has demonstrated that humans process such information by building internal models of the underlying network structure. However, these mental maps are often inaccurate due to limitations in human information processing. The existence of such limitations raises clear questions: Given a target network that one wishes for a human to learn, what network should one present to the human? Should one simply present the target network as-is, or should one emphasize certain parts of the network to proactively mitigate expected errors in learning? To investigate these questions, we study the optimization of network learnability in a computational model of human learning. Evaluating an array of synthetic and real-world networks, we find that learnability is enhanced by reinforcing connections within modules or clusters. In contrast, when networks contain significant core-periphery structure, we find that learnability is best optimized by reinforcing peripheral edges between low-degree nodes. Overall, our findings suggest that the accuracy of human network learning can be systematically enhanced by targeted emphasis and de-emphasis of prescribed sectors of information.
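One common computational model in this statistical-learning literature represents the learner's internal map as a memory-discounted mixture of walks of all lengths through the true transition structure; whether this is the exact model optimized in the study above is an assumption. The sketch below shows how a presented network could be scored for learnability under such a model.

```python
import numpy as np

def learned_map(A, eta=0.8):
    """Learner's internal estimate of a row-stochastic transition matrix A:
    a discounted sum over walk lengths, (1 - eta) * A @ inv(I - eta * A)."""
    n = A.shape[0]
    return (1 - eta) * A @ np.linalg.inv(np.eye(n) - eta * A)

def learnability(A_target, A_presented, eta=0.8):
    """Higher is better: negative distance between the target network and the map a
    learner forms when shown A_presented (the metric itself is an assumed choice)."""
    return -np.linalg.norm(A_target - learned_map(A_presented, eta))
```

Under this sketch, one could compare learnability(A, A) with learnability(A, A_emphasized), where A_emphasized up-weights within-module edges and is renormalized to be row-stochastic, mirroring the comparison described above.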
Subject(s)
Computer Simulation, Knowledge, Learning, Psychological Models, Humans, Language, Music, Reinforcement (Psychology)
ABSTRACT
The quantitative understanding and precise control of complex dynamical systems can only be achieved by observing their internal states via measurement and/or estimation. In large-scale dynamical networks, it is often difficult or physically impossible to have enough sensor nodes to make the system fully observable. Even if the system is in principle observable, high dimensionality poses fundamental limits on the computational tractability and performance of a full-state observer. To overcome the curse of dimensionality, we instead require the system to be functionally observable, meaning that a targeted subset of state variables can be reconstructed from the available measurements. Here, we develop a graph-based theory of functional observability, which leads to highly scalable algorithms to 1) determine the minimal set of required sensors and 2) design the corresponding state observer of minimum order. Compared with the full-state observer, the proposed functional observer achieves the same estimation quality with substantially less sensing and fewer computational resources, making it suitable for large-scale networks. We apply the proposed methods to the detection of cyberattacks in power grids from limited phase measurement data and the inference of the prevalence rate of infection during an epidemic under limited testing conditions. The applications demonstrate that the functional observer can significantly scale up our ability to explore otherwise inaccessible dynamical processes on complex networks.
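For context (this is not the paper's graph-based algorithm), classical observability of a linear system is checked with the Kalman rank condition, and a commonly used algebraic test for functional observability of a target functional z = F x compares the rank of the observability matrix with and without F appended. Treat the specific test below as a textbook-style assumption rather than the scalable method developed in the work.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, C A, C A^2, ..., C A^(n-1)."""
    blocks = [C]
    for _ in range(A.shape[0] - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def is_fully_observable(A, C):
    """Kalman rank condition for full-state observability."""
    return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]

def is_functionally_observable(A, C, F):
    """Algebraic test (assumed): appending the target functional F must not raise the rank."""
    O = observability_matrix(A, C)
    return np.linalg.matrix_rank(np.vstack([O, F])) == np.linalg.matrix_rank(O)
```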
ABSTRACT
Recent years have witnessed the detection of an increasing number of complex organic molecules in interstellar space, some of them of prebiotic interest. Disentangling the origin of interstellar prebiotic chemistry and its connection to biochemistry and, ultimately, to biology is an enormously challenging scientific goal for which the tools of complexity theory and network science have not yet been fully exploited. Encouraged by this idea, we present a theoretical and computational framework to model the evolution of simple networked structures toward complexity. In our environment, complex networks represent simplified chemical compounds and interact by optimizing the dynamical importance of their nodes. We describe the emergence of a transition from simple networks toward complexity when the parameter representing the environment reaches a critical value. Notably, although our system does not attempt to model the rules of real chemistry and does not depend on external input data, the results describe the emergence of complexity in the evolution of chemical diversity in the interstellar medium. Furthermore, they reveal a previously unknown relationship between the abundances of molecules in dark clouds and the potential number of chemical reactions that yield them as products, supporting the ability of the conceptual framework presented here to shed light on real scenarios. Our work reinforces the notion that some of the properties conditioning the extremely complex journey from the chemistry in space to prebiotic chemistry and, finally, to life could show relatively simple and universal patterns.
Subject(s)
Extraterrestrial Environment, Origin of Life
ABSTRACT
We investigate financial market dynamics by introducing a heterogeneous agent-based opinion formation model. In this work, we organize individuals in a financial market according to their trading strategy, namely, whether they are noise traders or fundamentalists. The opinion of the local majority drives the trading behavior of noise traders, whereas the global behavior of the market influences the decisions of fundamentalist agents. We introduce a noise parameter, q, to represent the level of anxiety and perceived uncertainty regarding market behavior, allowing for adrift financial actions. We place individuals as nodes in an Erdős-Rényi random graph, where the links represent their social interactions. At any given time, individuals assume one of two possible opinion states, ±1, regarding buying or selling an asset. The model exhibits fundamental qualitative and quantitative real-world market features such as the distribution of logarithmic returns with fat tails, clustered volatility, and the long-term correlation of returns. We use Student's t distributions to fit the histograms of logarithmic returns, showing a gradual shift from a leptokurtic to a mesokurtic regime depending on the fraction of fundamentalist agents. Furthermore, we compare our results with those concerning the distribution of the logarithmic returns of several real-world financial indices.
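A minimal simulation sketch of the model described above follows. The abstract fixes the ingredients (±1 states, an Erdős-Rényi social graph, noise traders following their local majority, fundamentalists reacting to the global market state, and a noise parameter q permitting adrift actions), but the precise update rules, the contrarian behavior assigned to fundamentalists, and the log-return proxy used here are assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, p_link, frac_fund, q, steps = 500, 0.02, 0.2, 0.1, 200

G = nx.erdos_renyi_graph(N, p_link, seed=0)
state = rng.choice([-1, 1], size=N)          # +1 buy, -1 sell
is_fund = rng.random(N) < frac_fund          # fundamentalist agents

log_returns = []
for _ in range(steps):
    m = state.mean()                         # global "market" signal before this step
    for i in rng.permutation(N):
        if rng.random() < q:                 # anxious / adrift action
            state[i] = rng.choice([-1, 1])
        elif is_fund[i]:
            # assumed rule: fundamentalists trade against the global trend
            state[i] = -np.sign(m) if m != 0 else state[i]
        else:
            nbrs = list(G.neighbors(i))
            if nbrs:
                local = np.sign(state[nbrs].sum())
                state[i] = local if local != 0 else state[i]   # follow local majority
    log_returns.append(state.mean() - m)     # change in magnetization as a return proxy
print(np.std(log_returns))
```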
Subject(s)
Anxiety Disorders, Anxiety, Humans, Social Interaction
ABSTRACT
The ability to control network dynamics is essential for ensuring desirable functionality of many technological, biological, and social systems. Such systems often consist of a large number of network elements, and controlling large-scale networks remains challenging because the computation and communication requirements increase prohibitively fast with network size. Here, we introduce a notion of network locality that can be exploited to make the control of networks scalable, even when the dynamics are nonlinear. We show that network locality is captured by an information metric and is almost universally observed across real and model networks. In localized networks, the optimal control actions and system responses are both shown to be necessarily concentrated in small neighborhoods induced by the information metric. This allows us to develop localized algorithms for determining network controllability and optimizing the placement of driver nodes. This also allows us to develop a localized algorithm for designing local feedback controllers that approach the performance of the corresponding best global controllers, while incurring a computational cost orders-of-magnitude lower. We validate the locality, performance, and efficiency of the algorithms in Kuramoto oscillator networks, as well as three large empirical networks: synchronization dynamics in the Eastern US power grid, epidemic spreading mediated by the global air-transportation network, and Alzheimer's disease dynamics in a human brain network. Taken together, our results establish that large networks can be controlled with computation and communication costs comparable to those for small networks.
ABSTRACT
It is a matter of debate whether a shrinking proportion of scholarly literature is getting most of the citations over time. It is also less well understood how a narrowing use of literature would affect the circulation of ideas in the sciences. Here, I show that the utilization of scientific literature follows dual tendencies over time: while a larger proportion of literature is cited at least a few times, citations are also concentrated more at the top of the citation distribution. Parallel to the latter trend, a paper's future importance increasingly depends on its past citation performance. A random network model shows that the citation concentration is directly related to the greater stability of citation performance. The presented evidence suggests that the growing heterogeneity of citation impact restricts the mobility of research articles that do not gain attention early on. While concentration grows from the beginning of the studied period in 1970, citation dispersion manifests itself significantly only from the mid-1990s, when the popularity of freshly published papers also increased. Most likely, advanced information technologies to disseminate papers are behind both of these latter trends.
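As a purely illustrative aside, one standard way to quantify how concentrated citations are at the top of the distribution is the Gini coefficient of per-paper citation counts within a publication-year cohort; the study's actual concentration and dispersion measures may differ.

```python
import numpy as np

def gini(citations):
    """Gini coefficient of a list of citation counts (0 = equal, 1 = fully concentrated)."""
    x = np.sort(np.asarray(citations, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

print(gini([0, 0, 1, 2, 3, 50]))   # a heavily concentrated cohort yields a Gini close to 1
```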
Subject(s)
Publications, Scholarly Communication, Dissent and Disputes
ABSTRACT
Communication protocols in the brain connectome describe how to transfer information from one region to another. Typically, these protocols hinge on either the spatial distances between brain regions or the intensity of their connections. Yet, none of them combine both factors to achieve optimal efficiency. Here, we introduce a continuous spectrum of decentralized routing strategies that integrates link weights and the spatial embedding of connectomes to route signal transmission. We implemented the protocols on connectomes from individuals in two cohorts and on group-representative connectomes designed to capture weighted connectivity properties. We identified an intermediate domain of routing strategies, a sweet spot, where navigation achieves maximum communication efficiency at low transmission cost. This phenomenon is robust and independent of the particular configuration of weights. Our findings suggest an interplay between the intensity of neural connections and their topology and geometry that amplifies communicability, where weights play the role of noise in a stochastic resonance phenomenon. Such enhancement may support more effective responses to external and internal stimuli, underscoring the intricate diversity of brain functions.
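A hedged sketch of one member of such a routing spectrum follows: at each hop, the signal moves to the neighbor minimizing a score that interpolates, via a parameter lam, between the Euclidean distance to the target (geometric navigation) and the inverse strength of the connecting link (weight-based navigation). The exact functional form used in the study is an assumption.

```python
import numpy as np
import networkx as nx

def greedy_route(G, pos, source, target, lam=0.5, max_hops=100):
    """Decentralized greedy navigation mixing geometry and link weights (illustrative).
    pos: dict node -> coordinates; edges carry a 'weight' attribute (connection strength)."""
    path, current = [source], source
    for _ in range(max_hops):
        if current == target:
            return path
        def score(nbr):
            geo = np.linalg.norm(np.asarray(pos[nbr]) - np.asarray(pos[target]))
            strength = G[current][nbr].get("weight", 1.0)
            return lam * geo + (1 - lam) / strength
        current = min(G.neighbors(current), key=score)
        path.append(current)
    return path  # greedy navigation can fail to reach the target within max_hops
```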
Subject(s)
Brain, Connectome, Humans, Connectome/methods, Brain/physiology, Brain/diagnostic imaging, Nerve Net/physiology, Nerve Net/diagnostic imaging, Magnetic Resonance Imaging/methods, Adult
ABSTRACT
Modern theories of phase transitions and scale invariance are rooted in path integral formulation and renormalization groups (RGs). Despite the applicability of these approaches in simple systems with only pairwise interactions, they are less effective in complex systems with undecomposable high-order interactions (i.e. interactions among arbitrary sets of units). To precisely characterize the universality of high-order interacting systems, we propose a simplex path integral and a simplex RG (SRG) as the generalizations of classic approaches to arbitrary high-order and heterogeneous interactions. We first formalize the trajectories of units governed by high-order interactions to define path integrals on corresponding simplices based on a high-order propagator. Then, we develop a method to integrate out short-range high-order interactions in the momentum space, accompanied by a coarse graining procedure functioning on the simplex structure generated by high-order interactions. The proposed SRG, equipped with a divide-and-conquer framework, can deal with the absence of ergodicity arising from the sparse distribution of high-order interactions and can renormalize a system with intertwined high-order interactions at the p-th order according to its properties at the q-th order (p ⩽ q). The associated scaling relation and its corollaries provide support to differentiate among scale-invariant, weakly scale-invariant, and scale-dependent systems across different orders. We validate our theory in multi-order scale-invariance verification, topological invariance discovery, organizational structure identification, and information bottleneck analysis. These experiments demonstrate the capability of our theory to identify intrinsic statistical and topological properties of high-order interacting systems during system reduction.
ABSTRACT
MOTIVATION: Identifying disease-related genes is an important issue in computational biology. Module structure is widespread in biomolecular networks, and complex diseases are usually thought to be caused by perturbations of local neighborhoods in these networks, which can provide useful insights for the study of disease-related genes. However, mining and effectively utilizing module structure remains challenging in problems such as disease-gene prediction. RESULTS: We propose a hybrid disease-gene prediction method integrating multiscale module structure (HyMM), which can utilize multiscale information from local to global structure to more effectively predict disease-related genes. HyMM extracts module partitions from local to global scales by multiscale modularity optimization with exponential sampling, and estimates the disease relatedness of genes in partitions by the abundance of disease-related genes within modules. A probabilistic model is then designed to integrate multiple gene rankings derived from multiscale module partitions and network propagation, and a parameter estimation strategy based on functional information is proposed to further enhance HyMM's predictive power. Through a series of experiments, we reveal the importance of module partitions at different scales and verify the stable, strong performance of HyMM compared with eight state-of-the-art methods, as well as the further improvement obtained from the parameter estimation. CONCLUSIONS: The results confirm that HyMM is an effective framework for integrating multiscale module structure to enhance disease-gene prediction, which may provide useful insights for the study of multiscale module structure and its applications.
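The multiscale ingredient can be sketched as follows: extract module partitions at several resolutions and score each gene by the abundance of known disease genes in its module at each scale. The resolution grid, the use of Louvain communities, and the averaging rule are illustrative assumptions rather than HyMM's exact procedure (which also integrates network propagation and a probabilistic ranking model).

```python
import networkx as nx

def multiscale_disease_scores(G, seed_genes, resolutions=(0.5, 1.0, 2.0, 4.0)):
    """Score nodes of a gene network G by disease-gene abundance in their modules
    across several resolutions; seed_genes is a set of known disease genes."""
    scores = {v: 0.0 for v in G}
    for r in resolutions:
        partition = nx.community.louvain_communities(G, resolution=r, seed=0)
        for module in partition:
            frac = len(module & seed_genes) / len(module)   # disease-gene abundance
            for v in module:
                scores[v] += frac / len(resolutions)
    return scores   # candidate disease genes = high-scoring non-seed nodes
```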
Subject(s)
Algorithms, Computational Biology, Computational Biology/methods, Statistical Models, Proteins
ABSTRACT
Data makes the world go round, and high-quality data is a prerequisite for precise models, especially for whole-cell models (WCM). Data for WCM must be reusable, contain information about the exact experimental background, and should, in its entirety, cover all relevant processes in the cell. Here, we review basic requirements for WCM data and strategies for combining them. As a species-specific resource, we introduce the Yeast Cell Model Data Base (YCMDB) to illustrate requirements and solutions. We discuss recent standards for data as well as for computational models, including reporting the modeling process itself as data. We outline strategies for constructing WCM despite their inherent complexity.
Subject(s)
Biological Models, Saccharomyces cerevisiae, Computational Biology/methods, Factual Databases
ABSTRACT
Due to the complex interactions between multiple infectious diseases, the spreading of diseases in human bodies can vary when people are exposed to multiple sources of infection at the same time. Typically, individuals respond heterogeneously to diseases, and the transmission routes of different diseases also vary. Therefore, this paper proposes an SIS disease spreading model with individual heterogeneity and transmission route heterogeneity under the simultaneous action of two competitive infectious diseases. We derive the theoretical epidemic spreading threshold using quenched mean-field theory and perform numerical analysis using the Markovian method. Numerical results confirm the reliability of the theoretical threshold and show the inhibitory effect of the proportion of fully competitive individuals on epidemic spreading. The results also show that the diversity of disease transmission routes promotes disease spreading, and that this effect gradually weakens when the epidemic spreading rate is high enough. Finally, we find a negative correlation between the theoretical spreading threshold and the average degree of the network. We demonstrate the practical application of the model by comparing simulation outputs to temporal trends of two competitive infectious diseases, COVID-19 and seasonal influenza, in China.
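For intuition about the last finding, recall that in standard single-disease quenched mean-field theory the SIS epidemic threshold is the inverse of the leading adjacency eigenvalue, which grows with the average degree; the paper's two-disease, heterogeneous threshold is more involved, but the sketch below shows the basic computation and the expected negative trend.

```python
import numpy as np
import networkx as nx

# Single-disease QMF threshold beta_c/mu = 1 / lambda_max(A) on Erdos-Renyi graphs of
# increasing average degree (illustrative of the negative threshold-degree correlation).
for k_avg in (4, 8, 16):
    G = nx.erdos_renyi_graph(1000, k_avg / 1000, seed=1)
    A = nx.to_numpy_array(G)
    lam_max = np.max(np.linalg.eigvalsh(A))     # leading eigenvalue of the adjacency matrix
    print(f"<k> ~ {k_avg}: QMF threshold ~ {1.0 / lam_max:.3f}")
```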
Subject(s)
COVID-19, Computer Simulation, Human Influenza, Markov Chains, Mathematical Concepts, Biological Models, SARS-CoV-2, Humans, COVID-19/transmission, COVID-19/epidemiology, COVID-19/prevention & control, Human Influenza/epidemiology, Human Influenza/transmission, China/epidemiology, Basic Reproduction Number/statistics & numerical data, Epidemiological Models, Pandemics/statistics & numerical data, Pandemics/prevention & control, Epidemics/statistics & numerical data
ABSTRACT
Excess health and safety risks of commercial drivers are largely determined by, embedded in, or operate as complex, dynamic, and randomly determined systems with interacting parts. Yet, prevailing epidemiology is entrenched in narrow, deterministic, and static exposure-response frameworks, along with ensuing inadequate data and limiting methods, thereby perpetuating an incomplete understanding of commercial drivers' health and safety risks. This paper is grounded in our ongoing research that conceptualizes the health and safety challenges of working people as multilayered "wholes" of interacting work and nonwork factors, exemplified by complex-systems epistemologies. Building upon and expanding these assumptions, herein we: (a) discuss how insights from integrative exposome and network-science-based frameworks can enhance our understanding of commercial drivers' chronic disease and injury burden; (b) introduce the "working life exposome of commercial driving" (WLE-CD), an array of multifactorial and interdependent work and nonwork exposures and associated biological responses that concurrently or sequentially impact commercial drivers' health and safety during and beyond their work tenure; (c) conceptualize commercial drivers' health and safety risks as multilayered networks centered on the WLE-CD and on network relational patterns and topological properties (that is, the arrangement, connections, and relationships among network components) that largely govern risk dynamics; and (d) elucidate how integrative exposome and network-science-based innovations can contribute to a more comprehensive understanding of commercial drivers' chronic disease and injury risk dynamics. Development, validation, and proliferation of this emerging discourse can move commercial driving epidemiology to the frontier of science, with implications for policy, action, other working populations, and population health at large.
Subject(s)
Automobile Driving, Exposome, Humans, Occupational Exposure/adverse effects, Knowledge, Commerce, Occupational Health, Occupational Diseases/epidemiology, Occupational Diseases/etiology, Chronic Disease/epidemiology
ABSTRACT
The ability to map causal interactions underlying genetic control and cellular signaling has led to increasingly accurate models of the complex biochemical networks that regulate cellular function. These network models provide deep insights into the organization, dynamics, and function of biochemical systems: for example, by revealing genetic control pathways involved in disease. However, the traditional representation of biochemical networks as binary interaction graphs fails to accurately represent an important dynamical feature of these multivariate systems: some pathways propagate control signals much more effectively than do others. Such heterogeneity of interactions reflects canalization: the system is robust to dynamical interventions in redundant pathways but responsive to interventions in effective pathways. Here, we introduce the effective graph, a weighted graph that captures the nonlinear logical redundancy present in biochemical network regulation, signaling, and control. Using 78 experimentally validated models derived from systems biology, we demonstrate that 1) redundant pathways are prevalent in biological models of biochemical regulation, 2) the effective graph provides a probabilistic but precise characterization of multivariate dynamics in a causal graph form, and 3) the effective graph provides an accurate explanation of how dynamical perturbation and control signals, such as those induced by cancer drug therapies, propagate in biochemical pathways. Overall, our results indicate that the effective graph provides an enriched description of the structure and dynamics of networked multivariate causal interactions. We demonstrate that it improves explainability, prediction, and control of complex dynamical systems in general and biochemical regulation in particular.
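A toy way to see how the inputs of a Boolean rule can differ in effectiveness is to measure each input's activity, i.e., the fraction of input configurations in which flipping that input flips the output. The effective graph in the work above is derived from canalization and logical redundancy, which is related but not identical, so treat this sketch as illustrative only.

```python
from itertools import product

def input_activity(rule, k):
    """rule: function mapping a tuple of k bits to 0/1. Returns, for each input,
    the fraction of configurations where flipping that input flips the output."""
    activities = []
    for i in range(k):
        flips = 0
        for bits in product((0, 1), repeat=k):
            flipped = list(bits)
            flipped[i] ^= 1
            flips += rule(bits) != rule(tuple(flipped))
        activities.append(flips / 2 ** k)
    return activities

# Example: an AND gate, where each input is effective only when the other input is 1.
print(input_activity(lambda b: b[0] & b[1], 2))   # -> [0.5, 0.5]
```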
Subject(s)
Biological Phenomena, Biological Models, Software, Gene Regulatory Networks, Metabolic Networks and Pathways, Signal Transduction
ABSTRACT
Nursing homes and other long-term care facilities account for a disproportionate share of COVID-19 cases and fatalities worldwide. Outbreaks in US nursing homes have persisted despite nationwide visitor restrictions beginning in mid-March. An early report issued by the Centers for Disease Control and Prevention identified staff members working in multiple nursing homes as a likely source of spread from the Life Care Center in Kirkland, WA, to other skilled nursing facilities. The full extent of staff connections between nursing homes, and the role these connections serve in spreading a highly contagious respiratory infection, is currently unknown given the lack of centralized data on cross-facility employment. We perform a large-scale analysis of nursing home connections via shared staff and contractors using device-level geolocation data from 50 million smartphones, and find that 5.1% of smartphone users who visited a nursing home for at least 1 h also visited another facility during our 11-wk study period, even after visitor restrictions were imposed. We construct network measures of connectedness and estimate that nursing homes, on average, share connections with 7.1 other facilities. Consistent with recent research, traditional federal regulatory metrics of nursing home quality are poor predictors of outbreaks. Controlling for demographic and other factors, a home's staff network connections and its centrality within the greater network strongly predict COVID-19 cases.
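The network construction described above can be illustrated with a toy example: facilities become nodes, and two facilities are linked (with a weight) whenever the same device visits both. The input format and the centrality measure below are assumptions, and the data is synthetic.

```python
import itertools
import networkx as nx

# device id -> facilities visited for at least 1 h (assumed input format, synthetic data)
visits = {
    "dev1": {"A", "B"},
    "dev2": {"B", "C"},
    "dev3": {"A"},
    "dev4": {"A", "C", "D"},
}

G = nx.Graph()
for facilities in visits.values():
    for f1, f2 in itertools.combinations(sorted(facilities), 2):
        w = G.get_edge_data(f1, f2, {"weight": 0})["weight"]
        G.add_edge(f1, f2, weight=w + 1)          # weight = number of shared devices

print("mean connections per facility:",
      sum(dict(G.degree()).values()) / G.number_of_nodes())
print("weighted eigenvector centrality:", nx.eigenvector_centrality(G, weight="weight"))
```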
Subject(s)
COVID-19/epidemiology, Nursing Homes, Pandemics, SARS-CoV-2/pathogenicity, COVID-19/prevention & control, COVID-19/virology, Disease Outbreaks, Female, Humans, Male, Skilled Nursing Facilities, Smartphone, Social Network Analysis, Social Network
ABSTRACT
Many complex networks depend upon biological entities for their preservation. Such entities, from human cognition to evolution, must first encode and then replicate those networks under marked resource constraints. Networks that survive are those that are amenable to constrained encoding, or, in other words, are compressible. But how compressible is a network? And what features make one network more compressible than another? Here, we answer these questions by modeling networks as information sources before compressing them using rate-distortion theory. Each network yields a unique rate-distortion curve, which specifies the minimal amount of information that remains at a given scale of description. A natural definition then emerges for the compressibility of a network: the amount of information that can be removed via compression, averaged across all scales. Analyzing an array of real and model networks, we demonstrate that compressibility increases with two common network properties: transitivity (or clustering) and degree heterogeneity. These results indicate that hierarchical organization, which is characterized by modular structure and heterogeneous degrees, facilitates compression in complex networks. Generally, our framework sheds light on the interplay between a network's structure and its capacity to be compressed, enabling investigations into the role of compression in shaping real-world networks.
Subject(s)
Computer Communication Networks, Data Compression, Theoretical Models, Algorithms, Cluster Analysis, Community Networks, Humans, Random Distribution
ABSTRACT
Financial markets have undergone a deep reorganization during the last 20 years. A mixture of technological innovation and regulatory constraints has promoted the diffusion of market fragmentation and high-frequency trading. The new stock market has changed the traditional ecology of market participants and market professionals, and financial markets have evolved into complex sociotechnical institutions characterized by a great heterogeneity in the time scales of market members' interactions, which cover more than eight orders of magnitude. We analyze three different datasets for two highly studied market venues, recorded in 2004 to 2006, 2010 to 2011, and 2018. Using methods of complex network theory, we show that transactions between specific pairs of market members are systematically and persistently overexpressed or underexpressed. Contemporary stock markets are therefore networked markets, in which the liquidity provision of market members shows statistically detectable preferences for, or avoidance of, particular counterparties, with a degree of persistence that can cover several months. We show a sizable increase in both the number and persistence of networked relationships between market members in the most recent years and how technological and regulatory innovations affect the networked nature of the markets. Our study also shows that the portfolio of strategic trading decisions of high-frequency traders has evolved over the years, adding to liquidity provision other market activities that consume market liquidity.
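One standard way to flag systematically overexpressed trading relationships of this kind is a hypergeometric test against random co-occurrence of two members in the same transactions; whether this is the exact null model used in the study is an assumption.

```python
from scipy.stats import hypergeom

def overexpression_pvalue(T, n_i, n_j, n_ij):
    """P(at least n_ij joint transactions) under random co-occurrence:
    T total transactions, member i in n_i of them, member j in n_j of them."""
    return hypergeom.sf(n_ij - 1, T, n_i, n_j)

# Small p-values flag overexpressed (preferential) trading relationships.
print(overexpression_pvalue(T=10_000, n_i=300, n_j=250, n_ij=30))
```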
ABSTRACT
One of the main lines of research in distributed learning in recent years concerns Federated Learning (FL). In this work, a decentralized Federated Learning algorithm based on consensus (CoL) is applied to Wireless Ad-hoc Networks (WANETs), where agents share their learning models with the other agents that come within wireless connection range. When deploying a set of agents, it is essential to study, before the deployment, whether all the WANET agents will be reachable. The paper proposes to explore this question by generating a simulation close to the real world using a framework (FIVE) that allows the easy development and modification of simulations based on Unity and SPADE agents. A fruit orchard with autonomous tractors is presented as a case study. The paper also explains how and why the concept of an artifact has been included in the above-mentioned framework, highlighting that some devices used in the environment must be located in specific places to ensure the full connectivity of the system. This inclusion is a first step toward modeling Digital Twins with the framework, which currently supports a Digital Shadow of those devices.
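A minimal sketch of consensus-based decentralized federated learning is given below: each agent takes a local gradient step on its own objective and then averages its parameters with whichever neighbors are currently reachable. The ring neighborhood, mixing weight, and quadratic local objectives are toy assumptions standing in for the CoL algorithm and the WANET connectivity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, rounds, lr, mix = 5, 3, 50, 0.1, 0.5
theta = rng.normal(size=(n_agents, dim))        # per-agent model parameters
targets = rng.normal(size=(n_agents, dim))      # stand-in for each agent's local data/objective

def neighbours(i, t):
    """Agents reachable over the ad-hoc wireless link at round t (toy ring topology)."""
    return [(i - 1) % n_agents, (i + 1) % n_agents]

for t in range(rounds):
    grads = theta - targets                     # gradient of 0.5 * ||theta - target||^2
    theta = theta - lr * grads                  # local learning step
    new_theta = theta.copy()
    for i in range(n_agents):
        nbrs = neighbours(i, t)
        new_theta[i] = (1 - mix) * theta[i] + mix * theta[nbrs].mean(axis=0)  # consensus step
    theta = new_theta

print(theta.std(axis=0))   # agents' models drift toward a shared consensus
```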