ABSTRACT
Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
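As a concrete illustration of how probabilistic forecasts can be scored against a naïve baseline across horizons, the sketch below uses a standard central prediction-interval score; the scoring rule, interval level, and all numbers are illustrative assumptions, not the metric or data used in the evaluation described above.

```python
# Hedged sketch: scoring 80% prediction intervals for weekly deaths against a
# wider naive-baseline interval. All values below are hypothetical.
import numpy as np

def interval_score(lower, upper, observed, alpha=0.2):
    """Score for a central (1 - alpha) prediction interval; lower is better."""
    width = upper - lower
    below = (2.0 / alpha) * np.maximum(lower - observed, 0.0)   # penalty when obs falls under interval
    above = (2.0 / alpha) * np.maximum(observed - upper, 0.0)   # penalty when obs falls over interval
    return width + below + above

observed = np.array([700.0, 650.0, 720.0, 810.0])        # toy weekly death counts
model_lo, model_hi = observed * 0.9, observed * 1.2       # a model's 80% intervals (made up)
base_lo, base_hi = observed * 0.6, observed * 1.6         # naive baseline's wider 80% intervals

rel_skill = interval_score(model_lo, model_hi, observed).mean() / \
            interval_score(base_lo, base_hi, observed).mean()
print(f"relative skill vs. baseline: {rel_skill:.2f} (values below 1 favor the model)")
```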
Subject(s)
COVID-19 , COVID-19/mortality , Data Accuracy , Forecasting , Humans , Pandemics , Probability , Public Health/trends , United States/epidemiology
ABSTRACT
Studies of collective motion have heretofore been dominated by a thermodynamic perspective in which the emergent "flocked" phases are analyzed in terms of their time-averaged orientational and spatial properties. Studies that attempt to scrutinize the dynamical processes that spontaneously drive the formation of these flocks from initially random configurations are far rarer, perhaps because these processes occur far from the eventual long-time steady state of the system and thus lie outside the scope of traditional statistical mechanics. For systems whose dynamics are simulated numerically, the nonstationary distribution of system configurations can be sampled at different time points, and the time evolution of the average structural properties of the system can be quantified. In this paper, we employ this strategy to characterize the spatial dynamics of the standard Vicsek flocking model using two correlation functions common to condensed matter physics. We demonstrate, for modest system sizes with 800 to 2000 agents, that the self-assembly dynamics can be characterized by three distinct and disparate time scales that we associate with the corresponding physical processes of clustering (compaction), relaxing (expansion), and mixing (rearrangement). We further show that the behavior of these correlation functions can be used to reliably distinguish between phenomenologically similar models with different underlying interactions and, in some cases, even provide a direct measurement of key model parameters.
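For readers unfamiliar with the model, a minimal sketch of the standard Vicsek update rule follows (align with the mean heading of neighbors within a radius, add angular noise, step forward at constant speed); the parameter values and noise convention are illustrative choices, not those used in the paper.

```python
# Minimal Vicsek update: one time step for n agents in a periodic box of side L.
import numpy as np

def vicsek_step(pos, theta, L=20.0, r=1.0, v=0.1, eta=0.3, rng=np.random.default_rng(0)):
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= L * np.round(d / L)                      # minimum-image (periodic) displacements
        neigh = (d ** 2).sum(axis=1) <= r ** 2        # neighbors within radius r (includes self)
        mean_dir = np.arctan2(np.sin(theta[neigh]).mean(), np.cos(theta[neigh]).mean())
        new_theta[i] = mean_dir + eta * np.pi * (2.0 * rng.random() - 1.0)  # uniform angular noise
    new_pos = (pos + v * np.column_stack([np.cos(new_theta), np.sin(new_theta)])) % L
    return new_pos, new_theta
```

Sampling the configurations produced by repeated calls at several time points yields the kind of nonstationary ensemble that the correlation-function analysis above operates on.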
ABSTRACT
Leader-follower modalities and other asymmetric interactions that drive the collective motion of organisms are often quantified using information-theoretic metrics such as transfer or causation entropy. These metrics are difficult to evaluate accurately without far more data than are typically available from time series of animal trajectories collected in the field or in experiments. In this paper, we use a generalized leader-follower model to argue that the time-separated mutual information between two organism positions can serve as an alternative metric for capturing asymmetric correlations, one that is much less data intensive and more accurately estimated by popular k-nearest neighbor algorithms than transfer entropy. Our model predicts a local maximum of this mutual information at a time separation corresponding to the fundamental reaction timescale of the follower organism. We confirm this prediction by analyzing time-series trajectories recorded for a pair of golden shiner fish circling an annular tank.
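A minimal sketch of the time-separated mutual information described above, assuming scikit-learn's k-nearest-neighbor estimator as a stand-in for whatever estimator the paper uses: scan the lag and look for a local maximum, which the model associates with the follower's reaction timescale.

```python
# Time-lagged mutual information between a leader and a follower trajectory,
# estimated with a kNN-based estimator. Estimator choice and preprocessing are assumptions.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def lagged_mi(leader, follower, max_lag=50, k=4):
    """MI between leader position at time t and follower position at t + tau, tau = 1..max_lag."""
    mi = []
    for tau in range(1, max_lag + 1):
        x = leader[:-tau].reshape(-1, 1)
        y = follower[tau:]
        mi.append(mutual_info_regression(x, y, n_neighbors=k, random_state=0)[0])
    return np.array(mi)
```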
ABSTRACT
Per- and polyfluoroalkyl substances (PFAS) are pervasive environmental contaminants, and their relative stability and high bioaccumulation potential create a challenging risk assessment problem. Zebrafish (Danio rerio) data, in principle, can be synthesized within a quantitative adverse outcome pathway (qAOP) framework to link molecular activity with individual or population level hazards. However, even as qAOP models are still in their infancy, there is a need to link internal dose and toxicity endpoints in a more rigorous way to further not only qAOP models but adverse outcome pathway frameworks in general. We address this problem by suggesting refinements to the current state of toxicokinetic modeling for the early development zebrafish exposed to PFAS up to 120 h post-fertilization. Our approach describes two key physiological transformation phenomena of the developing zebrafish: dynamic volume of an individual and dynamic hatching of a population. We then explore two different modeling strategies to describe the mass transfer, with one strategy relying on classical kinetic rates and the other incorporating mechanisms of membrane transport and adsorption/binding potential. Moving forward, we discuss the challenges of extending this model in both timeframe and chemical class, in conjunction with providing a conceptual framework for its integration with ongoing qAOP modeling efforts.
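As a hedged illustration of the first modeling strategy mentioned above (classical kinetic rates) combined with a dynamic body volume, the sketch below integrates a one-compartment uptake/elimination balance over 0 to 120 hours post-fertilization; the rate constants and volume curve are placeholders rather than values from the paper, and the dynamic-hatching component is omitted.

```python
# One-compartment mass balance with a time-varying embryo volume V(t); all numbers hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def volume(t_hpf):
    return 0.2 + 0.8 * t_hpf / 120.0            # illustrative volume growth over 0-120 hpf

def mass_balance(t, m, c_water=1.0, k_in=0.05, k_out=0.01):
    # dm/dt = uptake from water (scaled by current volume) minus first-order elimination
    return k_in * c_water * volume(t) - k_out * m

sol = solve_ivp(mass_balance, (0.0, 120.0), [0.0], dense_output=True)
internal_conc = sol.y[0] / volume(sol.t)         # internal dose = body burden / dynamic volume
```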
Subject(s)
Fluorocarbons , Water Pollutants, Chemical , Animals , Fluorocarbons/toxicity , Kinetics , Toxicokinetics , Water Pollutants, Chemical/metabolism , Water Pollutants, Chemical/toxicity , Zebrafish/metabolism
ABSTRACT
Network motifs, such as the feed-forward loop (FFL), introduce a range of complex behaviors to transcriptional regulatory networks, yet such properties are typically determined from their isolated study. We characterize the effects of crosstalk on FFL dynamics by modeling the cross regulation between two different FFLs and evaluate the extent to which these patterns occur in vivo. Analytical modeling suggests that crosstalk should overwhelmingly affect individual protein-expression dynamics. Counter to this expectation, we find that entire FFLs are more likely than expected to resist the effects of crosstalk (≈20% for one crosstalk interaction) and remain dynamically modular. The likelihood that cross-linked FFLs are dynamically correlated increases monotonically with additional crosstalk, but is independent of the specific regulation type or connectivity of the interactions. Just one additional regulatory interaction is sufficient to drive the FFL dynamics to a statistically different state. Despite the potential for modularity between sparsely connected network motifs, Escherichia coli (E. coli) appears to favor crosstalk wherein at least one of the cross-linked FFLs remains modular. A gene ontology analysis reveals that stress response processes are significantly overrepresented in the cross-linked motifs found within E. coli. Although the daunting complexity of biological networks affects the dynamical properties of individual network motifs, some resist and remain modular, seemingly insulated from extrinsic perturbations, an intriguing possibility for nature to consistently and reliably provide certain network functionalities wherever the need arises.
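For context, a single coherent FFL (X regulates Y, and both X and Y regulate Z) is commonly written with Hill-type kinetics as below; the crosstalk analysis couples two such motifs through additional regulatory terms that the abstract does not specify, so this is only the generic baseline form with placeholder parameters.

```latex
% Baseline (uncoupled) coherent feed-forward loop with Hill-type activation.
\begin{aligned}
\frac{dY}{dt} &= \beta_Y\,\frac{(X/K_{XY})^{n}}{1+(X/K_{XY})^{n}} - \gamma_Y Y,\\
\frac{dZ}{dt} &= \beta_Z\,\frac{(X/K_{XZ})^{n}}{1+(X/K_{XZ})^{n}}\cdot
                 \frac{(Y/K_{YZ})^{n}}{1+(Y/K_{YZ})^{n}} - \gamma_Z Z .
\end{aligned}
```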
Subject(s)
Gene Regulatory Networks , Models, Molecular , Algorithms , Escherichia coli/genetics , Escherichia coli/metabolism , Escherichia coli Proteins/genetics , Escherichia coli Proteins/metabolism , Gene Ontology , Markov Chains , Monte Carlo Method , Transcription Factors/genetics , Transcription Factors/metabolism
ABSTRACT
A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose-response, and time-course predictions that can support regulatory decision-making. Herein we describe several facets of qAOPs, including (a) motivation for development, (b) technical considerations, (c) evaluation of confidence, and (d) potential applications. The qAOP used as an illustrative example for these points describes the linkage between inhibition of cytochrome P450 19A aromatase (the MIE) and population-level decreases in the fathead minnow (FHM; Pimephales promelas). The qAOP consists of three linked computational models for the following: (a) the hypothalamic-pituitary-gonadal axis in female FHMs, where aromatase inhibition decreases the conversion of testosterone to 17β-estradiol (E2), thereby reducing E2-dependent vitellogenin (VTG; egg yolk protein precursor) synthesis, (b) VTG-dependent egg development and spawning (fecundity), and (c) fecundity-dependent population trajectory. While development of the example qAOP was based on experiments with FHMs exposed to the aromatase inhibitor fadrozole, we also show how a toxic equivalence (TEQ) calculation allows use of the qAOP to predict effects of another, untested aromatase inhibitor, iprodione. While qAOP development can be resource-intensive, the quantitative predictions obtained, and TEQ-based application to multiple chemicals, may be sufficient to justify the cost for some applications in regulatory decision-making.
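The toxic-equivalence step follows the standard TEQ logic: scale the concentration of the untested chemical by its potency relative to the reference aromatase inhibitor before feeding it to the fadrozole-calibrated model. A generic form is shown below; the relative-potency values themselves are chemical-specific and not given in the abstract.

```latex
% Generic toxic-equivalence calculation; C_i is the concentration of chemical i and
% ReP_i its potency relative to the reference compound (here, fadrozole).
\mathrm{TEQ} = \sum_i C_i \times \mathrm{ReP}_i,
\qquad
\mathrm{ReP}_i = \frac{\mathrm{EC}_{50}^{\mathrm{reference}}}{\mathrm{EC}_{50}^{\mathrm{chemical}\;i}} .
```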
Subject(s)
Aromatase Inhibitors/toxicity , Fadrozole/toxicity , Animals , Cyprinidae , Estradiol/metabolism , Models, Theoretical , Predictive Value of Tests , Vitellogenins/metabolism
ABSTRACT
We evaluate the capability of convolutional neural networks (CNNs) to predict a velocity field as it relates to fluid flow around various arrangements of obstacles within a two-dimensional, rectangular channel. We base our network architecture on a gated residual U-Net template and train it on velocity fields generated from computational fluid dynamics (CFD) simulations. We then assess the extent to which our model can accurately and efficiently predict steady flows in terms of velocity fields associated with inlet speeds and obstacle configurations not included in our training set. Real-world applications often require fluid-flow predictions in larger and more complex domains that contain more obstacles than used in model training. To address this problem, we propose a method that decomposes a domain into subdomains for which our model can individually and accurately predict the fluid flow, after which we apply smoothness and continuity constraints to reconstruct velocity fields across the whole of the original domain. This piecewise, semicontinuous approach is computationally more efficient than the alternative, which involves generation of CFD datasets required to retrain the model on larger and more spatially complex domains. We introduce a local orientational vector field entropy (LOVE) metric, which quantifies a decorrelation scale for velocity fields in geometric domains with one or more obstacles, and use it to devise a strategy for decomposing complex domains into weakly interacting subsets suitable for application of our modeling approach. We end with an assessment of error propagation across modeled domains of increasing size.
ABSTRACT
BACKGROUND: Proteins search along the DNA for targets, such as transcription initiation sequences, according to one-dimensional diffusion, which is interrupted by micro- and macro-hopping events and intersegmental transfers that occur under close packing conditions. RESULTS: A one-dimensional diffusion-reaction model in the form of difference-differential equations is proposed to analyze the nonequilibrium protein sliding kinetics along a segment of bacterial DNA. A renormalization approach is used to derive an expression for the mean first-passage time to arrive at sites downstream of the origin from the occupation probabilities given by the individual transport equations. Monte Carlo simulations are employed to assess the validity of the proposed approach, and all results are interpreted within the context of bacterial transcription. CONCLUSIONS: Mean first-passage times decrease with increasing reaction rates, indicating that, on average, surviving proteins more rapidly locate downstream targets than their reaction-free counterparts, but at the price of increasing rarity. Two qualitatively different screening regimes are identified according to whether the search process operates under "small" or "large" values for the dissociation rate of the protein-DNA complex. Lower bounds are placed on the overall search time for varying reactive conditions. Good agreement with experimental estimates requires the reaction rate reside near the transition between both screening regimes, suggesting that biology balances a need for rapid searches against maximum exploration during each round of the sliding phase.
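A hedged reconstruction of the kind of difference-differential equation described above: the occupation probability P_n(t) of lattice site n evolves through nearest-neighbor sliding at rate k and is depleted by reaction/dissociation at rate r. The full model also includes the micro/macro-hopping and intersegmental-transfer terms, which are not shown here.

```latex
% Simplified one-dimensional sliding-with-loss master equation (hopping and
% intersegmental-transfer terms from the full model are omitted).
\frac{dP_n(t)}{dt} = k\bigl[P_{n-1}(t) + P_{n+1}(t) - 2P_n(t)\bigr] - r\,P_n(t).
```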
Subject(s)
Bacterial Proteins/metabolism , DNA, Bacterial/metabolism , DNA-Binding Proteins/metabolism , Models, Biological , Diffusion , Kinetics , Monte Carlo Method , Reproducibility of Results
ABSTRACT
Establishing formal mathematical analogies between disparate physical systems can be a powerful tool, allowing for the well studied behavior of one system to be directly translated into predictions about the behavior of another that may be harder to probe. In this paper we lay the foundation for such an analogy between the macroscale electrodynamics of simple magnetic circuits and the microscale chemical kinetics of transcriptional regulation in cells. By artificially allowing the inductor coils of the former to elastically expand under the action of their Lorentz pressure, we introduce nonlinearities into the system that we interpret through the lens of our analogy as a schematic model for the impact of crosstalk on the rates of gene expression near steady state. Synthetic plasmids introduced into a cell must compete for a finite pool of metabolic and enzymatic resources against a maelstrom of crisscrossing biological processes, and our theory makes sensible predictions about how this noisy background might impact the expression profiles of synthetic constructs without explicitly modeling the kinetics of numerous interconnected regulatory interactions. We conclude the paper with a discussion of how our theory might be expanded to a broader class of plasmid circuits and how our predictions might be tested experimentally.
Subject(s)
Models, Biological , Gene Regulatory Networks , Kinetics , Signal Transduction
ABSTRACT
The transcriptional network determines a cell's internal state by regulating protein expression in response to changes in the local environment. Due to the interconnected nature of this network, information encoded in the abundance of various proteins will often propagate across chains of noisy intermediate signaling events. The data-processing inequality (DPI) leads us to expect that this intracellular game of "telephone" should degrade this type of signal, with longer chains losing successively more information to noise. However, a previous modeling effort predicted that because the steps of these signaling cascades do not truly represent independent stages of data processing, the limits of the DPI could seemingly be surpassed, and the amount of transmitted information could actually increase with chain length. What that work did not examine was whether this regime of growing information transmission was attainable by a signaling system constrained by the mechanistic details of more complex protein-binding kinetics. Here we address this knowledge gap through the lens of information theory by examining a model that explicitly accounts for the binding of each transcription factor to DNA. We analyze this model by comparing stochastic simulations of the fully nonlinear kinetics to simulations constrained by the linear response approximations that displayed a regime of growing information. Our simulations show that even when molecular binding is considered, there remains a regime wherein the transmitted information can grow with cascade length, but that this growth ends after a critical number of links determined by the kinetic parameter values. This inflection point marks where correlations decay in response to an oversaturation of binding sites, screening informative transcription factor fluctuations from further propagation down the chain, where they eventually become indistinguishable from the surrounding levels of noise.
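For reference, the data-processing inequality invoked here states that for a true Markov chain X → Y → Z the end-to-end mutual information cannot exceed that of either intermediate step:

```latex
% Data-processing inequality for a Markov chain X -> Y -> Z.
I(X;Z) \le \min\bigl\{ I(X;Y),\; I(Y;Z) \bigr\}.
```

The regime of growing information discussed above is possible precisely because, as the abstract notes, successive stages of a transcriptional cascade do not behave as independent data-processing steps in this Markov sense.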
Subject(s)
Gene Expression Regulation , Gene Regulatory Networks , Models, Biological , Signal Transduction , Animals , Humans , Kinetics
ABSTRACT
The SARS-CoV-2 virus is responsible for the novel coronavirus disease 2019 (COVID-19), which has spread to populations throughout the continental United States. Most state and local governments have adopted some level of "social distancing" policy, but infections have continued to spread despite these efforts. Absent a vaccine, authorities have few other tools by which to mitigate further spread of the virus. This raises the question of how effective social policy really is at reducing new infections that, left alone, could potentially overwhelm the existing hospitalization capacity of many states. We developed a mathematical model that captures correlations between some state-level "social distancing" policies and infection kinetics for all U.S. states, and we use it to illustrate the link between social policy decisions, disease dynamics, and an effective reproduction number that changes over time, for case studies of Massachusetts, New Jersey, and Washington states. In general, our findings indicate that the potential for second waves of infection, which can result from reopening states without an increase in immunity, can be mitigated by a return of social distancing policies as soon as possible after the waves are detected.
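As a minimal sketch of how a policy-dependent contact rate maps onto an effective reproduction number that changes over time, the code below runs a basic discrete-time SIR recursion; the paper's actual model structure, state-level parameters, and policy dates are not given in the abstract, so everything here is illustrative.

```python
# Toy discrete-time SIR with a policy-dependent contact rate beta(t); all numbers hypothetical.
import numpy as np

def simulate_sir(beta_of_t, gamma=0.1, N=7e6, I0=100.0, days=300):
    S, I, R = N - I0, I0, 0.0
    r_eff = []
    for day in range(days):
        beta = beta_of_t(day)                 # contact rate set by the current policy
        r_eff.append(beta / gamma * S / N)    # effective reproduction number
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return np.array(r_eff)

# Hypothetical timeline: distancing imposed on day 60, relaxed to an intermediate level on day 150.
policy_beta = lambda t: 0.35 if t < 60 else (0.12 if t < 150 else 0.25)
r_t = simulate_sir(policy_beta)
```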
Subject(s)
COVID-19/epidemiology , Health Policy , COVID-19/pathology , COVID-19/virology , Databases, Factual , Humans , Massachusetts/epidemiology , New Jersey/epidemiology , Physical Distancing , Public Policy , SARS-CoV-2/isolation & purification , Washington/epidemiology
ABSTRACT
Gene drives offer unprecedented control over the fate of natural ecosystems by leveraging non-Mendelian inheritance mechanisms to proliferate synthetic genes across wild populations. However, these benefits are offset by a need to avoid the potentially disastrous consequences of unintended ecological interactions. The efficacy of many gene-editing drives has been brought into question due to predictions that they will inevitably be thwarted by the emergence of drive-resistant mutations, but these predictions derive largely from models of large or infinite populations that cannot be driven to extinction faster than mutations can fixate. To address this issue, we characterize the impact of a simple, meiotic gene drive on a small, homeostatic population whose genotypic composition may vary due to the stochasticity inherent in natural mating events (e.g., partner choice, number of offspring) or the genetic inheritance process (e.g., mutation rate, gene drive fitness). To determine whether the ultimate genotypic fate of such a population is sensitive to such stochastic fluctuations, we compare the results of two dynamical models: a deterministic model that attempts to predict how the genetics of an average population evolve over successive generations, and an agent-based model that examines how stable these predictions are to fluctuations. We find that, even on average, our stochastic model makes qualitatively distinct predictions from those of the deterministic model, and we identify the source of these discrepancies as a dynamic instability that arises at short times, when genetic diversity is maximized as a consequence of the gene drive's rapid proliferation. While we ultimately conclude that extinction can only beat out the fixation of drive-resistant mutations over a limited region of parameter space, the reason for this is more complex than previously understood, which could open new avenues for engineered gene drives to circumvent this weakness.
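As background for the deterministic limit discussed above, a standard single-locus recursion for a meiotic drive tracks the drive-allele frequency p when heterozygotes transmit the drive with probability k > 1/2 and genotypes carry fitnesses w; this generic textbook form is an assumption for illustration and omits the resistant alleles and finite-population stochasticity that the paper's models include.

```latex
% Deterministic allele-frequency recursion for a meiotic drive with transmission
% bias k in heterozygotes and genotype fitnesses w_DD, w_Dd, w_dd (q = 1 - p).
p' = \frac{p^{2} w_{DD} + 2pq\,w_{Dd}\,k}{p^{2} w_{DD} + 2pq\,w_{Dd} + q^{2} w_{dd}} .
```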
Subject(s)
Gene Drive Technology , Homeostasis/genetics , Meiosis/genetics , Models, Genetic
ABSTRACT
RNA aptamers are relatively short nucleic acid sequences that bind targets with high affinity and, when combined with a riboswitch that initiates translation of a fluorescent reporter protein, can be used as biosensors for chemical detection in various types of media. These processes span target binding at the molecular scale to fluorescence detection at the macroscale and involve a number of intermediate rate-limiting physical changes (e.g., molecular conformational change) and biochemical changes (e.g., reaction velocity) that together complicate assay design. Here we describe a mathematical model developed to aid environmental detection of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) using the DsRed fluorescent reporter protein; the model is general enough to potentially predict fluorescence from a broad range of water-soluble chemicals given the values of just a few kinetic rate constants as input. If we expose a riboswitch test population of Escherichia coli bacteria to a chemical dissolved in media, then the model predicts an empirically distinct, power-law relationship between the exposure concentration and the elapsed time of exposure. This relationship can be used to deduce an exposure time that meets or exceeds the optical threshold of a fluorescence detection device and inform new biosensor designs.
Subject(s)
Aptamers, Nucleotide/chemistry , Riboswitch , Triazines/chemistry , Biosensing Techniques
ABSTRACT
Large-scale biological responses are inherently uncertain, in part because the underlying systems are noisy and do not respond deterministically to perturbations, and in part because of measurement errors inherent to technological limitations. As a result, they are computationally difficult to model, and current approaches are notoriously slow and computationally intensive (multiscale stochastic models), fail to capture the effects of noise across a system (chemical kinetic models), or fail to provide sufficient biological fidelity because of broad simplifying assumptions (stochastic differential equations). We use a new approach to modeling multiscale stationary biological processes that embraces the noise found in experimental data to provide estimates of the parameter uncertainties and the potential mis-specification of models. Our approach models the mean stationary response at each biological level given a particular expected response relationship, capturing variation around this mean using conditional Monte Carlo sampling that is statistically consistent with training data. A conditional probability distribution associated with a biological response can be reconstructed using this method for a subset of input values, which overcomes the parameter identification problem. Our approach could be applied in addition to the dynamical modeling methods described above to predict uncertain biological responses over experimental time scales. To illustrate this point, we apply the approach to a test case in which we model the variation associated with measurements at multiple scales of organization across a reproduction-related Adverse Outcome Pathway described for teleosts.
Subject(s)
Computer Simulation , Cyprinidae/physiology , Models, Biological , Algorithms , Animals , Female , Monte Carlo Method , Reproduction , Stochastic Processes
ABSTRACT
Explosives such as hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) are common contaminants found in soil and groundwater at military facilities worldwide, but large-scale monitoring of these contaminants at low concentrations is difficult. Biosensors that incorporate aptamers with high affinity and specificity for a target are a novel way of detecting these compounds. This work describes novel riboswitch-based biosensors for detecting RDX. The performance of the RDX riboswitch was characterized in Escherichia coli using RDX concentrations ranging from 0 to 44 µmol l-1. Fluorescence was induced at RDX concentrations as low as 0.44 µmol l-1. The presence of 4.4 µmol l-1 RDX induced an 8-fold increase in fluorescence, and higher concentrations did not induce a statistically significant increase in response.
Subject(s)
Biosensing Techniques/methods , Environmental Monitoring/methods , Environmental Pollutants/analysis , Explosive Agents/analysis , Triazines/analysis , Aptamers, Nucleotide/chemistry , Aptamers, Nucleotide/genetics , Escherichia coli/genetics , Escherichia coli/metabolism , Luminescent Measurements , Luminescent Proteins/genetics , Luminescent Proteins/metabolism , Riboswitch/genetics
ABSTRACT
BACKGROUND: A challenge of in vitro to in vivo extrapolation (IVIVE) is to predict the physical state of organisms exposed to chemicals in the environment from in vitro exposure assay data. Although toxicokinetic modeling approaches promise to bridge in vitro screening data with in vivo effects, they are often encumbered by a need for redesign or re-parameterization when applied to different tissues or chemicals. RESULTS: We demonstrate a parameterization of reverse toxicokinetic (rTK) models developed for the adult zebrafish (Danio rerio) based upon particle swarm optimization (PSO) of the chemical uptake and degradation rates that predict bioconcentration factors (BCF) for a broad range of chemicals. PSO reveals a relationship between chemical uptake and decomposition parameter values that predicts chemical-specific BCF values with moderate statistical agreement to a limited yet diverse chemical dataset, all without a need to retrain the model on new data. CONCLUSIONS: The presented model requires only the octanol-water partitioning ratio to predict BCFs to a fidelity consistent with existing QSAR models. This success prompts re-evaluation of the modeling assumptions; specifically, it suggests that chemical uptake into arterial blood may be limited by transport across gill membranes (diffusion) rather than by counter-current flow between gill lamellae (convection). Therefore, more detailed molecular modeling of aquatic respiration may further improve the predictive accuracy of the rTK approach.
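The conclusion that only the octanol-water partitioning ratio is needed mirrors the classical single-descriptor QSAR form for bioconcentration, in which log BCF is regressed linearly on log K_OW; the generic form below is included for orientation only, and the coefficients a and b are fit parameters, not values reported in the paper.

```latex
% Generic one-descriptor QSAR baseline for bioconcentration; a and b are fit to data.
\log_{10}\mathrm{BCF} = a\,\log_{10} K_{\mathrm{OW}} + b .
```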
Subject(s)
Models, Biological , Zebrafish/metabolism , Animals , Biological Transport , Toxicokinetics
ABSTRACT
BACKGROUND: Physiologically-based toxicokinetic (PBTK) models are often developed to facilitate in vitro to in vivo extrapolation (IVIVE) using a top-down, compartmental approach, favoring architectural simplicity over physiological fidelity despite the lack of general guidelines relating model design to dynamical predictions. Here we explore the impact of design choice (high vs. low fidelity) on chemical distribution throughout an animal's organ system. RESULTS: We contrast transient dynamics and steady states of three previously proposed PBTK models of varying complexity in response to chemical exposure. The steady states for each model were determined analytically to predict exposure conditions from tissue measurements. Steady state whole-body concentrations differ between models, despite identical environmental conditions, which originates from the varying levels of physiological fidelity captured by the models. These differences affect the relative predictive accuracy of the inverted models used in exposure reconstruction to link effects-based exposure data with whole-organism response thresholds obtained from in vitro assay measurements. CONCLUSIONS: Our results demonstrate how disregarding physiological fidelity in favor of simpler models affects the internal dynamics and steady state estimates for chemical accumulation within tissues, which, in turn, poses significant challenges for the exposure reconstruction efforts that underlie many IVIVE methods. Developing standardized systems-level models for ecological organisms would not only ensure predictive consistency among future modeling studies, but also enable pragmatic extrapolation of in vivo effects from in vitro data or modeling of exposure-response relationships.
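To make the exposure-reconstruction step concrete, consider the simplest one-compartment case: at steady state the internal concentration is set by the ratio of uptake to elimination rates, so the relation can be inverted to estimate the external exposure from a tissue measurement. The multi-compartment models compared in the paper generalize this idea; the symbols below are generic and not the paper's notation.

```latex
% One-compartment steady state and its inversion for exposure reconstruction.
C_{\mathrm{tissue}}^{\,ss} = \frac{k_{\mathrm{in}}}{k_{\mathrm{out}}}\,C_{\mathrm{water}}
\quad\Longrightarrow\quad
C_{\mathrm{water}} = \frac{k_{\mathrm{out}}}{k_{\mathrm{in}}}\,C_{\mathrm{tissue}}^{\,ss} .
```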
Subject(s)
Models, Biological , Toxicokinetics , Animals , Cyprinidae/metabolism , Estrogens/pharmacokinetics , Estrogens/toxicity , Zebrafish/metabolism
ABSTRACT
Insensitive munitions (IMs) improve soldier safety by decreasing sympathetic detonation during training and use in theatre. IMs are being increasingly deployed, although the environmental effects of IM constituents such as nitroguanidine (NQ) and IM mixture formulations such as IMX-101 remain largely unknown. In the present study, we investigated the acute (96 h) toxicity of NQ and IMX-101 to zebrafish larvae (21 d post-fertilization), both in the parent materials and after the materials had been irradiated with environmentally relevant levels of ultraviolet (UV) light. The UV treatment increased the toxicity of NQ 17-fold (LC50 decreased from 1323 mg/L to 77.2 mg/L). Similarly, UV treatment increased the toxicity of IMX-101 by nearly twofold (LC50 decreased from 131.3 mg/L to 67.6 mg/L). To gain insight into the cause(s) of the observed UV-enhanced toxicity of the IMs, comparative molecular responses to parent and UV-treated IMs were assessed using microarray-based global transcript expression assays. Both gene set enrichment analysis (GSEA) and differential transcript expression analysis coupled with pathway and annotation cluster enrichment were conducted to provide functional interpretations of expression results and hypothetical modes of toxicity. The parent NQ exposure caused significant enrichment of functions related to immune responses and proteasome-mediated protein metabolism occurring primarily at low, sublethal exposure levels (5.5 and 45.6 mg/L). Enriched functions in the IMX-101 exposure were indicative of increased xenobiotic metabolism, oxidative stress mitigation, protein degradation, and anti-inflammatory responses, each of which displayed predominantly positive concentration-response relationships. UV-treated NQ had a fundamentally different transcriptomic expression profile relative to parent NQ, causing positive concentration-response relationships for genes involved in oxidative-stress mitigation pathways and inhibited expression of multiple cadherins that facilitate zebrafish neurological and retinal development. Transcriptomic profiles were similar between UV-treated and parent IMX-101 exposures. However, more significant and diverse enrichment as well as greater magnitudes of differential expression for oxidative stress responses were observed in UV-treated IMX-101 exposures. Further, transcriptomics indicated potential for cytokine signaling suppression, providing potential connections between oxidative stress and anti-inflammatory responses. Given the overall results, we hypothesize that the increased toxicity of UV-irradiated NQ and the IMX-101 mixture results from breakdown products with elevated potential to elicit oxidative stress.
Subject(s)
Anisoles/toxicity , Guanidines/toxicity , Oxidative Stress/drug effects , Transcriptome/drug effects , Triazoles/toxicity , Ultraviolet Rays , Water Pollutants, Chemical/toxicity , Zebrafish/metabolism , Animals , Anisoles/radiation effects , Dose-Response Relationship, Drug , Gene Expression Profiling , Guanidines/radiation effects , Larva/drug effects , Larva/metabolism , Nitro Compounds/radiation effects , Nitro Compounds/toxicity , Oxidative Stress/genetics , Triazoles/radiation effects , Water Pollutants, Chemical/radiation effects
ABSTRACT
The internal biochemical state of a cell is regulated by a vast transcriptional network that kinetically correlates the concentrations of numerous proteins. Fluctuations in protein concentration that encode crucial information about this changing state must compete with fluctuations caused by the noisy cellular environment in order to successfully transmit information across the network. Oftentimes, one protein must regulate another through a sequence of intermediaries, and conventional wisdom, derived from the data processing inequality of information theory, leads us to expect that longer sequences should lose more information to noise. Using the metric of mutual information to characterize the fluctuation sensitivity of transcriptional signaling cascades, we find, counter to this expectation, that longer chains of regulatory interactions can instead lead to enhanced informational efficiency. We derive an analytic expression for the mutual information from a generalized chemical kinetics model that we reduce to simple, mass-action kinetics by linearizing for small fluctuations about the basal biological steady state, and we find that at long times this expression depends only on a simple ratio of protein production to destruction rates and the length of the cascade. We place bounds on the values of these parameters by requiring that the mutual information be at least one bit (otherwise, any received signal would be indistinguishable from noise), and we find not only that nature has devised a way to circumvent the data processing inequality, but that it must be circumvented to attain this one-bit threshold. We demonstrate how this result places informational and biochemical efficiency at odds with one another by correlating high transcription factor binding affinities with low informational output, and we conclude with an analysis of the validity of our assumptions and propose how they might be tested experimentally.
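A useful anchor for the linearized, small-fluctuation regime used here: when input and output fluctuations are jointly Gaussian with correlation coefficient ρ, the mutual information has the closed form below, so the one-bit threshold corresponds to a definite minimum correlation. The mapping of ρ onto the production-to-destruction rate ratio is the paper's result and is not reproduced here.

```latex
% Mutual information between jointly Gaussian variables with correlation coefficient rho.
I(X;Y) = -\tfrac{1}{2}\log_{2}\bigl(1-\rho^{2}\bigr)\ \text{bits},
\qquad
I \ge 1\ \text{bit} \iff |\rho| \ge \tfrac{\sqrt{3}}{2} \approx 0.866 .
```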
Subject(s)
Models, Biological , Signal Transduction , Gene Regulatory Networks , Kinetics , Models, Chemical
ABSTRACT
Life cycle assessment (LCA) has considerable merit for holistic evaluation of product planning, development, production, and disposal, with the inherent benefit of providing a forecast of potential health and environmental impacts. However, a technical review of current life cycle impact assessment (LCIA) methods revealed limitations within the biological effects assessment protocols, including: simplistic assessment approaches and models; an inability to integrate emerging types of toxicity data; a reliance on linear impact assessment models; a lack of methods to mitigate uncertainty; and no explicit consideration of effects in species of concern. The purpose of the current study is to demonstrate that a new concept in toxicological and regulatory assessment, the adverse outcome pathway (AOP), has many useful attributes with the potential to ameliorate these problems, to expand data utility and model robustness, and to enable more accurate and defensible biological effects assessments within LCIA. Background, context, and examples have been provided to demonstrate these potential benefits. We additionally propose that these benefits can be most effectively realized through development of quantitative AOPs (qAOPs) crafted to meet the needs of the LCIA framework. As a means to stimulate qAOP research and development in support of LCIA, we propose 3 conceptual classes of qAOP, each with unique inherent attributes for supporting LCIA: 1) mechanistic, including computational toxicology models; 2) probabilistic, including Bayesian networks and supervised machine learning models; and 3) weight of evidence, including models built using decision-analytic methods. Overall, we have highlighted a number of potential applications of qAOPs that can refine and add value to LCIA. As the AOP concept and support framework matures, we see the potential for qAOPs to serve a foundational role for next-generation effects characterization within LCIA. Integr Environ Assess Manag 2016;12:580-590.