ABSTRACT
Introduction: The U.S. Environmental Protection Agency's Endocrine Disruptor Screening Program (EDSP) Tier 1 assays are used to screen for potential endocrine-disrupting chemicals. A model integrating data from 16 high-throughput screening assays to predict estrogen receptor (ER) agonism has been proposed as an alternative to some low-throughput Tier 1 assays. Later work demonstrated that as few as four assays could replicate the ER agonism predictions of the full model with 98% sensitivity and 92% specificity. The current study used chemical clustering to illustrate the coverage of the EDSP Universe of Chemicals (UoC) tested in the existing ER pathway models and to investigate the utility of chemical clustering for evaluating the screening approach, using an existing 4-assay model as a test case. Although the full original assay battery is no longer available, the demonstrated contribution of chemical clustering is broadly applicable to assay sets, chemical inventories, and models, and the data analysis used here can also be applied to future evaluations of minimal assay models for consideration in screening. Methods: Chemical structures were collected for 6,947 substances via the CompTox Chemicals Dashboard from the more than 10,000 substances in the UoC and grouped based on structural similarity, generating 826 chemical clusters. Of the 1,812 substances run in the original ER model, 1,730 had a single, clearly defined structure. ER model chemicals with a clearly defined structure that were not present in the EDSP UoC were assigned to chemical clusters using a k-nearest neighbors approach, resulting in 557 EDSP UoC clusters containing at least one ER model chemical. Results and Discussion: Performance of the existing 4-assay model relative to the full ER agonist model was analyzed in relation to chemical clustering. This was a case study, and a similar analysis can be performed with any subset model in which the same chemicals (or a subset of them) are screened. Of the 365 clusters containing more than one ER model chemical, 321 had no chemicals predicted to be agonists by the full ER agonist model. The best 4-assay subset ER agonist model disagreed with the full model by predicting agonist activity for 122 chemicals from 91 of those 321 clusters. There were 44 clusters with at least two chemicals and at least one agonist according to the full ER agonist model, which allowed accuracy to be assessed on a per-cluster basis. The accuracy of the best 4-assay subset model ranged from 50% to 100% across these 44 clusters, with 32 clusters having accuracy ≥90%. Overall, the best 4-assay subset model produced 122 false-positive and only 2 false-negative predictions relative to the full model. Most false positives (89) were active in only two of the four assays, whereas all but 11 true-positive chemicals were active in at least three assays. False-positive chemicals also tended to have lower area under the curve (AUC) values: 110 of the 122 false positives had an AUC below 0.214, a value exceeded by 75% of the positives predicted by the full ER agonist model. Many false positives showed borderline activity; the median AUC for the 122 false positives from the best 4-assay subset model was 0.138, whereas the threshold for an active prediction is 0.1. Conclusion: Our results show that the existing 4-assay model performs well across a range of structurally diverse chemicals.
Although this is a descriptive analysis of previous results, several concepts can be applied to any future screening model. First, clustering the chemicals provides a means of ensuring that future screening evaluations consider the broad chemical space represented by the EDSP UoC. The clusters can also assist in prioritizing chemicals for screening based on the activity of known chemicals within the same clusters. The clustering approach provides a framework for evaluating which portions of the EDSP UoC chemical space are reliably covered by in silico and in vitro approaches, and where predictions from either method alone or both methods combined are most reliable. The lessons learned from this case study can readily be applied to future evaluations of model applicability and to the screening of new datasets.
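The abstract does not specify the fingerprint type, similarity metric, or k used for assigning unclustered chemicals; a minimal sketch of the k-nearest-neighbors cluster-assignment step, assuming binary structural fingerprints and Tanimoto similarity (both illustrative assumptions), could look like the following Python:

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprint vectors."""
    shared = np.sum(a & b)
    return shared / (np.sum(a) + np.sum(b) - shared)

def assign_to_cluster(query_fp, clustered_fps, cluster_labels, k=5):
    """Assign a chemical to the majority cluster among its k most similar neighbors."""
    sims = np.array([tanimoto(query_fp, fp) for fp in clustered_fps])
    nearest = np.argsort(sims)[::-1][:k]        # indices of the k most similar chemicals
    labels = [cluster_labels[i] for i in nearest]
    return max(set(labels), key=labels.count)   # majority vote

# Hypothetical 2048-bit fingerprints for already-clustered UoC chemicals
rng = np.random.default_rng(0)
clustered_fps = rng.integers(0, 2, size=(100, 2048))
cluster_labels = rng.integers(0, 10, size=100)
new_chemical = rng.integers(0, 2, size=2048)
print(assign_to_cluster(new_chemical, clustered_fps, cluster_labels))
```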
ABSTRACT
Introduction: Computational models using data from high-throughput screening assays have promise for prioritizing and screening chemicals for testing under the U.S. Environmental Protection Agency's Endocrine Disruptor Screening Program (EDSP). The purpose of this work was to demonstrate a data processing method for the determination of optimal minimal assay batteries from a larger comprehensive model, to provide a uniform method of evaluating the performance of future minimal assay batteries compared with the androgen receptor (AR) pathway model, and to incorporate chemical cluster analysis into this evaluation. Although several of the assays in the AR pathway model are no longer available through the original vendor, this approach could be used for future evaluations of minimal assay models for prioritization and screening. Methods: We compared two previously published models and found that an expanded 14-assay model had higher sensitivity for antagonists, whereas the original 11-assay model had slightly higher sensitivity for agonists. We then investigated subsets of assays in the original AR pathway model to optimize overall testing strategies that minimize cost while maintaining sensitivity across a broad chemical space. Results and Discussion: Evaluation of the critical assays across subset models derived from the 14-assay model identified three critical assays for predicting antagonism and two critical assays for predicting agonism. A minimum of nine assays is required for predicting agonism and antagonism with high sensitivity (95%). However, testing workflows guided by chemical structure-based clusters can reduce the average number of assays needed per chemical by basing the assays selected for testing on the likelihood of a chemical being an AR agonist, according to its structure. Our results show that a multi-stage testing workflow can provide 95% sensitivity while requiring only 48% of the resources required for running all assays from the original full models. The resources can be reduced further by incorporating in silico activity predictions. Conclusion: This work illustrates a data-driven approach that incorporates chemical clustering and simultaneous consideration of antagonism and agonism mechanisms to more efficiently screen chemicals. This case study provides a proof of concept for prioritization and screening strategies that can be utilized in future analyses to minimize the overall number of assays needed for predicting AR activity, which will maximize the number of chemicals that can be tested and allow data-driven prioritization of chemicals for further screening under the EDSP.
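As an illustration of the subset-evaluation idea (not the published models, which score chemicals with weighted assay AUC values), the sketch below searches assay subsets, smallest first, for one whose calls retain a target sensitivity against the full-model calls; the two-hit activity rule and all data are assumptions:

```python
from itertools import combinations
import numpy as np

def sensitivity(pred, truth):
    """Fraction of full-model actives recovered by the subset model."""
    actives = truth == 1
    return np.mean(pred[actives] == 1) if actives.any() else 1.0

def smallest_sufficient_subset(hits, full_calls, min_hits=2, target=0.95):
    """Search assay subsets, smallest first, for one keeping sensitivity >= target.
    `hits` is a chemicals x assays 0/1 matrix; a chemical is called active by a
    subset if it hits at least `min_hits` of its assays (an assumed rule).
    Exhaustive search is fine for ~14 assays."""
    n_assays = hits.shape[1]
    for size in range(1, n_assays + 1):
        for subset in combinations(range(n_assays), size):
            pred = (hits[:, list(subset)].sum(axis=1) >= min_hits).astype(int)
            if sensitivity(pred, full_calls) >= target:
                return subset
    return None

# Hypothetical hit matrix and stand-in full-model calls
rng = np.random.default_rng(0)
hits = rng.integers(0, 2, size=(200, 14))
full_calls = (hits.sum(axis=1) >= 7).astype(int)
print(smallest_sufficient_subset(hits, full_calls))
```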
ABSTRACT
The toxic equivalency factor (TEF) approach for dioxin-like chemicals (DLCs) is currently based on a qualitative assessment of a heterogeneous data set of relative estimates of potency (REPs) spanning several orders of magnitude, with highly variable study quality and relevance. An effort was undertaken to develop a weighting framework to systematically evaluate and quantitatively integrate quality and relevance for the development of more robust TEFs. Six main study characteristics were identified as most important in characterizing the quality and relevance of an individual REP for human health risk assessment: study type, study model, pharmacokinetics, REP derivation method, REP derivation quality, and endpoint. Subsequently, a computational approach for quantitatively integrating the weighting framework parameters was developed and applied to the REP2004 database, using a machine learning approach that infers a weighted TEF distribution for each congener. The resulting database, weighted for quality and relevance, provides REP distributions from >600 data sets (including in vivo and in vitro studies, a range of endpoints, etc.). This weighted database offers a flexible platform for systematically and objectively characterizing TEFs for use in risk assessment, as well as information to characterize uncertainty and variability. Collectively, this gives risk managers the information needed for decision making.
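A minimal sketch of the quantitative-integration idea: each REP receives a weight derived from the framework's quality/relevance scores, and a weighted TEF distribution is formed by resampling. The scores, weights, and combination rule below are illustrative stand-ins, not the machine learning approach actually used:

```python
import numpy as np

def rep_weight(scores):
    """Combine per-characteristic quality/relevance scores (0-1) into one weight.
    A simple product is assumed here for illustration."""
    return float(np.prod(scores))

def weighted_tef_distribution(log_reps, weights, n_draws=10000, seed=1):
    """Resample log10(REP) values with probability proportional to their
    quality/relevance weights to form a weighted TEF distribution."""
    rng = np.random.default_rng(seed)
    p = np.asarray(weights, dtype=float)
    p /= p.sum()
    draws = rng.choice(log_reps, size=n_draws, replace=True, p=p)
    return 10.0 ** draws  # back to the REP/TEF scale

# Hypothetical REPs for one congener, spanning orders of magnitude
log_reps = np.log10([0.0005, 0.002, 0.01, 0.08, 0.3])
weights = [rep_weight(s) for s in ([0.9, 0.8], [0.5, 0.9], [0.9, 0.9], [0.3, 0.4], [0.2, 0.2])]
dist = weighted_tef_distribution(log_reps, weights)
print(np.percentile(dist, [25, 50, 75]))
```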
MeSH terms
Dioxins; Polychlorinated Biphenyls; Polychlorinated Dibenzodioxins; Humans; Dioxins/toxicity; Risk Assessment; Uncertainty; Databases, Factual
ABSTRACT
In 2005, the World Health Organization (WHO) re-evaluated the Toxic Equivalency Factors (TEFs) developed for dioxin-like compounds believed to act through the aryl hydrocarbon (Ah) receptor, based on an updated database of relative estimated potencies (REPs; the REP2004 database). This re-evaluation identified the need for a consistent approach to dose-response modeling. Further, the WHO panel discussed the significant heterogeneity of the experimental datasets, and of dataset quality, underlying the REPs in the database. There is a critical need for a quantitative, quality-weighted approach to characterize the TEF for each congener. To address this, a multi-tiered approach was developed that combines Bayesian dose-response fitting and meta-regression with a machine learning model that predicts REP quality categorizations, in order to predict the most likely relationship between each congener and its reference and to derive model-predicted TEF uncertainty distributions. As a proof of concept, this 'Best-Estimate TEF workflow' was applied to the REP2004 database to derive TEF point estimates and characterizations of uncertainty for all congeners. Model-predicted TEFs were similar to the 2005 WHO TEFs, with data-poor congeners having larger uncertainty. This transparent and reproducible computational workflow incorporates the WHO expert panel recommendations and represents a substantial improvement in the TEF methodology.
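The dose-response step derives each REP as a potency ratio between a congener and its reference. The paper's fitting is Bayesian; as a simplified frequentist stand-in, a Hill-model fit yielding REP = EC50(reference)/EC50(congener) can be sketched as follows (all data hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, ec50, n):
    """Three-parameter Hill model with zero background."""
    return top * dose**n / (ec50**n + dose**n)

def fit_ec50(dose, resp):
    popt, _ = curve_fit(hill, dose, resp,
                        p0=[max(resp), np.median(dose), 1.0], maxfev=10000)
    return popt[1]

# Hypothetical dose-response data for TCDD (reference) and a congener
dose = np.array([0.1, 0.3, 1, 3, 10, 30, 100])
tcdd = np.array([2, 8, 30, 60, 88, 97, 100])
congener = np.array([0, 1, 4, 15, 42, 75, 95])
rep = fit_ec50(dose, tcdd) / fit_ec50(dose, congener)  # REP = EC50_ref / EC50_congener
print(f"REP ~ {rep:.3f}")
```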
MeSH terms
Dioxins; Polychlorinated Biphenyls; Dioxins/toxicity; Bayes Theorem; Risk Assessment; Uncertainty; Receptors, Aryl Hydrocarbon
ABSTRACT
Exposure science is evolving from its traditional "after the fact" and "one chemical at a time" approach to forecasting chemical exposures rapidly enough to keep pace with the constantly expanding landscape of chemicals and exposures. In this article, we provide an overview of the approaches, accomplishments, and plans for advancing computational exposure science within the U.S. Environmental Protection Agency's Office of Research and Development (EPA/ORD). First, to characterize the universe of chemicals in commerce and the environment, a carefully curated, web-accessible chemical resource has been created. This DSSTox database unambiguously identifies >1.2 million unique substances reflecting potential environmental and human exposures and includes computationally accessible links to each compound's corresponding data resources. Next, EPA is developing, applying, and evaluating predictive exposure models. These models increasingly rely on data, computational tools such as quantitative structure-activity relationship (QSAR) models, and machine learning/artificial intelligence to provide timely and efficient predictions of chemical exposure (and associated uncertainty) for thousands of chemicals at a time. Integral to this modeling effort, EPA is developing data resources across the exposure continuum, including high-resolution mass spectrometry (HRMS) non-targeted analysis (NTA) methods that provide measurement capability at a scale commensurate with the number of chemicals in commerce. These research efforts are integrated and tailored to support population exposure assessments that prioritize chemicals by exposure, a critical input to risk management. In addition, the exposure forecasts will allow a wide variety of stakeholders to explore sustainable initiatives such as green chemistry to achieve economic, social, and environmental prosperity and protection of future generations.
MeSH terms
Environmental Pollutants; United States; Humans; Environmental Pollutants/analysis; United States Environmental Protection Agency; Artificial Intelligence; Risk Management; Uncertainty; Environmental Exposure/analysis; Risk Assessment
ABSTRACT
The rapid characterization of risk to humans and ecosystems from exogenous chemicals requires information on both hazard and exposure. The U.S. Environmental Protection Agency's ToxCast program and the interagency Tox21 initiative have screened thousands of chemicals in various high-throughput (HT) assay systems for in vitro bioactivity. EPA's ExpoCast program is developing complementary HT methods for characterizing the human and ecological exposures necessary to interpret HT hazard data in a real-world risk context. These new approach methodologies (NAMs) for exposure include computational and analytical tools for characterizing multiple components of the complex pathways chemicals take from their source to human and ecological receptors. Here, we analyze the landscape of exposure NAMs developed in ExpoCast in the context of various chemical lists of scientific and regulatory interest, including the ToxCast and Tox21 libraries and the Toxic Substances Control Act (TSCA) inventory. We examine the landscape of traditional and exposure NAM data covering chemical use, emission, environmental fate, toxicokinetics, and ultimately external and internal exposure. We consider new chemical descriptors, machine learning models that draw inferences from existing data, high-throughput exposure models, statistical frameworks that integrate multiple model predictions, and non-targeted analytical screening methods that generate new HT monitoring information. We demonstrate that exposure NAMs drastically improve the coverage of the chemical landscape compared to traditional approaches and recommend a set of research activities to further expand the development of HT exposure data for application to risk characterization. Continuing to develop exposure NAMs to fill priority data gaps identified here will improve the availability and defensibility of risk-based metrics for use in chemical prioritization and screening. IMPACT: This analysis describes the current state of exposure assessment-based new approach methodologies across varied chemical landscapes and provides recommendations for filling key data gaps.
MeSH terms
Ecosystem; United States; Humans
ABSTRACT
BACKGROUND: Toxicokinetic (TK) data needed for chemical risk assessment are not available for most chemicals. To support a greater number of chemicals, the U.S. Environmental Protection Agency (EPA) created the open-source R package "httk" (High-Throughput ToxicoKinetics). The "httk" package provides functions and data tables for simulation and statistical analysis of chemical TK, including a population variability simulator that uses biometrics data from the National Health and Nutrition Examination Survey (NHANES). OBJECTIVE: Here we modernize the "HTTK-Pop" population variability simulator based on currently available data and literature, and we explain the algorithms used by "httk" for variability simulation and uncertainty propagation. METHODS: We updated and revised the population variability simulator in the "httk" package with the most recent NHANES biometrics (through the 2017-2018 NHANES cohort). Model equations describing glomerular filtration rate (GFR) were revised to more accurately represent physiology and population variability. Output from the updated "httk" package was compared with that of the current version. RESULTS: The revised population variability simulator in the "httk" package now provides refined, more relevant, and better justified estimates. SIGNIFICANCE: This work fulfills the U.S. EPA's mission to provide open-source data and models for evaluation and application by the broader scientific community and continuously improves the accuracy of the "httk" package based on currently available data and literature.
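The population simulator's core idea is Monte Carlo sampling of physiological parameters into a steady-state plasma concentration (Css) calculation. A minimal Python sketch of that idea, using a well-stirred liver plus renal-filtration Css model and illustrative parameter distributions (not httk's NHANES-derived ones), is:

```python
import numpy as np

def css_mc(dose_rate_mgkgday, fup, clint_lphkg, n=10000, seed=42):
    """Monte Carlo steady-state plasma concentration (mg/L) for an oral dose
    rate, using a simple analytical Css model of the kind employed in
    high-throughput TK: glomerular filtration of unbound chemical plus
    well-stirred hepatic metabolism. All distributions are illustrative."""
    rng = np.random.default_rng(seed)
    gfr = rng.normal(0.11, 0.02, n).clip(0.05)     # GFR, L/h/kg (illustrative)
    qh = rng.normal(1.24, 0.15, n).clip(0.5)       # hepatic blood flow, L/h/kg
    clu = clint_lphkg * rng.lognormal(0, 0.3, n)   # scaled intrinsic clearance
    cl_hep = qh * fup * clu / (qh + fup * clu)     # well-stirred liver model
    cl_tot = gfr * fup + cl_hep                    # total clearance, L/h/kg
    return (dose_rate_mgkgday / 24.0) / cl_tot     # mg/L at steady state

css = css_mc(1.0, fup=0.05, clint_lphkg=1.5)
print(np.percentile(css, 95))  # upper-percentile individual, useful for IVIVE
```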
MeSH terms
Nutrition Surveys; United States; Humans; United States Environmental Protection Agency
ABSTRACT
To estimate potential chemical risk, tools are needed to prioritize potential exposures for chemicals with minimal data. Consumer product exposures are a key pathway, and variability in consumer use patterns is an important factor. We designed Ex Priori, a flexible, dashboard-type, screening-level exposure model, to rapidly visualize exposure rankings from consumer product use. Ex Priori is Excel-based and currently parameterized for seven routes of exposure for 1,108 chemicals present in 228 consumer product types. It includes toxicokinetic considerations to estimate body burden, and it provides a simple framework for rapidly modeling broad changes in consumer use patterns by product category. As a demonstration, Ex Priori rapidly modeled changes in consumer use patterns during the COVID-19 pandemic and instantly showed the resulting changes in chemical exposure rankings by body burden. Sensitivity analysis indicates that the model is sensitive to the rate at which chemicals are emitted to air from products. Ex Priori's simple dashboard facilitates dynamic exploration of the effects of varying consumer product use patterns on the prioritization of chemicals by potential exposure. Ex Priori can be a useful modeling and visualization tool for both novice and experienced exposure modelers and can complement more computationally intensive population-based exposure models.
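At its simplest, a screening-level ranking of this kind multiplies use-pattern and composition terms into a per-chemical dose and sorts the results. The sketch below is a generic illustration only; the fields, values, and route handling are hypothetical rather than Ex Priori's actual parameterization:

```python
# (chemical, product, uses/day, grams/use, weight fraction, absorbed fraction)
uses = [
    ("chemA", "shampoo",   1.0, 10.0, 0.010, 0.05),
    ("chemA", "spray",     0.2,  5.0, 0.050, 0.50),
    ("chemB", "detergent", 0.5, 20.0, 0.002, 0.01),
]
BW = 80.0  # kg body weight, assumed

exposure = {}
for chem, _, freq, grams, wf, fabs in uses:
    dose = freq * grams * 1000.0 * wf * fabs / BW  # mg/kg/day per product
    exposure[chem] = exposure.get(chem, 0.0) + dose  # sum over products

# Rank chemicals by total screening-level dose
for chem, e in sorted(exposure.items(), key=lambda kv: -kv[1]):
    print(f"{chem}: {e:.2e} mg/kg/day")
```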
ABSTRACT
Research in environmental health is becoming increasingly reliant upon data science and computational methods that can more efficiently extract information from complex datasets. Data science and computational methods can be leveraged to better identify relationships between exposures to stressors in the environment and human disease outcomes, representing critical information needed to protect and improve global public health. Still, there remains a critical gap surrounding the training of researchers in these in silico methods. We aimed to address this gap by developing the inTelligence And Machine lEarning (TAME) Toolkit, promoting trainee-driven data generation, management, and analysis methods to "TAME" data in environmental health studies. Training modules were developed to provide applications-driven examples of data organization and analysis methods that can be used to address environmental health questions. Target audiences for these modules include students, post-baccalaureate and post-doctoral trainees, and professionals interested in expanding their skill set to include recent advances in data analysis methods relevant to environmental health, toxicology, exposure science, epidemiology, and bioinformatics/cheminformatics. Modules were developed by study coauthors using annotated scripts and were organized into three chapters within a GitHub Bookdown site. The first chapter focuses on introductory data science and includes the following topics: setting up R/RStudio and coding in the R environment; data organization basics; finding and visualizing data trends; high-dimensional data visualizations; and Findability, Accessibility, Interoperability, and Reusability (FAIR) data management practices. The second chapter incorporates chemical-biological analyses and predictive modeling, spanning the following methods: dose-response modeling; machine learning and predictive modeling; mixtures analyses; -omics analyses; toxicokinetic modeling; and read-across toxicity predictions. The last chapter provides examples of environmental health database mining and integration, including chemical exposure, health outcome, and environmental justice indicators. Training modules and associated data are publicly available online (https://uncsrp.github.io/Data-Analysis-Training-Modules/). Together, this resource provides unique opportunities to obtain introductory-level training on current data analysis methods applicable to 21st century science and environmental health.
ABSTRACT
Non-targeted analysis (NTA) methods are widely used for chemical discovery but seldom employed for quantitation due to a lack of robust methods for estimating chemical concentrations with confidence limits. Herein, we present and evaluate new statistical methods for quantitative NTA (qNTA) using high-resolution mass spectrometry (HRMS) data from EPA's Non-Targeted Analysis Collaborative Trial (ENTACT). Experimental intensities of ENTACT analytes were observed at multiple concentrations using a semi-automated NTA workflow. Chemical concentrations and corresponding confidence limits were first estimated using traditional calibration curves. Two qNTA estimation methods were then implemented using experimental response factor (RF) data (where RF = intensity/concentration). The bounded response factor method used a non-parametric bootstrap procedure to estimate select quantiles of training-set RF distributions; quantile estimates were then applied to test-set HRMS intensities to inversely estimate concentrations with confidence limits. The ionization efficiency estimation method restricted the distribution of likely RFs for each analyte using ionization efficiency predictions. Given the intended future use in chemical risk characterization, predicted upper confidence limits (protective values) were compared to known chemical concentrations. Using traditional calibration curves, 95% of upper confidence limits were within ~10-fold of the true concentrations. The error increased to ~60-fold (ESI+) and ~120-fold (ESI-) for the ionization efficiency estimation method and to ~150-fold (ESI+) and ~130-fold (ESI-) for the bounded response factor method. This work demonstrates successful implementation of confidence limit estimation strategies to support qNTA studies and marks a crucial step toward translating NTA data into a risk-based context.
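A minimal sketch of the bounded response factor idea: bootstrap quantiles of a training-set RF distribution are inverted against a test-set intensity to give a concentration estimate with protective bounds. The quantile choices and data below are illustrative:

```python
import numpy as np

def bootstrap_rf_quantiles(rfs, q=(0.025, 0.5, 0.975), n_boot=5000, seed=7):
    """Non-parametric bootstrap estimates of selected quantiles of a
    training-set response-factor (RF = intensity/concentration) distribution."""
    rng = np.random.default_rng(seed)
    rfs = np.asarray(rfs, dtype=float)
    boots = rng.choice(rfs, size=(n_boot, rfs.size), replace=True)
    # median across bootstrap replicates of each quantile
    return {qi: float(np.median(np.quantile(boots, qi, axis=1))) for qi in q}

def concentration_bounds(intensity, rf_q):
    """Invert intensity = RF * concentration: a low RF quantile gives the
    protective (upper) concentration limit, a high quantile the lower limit."""
    return {"upper": intensity / rf_q[0.025],
            "estimate": intensity / rf_q[0.5],
            "lower": intensity / rf_q[0.975]}

rfs = np.random.default_rng(1).lognormal(mean=11.0, sigma=1.0, size=200)  # hypothetical RFs
print(concentration_bounds(intensity=1.0e6, rf_q=bootstrap_rf_quantiles(rfs)))
```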
MeSH terms
Uncertainty; Calibration; Mass Spectrometry/methods
ABSTRACT
Molecular signatures are being increasingly integrated into predictive biology applications. However, there are limited studies comparing the overall predictivity of transcriptomic versus epigenomic signatures in relation to perinatal outcomes. This study set out to evaluate mRNA and microRNA (miRNA) expression and cytosine-guanine dinucleotide (CpG) methylation signatures in human placental tissues and relate these to perinatal outcomes known to influence maternal/fetal health, namely birth weight, placenta weight, placental damage, and placental inflammation. The following hypotheses were tested: (1) different molecular signatures will demonstrate varying levels of predictivity toward perinatal outcomes, and (2) these signatures will show disruptions from an example exposure (i.e., cadmium) known to elicit perinatal toxicity. Multi-omic placental profiles from 390 infants in the Extremely Low Gestational Age Newborns cohort were used to develop molecular signatures that predict each perinatal outcome. Epigenomic signatures (i.e., miRNA and CpG methylation) consistently demonstrated the highest levels of predictivity, with model performance metrics including R2 (predicted vs. observed) values of 0.36-0.57 for continuous outcomes and balanced accuracy values of 0.49-0.77 for categorical outcomes. Top-ranking predictors included miRNAs involved in injury and inflammation. To demonstrate the utility of these predictive signatures in screening potentially harmful exogenous insults, top-ranking miRNA predictors were analyzed in a separate pregnancy cohort and related to cadmium. Key predictive miRNAs demonstrated altered expression in association with cadmium exposure, including miR-210, known to impact placental cell growth, blood vessel development, and fetal weight. These findings inform future predictive biology applications, where additional benefit will be gained by including epigenetic markers.
MeSH terms
MicroRNAs; DNA Methylation; Dinucleoside Phosphates/metabolism; Female; Humans; Infant, Newborn; Methylation; MicroRNAs/genetics; MicroRNAs/metabolism; Placenta/metabolism; Pregnancy
ABSTRACT
Computational methods are needed to more efficiently leverage data from in vitro cell-based models to predict what occurs within whole-body systems after chemical insults. This study set out to test the hypothesis that in vitro high-throughput screening (HTS) data can more effectively predict in vivo biological responses when chemical disposition and toxicokinetic (TK) modeling are employed. In vitro HTS data from the Tox21 consortium were analyzed in concert with chemical disposition modeling to derive nominal, aqueous, and intracellular estimates of the concentrations eliciting 50% maximal activity. In vivo biological responses were captured using rat liver transcriptomic data from the DrugMatrix and TG-GATEs databases and evaluated for pathway enrichment. In vivo dosing data were translated to equivalent body concentrations using high-throughput toxicokinetic (HTTK) modeling. Random forest models were then trained and tested to predict in vivo pathway-level activity across 221 chemicals using in vitro bioactivity data and physicochemical properties as predictor variables, incorporating methods to address the imbalance in the training data resulting from the high prevalence of inactive chemicals. Model performance was quantified using the area under the receiver operating characteristic curve (AUC-ROC) and compared across pathways for different combinations of predictor variables. All models that included toxicokinetics outperformed those that excluded it. Biological interpretation of the model features revealed that, rather than a direct mapping of in vitro assays to in vivo pathways, unexpected combinations of multiple in vitro assays predicted in vivo pathway-level activities. To demonstrate the utility of these findings, the highest-performing model was leveraged to make new predictions of in vivo biological responses across all biological pathways for the remaining chemicals tested in Tox21 with adequate data coverage (n = 6617). These results demonstrate that, when chemical disposition and toxicokinetics are carefully considered, in vitro HTS data can be used to effectively predict in vivo biological responses to chemicals.
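A minimal sketch of the modeling step, assuming scikit-learn and using class_weight="balanced" as one simple stand-in for the study's imbalance-handling methods; the features and labels are randomly generated placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: in vitro bioactivity values plus
# physicochemical properties; y = in vivo pathway enrichment call (mostly 0s)
rng = np.random.default_rng(0)
X = rng.normal(size=(221, 40))
y = (rng.random(221) < 0.1).astype(int)  # imbalanced, ~10% active

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights classes inversely to their frequency;
# the study's exact rebalancing method may differ.
rf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                            random_state=0).fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```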
ABSTRACT
INTRODUCTION: Toxicity data are unavailable for many thousands of chemicals in commerce and the environment. Therefore, risk assessors need to rapidly screen these chemicals for potential risk to public health. High-throughput screening (HTS) for in vitro bioactivity, when used with high-throughput toxicokinetic (HTTK) data and models, allows characterization of these thousands of chemicals. AREAS COVERED: This review covers generic physiologically based toxicokinetic (PBTK) models and high-throughput PBTK modeling for in vitro-in vivo extrapolation (IVIVE) of HTS data. We focus on 'httk', a public, open-source set of computational modeling tools and in vitro toxicokinetic (TK) data. EXPERT OPINION: HTTK benefits chemical risk assessors with its ability to support rapid chemical screening/prioritization, perform IVIVE, and provide provisional TK modeling for large numbers of chemicals using only limited chemical-specific data. Although generic TK model design can increase prediction uncertainty, these models provide offsetting benefits by increasing model implementation accuracy. Also, public distribution of the models and data enhances reproducibility. For the httk package, the modular and open-source design can enable the tool to be used and continuously improved by a broad user community in support of the critical need for high-throughput chemical prioritization and rapid dose estimation to facilitate rapid hazard assessments.
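The reverse-dosimetry calculation at the heart of IVIVE can be summarized in one line: assuming the steady-state plasma concentration scales linearly with dose, the external dose equivalent to an in vitro bioactive concentration is their ratio. A sketch with hypothetical numbers:

```python
def oral_equivalent_dose(ac50_um, css_um_per_mgkgday):
    """Reverse dosimetry: convert an in vitro bioactive concentration (uM)
    into the external dose (mg/kg/day) producing that steady-state plasma
    concentration, assuming Css scales linearly with dose rate."""
    return ac50_um / css_um_per_mgkgday

# Hypothetical: AC50 of 3 uM; upper-percentile Css of 1.5 uM per 1 mg/kg/day
print(oral_equivalent_dose(3.0, 1.5), "mg/kg/day")
```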
MeSH terms
High-Throughput Screening Assays/methods; Models, Biological; Toxicokinetics; Animals; Computer Simulation; Humans; Reproducibility of Results; Risk Assessment/methods
ABSTRACT
Regulatory agencies have derived noncancer toxicity values for 2,3,7,8-tetrachlorodibenzo-p-dioxin based on reduced sperm counts, relying on single studies from a large body of evidence. Techniques such as meta-regression allow greater use of the available data while simultaneously providing important information regarding the uncertainty associated with the underlying evidence base when conducting risk assessments. The objective herein was to apply systematic review methods and meta-regression to characterize the dose-response relationship between gestational exposure and epididymal sperm count. Twenty-three publications (20 animal studies comprising 29 separate rat experimental data sets, and 3 epidemiology studies) met the inclusion criteria. A risk of bias evaluation was performed to critically appraise study validity. Low to very low confidence precluded use of the available epidemiological data as candidate studies for dose-response modeling, owing to inconsistencies across the evidence base, high risk of bias, and a general lack of biological coherence, including lack of clinical relevance and dose-response concordance. Experimental animal studies, which were found to have higher confidence following the structured assessment of confidence (e.g., controlled exposure, biological consistency), were used as the basis of a meta-regression. Multiple models were fit; points of departure were identified and converted to human equivalent doses. The resulting reference dose estimates ranged from approximately 4 to 70 pg/kg/day, depending on the model, benchmark response level, and study validity integration approach. This range of reference doses can be used either qualitatively or quantitatively to enhance understanding of human health risk estimates for dioxin-like compounds.
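Converting an animal point of departure to a human equivalent dose is often done, as a default, by body-weight^(3/4) allometric scaling; the study may instead have used chemical-specific kinetics, so the sketch below is only a generic illustration:

```python
def human_equivalent_dose(animal_dose_mgkgday, bw_animal_kg, bw_human_kg=70.0):
    """Body-weight^(3/4) allometric scaling, a common regulatory default for
    cross-species dose conversion; equivalent to multiplying by
    (BW_animal / BW_human) ** 0.25. A chemical-specific kinetic model, as
    typically used for dioxins, would replace this."""
    return animal_dose_mgkgday * (bw_animal_kg / bw_human_kg) ** 0.25

print(human_equivalent_dose(1.0e-6, bw_animal_kg=0.25))  # hypothetical rat POD
```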
MeSH terms
Dioxins; Polychlorinated Dibenzodioxins; Animals; Male; Rats; Benchmarking; Dose-Response Relationship, Drug; Epididymis; Polychlorinated Dibenzodioxins/toxicity; Spermatozoa
ABSTRACT
High-throughput and computational tools provide a new opportunity to calculate the combined bioactivity of exposure to diverse chemicals acting through a common mechanism. We used high-throughput in vitro bioactivity data and exposure predictions from the U.S. EPA's Toxicity Forecaster (ToxCast) and Exposure Forecaster (ExpoCast) to estimate the combined estrogen receptor (ER) agonist activity of non-pharmaceutical chemical exposures for the general U.S. population. High-throughput toxicokinetic (HTTK) data provide conversion factors that relate bioactive concentrations measured in vitro (µM) to predicted population geometric mean exposure rates (mg/kg/day). These data were available for 22 chemicals with ER agonist activity and were estimated for other ER-bioactive chemicals based on the geometric mean of HTTK values across chemicals. For each chemical, ER bioactivity across ToxCast assays was compared to the predicted population geometric mean exposure at different levels of in vitro potency and model certainty. Dose additivity was assumed in calculating a Combined Exposure-Bioactivity Index (CEBI), the sum of exposure/bioactivity ratios. Combined estrogen bioactivity was also calculated in terms of the percent maximum bioactivity of chemical mixtures in human plasma using a concentration-addition model. Estimated CEBIs vary greatly depending on the assumptions used for exposure and bioactivity. In general, CEBI values were <1 when using the median of the estimated general-population chemical intake rates, whereas CEBI values were ≥1 when using the upper 95th confidence bound on those same intake rates for all chemicals. Concentration-addition model predictions of mixture bioactivity yield comparable results. Based on current in vitro bioactivity data, HTTK methods, and exposure models, combined exposure scenarios sufficient to influence estrogen bioactivity in the general population cannot be ruled out. Future improvements in screening methods and computational models could reduce uncertainty and better inform the potential combined effects of estrogenic chemicals.
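The CEBI itself is simple arithmetic: under dose additivity, it is the sum over chemicals of exposure/bioactivity ratios, with values ≥1 flagging combinations that may reach bioactive levels. A sketch with hypothetical inputs:

```python
def cebi(exposure_mgkgday, bioactivity_mgkgday):
    """Combined Exposure-Bioactivity Index under dose additivity:
    the sum over chemicals of exposure/bioactivity ratios."""
    return sum(exposure_mgkgday[c] / bioactivity_mgkgday[c]
               for c in exposure_mgkgday)

# Hypothetical inputs for three ER-active chemicals
exposure = {"chem1": 1e-5, "chem2": 3e-6, "chem3": 2e-4}        # mg/kg/day
bioactive_dose = {"chem1": 0.02, "chem2": 0.005, "chem3": 0.5}  # mg/kg/day
print(cebi(exposure, bioactive_dose))
```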
MeSH terms
Endocrine Disruptors; Endocrine System; Environmental Pollutants; High-Throughput Screening Assays; Bioassay; Endocrine Disruptors/toxicity; Endocrine System/drug effects; Environmental Pollutants/toxicity; Estrogens; Humans
ABSTRACT
High(er) throughput toxicokinetics (HTTK) encompasses in vitro measures of key determinants of chemical toxicokinetics and reverse dosimetry approaches for in vitro-in vivo extrapolation (IVIVE). With HTTK, the bioactivity identified by any in vitro assay can be converted to human equivalent doses and compared with chemical intake estimates. Biological variability in HTTK has been previously considered, but the relative impact of measurement uncertainty has not. Bayesian methods were developed to provide chemical-specific uncertainty estimates for 2 in vitro toxicokinetic parameters: unbound fraction in plasma (fup) and intrinsic hepatic clearance (Clint). New experimental measurements of fup and Clint are reported for 418 and 467 chemicals, respectively. These data raise the HTTK chemical coverage of the ToxCast Phase I and II libraries to 57%. Although the standard protocol for Clint was followed, a revised protocol for fup measured unbound chemical at 10%, 30%, and 100% of physiologic plasma protein concentrations, allowing estimation of protein binding affinity. This protocol reduced the occurrence of chemicals with fup too low to measure from 44% to 9.1%. Uncertainty in fup was also reduced, with the median coefficient of variation dropping from 0.4 to 0.1. Monte Carlo simulation was used to propagate both measurement uncertainty and biological variability into IVIVE. The uncertainty propagation techniques used here also allow incorporation of other sources of uncertainty such as in silico predictors of HTTK parameters. These methods have the potential to inform risk-based prioritization based on the relationship between in vitro bioactivities and exposures.
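A minimal sketch of the uncertainty-propagation idea: sample fup and Clint from measurement-uncertainty distributions (here lognormal, with sigma approximated by the coefficient of variation, valid for small CVs), push each draw through a simple Css model, and read off percentiles of the resulting dose estimate. All distributional choices and constants are illustrative, and this complements the population-variability sketch above:

```python
import numpy as np

def sample_ivive_dose(ac50_um, fup_mean, fup_cv, clint_mean, clint_cv,
                      n=10000, seed=3):
    """Propagate measurement uncertainty in fup and Clint through a simple
    Css calculation into the dose equivalent to an in vitro AC50."""
    rng = np.random.default_rng(seed)
    fup = np.minimum(fup_mean * rng.lognormal(0, fup_cv, n), 1.0)
    clint = clint_mean * rng.lognormal(0, clint_cv, n)   # L/h/kg, scaled
    gfr, qh = 0.11, 1.24                 # L/h/kg, held fixed for illustration
    cl = gfr * fup + qh * fup * clint / (qh + fup * clint)
    css_mgl = (1.0 / 24.0) / cl          # mg/L per 1 mg/kg/day
    mw = 300.0                           # g/mol, assumed molecular weight
    css_um = css_mgl * 1000.0 / mw       # uM per 1 mg/kg/day
    return np.percentile(ac50_um / css_um, [5, 50, 95])  # mg/kg/day

print(sample_ivive_dose(3.0, fup_mean=0.05, fup_cv=0.1,
                        clint_mean=1.5, clint_cv=0.3))
```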
MeSH terms
Hazardous Substances/toxicity; Liver/drug effects; Models, Biological; Toxicokinetics; Bayes Theorem; Computer Simulation; Hazardous Substances/blood; Hazardous Substances/pharmacokinetics; High-Throughput Screening Assays; Humans; Liver/metabolism; Metabolic Clearance Rate; Monte Carlo Method; Protein Binding; Risk Assessment; Uncertainty
ABSTRACT
Carcinogenesis of the small intestine is rare in humans and rodents. Oral exposure to hexavalent chromium (Cr(VI)) and the fungicides captan and folpet induces intestinal carcinogenesis in mice. Previously (Toxicol Pathol. 330:48-52), we showed that B6C3F1 mice exposed to carcinogenic concentrations of Cr(VI), captan, or folpet for 28 days exhibited similar histopathological responses, including villus enterocyte cytotoxicity and regenerative crypt epithelial hyperplasia. Herein, we analyze transcriptomic responses in formalin-fixed, paraffin-embedded duodenal sections from the aforementioned study. TempO-Seq technology and the S1500+ gene set were used to measure transcriptional responses. Transcriptional responses were similar among all 3 agents; gene-level comparison identified 126/546 (23%) differentially expressed genes altered in the same direction, with a total of 25 upregulated pathways. These changes were related to cellular metabolism, stress, inflammatory/immune cell response, and cell proliferation, including upregulation of hypoxia-inducible factor 1 (HIF-1) and activator protein 1 (AP1) signaling pathways, which have also been linked to intestinal injury and angiogenesis/carcinogenesis. The similar molecular-, cellular-, and tissue-level changes induced by these 3 carcinogens can inform the development of an adverse outcome pathway for intestinal cancer.
MeSH terms
Captan/toxicity; Carcinogens/toxicity; Chromium/toxicity; Intestine, Small/drug effects; Phthalimides/toxicity; Animals; Gene Expression Profiling; Hypoxia-Inducible Factor 1, alpha Subunit/physiology; Intestine, Small/metabolism; Intestine, Small/pathology; Mice
ABSTRACT
Increasing interest in characterizing risk assessment uncertainty is highlighted by recent recommendations from the National Academy of Sciences. In this paper we demonstrate the utility of applying qualitative and quantitative methods for assessing uncertainty to enhance risk-based decision-making for 2,3,7,8-tetrachlorodibenzo-p-dioxin. The approach involved deconstructing the reference dose (RfD) via evaluation of the different assumptions, options, models, and methods associated with derivation of the value, culminating in the development of a plausible range of potential values based on these areas of uncertainty. The results demonstrate that overall RfD uncertainty was high, based on limitations in the study selection process (e.g., compliance with inclusion criteria related to internal validity of the co-critical studies, consistency with other studies), in external validity (e.g., generalizing findings from acute, high-dose exposure scenarios to the general population), and in selection and classification of the point of departure using data from the individual studies (e.g., lack of statistical and clinical significance). Building on sensitivity analyses conducted by the US Environmental Protection Agency in 2012, the resulting estimates of RfD values that account for these uncertainties ranged from ~1.5 to 179 pg/kg/day. It is anticipated that the range of RfDs presented herein, along with the characterization of uncertainties, will improve risk assessments of dioxins and provide important information to risk managers, because reliance on a single toxicity value limits the information available for decision making and gives a false sense of precision and accuracy.
MeSH terms
Benchmarking/standards; Dose-Response Relationship, Drug; Environmental Pollutants/standards; No-Observed-Adverse-Effect Level; Polychlorinated Dibenzodioxins/standards; Polychlorinated Dibenzodioxins/toxicity; Risk Assessment/methods; Humans; Reference Values; United States
ABSTRACT
Ammonium 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)-propanoate, also known as GenX, is a processing aid used in the manufacture of fluoropolymers. GenX is one of several chemistries developed as an alternative to long-chain poly-fluoroalkyl substances, which tend to have long clearance half-lives and are environmentally persistent. Unlike those substances, GenX is cleared more rapidly, but it has been detected in US and international water sources. There are currently no federal drinking water standards for GenX in the USA; therefore, we developed a non-cancer oral reference dose (RfD) for GenX based on available repeated-dose studies. Review of the available data indicates that GenX is unlikely to be genotoxic. A combination of traditional frequentist benchmark dose models and Bayesian benchmark dose models was used to derive relevant points of departure from mammalian toxicity studies. In addition, deterministic and probabilistic RfD values were developed using available tools and regulatory guidance. The two approaches resulted in a narrow range of RfD values for liver lesions observed in a 2-year bioassay in rats (0.01-0.02 mg/kg/day), with the probabilistic approach yielding the lower (i.e., more conservative) RfD. The probabilistic RfD of 0.01 mg/kg/day corresponds to a maximum contaminant level goal of 70 ppb. It is anticipated that these values, along with the hazard identification and dose-response modeling described herein, will be informative for risk assessors and regulators interested in setting health-protective drinking water guideline values for GenX.
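The final drinking-water conversion is simple arithmetic. Assuming common EPA defaults (80 kg body weight, 2.5 L/day drinking-water intake, 20% relative source contribution, which are our assumptions rather than values stated in the abstract), the reported probabilistic RfD reproduces the ~70 ppb value after rounding:

```python
def mclg_ppb(rfd_mgkgday, bw_kg=80.0, dwi_lday=2.5, rsc=0.2):
    """Drinking-water guideline arithmetic: allocate an RfD to drinking water
    using body weight, drinking-water intake, and relative source
    contribution. Defaults are common EPA values, assumed for illustration."""
    mg_per_l = rfd_mgkgday * bw_kg / dwi_lday * rsc
    return mg_per_l * 1000.0  # mg/L -> ug/L (ppb)

print(mclg_ppb(0.01))  # ~64 ppb, consistent with the reported 70 ppb after rounding
```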
MeSH terms
Benchmarking; Drinking Water/standards; Fluorocarbons/toxicity; No-Observed-Adverse-Effect Level; Propionates/toxicity; Reference Standards; Water Pollutants, Chemical/toxicity; Animals; Humans; Lethal Dose 50; Models, Animal; Rats; United States
ABSTRACT
Hexavalent chromium [Cr(VI)] is known to cause lung cancer in workers of certain industries, but an association with stomach cancer is uncertain and widely debated. A systematic review and meta-analyses were conducted to assess the risk of stomach cancer mortality/morbidity in humans and experimental animals exposed to Cr(VI). In accordance with the protocol (PROSPERO #CRD4201605162), searches in PubMed and Embase®, and reviews of secondary literature bibliographies, were used to identify eligible studies. Critical appraisal of internal validity and qualitative integration were carried out using the National Toxicology Program's Office of Health Assessment and Translation (OHAT) approach; meta-analyses were conducted on the occupational data (the only data suitable for quantitative assessment). Forty-seven publications (3 animal, 44 occupational, 0 non-occupational) met the eligibility criteria. Stomach cancer was observed in only one animal study, which was at high risk of bias; no stomach cancer was observed in the low risk-of-bias animal studies. Thus, confidence in this evidence base is high. Environmental epidemiology studies did not meet the eligibility criteria because exposure and outcome were not measured at the individual level. Meta-analyses of the human data yielded overall summary relative risks of 1.08 (95% CI: 0.96-1.21) including all studies and 1.03 (95% CI: 0.84-1.26) excluding the studies at highest risk of bias. Because most occupational studies carry high risk of bias in the confounding and exposure domains, the overall confidence in this evidence base is low to moderate. Combining the streams of evidence per the OHAT approach, Cr(VI) does not pose a stomach cancer hazard in humans.
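The pooled estimates reported are of the kind produced by inverse-variance meta-analysis of log relative risks; a fixed-effect sketch (the published analysis may have used random-effects models) with hypothetical study inputs:

```python
import numpy as np

def pooled_rr(rrs, cis):
    """Fixed-effect inverse-variance pooling of relative risks on the log
    scale; 95% CIs (lower, upper) are used to back out standard errors."""
    log_rr = np.log(rrs)
    se = (np.log([u for _, u in cis]) - np.log([l for l, _ in cis])) / (2 * 1.96)
    w = 1.0 / se**2                       # inverse-variance weights
    est = np.sum(w * log_rr) / np.sum(w)  # pooled log relative risk
    se_pool = np.sqrt(1.0 / np.sum(w))
    return (np.exp(est),
            np.exp(est - 1.96 * se_pool),
            np.exp(est + 1.96 * se_pool))

# Hypothetical study-level relative risks and 95% CIs, for illustration only
rrs = np.array([1.2, 0.9, 1.05, 1.3])
cis = [(0.8, 1.8), (0.6, 1.35), (0.85, 1.30), (0.7, 2.4)]
print(pooled_rr(rrs, cis))  # (pooled RR, lower bound, upper bound)
```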