Results 1 - 20 of 48
1.
Regul Toxicol Pharmacol ; 149: 105614, 2024 May.
Article in English | MEDLINE | ID: mdl-38574841

ABSTRACT

The United States Environmental Protection Agency (USEPA) uses the lethal dose 50% (LD50) value from in vivo rat acute oral toxicity studies for pesticide product label precautionary statements and environmental risk assessment (RA). The Collaborative Acute Toxicity Modeling Suite (CATMoS) is a quantitative structure-activity relationship (QSAR)-based in silico approach to predict rat acute oral toxicity that has the potential to reduce animal use when registering a new pesticide technical grade active ingredient (TGAI). This analysis compared LD50 values predicted by CATMoS to empirical values from in vivo studies for the TGAIs of 177 conventional pesticides. The accuracy and reliability of the model predictions were assessed relative to the empirical data in terms of USEPA acute oral toxicity categories and discrete LD50 values for each chemical. CATMoS was most reliable at placing pesticide TGAIs in acute toxicity categories III (>500-5000 mg/kg) and IV (>5000 mg/kg), with 88% categorical concordance for 165 chemicals with empirical in vivo LD50 values ≥ 500 mg/kg. When considering an LD50 for RA, CATMoS predictions of 2000 mg/kg and higher were found to agree with empirical values from limit tests (i.e., single, high-dose tests) or definitive results over 2000 mg/kg with few exceptions.
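The EPA category boundaries cited above lend themselves to a direct implementation. A minimal sketch (function names are illustrative; categories I and II use the standard EPA cut-offs of ≤50 and >50-500 mg/kg, which the abstract does not restate):

```python
def epa_acute_oral_category(ld50_mg_per_kg: float) -> str:
    """Map a rat acute oral LD50 (mg/kg) to a U.S. EPA toxicity category.

    The abstract cites categories III (>500-5000 mg/kg) and IV (>5000 mg/kg)
    explicitly; I and II follow the standard EPA scheme.
    """
    if ld50_mg_per_kg <= 50:
        return "I"
    if ld50_mg_per_kg <= 500:
        return "II"
    if ld50_mg_per_kg <= 5000:
        return "III"
    return "IV"

def categorical_concordance(pairs):
    """Fraction of (predicted, empirical) LD50 pairs placed in the same category."""
    matches = sum(
        epa_acute_oral_category(pred) == epa_acute_oral_category(obs)
        for pred, obs in pairs
    )
    return matches / len(pairs)
```

Categorical concordance as computed here is how a prediction of, say, 3,000 mg/kg counts as agreeing with an empirical value of 900 mg/kg: both land in category III.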


Subject(s)
Computer Simulation , Pesticides , Quantitative Structure-Activity Relationship , Toxicity Tests, Acute , United States Environmental Protection Agency , Animals , Risk Assessment , Pesticides/toxicity , Lethal Dose 50 , Rats , Administration, Oral , Toxicity Tests, Acute/methods , United States , Reproducibility of Results
2.
J Cheminform ; 16(1): 19, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378618

ABSTRACT

The rapid increase of publicly available chemical structures and associated experimental data presents a valuable opportunity to build robust QSAR models for applications in different fields. However, a common concern is the quality of both the chemical structure information and the associated experimental data. This is especially true when those data are collected from multiple sources, as chemical substance mappings can contain many duplicate structures and molecular inconsistencies. Such issues can impact the resulting molecular descriptors and their mappings to experimental data and, subsequently, the quality of the derived models in terms of accuracy, repeatability, and reliability. Herein we describe the development of an automated workflow to standardize chemical structures according to a set of standard rules and generate two- and/or three-dimensional "QSAR-ready" forms prior to the calculation of molecular descriptors. The workflow was designed in the KNIME environment and consists of three high-level steps. First, a structure encoding is read; then the resulting in-memory representation is cross-referenced with any existing identifiers for consistency; finally, the structure is standardized using a series of operations including desalting, stripping of stereochemistry (for two-dimensional structures), standardization of tautomers and nitro groups, valence correction, neutralization when possible, and removal of duplicates. This workflow was initially developed to support collaborative QSAR modeling projects to ensure consistency of the results from the different participants. It was then updated and generalized for other modeling applications, including modification of the "QSAR-ready" workflow to generate "MS-ready structures" to support the generation of substance mappings and searches for software applications related to non-targeted analysis mass spectrometry. Both QSAR-ready and MS-ready workflows are freely available in KNIME, via standalone versions on GitHub, and as Docker container resources for the scientific community. Scientific contribution: This work pioneers an automated workflow in KNIME that systematically standardizes chemical structures to ensure their readiness for QSAR modeling and broader scientific applications. By addressing data-quality concerns through desalting, stereochemistry stripping, and normalization, it optimizes the accuracy and reliability of molecular descriptors. The freely available resources in KNIME, on GitHub, and as Docker containers democratize access, benefiting collaborative research and advancing diverse modeling endeavors in chemistry and mass spectrometry.
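To illustrate the kind of standardization steps named above (desalting, stereochemistry stripping, duplicate removal), here is a deliberately naive string-level sketch. The actual KNIME workflow operates on molecular graphs with chemistry-aware rules; these helpers are placeholders for the idea, not the published implementation:

```python
def desalt(smiles: str) -> str:
    # Keep the largest fragment by string length: a crude proxy for true
    # desalting, which would pick the largest organic fragment by atom count.
    return max(smiles.split("."), key=len)

def strip_stereo(smiles: str) -> str:
    # Drop stereo atom/bond markers from the string; a real toolkit would
    # remove stereochemistry on the molecular graph instead.
    return smiles.replace("@", "").replace("/", "").replace("\\", "")

def qsar_ready(structures):
    """Standardize each structure, then remove duplicates of the standard form."""
    seen, out = set(), []
    for smi in structures:
        std = strip_stereo(desalt(smi))
        if std not in seen:
            seen.add(std)
            out.append(std)
    return out
```

Note how two inputs that differ only by a counter-ion or a stereo marker collapse to one QSAR-ready record, which is exactly the duplication problem the workflow addresses.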

3.
Front Pharmacol ; 13: 980747, 2022.
Article in English | MEDLINE | ID: mdl-36278238

ABSTRACT

Current computational technologies hold promise for prioritizing the testing of the thousands of chemicals in commerce. Here, a case study is presented demonstrating comparative risk-prioritization approaches based on the ratio of surrogate hazard and exposure data, called margins of exposure (MoEs). Exposures were estimated using results from the U.S. EPA's ExpoCast predictive model (SEEM3), and estimates of bioactivity were predicted using: 1) oral equivalent doses (OEDs) derived from the U.S. EPA's ToxCast high-throughput screening program together with in vitro to in vivo extrapolation, and 2) thresholds of toxicological concern (TTCs) determined using a structure-based decision tree in the Toxtree open-source software. To ground-truth these computational approaches, we compared the MoEs based on predicted noncancer TTC and OED values to those derived using the traditional method of deriving points of departure from no-observed-adverse-effect levels (NOAELs) from in vivo oral exposures in rodents. TTC-based MoEs were lower than NOAEL-based MoEs for 520 out of 522 (99.6%) compounds in this smaller overlapping dataset, but were relatively well correlated with them (r² = 0.59). TTC-based MoEs were also lower than OED-based MoEs for 590 (83.2%) of the 709 evaluated chemicals, indicating that TTCs may serve as a conservative surrogate in the absence of chemical-specific experimental data. The TTC-based MoE prioritization process was then applied to over 45,000 curated environmental chemical structures as a proof of concept for high-throughput prioritization using TTC-based MoEs. This study demonstrates the utility of exploiting existing computational methods at the pre-assessment phase of a tiered risk-based approach to quickly, and conservatively, prioritize thousands of untested chemicals for further study.
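The MoE ratio at the heart of this prioritization can be sketched in a few lines (hypothetical names; the hazard surrogate could be a TTC, OED, or NOAEL-derived POD, and the exposure a SEEM3 estimate, all in mg/kg/day):

```python
def margin_of_exposure(hazard_mg_kg_day, exposure_mg_kg_day):
    """MoE = hazard surrogate divided by predicted exposure; smaller
    margins indicate higher priority for further study."""
    return hazard_mg_kg_day / exposure_mg_kg_day

def prioritize(chemicals):
    """Rank (name, hazard, exposure) triples by ascending MoE."""
    ranked = sorted(
        (margin_of_exposure(hazard, exposure), name)
        for name, hazard, exposure in chemicals
    )
    return [name for _moe, name in ranked]
```

Sorting ascending puts the chemicals with the narrowest margins, the conservative priorities, at the top of the list.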

4.
Front Pharmacol ; 13: 864742, 2022.
Article in English | MEDLINE | ID: mdl-35496281

ABSTRACT

Regulatory toxicology testing has traditionally relied on in vivo methods to inform decision-making. However, scientific, practical, and ethical considerations have led to an increased interest in the use of in vitro and in silico methods to fill data gaps. While in vitro experiments have the advantage of rapid application across large chemical sets, interpretation of data coming from these non-animal methods can be challenging due to the mechanistic nature of many assays. In vitro to in vivo extrapolation (IVIVE) has emerged as a computational tool to help facilitate this task. Specifically, IVIVE uses physiologically based pharmacokinetic (PBPK) models to estimate tissue-level chemical concentrations based on various dosing parameters. This approach is used to estimate the administered dose needed to achieve in vitro bioactivity concentrations within the body. IVIVE results can be useful to inform on metrics such as margin of exposure or to prioritize potential chemicals of concern, but the PBPK models used in this approach have extensive data requirements. Thus, access to input parameters, as well as the technical requirements of applying and interpreting models, has limited the use of IVIVE as a routine part of in vitro testing. As interest in using non-animal methods for regulatory and research contexts continues to grow, our perspective is that access to computational support tools for PBPK modeling and IVIVE will be essential for facilitating broader application and acceptance of these techniques, as well as for encouraging the most scientifically sound interpretation of in vitro results. We highlight recent developments in two open-access computational support tools for PBPK modeling and IVIVE accessible via the Integrated Chemical Environment (https://ice.ntp.niehs.nih.gov/), demonstrate the types of insights these tools can provide, and discuss how these analyses may inform in vitro-based decision making.
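The reverse-dosimetry step described above reduces, in its simplest form, to a linear rescaling: if a PBPK model predicts the steady-state plasma concentration produced by a unit dose, the administered dose matching an in vitro bioactive concentration follows by proportionality. A sketch under a linear-kinetics assumption (the `css_per_unit_dose` value would come from a PBPK model such as those in the httk package; names are illustrative):

```python
def oral_equivalent_dose(ac50_uM, css_per_unit_dose_uM):
    """Administered dose (mg/kg/day) predicted to produce a steady-state
    plasma concentration equal to the in vitro AC50, assuming Css scales
    linearly with dose."""
    return ac50_uM / css_per_unit_dose_uM

def point_of_departure(ac50_values_uM, css_per_unit_dose_uM):
    """The most sensitive assay (lowest AC50) drives the IVIVE-based POD."""
    return oral_equivalent_dose(min(ac50_values_uM), css_per_unit_dose_uM)
```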

5.
Birth Defects Res ; 114(16): 1037-1055, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35532929

ABSTRACT

BACKGROUND: The developmental toxicity potential (dTP) concentration from the devTOX quickPredict (devTOXqP) assay, a metabolomics-based human induced pluripotent stem cell assay, predicts a chemical's developmental toxicity potency. Here, in vitro to in vivo extrapolation (IVIVE) approaches were applied to address whether the devTOXqP assay could quantitatively predict in vivo developmental toxicity lowest effect levels (LELs) for the prototypical teratogen valproic acid (VPA) and a group of structural analogues. METHODS: VPA and a series of structural analogues were tested with the devTOXqP assay to determine dTP concentrations, and we estimated the equivalent administered doses (EADs) that would lead to plasma concentrations equivalent to the in vitro dTP concentrations. The EADs were compared to the LELs in rat developmental toxicity studies, human clinical doses, and EADs reported using other in vitro assays. To evaluate the impact of different pharmacokinetic (PK) models on IVIVE outcomes, we compared EADs predicted using various open-source and commercially available PK and physiologically based PK (PBPK) models. To evaluate the effect of in vitro kinetics, an equilibrium distribution model was applied to translate dTP concentrations to free medium concentrations before subsequent IVIVE analyses. RESULTS: The EAD estimates for the VPA analogues based on different PK/PBPK models were quantitatively similar to in vivo data from both rats and humans, where available, and the derived rank order of the chemicals was consistent with observed in vivo developmental toxicity. Different models were identified that provided accurate predictions for rat prenatal LELs and conservative estimates of human safe exposure. The impact of in vitro kinetics on EAD estimates was chemical-dependent. EADs from this study were within the range of doses predicted from other in vitro and model-organism data.
CONCLUSIONS: This study highlights the importance of pharmacokinetic considerations when using in vitro assays and demonstrates the utility of the devTOXqP human stem cell-based platform to quantitatively assess a chemical's developmental toxicity potency.
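The equilibrium distribution adjustment mentioned in the METHODS can be sketched as a single proportionality: only the unbound fraction of the nominal test concentration is assumed available to drive effects. (The fraction-unbound value would come from a distribution model accounting for binding to medium protein, lipid, and plastic; names here are illustrative.)

```python
def free_medium_concentration(nominal_uM, f_unbound_medium):
    """Free concentration in culture medium under an equilibrium
    distribution assumption; f_unbound_medium is the fraction of the
    chemical not bound to medium constituents."""
    return nominal_uM * f_unbound_medium
```

The free concentration, rather than the nominal dTP value, is then carried into the IVIVE step, which is why the impact on EADs is chemical-dependent: it tracks each chemical's binding.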


Subject(s)
Induced Pluripotent Stem Cells , Valproic Acid , Animals , Female , Humans , Pregnancy , Rats , Teratogens/toxicity , Valproic Acid/toxicity
6.
Toxicol Sci ; 188(1): 34-47, 2022 06 28.
Article in English | MEDLINE | ID: mdl-35426934

ABSTRACT

Regulatory agencies rely upon rodent in vivo acute oral toxicity data to determine hazard categorization, require appropriate precautionary labeling, and perform quantitative risk assessments. As the field of toxicology moves toward animal-free new approach methodologies (NAMs), there is a pressing need to develop a reliable, robust reference data set to characterize the reproducibility and inherent variability in the in vivo acute oral toxicity test method, which would serve to contextualize results and set expectations regarding NAM performance. Such a data set is also needed for training and evaluating computational models. To meet these needs, rat acute oral LD50 data from multiple databases were compiled, curated, and analyzed to characterize the variability and reproducibility of results across a set of up to 2,441 chemicals with multiple independent study records. Conditional probability analyses reveal that replicate studies result in the same hazard categorization at only 60% likelihood on average. Although we did not have sufficient study metadata to evaluate the impact of specific protocol components (e.g., strain, age, or sex of rat; feed used; treatment vehicle), studies were assumed to follow standard test guidelines. We investigated, but could not attribute, various chemical properties as sources of the variability (i.e., chemical structure, physicochemical properties, functional use). Thus, we conclude that inherent biological or protocol variability likely underlies the variance in the results. Based on the observed variability, we were able to quantify a margin of uncertainty of ±0.24 log10 (mg/kg) associated with discrete in vivo rat acute oral LD50 values.
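The conditional probability analysis described here can be sketched as pooling all replicate pairs and asking how often both members fall in the same hazard category (hypothetical names; cut-offs follow the standard EPA acute oral scheme):

```python
from itertools import combinations

def category(ld50, cutoffs=(50, 500, 5000)):
    """Index of the EPA hazard category (0-3, i.e., I-IV) for an LD50 in mg/kg."""
    return sum(ld50 > c for c in cutoffs)

def replicate_concordance(studies_by_chemical):
    """Pooled probability that two independent studies of the same chemical
    receive the same hazard categorization."""
    same = total = 0
    for ld50s in studies_by_chemical.values():
        for a, b in combinations(ld50s, 2):
            total += 1
            same += category(a) == category(b)
    return same / total
```

Run over the curated multi-study records, a statistic like this is what yields the ~60% concordance figure reported above.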


Subject(s)
Reproducibility of Results , Animals , Databases, Factual , Probability , Rats , Risk Assessment/methods , Toxicity Tests, Acute/methods
9.
Environ Health Perspect ; 129(4): 47013, 2021 04.
Article in English | MEDLINE | ID: mdl-33929906

ABSTRACT

BACKGROUND: Humans are exposed to tens of thousands of chemical substances that need to be assessed for their potential toxicity. Acute systemic toxicity testing serves as the basis for regulatory hazard classification, labeling, and risk management. However, it is cost- and time-prohibitive to evaluate all new and existing chemicals using traditional rodent acute toxicity tests. In silico models built using existing data facilitate rapid acute toxicity predictions without using animals. OBJECTIVES: The U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Acute Toxicity Workgroup organized an international collaboration to develop in silico models for predicting acute oral toxicity based on five different endpoints: median lethal dose (LD50) point estimates, U.S. Environmental Protection Agency hazard categories (four), Globally Harmonized System of Classification and Labelling hazard categories (five), very toxic chemicals (LD50 ≤ 50 mg/kg), and nontoxic chemicals (LD50 > 2,000 mg/kg). METHODS: An acute oral toxicity data inventory for 11,992 chemicals was compiled, split into training and evaluation sets, and made available to 35 participating international research groups that submitted a total of 139 predictive models. Predictions that fell within the applicability domains of the submitted models were evaluated using external validation sets. These were then combined into consensus models to leverage the strengths of the individual approaches. RESULTS: The resulting consensus predictions, which leverage the collective strengths of each individual model, form the Collaborative Acute Toxicity Modeling Suite (CATMoS). CATMoS demonstrated high performance in terms of accuracy and robustness when compared with in vivo results. DISCUSSION: CATMoS is being evaluated by regulatory agencies for its utility and applicability as a potential replacement for in vivo rat acute oral toxicity studies.
CATMoS predictions for more than 800,000 chemicals have been made available via the National Toxicology Program's Integrated Chemical Environment tools and data sets (ice.ntp.niehs.nih.gov). The models are also implemented in a free, standalone, open-source tool, OPERA, which allows predictions of new and untested chemicals to be made. https://doi.org/10.1289/EHP8495.
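The consensus step, combining individual model predictions, can be sketched as averaging in log space over the models whose applicability domain covers the chemical. This is a simplification for illustration; the actual CATMoS consensus weighting is more involved:

```python
import math

def consensus_ld50(predictions_mg_kg):
    """Geometric-mean consensus over in-domain predictions; None marks a
    chemical outside a model's applicability domain and is excluded."""
    in_domain = [p for p in predictions_mg_kg if p is not None]
    if not in_domain:
        return None  # no model can make a reliable prediction
    mean_log = sum(math.log10(p) for p in in_domain) / len(in_domain)
    return 10 ** mean_log
```

Averaging log10(LD50) rather than raw LD50 keeps a single very large prediction from dominating the consensus.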


Subject(s)
Government Agencies , Animals , Computer Simulation , Rats , Toxicity Tests, Acute , United States , United States Environmental Protection Agency
10.
ALTEX ; 38(2): 327-335, 2021.
Article in English | MEDLINE | ID: mdl-33511999

ABSTRACT

Efforts are underway to develop and implement nonanimal approaches which can characterize acute systemic lethality. A workshop was held in October 2019 to discuss developments in the prediction of acute oral lethality for chemicals and mixtures, as well as progress and needs in the understanding and modeling of mechanisms of acute lethality. During the workshop, each speaker led the group through a series of charge questions to determine clear next steps to progress the aims of the workshop. Participants concluded that a variety of approaches will be needed and should be applied in a tiered fashion. Non-testing approaches, including waiving tests, computational models for single chemicals, and calculating the acute lethality of mixtures based on the LD50 values of mixture components, could be used for some assessments now, especially in the very toxic or non-toxic classification ranges. Agencies can develop policies indicating contexts under which mathematical approaches for mixtures assessment are acceptable; to expand applicability, poorly predicted mixtures should be examined to understand discrepancies and adapt the approach. Transparency and an understanding of the variability of in vivo approaches are crucial to facilitate regulatory application of new approaches. In a replacement strategy, mechanistically based in vitro or in silico models will be needed to support non-testing approaches especially for highly acutely toxic chemicals. The workshop discussed approaches that can be used in the immediate or near term for some applications and identified remaining actions needed to implement approaches to fully replace the use of animals for acute systemic toxicity testing.
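One of the non-testing approaches named above, calculating the acute lethality of a mixture from the LD50 values of its components, is commonly done with the GHS additivity formula, 100 / LD50_mix = Σ (c_i / LD50_i), where c_i is each component's weight percent. A sketch:

```python
def mixture_ld50(components):
    """Estimate a mixture LD50 (mg/kg) from (weight_percent, LD50) pairs
    using the GHS additivity formula."""
    return 100.0 / sum(pct / ld50 for pct, ld50 in components)
```

For example, a 50/50 blend of components with LD50s of 100 and 1,000 mg/kg is dominated by the more toxic component, as the reciprocal sum would show.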


Subject(s)
Toxicity Tests, Acute , Animals , Computer Simulation , Humans
11.
Regul Toxicol Pharmacol ; 117: 104764, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32798611

ABSTRACT

Screening certain environmental chemicals for their ability to interact with endocrine targets, including the androgen receptor (AR), is an important global concern. We previously developed a model using a battery of eleven in vitro AR assays to predict in vivo AR activity. Here we describe a revised mathematical modeling approach that also incorporates data from newly available assays and demonstrate that subsets of assays can provide close to the same level of predictivity. These subset models are evaluated against the full model using 1820 chemicals, as well as in vitro and in vivo reference chemicals from the literature. Agonist batteries of as few as six assays and antagonist batteries of as few as five assays can yield balanced accuracies of 95% or better relative to the full model. Balanced accuracy for predicting reference chemicals is 100%. An approach is outlined for researchers to develop their own subset batteries to accurately detect AR activity using assays that map to the pathway of key molecular and cellular events involved in chemical-mediated AR activation and transcriptional activity. This work indicates in vitro bioactivity and in silico predictions that map to the AR pathway could be used in an integrated approach to testing and assessment for identifying chemicals that interact directly with the mammalian AR.
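The balanced-accuracy metric used above to compare subset batteries against the full model is the mean of sensitivity and specificity; a minimal sketch for binary activity calls (names are illustrative):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity for binary calls (1 = AR-active),
    so that a rare class is not swamped by the common one."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    pos = sum(1 for t in y_true if t)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)
```

Because actives are typically rare in large screening sets, balanced accuracy is a fairer yardstick than plain accuracy for judging a reduced assay battery.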


Subject(s)
Androgen Receptor Antagonists/toxicity , Androgens/toxicity , Hazardous Substances/toxicity , Models, Theoretical , Receptors, Androgen , Androgen Receptor Antagonists/metabolism , Androgens/metabolism , Animals , Environmental Exposure/prevention & control , Environmental Exposure/statistics & numerical data , Hazardous Substances/metabolism , High-Throughput Screening Assays/methods , Humans , Receptors, Androgen/metabolism
12.
Toxicol In Vitro ; 67: 104916, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32553663

ABSTRACT

Moving toward species-relevant chemical safety assessments and away from animal testing requires access to reliable data to develop and build confidence in new approaches. The Integrated Chemical Environment (ICE) provides tools and curated data centered around chemical safety assessment. This article describes updates to ICE, including improved accessibility and interpretability of in vitro data via mechanistic target mapping and enhanced interactive tools for in vitro to in vivo extrapolation (IVIVE). Mapping of in vitro assay targets to toxicity endpoints of regulatory importance uses literature-based mode-of-action information and controlled terminology from existing knowledge organization systems to support data interoperability with external resources. The most recent ICE update includes Tox21 high-throughput screening data curated using analytical chemistry data and assay-specific parameters to eliminate potential artifacts or unreliable activity. Also included are physicochemical/ADME parameters for over 800,000 chemicals predicted by quantitative structure-activity relationship models. These parameters are used by the new ICE IVIVE tool in combination with the U.S. Environmental Protection Agency's httk R package to estimate in vivo exposures corresponding to in vitro bioactivity concentrations from stored or user-defined assay data. These new ICE features allow users to explore the applications of an expanded data space and facilitate building confidence in non-animal approaches.


Subject(s)
Chemical Safety , Risk Assessment , Animal Testing Alternatives , Animals , Databases, Factual , High-Throughput Screening Assays , Humans , Toxicity Tests
13.
Nucleic Acids Res ; 48(W1): W586-W590, 2020 07 02.
Article in English | MEDLINE | ID: mdl-32421835

ABSTRACT

High-throughput screening (HTS) research programs for drug development or chemical hazard assessment are designed to screen thousands of molecules across hundreds of biological targets or pathways. Most HTS platforms use fluorescence and luminescence technologies, which represent more than 70% of the assays in the US Tox21 research consortium. These technologies are subject to interferent signals largely explained by chemicals interacting with the light spectrum. This phenomenon results in up to 5-10% false positive results, depending on the chemical library used. Here, we present the InterPred webserver (version 1.0), a platform to predict such interfering chemicals based on the first large-scale chemical screening effort to directly characterize chemical-assay interference, using assays in the Tox21 portfolio specifically designed to measure autofluorescence and luciferase inhibition. InterPred combines 17 quantitative structure-activity relationship (QSAR) models built using optimized machine learning techniques and allows users to predict the probability that a new chemical will interfere with different combinations of cellular and technology conditions. InterPred models have been applied to the entire Distributed Structure-Searchable Toxicity (DSSTox) database (∼800,000 chemicals). The InterPred webserver is available at https://sandbox.ntp.niehs.nih.gov/interferences/.


Subject(s)
High-Throughput Screening Assays , Software , Artifacts , Fluorescence , Internet , Machine Learning , Pharmaceutical Preparations/chemistry , Quantitative Structure-Activity Relationship , Workflow
14.
Sci Rep ; 10(1): 3986, 2020 03 04.
Article in English | MEDLINE | ID: mdl-32132587

ABSTRACT

The U.S. federal consortium on toxicology in the 21st century (Tox21) produces quantitative, high-throughput screening (HTS) data on thousands of chemicals across a wide range of assays covering critical biological targets and cellular pathways. Many of these assays, and those used in other in vitro screening programs, rely on luciferase and fluorescence-based readouts that can be susceptible to signal interference by certain chemical structures resulting in false positive outcomes. Included in the Tox21 portfolio are assays specifically designed to measure interference in the form of luciferase inhibition and autofluorescence via multiple wavelengths (red, blue, and green) and under various conditions (cell-free and cell-based, two cell types). Out of 8,305 chemicals tested in the Tox21 interference assays, percent actives ranged from 0.5% (red autofluorescence) to 9.9% (luciferase inhibition). Self-organizing maps and hierarchical clustering were used to relate chemical structural clusters to interference activity profiles. Multiple machine learning algorithms were applied to predict assay interference based on molecular descriptors and chemical properties. The best performing predictive models (accuracies of ~80%) have been included in a web-based tool called InterPred that will allow users to predict the likelihood of assay interference for any new chemical structure and thus increase confidence in HTS data by decreasing false positive testing results.
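In practice, interference predictions like these are used to discount suspect HTS actives before downstream analysis; a sketch of that filtering step (function names and the 0.5 cut-off are illustrative, not from the paper):

```python
def filtered_hits(hits, interference_prob, threshold=0.5):
    """Drop HTS actives whose predicted probability of assay interference
    (e.g., luciferase inhibition or autofluorescence) exceeds the threshold;
    chemicals with no prediction are kept."""
    return [c for c in hits if interference_prob.get(c, 0.0) <= threshold]
```

Removing likely interferents in this way is how an interference predictor "increases confidence in HTS data by decreasing false positive testing results."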


Subject(s)
Databases, Chemical , High-Throughput Screening Assays , Toxicity Tests , Cluster Analysis , Internet , Quantitative Structure-Activity Relationship
15.
Environ Health Perspect ; 128(2): 27002, 2020 02.
Article in English | MEDLINE | ID: mdl-32074470

ABSTRACT

BACKGROUND: Endocrine disrupting chemicals (EDCs) are xenobiotics that mimic the interaction of natural hormones and alter synthesis, transport, or metabolic pathways. The prospect of EDCs causing adverse health effects in humans and wildlife has led to the development of scientific and regulatory approaches for evaluating bioactivity. This need is being addressed using high-throughput screening (HTS) in vitro approaches and computational modeling. OBJECTIVES: In support of the Endocrine Disruptor Screening Program, the U.S. Environmental Protection Agency (EPA) led two worldwide consortiums to virtually screen chemicals for their potential estrogenic and androgenic activities. Here, we describe the Collaborative Modeling Project for Androgen Receptor Activity (CoMPARA) efforts, which follows the steps of the Collaborative Estrogen Receptor Activity Prediction Project (CERAPP). METHODS: The CoMPARA list of screened chemicals built on CERAPP's list of 32,464 chemicals to include additional chemicals of interest, as well as simulated ToxCast™ metabolites, totaling 55,450 chemical structures. Computational toxicology scientists from 25 international groups contributed 91 predictive models for binding, agonist, and antagonist activity predictions. Models were underpinned by a common training set of 1,746 chemicals compiled from a combined data set of 11 ToxCast™/Tox21 HTS in vitro assays. RESULTS: The resulting models were evaluated using curated literature data extracted from different sources. To overcome the limitations of single-model approaches, CoMPARA predictions were combined into consensus models that provided averaged predictive accuracy of approximately 80% for the evaluation set. 
DISCUSSION: The strengths and limitations of the consensus predictions were discussed with example chemicals; then, the models were implemented into the free and open-source OPERA application to enable screening of new chemicals with a defined applicability domain and accuracy assessment. This implementation was used to screen the entire EPA DSSTox database of ∼875,000 chemicals, and their predicted AR activities have been made available on the EPA CompTox Chemicals dashboard and National Toxicology Program's Integrated Chemical Environment. https://doi.org/10.1289/EHP5580.


Subject(s)
Computer Simulation , Endocrine Disruptors , Androgens , Databases, Factual , High-Throughput Screening Assays , Humans , Receptors, Androgen , United States , United States Environmental Protection Agency
16.
Toxicol Appl Pharmacol ; 387: 114774, 2020 01 15.
Article in English | MEDLINE | ID: mdl-31783037

ABSTRACT

Chemical risk assessment relies on toxicity tests that require significant numbers of animals, time, and cost. For the >30,000 chemicals in commerce, the current scale of animal testing is insufficient to address chemical safety concerns as regulatory and product stewardship considerations evolve to require a more comprehensive understanding of potential biological effects, conditions of use, and associated exposures. We demonstrate the use of a multi-level new approach methodology (NAM) strategy for hazard- and risk-based prioritization to reduce animal testing. A Level 1/2 chemical prioritization based on estrogen receptor (ER) activity and metabolic activation using ToxCast data was used to select 112 chemicals for testing in a Level 3 human uterine cell estrogen response assay (IKA assay). The Level 3 data were coupled with quantitative in vitro to in vivo extrapolation (Q-IVIVE) to support bioactivity determination (as a surrogate for hazard) in a tissue-specific context. Assay AC50s and Q-IVIVE were used to estimate human equivalent doses (HEDs), and HEDs were compared to rodent uterotrophic assay in vivo-derived points of departure (PODs). For substances active both in vitro and in vivo, IKA assay-derived HEDs were lower than or equivalent to in vivo PODs for 19/23 compounds (83%). Activity-exposure relationships were calculated, and the IKA assay was as or more protective of human health than the rodent uterotrophic assay for all IKA-positive compounds. This study demonstrates the utility of biologically relevant, fit-for-purpose assays and supports the use of a multi-level strategy for chemical risk assessment.


Subject(s)
Animal Use Alternatives/methods , Endocrine Disruptors/toxicity , High-Throughput Screening Assays/methods , Toxicity Tests/methods , Uterus/drug effects , Animals , Bioassay/methods , Cell Culture Techniques , Cell Line, Tumor , Cell Proliferation/drug effects , Computer Simulation , Feasibility Studies , Female , Humans , Models, Biological , Rats , Risk Assessment/methods , Uterus/cytology
17.
Toxicol In Vitro ; 58: 1-12, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30807807

ABSTRACT

Because of their broad biological coverage and increasing affordability, transcriptomic technologies have increased our ability to evaluate cellular responses to chemical stressors, providing a potential means of evaluating chemical response while decreasing dependence on apical endpoints derived from traditional long-term animal studies. It has recently been suggested that dose-response modeling of transcriptomic data may be incorporated into risk assessment frameworks as a means of approximating chemical hazard. However, identification of mode of action from transcriptomics lacks a similar systematic framework. To this end, we developed a web-based interactive browser, MoAviz, that allows visualization of perturbed pathways. We populated this browser with expression data from a large public toxicogenomic database (TG-GATEs). We evaluated the extent to which gene expression changes from in-life exposures could be associated with mode of action by developing a novel similarity index, the Modified Jaccard Index (MJI), that provides a quantitative description of genomic pathway similarity (rather than gene-level comparison). While typical compound-compound similarity is low (median MJI = 0.026), clustering of the TG-GATEs compounds identifies groups of similar chemistries. Some clusters aggregated compounds with known similar modes of action, including PPARα agonists (median MJI = 0.315) and NSAIDs (median MJI = 0.322). Analysis of paired in vitro (hepatocyte) and in vivo (liver) experiments revealed systematic patterns in the responses of the model systems to chemical stress. Accounting for these model-specific, but chemical-independent, differences improved pathway concordance between in vivo and in vitro models by 36%.
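The abstract does not spell out the modification in the MJI, but the underlying idea, set overlap between two compounds' perturbed pathways, is the classical Jaccard index. A plain-Jaccard sketch, offered only as an illustration of the base metric:

```python
def jaccard(pathways_a, pathways_b):
    """Classical Jaccard index on sets of perturbed pathways; the paper's
    Modified Jaccard Index refines this, but the modification is not
    specified in the abstract."""
    a, b = set(pathways_a), set(pathways_b)
    return len(a & b) / len(a | b) if a or b else 0.0
```

Values near 0 indicate largely disjoint pathway responses (the typical case, per the median MJI of 0.026), while within-cluster values like 0.3 reflect substantial shared biology.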


Subject(s)
Gene Expression Profiling , Animals , Databases, Factual , Gene Ontology , Hepatocytes/metabolism , Humans , Risk Assessment , Transcriptome
18.
Toxicol Sci ; 167(2): 484-495, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30371864

ABSTRACT

The implementation of nonanimal approaches is of particular importance to regulatory agencies for the prediction of potential hazards associated with acute exposures to chemicals. This work was carried out in the framework of an international modeling initiative organized by the Acute Toxicity Workgroup (ATWG) of the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) with the participation of 32 international groups across government, industry, and academia. Our contribution was to develop a multi-fingerprint similarity approach for predicting five relevant toxicology endpoints related to acute oral systemic toxicity: the median lethal dose (LD50) point prediction, the "nontoxic" (LD50 > 2000 mg/kg) and "very toxic" (LD50 < 50 mg/kg) binary classifications, and the multiclass categorization of chemicals based on the United States Environmental Protection Agency and Globally Harmonized System of Classification and Labelling of Chemicals schemes. Provided by the ICCVAM ATWG, the training set used to develop the models consisted of 8,944 chemicals with high-quality rat acute oral lethality data. The proposed approach integrates the results of a similarity search based on 19 different fingerprint definitions to return a consensus prediction value. Moreover, the algorithm described here is tailored to properly tackle so-called toxicity cliffs, in which a large gap in LD50 values exists despite high structural similarity for a given molecular pair. An external validation set made available by ICCVAM and consisting of 2,896 chemicals was employed to further evaluate the selected models. This work returned high-accuracy predictions based on the evaluations conducted by the ICCVAM ATWG.
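The multi-fingerprint consensus idea can be sketched as a nearest-neighbor lookup repeated per fingerprint definition and then averaged. This simplified stand-in (names are illustrative) uses Tanimoto similarity on fingerprints represented as sets of "on" bits; the published approach integrates 19 fingerprints and adds toxicity-cliff handling:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between fingerprints given as sets of 'on' bits."""
    union = fp_a | fp_b
    return len(fp_a & fp_b) / len(union) if union else 0.0

def consensus_ld50_estimate(query_fps, training):
    """For each fingerprint definition, take the LD50 of the most similar
    training chemical, then average the per-fingerprint picks."""
    picks = []
    for name, query_fp in query_fps.items():
        best = max(training, key=lambda rec: tanimoto(query_fp, rec["fps"][name]))
        picks.append(best["ld50"])
    return sum(picks) / len(picks)
```

Because different fingerprints can nominate different neighbors, disagreement among the per-fingerprint picks is itself a useful signal of a potential toxicity cliff.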


Subject(s)
Animal Testing Alternatives/legislation &amp; jurisprudence , Computational Biology , Hazardous Substances/chemistry , Hazardous Substances/classification , Models, Theoretical , Toxicity Tests, Acute , Administration, Oral , Algorithms , Computational Biology/legislation &amp; jurisprudence , Computational Biology/methods , Dose-Response Relationship, Drug , Government Regulation , Hazardous Substances/administration &amp; dosage , Lethal Dose 50 , United States , United States Environmental Protection Agency
19.
Anal Bioanal Chem ; 411(4): 853-866, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30519961

ABSTRACT

In August 2015, the US Environmental Protection Agency (EPA) convened a workshop entitled "Advancing non-targeted analyses of xenobiotic chemicals in environmental and biological media." The purpose of the workshop was to bring together the foremost experts in non-targeted analysis (NTA) to discuss the state-of-the-science for generating, interpreting, and exchanging NTA measurement data. During the workshop, participants discussed potential designs for a collaborative project that would use EPA resources, including the ToxCast library of chemical substances, the DSSTox database, and the CompTox Chemicals Dashboard, to evaluate cutting-edge NTA methods. That discussion was the genesis of EPA's Non-Targeted Analysis Collaborative Trial (ENTACT). Nearly 30 laboratories have enrolled in ENTACT and used a variety of chromatography, mass spectrometry, and data processing approaches to characterize ten synthetic chemical mixtures, extracts of three standardized media (human serum, house dust, and silicone bands), and thousands of individual substances. Initial results show that nearly all participants have detected and reported more compounds in the mixtures than were intentionally added, with large inter-lab variability in the number of reported compounds. A comparison of gas and liquid chromatography results shows that the largest share (45.3%) of correctly identified compounds were detected by only one method, and 15.4% of compounds were not identified. Finally, a limited set of true positive identifications indicates substantial differences in observable chemical space when employing disparate separation and ionization techniques as part of NTA workflows. This article describes the genesis of ENTACT, all study methods and materials, and an analysis of results submitted to date.


Subject(s)
Cooperative Behavior , Environmental Pollutants/analysis , Research Design , Xenobiotics/analysis , Chromatography/methods , Complex Mixtures , Data Collection , Dust , Education , Environmental Exposure , Environmental Pollutants/standards , Environmental Pollutants/toxicity , Humans , Laboratories/organization &amp; administration , Mass Spectrometry/methods , Quality Control , Reference Standards , Serum , Silicones/chemistry , United States , United States Environmental Protection Agency , Xenobiotics/standards , Xenobiotics/toxicity
20.
Toxicol In Vitro ; 54: 41-57, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30218698

ABSTRACT

The ToxCast program has generated in vitro screening data on over a thousand chemicals to assess potential disruption of important biological processes and assist in hazard identification and chemical testing prioritization. Few results have been reported for complex mixtures. To extend these ToxCast efforts to mixtures, we tested extracts from 30 organically grown fruits and vegetables in concentration-response in the BioMAP® assays. BioMAP systems use human primary cells primed with endogenous pathway activators to identify phenotypic perturbations related to proliferation, inflammation, immunomodulation, and tissue remodeling. Clustering of bioactivity profiles revealed separation of these produce extracts from ToxCast chemicals. Produce extracts elicited 87 assay endpoint responses per item compared to 20 per item for ToxCast chemicals. On a molar basis, the produce extracts were 10- to 50-fold less potent, and when constrained to the maximum testing concentration of the ToxCast chemicals, the produce extracts did not show activity in as many assay endpoints. Using intake-adjusted measures of dose, the bioactivity potential was higher for produce extracts than for agrichemicals, as expected based on the comparatively small amounts of agrichemical residues present on conventionally grown produce. The evaluation of BioMAP readouts and the dose responses for produce extracts showed qualitative and quantitative differences from results with single chemicals, highlighting challenges in the interpretation of bioactivity data and dose-response from complex mixtures.


Subject(s)
Fruit , High-Throughput Screening Assays , Magnoliopsida , Plant Extracts/toxicity , Vegetables , Biological Assay , Cells, Cultured , Food, Organic , Humans , Metals, Heavy/analysis , Metals, Heavy/toxicity , Mycotoxins/analysis , Mycotoxins/toxicity , Pesticide Residues/analysis , Pesticide Residues/toxicity , Plant Extracts/analysis , Toxicity Tests