Results 1 - 20 of 49
1.
Vaccine X ; 19: 100503, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38868522

ABSTRACT

Scorpion envenoming (SE) is a public health problem in developing countries. In Algeria, the population exposed to the risk of SE was estimated at 86.45% in 2019. Thus, the development of a vaccine to protect the exposed population against scorpion toxins would be a major advance in the fight against this disease. This work aimed to evaluate the immunoprotective effect of a Multiple Antigenic Peptide (MAP) against the Aah II toxin of Androctonus australis hector, the most dangerous scorpion species in Algeria. The immunogen MAP1Aah2 was designed and tested accordingly. This molecule contains a B-cell epitope derived from the Aah II toxin, linked by a spacer to a universal T-cell epitope derived from the tetanus toxin. The results showed that MAP1Aah2 was non-toxic even though its sequence is derived from the Aah II toxin. The immunoenzymatic assay revealed that the three immunization regimens tested generated specific anti-MAP1Aah2 antibodies that cross-reacted with the toxin. Mice immunized with this immunogen were partially protected against mortality caused by challenge doses of 2 and 3 LD50 of the toxin. The survival rate and the symptoms that developed varied with the adjuvant and the challenge dose used. In the in vitro neutralization test, immune sera from mice that received the immunogen with incomplete Freund's adjuvant neutralized a challenge dose of 2 LD50. Hence, the concept of using peptide dendrimers based on linear epitopes of scorpion toxins as immunogens against the parent toxin was established. However, the protective properties of the tested immunogen require further optimization.

2.
Regul Toxicol Pharmacol ; 149: 105614, 2024 May.
Article in English | MEDLINE | ID: mdl-38574841

ABSTRACT

The United States Environmental Protection Agency (USEPA) uses the lethal dose 50% (LD50) value from in vivo rat acute oral toxicity studies for pesticide product label precautionary statements and environmental risk assessment (RA). The Collaborative Acute Toxicity Modeling Suite (CATMoS) is a quantitative structure-activity relationship (QSAR)-based in silico approach to predict rat acute oral toxicity that has the potential to reduce animal use when registering a new pesticide technical grade active ingredient (TGAI). This analysis compared LD50 values predicted by CATMoS to empirical values from in vivo studies for the TGAIs of 177 conventional pesticides. The accuracy and reliability of the model predictions were assessed relative to the empirical data in terms of USEPA acute oral toxicity categories and discrete LD50 values for each chemical. CATMoS was most reliable at placing pesticide TGAIs in acute toxicity categories III (>500-5000 mg/kg) and IV (>5000 mg/kg), with 88% categorical concordance for 165 chemicals with empirical in vivo LD50 values ≥ 500 mg/kg. When considering an LD50 for RA, CATMoS predictions of 2000 mg/kg and higher were found to agree with empirical values from limit tests (i.e., single, high-dose tests) or definitive results over 2000 mg/kg with few exceptions.
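As a hedged illustration of the category comparison described above, the sketch below bins LD50 values into USEPA acute oral toxicity categories and computes categorical concordance between predicted and empirical values; the Category I/II cutoffs (50 and 500 mg/kg) are standard USEPA bounds assumed here, since the abstract states only the Category III and IV ranges, and the example values are hypothetical.

```python
def epa_acute_oral_category(ld50_mg_per_kg: float) -> str:
    """Map a rat acute oral LD50 (mg/kg) to a USEPA acute oral toxicity category."""
    if ld50_mg_per_kg <= 50:
        return "I"      # assumed standard cutoff, not stated in the abstract
    if ld50_mg_per_kg <= 500:
        return "II"     # assumed standard cutoff, not stated in the abstract
    if ld50_mg_per_kg <= 5000:
        return "III"    # >500-5000 mg/kg per the abstract
    return "IV"         # >5000 mg/kg per the abstract


def categorical_concordance(predicted, empirical) -> float:
    """Fraction of chemicals whose predicted and empirical categories agree."""
    pairs = list(zip(predicted, empirical))
    matches = sum(epa_acute_oral_category(p) == epa_acute_oral_category(e) for p, e in pairs)
    return matches / len(pairs)


# Hypothetical values (mg/kg), not data from the study:
print(categorical_concordance([1200, 6000, 300], [900, 5500, 450]))  # 1.0
```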


Subject(s)
Computer Simulation , Pesticides , Quantitative Structure-Activity Relationship , Toxicity Tests, Acute , United States Environmental Protection Agency , Animals , Risk Assessment , Pesticides/toxicity , Lethal Dose 50 , Rats , Administration, Oral , Toxicity Tests, Acute/methods , United States , Reproducibility of Results
3.
J Cheminform ; 16(1): 19, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378618

ABSTRACT

The rapid increase of publicly available chemical structures and associated experimental data presents a valuable opportunity to build robust QSAR models for applications in different fields. However, a common concern is the quality of both the chemical structure information and the associated experimental data, especially when those data are collected from multiple sources, as chemical substance mappings can contain many duplicate structures and molecular inconsistencies. Such issues can affect the resulting molecular descriptors and their mappings to experimental data and, subsequently, the quality of the derived models in terms of accuracy, repeatability, and reliability. Herein we describe the development of an automated workflow to standardize chemical structures according to a set of standard rules and generate two- and/or three-dimensional "QSAR-ready" forms prior to the calculation of molecular descriptors. The workflow was designed in the KNIME workflow environment and consists of three high-level steps. First, a structure encoding is read; then, the resulting in-memory representation is cross-referenced with any existing identifiers for consistency; finally, the structure is standardized using a series of operations including desalting, stripping of stereochemistry (for two-dimensional structures), standardization of tautomers and nitro groups, valence correction, neutralization where possible, and removal of duplicates. This workflow was initially developed to support collaborative QSAR modeling projects and to ensure consistency of results among the different participants. It was then updated and generalized for other modeling applications, including a modification of the "QSAR-ready" workflow to generate "MS-ready" structures that support substance mapping and searching in software applications for non-targeted analysis mass spectrometry. Both the QSAR-ready and MS-ready workflows are freely available in KNIME, as standalone versions on GitHub, and as Docker container resources for the scientific community. Scientific contribution: This work pioneers an automated workflow in KNIME that systematically standardizes chemical structures to ensure their readiness for QSAR modeling and broader scientific applications. By addressing data-quality concerns through desalting, stereochemistry stripping, and normalization, it improves the accuracy and reliability of the resulting molecular descriptors. The freely available resources in KNIME, on GitHub, and as Docker containers democratize access, benefiting collaborative research and advancing diverse modeling endeavors in chemistry and mass spectrometry.
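The authors' workflow is implemented in KNIME; as a rough, hedged analogue of a few of the standardization steps it describes, the Python/RDKit sketch below performs desalting via largest-fragment selection, neutralization where possible, stereochemistry stripping, and duplicate removal by canonical SMILES (tautomer/nitro-group normalization and valence correction are omitted). All structures and function names are illustrative only.

```python
from rdkit import Chem
from rdkit.Chem.MolStandardize import rdMolStandardize


def qsar_ready_smiles(smiles: str):
    """Return a roughly 'QSAR-ready' canonical SMILES, or None if the encoding is unreadable."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    mol = rdMolStandardize.LargestFragmentChooser().choose(mol)   # desalt
    mol = rdMolStandardize.Uncharger().uncharge(mol)              # neutralize when possible
    Chem.RemoveStereochemistry(mol)                               # strip stereo (2D forms)
    return Chem.MolToSmiles(mol)                                  # canonical form for de-duplication


structures = ["CC(=O)[O-].[Na+]", "C[C@H](N)C(=O)O", "CC(N)C(=O)O"]  # salt + two alanine encodings
unique = {s for s in map(qsar_ready_smiles, structures) if s}
print(unique)  # the stereo and non-stereo alanine entries collapse to one canonical SMILES
```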

4.
Front Pharmacol ; 13: 980747, 2022.
Article in English | MEDLINE | ID: mdl-36278238

ABSTRACT

Current computational technologies hold promise for prioritizing the testing of the thousands of chemicals in commerce. Here, a case study is presented demonstrating comparative risk-prioritization approaches based on the ratio of surrogate hazard and exposure estimates, called margins of exposure (MoEs). Exposures were estimated using results from the U.S. EPA's ExpoCast predictive model (SEEM3), and bioactivity was estimated using: 1) oral equivalent doses (OEDs) derived from the U.S. EPA's ToxCast high-throughput screening program together with in vitro to in vivo extrapolation, and 2) thresholds of toxicological concern (TTCs) determined with a structure-based decision tree in the open-source Toxtree software. To ground-truth these computational approaches, we compared the MoEs based on predicted noncancer TTC and OED values to those derived using the traditional method of setting points of departure from no-observed-adverse-effect levels (NOAELs) from in vivo oral exposures in rodents. TTC-based MoEs were lower than NOAEL-based MoEs for 520 of 522 (99.6%) compounds in this smaller overlapping dataset, and the two were reasonably well correlated (r2 = 0.59). TTC-based MoEs were also lower than OED-based MoEs for 590 (83.2%) of the 709 evaluated chemicals, indicating that TTCs may serve as a conservative surrogate in the absence of chemical-specific experimental data. The TTC-based MoE prioritization process was then applied to over 45,000 curated environmental chemical structures as a proof of concept for high-throughput prioritization using TTC-based MoEs. This study demonstrates the utility of exploiting existing computational methods at the pre-assessment phase of a tiered risk-based approach to quickly, and conservatively, prioritize thousands of untested chemicals for further study.
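The margin-of-exposure calculation at the core of this prioritization is a simple ratio of a surrogate hazard value to an estimated exposure. The sketch below illustrates it with hypothetical TTC, OED, and exposure values (all mg/kg-day); none of the numbers come from the study.

```python
def margin_of_exposure(hazard_mg_per_kg_day: float, exposure_mg_per_kg_day: float) -> float:
    """MoE = surrogate hazard value / estimated exposure; smaller values mean higher priority."""
    return hazard_mg_per_kg_day / exposure_mg_per_kg_day


chemicals = {
    # name: (TTC-based threshold, ToxCast/IVIVE-based OED, SEEM3-style exposure), all hypothetical
    "chem_A": (0.0015, 0.12, 1e-6),
    "chem_B": (0.0300, 0.45, 1e-4),
}
for name, (ttc, oed, exposure) in chemicals.items():
    print(name,
          f"TTC-based MoE = {margin_of_exposure(ttc, exposure):,.0f}",
          f"OED-based MoE = {margin_of_exposure(oed, exposure):,.0f}")
# Chemicals with the smallest MoEs are flagged for further study; the TTC-based MoE
# is usually the more conservative (smaller) of the two, consistent with the abstract.
```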

5.
Birth Defects Res ; 114(16): 1037-1055, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35532929

ABSTRACT

BACKGROUND: The developmental toxicity potential (dTP) concentration from the devTOX quickPredict (devTOXqP) assay, a metabolomics-based human induced pluripotent stem cell assay, predicts a chemical's developmental toxicity potency. Here, in vitro to in vivo extrapolation (IVIVE) approaches were applied to address whether the devTOXqP assay could quantitatively predict in vivo developmental toxicity lowest effect levels (LELs) for the prototypical teratogen valproic acid (VPA) and a group of structural analogues. METHODS: VPA and a series of structural analogues were tested with the devTOXqP assay to determine dTP concentrations, and we estimated the equivalent administered doses (EADs) that would lead to plasma concentrations equivalent to the in vitro dTP concentrations. The EADs were compared to the LELs in rat developmental toxicity studies, to human clinical doses, and to EADs reported using other in vitro assays. To evaluate the impact of different pharmacokinetic (PK) models on IVIVE outcomes, we compared EADs predicted using various open-source and commercially available PK and physiologically based PK (PBPK) models. To evaluate the effect of in vitro kinetics, an equilibrium distribution model was applied to translate dTP concentrations into free medium concentrations before the subsequent IVIVE analyses. RESULTS: The EAD estimates for the VPA analogues based on the different PK/PBPK models were quantitatively similar to in vivo data from both rats and humans, where available, and the derived rank order of the chemicals was consistent with the observed in vivo developmental toxicity. Different models were identified that provided accurate predictions of rat prenatal LELs and conservative estimates of human safe exposure. The impact of in vitro kinetics on EAD estimates was chemical-dependent. EADs from this study were within the range of doses predicted from other in vitro and model-organism data. CONCLUSIONS: This study highlights the importance of pharmacokinetic considerations when using in vitro assays and demonstrates the utility of the devTOXqP human stem cell-based platform to quantitatively assess a chemical's developmental toxicity potency.


Subject(s)
Induced Pluripotent Stem Cells , Valproic Acid , Animals , Female , Humans , Pregnancy , Rats , Teratogens/toxicity , Valproic Acid/toxicity
6.
Front Pharmacol ; 13: 864742, 2022.
Article in English | MEDLINE | ID: mdl-35496281

ABSTRACT

Regulatory toxicology testing has traditionally relied on in vivo methods to inform decision-making. However, scientific, practical, and ethical considerations have led to an increased interest in the use of in vitro and in silico methods to fill data gaps. While in vitro experiments have the advantage of rapid application across large chemical sets, interpretation of data coming from these non-animal methods can be challenging due to the mechanistic nature of many assays. In vitro to in vivo extrapolation (IVIVE) has emerged as a computational tool to help facilitate this task. Specifically, IVIVE uses physiologically based pharmacokinetic (PBPK) models to estimate tissue-level chemical concentrations based on various dosing parameters. This approach is used to estimate the administered dose needed to achieve in vitro bioactivity concentrations within the body. IVIVE results can be useful to inform on metrics such as margin of exposure or to prioritize potential chemicals of concern, but the PBPK models used in this approach have extensive data requirements. Thus, access to input parameters, as well as the technical requirements of applying and interpreting models, has limited the use of IVIVE as a routine part of in vitro testing. As interest in using non-animal methods for regulatory and research contexts continues to grow, our perspective is that access to computational support tools for PBPK modeling and IVIVE will be essential for facilitating broader application and acceptance of these techniques, as well as for encouraging the most scientifically sound interpretation of in vitro results. We highlight recent developments in two open-access computational support tools for PBPK modeling and IVIVE accessible via the Integrated Chemical Environment (https://ice.ntp.niehs.nih.gov/), demonstrate the types of insights these tools can provide, and discuss how these analyses may inform in vitro-based decision making.
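As a simplified, hedged illustration of the reverse-dosimetry idea behind IVIVE (not the PBPK models or the httk-based tools the article discusses): under a linear, steady-state assumption the administered dose scales proportionally with plasma concentration, so an equivalent administered dose can be estimated by dividing the in vitro bioactive concentration by the steady-state concentration produced per unit dose.

```python
def equivalent_administered_dose(bioactive_conc_uM: float, css_uM_per_mg_kg_day: float) -> float:
    """Dose (mg/kg/day) predicted to produce the in vitro bioactive concentration at steady state."""
    return bioactive_conc_uM / css_uM_per_mg_kg_day


# Hypothetical inputs: a 3 uM in vitro AC50 and a modeled Css of 1.5 uM per mg/kg/day.
print(equivalent_administered_dose(3.0, 1.5))  # -> 2.0 mg/kg/day
```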

7.
Toxicol Sci ; 188(1): 34-47, 2022 06 28.
Article in English | MEDLINE | ID: mdl-35426934

ABSTRACT

Regulatory agencies rely upon rodent in vivo acute oral toxicity data to determine hazard categorization, require appropriate precautionary labeling, and perform quantitative risk assessments. As the field of toxicology moves toward animal-free new approach methodologies (NAMs), there is a pressing need for a reliable, robust reference data set to characterize the reproducibility and inherent variability of the in vivo acute oral toxicity test method, which would serve to contextualize results and set expectations regarding NAM performance. Such a data set is also needed for training and evaluating computational models. To meet these needs, rat acute oral LD50 data from multiple databases were compiled, curated, and analyzed to characterize the variability and reproducibility of results across a set of up to 2,441 chemicals with multiple independent study records. Conditional probability analyses reveal that replicate studies result in the same hazard categorization only about 60% of the time, on average. Although we did not have sufficient study metadata to evaluate the impact of specific protocol components (e.g., strain, age, or sex of rat, feed used, treatment vehicle), studies were assumed to follow standard test guidelines. We investigated, but could not attribute, various chemical properties as sources of variability (i.e., chemical structure, physicochemical properties, functional use). Thus, we conclude that inherent biological or protocol variability likely underlies the variance in the results. Based on the observed variability, we quantified a margin of uncertainty of ±0.24 log10 (mg/kg) associated with discrete in vivo rat acute oral LD50 values.
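The two summary statistics described above can be illustrated with a toy replicate data set, as in the sketch below: (1) how often two independent studies of the same chemical fall into the same hazard category, and (2) the within-chemical spread of log10(LD50). The replicate values and the category cutoffs used here are assumptions for illustration, not the study data.

```python
import math
from itertools import combinations
from statistics import mean, stdev


def epa_category(ld50: float) -> int:
    # Standard USEPA acute oral cutoffs assumed for illustration (mg/kg)
    return 1 if ld50 <= 50 else 2 if ld50 <= 500 else 3 if ld50 <= 5000 else 4


replicates = {  # hypothetical replicate LD50 studies per chemical (mg/kg)
    "chem_A": [320, 410, 615],
    "chem_B": [1800, 2400],
    "chem_C": [45, 62, 70],
}

pairs = [(a, b) for values in replicates.values() for a, b in combinations(values, 2)]
concordance = mean(epa_category(a) == epa_category(b) for a, b in pairs)
within_chem_sd = mean(stdev(math.log10(v) for v in values) for values in replicates.values())

print(f"replicate category concordance: {concordance:.2f}")
print(f"mean within-chemical SD of log10(LD50): {within_chem_sd:.2f}")
```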


Subject(s)
Reproducibility of Results , Animals , Databases, Factual , Probability , Rats , Risk Assessment/methods , Toxicity Tests, Acute/methods
10.
Environ Health Perspect ; 129(4): 47013, 2021 04.
Article in English | MEDLINE | ID: mdl-33929906

ABSTRACT

BACKGROUND: Humans are exposed to tens of thousands of chemical substances that need to be assessed for their potential toxicity. Acute systemic toxicity testing serves as the basis for regulatory hazard classification, labeling, and risk management. However, it is cost- and time-prohibitive to evaluate all new and existing chemicals using traditional rodent acute toxicity tests. In silico models built using existing data facilitate rapid acute toxicity predictions without using animals. OBJECTIVES: The U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Acute Toxicity Workgroup organized an international collaboration to develop in silico models for predicting acute oral toxicity based on five different endpoints: the median lethal dose (LD50) value, U.S. Environmental Protection Agency hazard categories (four), Globally Harmonized System of Classification and Labelling hazard categories (five), very toxic chemicals (LD50 ≤ 50 mg/kg), and nontoxic chemicals (LD50 > 2,000 mg/kg). METHODS: An acute oral toxicity data inventory for 11,992 chemicals was compiled, split into training and evaluation sets, and made available to 35 participating international research groups, which submitted a total of 139 predictive models. Predictions that fell within the applicability domains of the submitted models were evaluated using external validation sets and then combined into consensus models to leverage the strengths of the individual approaches. RESULTS: The resulting consensus predictions, which leverage the collective strengths of each individual model, form the Collaborative Acute Toxicity Modeling Suite (CATMoS). CATMoS demonstrated high accuracy and robustness when compared with in vivo results. DISCUSSION: CATMoS is being evaluated by regulatory agencies for its utility and applicability as a potential replacement for in vivo rat acute oral toxicity studies. CATMoS predictions for more than 800,000 chemicals have been made available via the National Toxicology Program's Integrated Chemical Environment tools and data sets (ice.ntp.niehs.nih.gov). The models are also implemented in a free, standalone, open-source tool, OPERA, which allows predictions to be made for new and untested chemicals. https://doi.org/10.1289/EHP8495.
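A minimal sketch of the consensus idea, assuming a simple unweighted mean: average the log10(LD50) predictions of the individual models whose applicability domain covers the chemical. Model names and values are hypothetical, and the actual CATMoS consensus procedure may weight and combine models differently.

```python
from statistics import mean


def consensus_log_ld50(predictions: dict, in_domain: dict):
    """Average predictions (log10 mg/kg) from models whose applicability domain covers the chemical."""
    usable = [p for name, p in predictions.items() if in_domain.get(name, False)]
    return mean(usable) if usable else None  # None: no applicable model


preds = {"model_1": 2.7, "model_2": 2.9, "model_3": 3.4}       # hypothetical log10(mg/kg) predictions
domain = {"model_1": True, "model_2": True, "model_3": False}  # hypothetical applicability flags
print(consensus_log_ld50(preds, domain))  # 2.8, i.e. roughly 630 mg/kg
```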


Subject(s)
Government Agencies , Animals , Computer Simulation , Rats , Toxicity Tests, Acute , United States , United States Environmental Protection Agency
11.
ALTEX ; 38(2): 327-335, 2021.
Article in English | MEDLINE | ID: mdl-33511999

ABSTRACT

Efforts are underway to develop and implement non-animal approaches that can characterize acute systemic lethality. A workshop was held in October 2019 to discuss developments in the prediction of acute oral lethality for chemicals and mixtures, as well as progress and needs in understanding and modeling the mechanisms of acute lethality. During the workshop, each speaker led the group through a series of charge questions to determine clear next steps toward the workshop's aims. Participants concluded that a variety of approaches will be needed and should be applied in a tiered fashion. Non-testing approaches, including waiving tests, computational models for single chemicals, and calculating the acute lethality of mixtures based on the LD50 values of mixture components, could be used for some assessments now, especially in the very toxic or non-toxic classification ranges. Agencies can develop policies indicating the contexts under which mathematical approaches to mixtures assessment are acceptable; to expand applicability, poorly predicted mixtures should be examined to understand discrepancies and adapt the approach. Transparency and an understanding of the variability of in vivo approaches are crucial to facilitate regulatory application of new approaches. In a replacement strategy, mechanistically based in vitro or in silico models will be needed to support non-testing approaches, especially for highly acutely toxic chemicals. The workshop discussed approaches that can be used in the immediate or near term for some applications and identified the remaining actions needed to fully replace the use of animals for acute systemic toxicity testing.
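One of the non-testing approaches named above, calculating the acute lethality of a mixture from the LD50 values of its components, is commonly done with the GHS-style additivity formula 100 / ATE_mix = sum(C_i / ATE_i), where C_i is the weight percent of component i and ATE_i its acute toxicity estimate (here, LD50 in mg/kg). The sketch below shows that calculation with hypothetical components; it is an illustration of the general formula, not a procedure endorsed verbatim by the workshop.

```python
def mixture_ld50(components: list) -> float:
    """components: (weight_percent, ld50_mg_per_kg) tuples covering 100% of the mixture."""
    total_percent = sum(pct for pct, _ in components)
    if abs(total_percent - 100.0) > 1e-6:
        raise ValueError("component percentages must sum to 100")
    return 100.0 / sum(pct / ld50 for pct, ld50 in components)


# Hypothetical two-component mixture: 10% of a fairly toxic ingredient, 90% of a mild one.
print(round(mixture_ld50([(10.0, 50.0), (90.0, 4000.0)])))  # ~449 mg/kg
```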


Subject(s)
Toxicity Tests, Acute , Animals , Computer Simulation , Humans
12.
Regul Toxicol Pharmacol ; 117: 104764, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32798611

ABSTRACT

Screening certain environmental chemicals for their ability to interact with endocrine targets, including the androgen receptor (AR), is an important global concern. We previously developed a model using a battery of eleven in vitro AR assays to predict in vivo AR activity. Here we describe a revised mathematical modeling approach that also incorporates data from newly available assays and demonstrate that subsets of assays can provide close to the same level of predictivity. These subset models are evaluated against the full model using 1820 chemicals, as well as in vitro and in vivo reference chemicals from the literature. Agonist batteries of as few as six assays and antagonist batteries of as few as five assays can yield balanced accuracies of 95% or better relative to the full model. Balanced accuracy for predicting reference chemicals is 100%. An approach is outlined for researchers to develop their own subset batteries to accurately detect AR activity using assays that map to the pathway of key molecular and cellular events involved in chemical-mediated AR activation and transcriptional activity. This work indicates in vitro bioactivity and in silico predictions that map to the AR pathway could be used in an integrated approach to testing and assessment for identifying chemicals that interact directly with the mammalian AR.
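The subset-versus-full-model comparison above reduces to a balanced-accuracy calculation: sensitivity and specificity of the reduced battery's calls, averaged, with the full eleven-assay model treated as the reference. The sketch below shows that calculation on hypothetical activity calls (1 = AR-active, 0 = inactive).

```python
def balanced_accuracy(reference: list, subset: list) -> float:
    """Mean of sensitivity and specificity of subset-battery calls against the full-model calls."""
    tp = sum(r == 1 and s == 1 for r, s in zip(reference, subset))
    tn = sum(r == 0 and s == 0 for r, s in zip(reference, subset))
    sensitivity = tp / sum(r == 1 for r in reference)
    specificity = tn / sum(r == 0 for r in reference)
    return (sensitivity + specificity) / 2


full_model_calls = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # hypothetical
six_assay_calls  = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]  # hypothetical
print(balanced_accuracy(full_model_calls, six_assay_calls))  # 0.875 for this toy example
```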


Subject(s)
Androgen Receptor Antagonists/toxicity , Androgens/toxicity , Hazardous Substances/toxicity , Models, Theoretical , Receptors, Androgen , Androgen Receptor Antagonists/metabolism , Androgens/metabolism , Animals , Environmental Exposure/prevention & control , Environmental Exposure/statistics & numerical data , Hazardous Substances/metabolism , High-Throughput Screening Assays/methods , Humans , Receptors, Androgen/metabolism
13.
Toxicol In Vitro ; 67: 104916, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32553663

ABSTRACT

Moving toward species-relevant chemical safety assessments and away from animal testing requires access to reliable data to develop and build confidence in new approaches. The Integrated Chemical Environment (ICE) provides tools and curated data centered around chemical safety assessment. This article describes updates to ICE, including improved accessibility and interpretability of in vitro data via mechanistic target mapping and enhanced interactive tools for in vitro to in vivo extrapolation (IVIVE). Mapping of in vitro assay targets to toxicity endpoints of regulatory importance uses literature-based mode-of-action information and controlled terminology from existing knowledge organization systems to support data interoperability with external resources. The most recent ICE update includes Tox21 high-throughput screening data curated using analytical chemistry data and assay-specific parameters to eliminate potential artifacts or unreliable activity. Also included are physicochemical/ADME parameters for over 800,000 chemicals predicted by quantitative structure-activity relationship models. These parameters are used by the new ICE IVIVE tool in combination with the U.S. Environmental Protection Agency's httk R package to estimate in vivo exposures corresponding to in vitro bioactivity concentrations from stored or user-defined assay data. These new ICE features allow users to explore the applications of an expanded data space and facilitate building confidence in non-animal approaches.


Subject(s)
Chemical Safety , Risk Assessment , Animal Testing Alternatives , Animals , Databases, Factual , High-Throughput Screening Assays , Humans , Toxicity Tests
14.
Nucleic Acids Res ; 48(W1): W586-W590, 2020 07 02.
Article in English | MEDLINE | ID: mdl-32421835

ABSTRACT

High-throughput screening (HTS) research programs for drug development or chemical hazard assessment are designed to screen thousands of molecules across hundreds of biological targets or pathways. Most HTS platforms use fluorescence and luminescence technologies, which account for more than 70% of the assays in the US Tox21 research consortium. These technologies are subject to interference signals, largely explained by chemicals interacting with the light spectrum, a phenomenon that produces up to 5-10% false positive results depending on the chemical library used. Here, we present the InterPred webserver (version 1.0), a platform to predict such interfering chemicals, based on the first large-scale chemical screening effort to directly characterize chemical-assay interference, using assays in the Tox21 portfolio specifically designed to measure autofluorescence and luciferase inhibition. InterPred combines 17 quantitative structure-activity relationship (QSAR) models built using optimized machine learning techniques and allows users to predict the probability that a new chemical will interfere with different combinations of cellular and technology conditions. InterPred models have been applied to the entire Distributed Structure-Searchable Toxicity (DSSTox) database (∼800,000 chemicals). The InterPred webserver is available at https://sandbox.ntp.niehs.nih.gov/interferences/.


Subject(s)
High-Throughput Screening Assays , Software , Artifacts , Fluorescence , Internet , Machine Learning , Pharmaceutical Preparations/chemistry , Quantitative Structure-Activity Relationship , Workflow
15.
Sci Rep ; 10(1): 3986, 2020 03 04.
Article in English | MEDLINE | ID: mdl-32132587

ABSTRACT

The U.S. federal consortium on toxicology in the 21st century (Tox21) produces quantitative, high-throughput screening (HTS) data on thousands of chemicals across a wide range of assays covering critical biological targets and cellular pathways. Many of these assays, and those used in other in vitro screening programs, rely on luciferase and fluorescence-based readouts that can be susceptible to signal interference by certain chemical structures resulting in false positive outcomes. Included in the Tox21 portfolio are assays specifically designed to measure interference in the form of luciferase inhibition and autofluorescence via multiple wavelengths (red, blue, and green) and under various conditions (cell-free and cell-based, two cell types). Out of 8,305 chemicals tested in the Tox21 interference assays, percent actives ranged from 0.5% (red autofluorescence) to 9.9% (luciferase inhibition). Self-organizing maps and hierarchical clustering were used to relate chemical structural clusters to interference activity profiles. Multiple machine learning algorithms were applied to predict assay interference based on molecular descriptors and chemical properties. The best performing predictive models (accuracies of ~80%) have been included in a web-based tool called InterPred that will allow users to predict the likelihood of assay interference for any new chemical structure and thus increase confidence in HTS data by decreasing false positive testing results.
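As a hedged illustration of the final modeling step described above, the sketch below trains a generic classifier on molecular descriptors to flag likely luciferase interferents. The descriptor matrix and labels are random placeholders standing in for the Tox21 interference data, and the published InterPred models use their own algorithms and curated descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))      # placeholder molecular descriptors
y = rng.integers(0, 2, size=500)    # placeholder interference labels (1 = interferent)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# A predicted interference probability lets users down-weight suspect HTS "actives"
# rather than discard them outright.
interference_probability = model.predict_proba(X_test)[:, 1]
print("balanced accuracy on held-out placeholders:",
      round(balanced_accuracy_score(y_test, model.predict(X_test)), 2))
```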


Subject(s)
Databases, Chemical , High-Throughput Screening Assays , Toxicity Tests , Cluster Analysis , Internet , Quantitative Structure-Activity Relationship
16.
Environ Health Perspect ; 128(2): 27002, 2020 02.
Article in English | MEDLINE | ID: mdl-32074470

ABSTRACT

BACKGROUND: Endocrine-disrupting chemicals (EDCs) are xenobiotics that mimic the action of natural hormones and alter their synthesis, transport, or metabolic pathways. The prospect of EDCs causing adverse health effects in humans and wildlife has led to the development of scientific and regulatory approaches for evaluating their bioactivity. This need is being addressed using high-throughput screening (HTS) in vitro approaches and computational modeling. OBJECTIVES: In support of the Endocrine Disruptor Screening Program, the U.S. Environmental Protection Agency (EPA) led two worldwide consortia to virtually screen chemicals for their potential estrogenic and androgenic activities. Here, we describe the Collaborative Modeling Project for Androgen Receptor Activity (CoMPARA), which follows in the steps of the Collaborative Estrogen Receptor Activity Prediction Project (CERAPP). METHODS: The CoMPARA list of screened chemicals built on CERAPP's list of 32,464 chemicals to include additional chemicals of interest, as well as simulated ToxCast™ metabolites, totaling 55,450 chemical structures. Computational toxicology scientists from 25 international groups contributed 91 predictive models for binding, agonist, and antagonist activity predictions. The models were underpinned by a common training set of 1,746 chemicals compiled from a combined data set of 11 ToxCast™/Tox21 HTS in vitro assays. RESULTS: The resulting models were evaluated using curated literature data extracted from different sources. To overcome the limitations of single-model approaches, CoMPARA predictions were combined into consensus models that provided an average predictive accuracy of approximately 80% for the evaluation set. DISCUSSION: The strengths and limitations of the consensus predictions are discussed with example chemicals; the models were then implemented in the free and open-source OPERA application to enable screening of new chemicals with a defined applicability domain and accuracy assessment. This implementation was used to screen the entire EPA DSSTox database of ∼875,000 chemicals, and the predicted AR activities have been made available on the EPA CompTox Chemicals Dashboard and the National Toxicology Program's Integrated Chemical Environment. https://doi.org/10.1289/EHP5580.


Subject(s)
Computer Simulation , Endocrine Disruptors , Androgens , Databases, Factual , High-Throughput Screening Assays , Humans , Receptors, Androgen , United States , United States Environmental Protection Agency
17.
Toxicol Appl Pharmacol ; 387: 114774, 2020 01 15.
Article in English | MEDLINE | ID: mdl-31783037

ABSTRACT

Chemical risk assessment relies on toxicity tests that require significant numbers of animals, time, and cost. For the >30,000 chemicals in commerce, the current scale of animal testing is insufficient to address chemical safety concerns as regulatory and product stewardship considerations evolve to require a more comprehensive understanding of potential biological effects, conditions of use, and associated exposures. We demonstrate the use of a multi-level new approach methodology (NAM) strategy for hazard- and risk-based prioritization to reduce animal testing. A Level 1/2 chemical prioritization based on estrogen receptor (ER) activity and metabolic activation using ToxCast data was used to select 112 chemicals for testing in a Level 3 human uterine cell estrogen response assay (IKA assay). The Level 3 data were coupled with quantitative in vitro to in vivo extrapolation (Q-IVIVE) to support bioactivity determination (as a surrogate for hazard) in a tissue-specific context. Assay AC50s and Q-IVIVE were used to estimate human equivalent doses (HEDs), and the HEDs were compared to in vivo-derived points of departure (PODs) from rodent uterotrophic assays. For substances active both in vitro and in vivo, IKA assay-derived HEDs were lower than or equivalent to the in vivo PODs for 19/23 compounds (83%). Activity-to-exposure relationships were calculated, and the IKA assay was as protective of human health as, or more protective than, the rodent uterotrophic assay for all IKA-positive compounds. This study demonstrates the utility of biologically relevant, fit-for-purpose assays and supports the use of a multi-level strategy for chemical risk assessment.


Subject(s)
Animal Use Alternatives/methods , Endocrine Disruptors/toxicity , High-Throughput Screening Assays/methods , Toxicity Tests/methods , Uterus/drug effects , Animals , Biological Assay/methods , Cell Culture Techniques , Cell Line, Tumor , Cell Proliferation/drug effects , Computer Simulation , Feasibility Studies , Female , Humans , Models, Biological , Rats , Risk Assessment/methods , Uterus/cytology
18.
Toxicol In Vitro ; 58: 1-12, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30807807

ABSTRACT

Because of their broad biological coverage and increasing affordability, transcriptomic technologies have increased our ability to evaluate cellular responses to chemical stressors, providing a potential means of evaluating chemical response while decreasing dependence on apical endpoints derived from traditional long-term animal studies. It has recently been suggested that dose-response modeling of transcriptomic data could be incorporated into risk assessment frameworks as a means of approximating chemical hazard. However, identification of mode of action from transcriptomics lacks a similarly systematic framework. To this end, we developed a web-based interactive browser, MoAviz, that allows visualization of perturbed pathways, and populated it with expression data from a large public toxicogenomic database (TG-GATEs). We evaluated the extent to which gene expression changes from in-life exposures could be associated with mode of action by developing a novel similarity index, the Modified Jaccard Index (MJI), which provides a quantitative description of pathway-level (rather than gene-level) similarity. While typical compound-compound similarity is low (median MJI = 0.026), clustering of the TG-GATEs compounds identifies groups of similar chemistries. Some clusters aggregated compounds with known similar modes of action, including PPARα agonists (median MJI = 0.315) and NSAIDs (median MJI = 0.322). Analysis of paired in vitro (hepatocyte) and in vivo (liver) experiments revealed systematic patterns in the responses of model systems to chemical stress. Accounting for these model-specific, but chemical-independent, differences improved pathway concordance between in vivo and in vitro models by 36%.
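The abstract does not spell out how the Modified Jaccard Index differs from the standard Jaccard index, so the hedged sketch below shows only the standard form computed over sets of perturbed pathways, which captures the pathway-level (rather than gene-level) comparison being described. Pathway names are illustrative.

```python
def jaccard_index(pathways_a: set, pathways_b: set) -> float:
    """Overlap of two perturbed-pathway sets: |intersection| / |union|."""
    if not pathways_a and not pathways_b:
        return 0.0
    return len(pathways_a & pathways_b) / len(pathways_a | pathways_b)


compound_1 = {"fatty acid beta-oxidation", "PPAR signaling", "peroxisome"}
compound_2 = {"PPAR signaling", "peroxisome", "bile acid biosynthesis"}
print(round(jaccard_index(compound_1, compound_2), 3))  # 0.5 for this toy pair
```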


Subject(s)
Gene Expression Profiling , Animals , Databases, Factual , Gene Ontology , Hepatocytes/metabolism , Humans , Risk Assessment , Transcriptome
19.
J Cheminform ; 11(1): 60, 2019 Sep 18.
Article in English | MEDLINE | ID: mdl-33430972

ABSTRACT

BACKGROUND: The logarithmic acid dissociation constant pKa reflects the ionization of a chemical, which affects its lipophilicity, solubility, protein binding, and ability to pass through the plasma membrane. Thus, pKa affects a chemical's absorption, distribution, metabolism, excretion, and toxicity properties. Multiple proprietary software packages exist for predicting pKa, but to the best of our knowledge no free and open-source programs exist for this purpose. Using a freely available data set and three machine learning approaches, we developed open-source models for pKa prediction. METHODS: The experimental strongest acidic and strongest basic pKa values in water for 7,912 chemicals were obtained from DataWarrior, a freely available software package. Chemical structures were curated and standardized for quantitative structure-activity relationship (QSAR) modeling using KNIME, and a subset comprising 79% of the initial set was used for modeling. To evaluate different approaches to modeling, several datasets were constructed based on different processing of chemical structures with acidic and/or basic pKas. Continuous molecular descriptors, binary fingerprints, and fragment counts were generated using PaDEL, and pKa prediction models were created using three machine learning methods: (1) support vector machines (SVM) combined with k-nearest neighbors (kNN), (2) extreme gradient boosting (XGB), and (3) deep neural networks (DNN). RESULTS: The three methods delivered comparable performance on the training and test sets, with a root-mean-squared error (RMSE) around 1.5 and a coefficient of determination (R2) around 0.80. Two commercial pKa predictors, from ACD/Labs and ChemAxon, were used to benchmark the three best models developed in this work, and the performance of our models compared favorably to the commercial products. CONCLUSIONS: This work provides multiple QSAR models for predicting the strongest acidic and strongest basic pKas of chemicals, built using publicly available data and provided as free and open-source software on GitHub.
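As a hedged sketch of one of the three approaches named above (extreme gradient boosting) applied to pKa regression: the descriptors and pKa values below are random placeholders rather than the curated DataWarrior set, and the published models were built on PaDEL descriptors, so the printed error here says nothing about the reported RMSE of about 1.5.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 50))                                        # placeholder descriptors
y = 2.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)   # placeholder pKa values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)

rmse = float(np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
print(f"RMSE on held-out placeholder data: {rmse:.2f}")
```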

20.
J Cheminform ; 11(1): 58, 2019 Aug 30.
Article in English | MEDLINE | ID: mdl-33430989

ABSTRACT

The median lethal dose for rodent oral acute toxicity (LD50) is a standard piece of information required to categorize chemicals in terms of the potential hazard posed to human health after acute exposure. The exclusive use of in vivo testing is limited by the time and costs required for the experiments and by the need to sacrifice animals. (Quantitative) structure-activity relationships [(Q)SAR] have proved to be a valid alternative for reducing and assisting in vivo assays in the assessment of acute toxicological hazard. In the framework of a new international collaborative project, the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods and the U.S. Environmental Protection Agency's National Center for Computational Toxicology compiled a large database of rat acute oral LD50 data with the aim of supporting the development of new computational models for predicting five regulatory relevant acute toxicity endpoints. In this article, a series of regression and classification computational models were developed using different statistical and knowledge-based methodologies. External validation was performed to demonstrate the real-life predictivity of the models, and integrated modeling was then applied to improve the performance of the single models. The statistical results confirm the relevance of the developed models in regulatory frameworks and the effectiveness of integrated modeling: the best integrated strategies reached RMSEs lower than 0.50, and the best classification models reached balanced accuracies over 0.70 for multi-class endpoints and over 0.80 for binary endpoints. The computed predictions will be hosted on the EPA's Chemistry Dashboard and made freely available to the scientific community.
